path | concatenated_notebook
---|---|
ZebraKet/Alex test files/Create Data file.ipynb | ###Markdown
Creating cost and price data for each supplier and each item.
###Code
import numpy as np
import pandas as pd
import random

Ns = 10  # num suppliers
Ni = 20  # num items

# Price vector (one entry per item) and cost matrix (suppliers x items)
# random.seed(a=1, version=2)
P = np.zeros(Ni)
C = np.zeros((Ns, Ni))

# cost of all items for every supplier
for i in range(Ni):
    # base cost used to calculate the cost of the item at each supplier
    c_base = random.random() * 1000
    for s in range(Ns):
        # cost of the item at each supplier fluctuates between 90% and 130% of c_base
        C[s, i] = random.randint(90, 130) / 100 * c_base

# price of an item, for now, is the maximum cost of that item from any supplier
P = np.amax(C, axis=0)

pd.DataFrame(C).to_csv("cost.csv")
pd.DataFrame(P).to_csv("price.csv")
print(C)
print(P)
###Output
[ 399.42192098 210.92947497 440.63668461 29.81131108 20.93315434
414.96399452 556.60955487 463.05095778 979.20480016 501.97462792
814.51940014 506.44851709 864.52634398 561.74784941 1038.34017353
1096.79445495 1007.34361661 1279.65998659 30.5780668 786.1882834 ]
|
10ML/3_ModSel.ipynb | ###Markdown
1. Hyperparameter tuning: alternatives to brute-force search. Model-specific cross-validation. Some models can fit data over a whole range of parameter values almost as efficiently as for a single value. This feature allows a much more efficient cross-validation for model selection over that parameter. Exercise: for the following regression models, obtain the best parameters and the MSE score using the "boston house-prices" dataset:
###Code
# linear_model.LarsCV([fit_intercept, …])
from sklearn.datasets import load_boston
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

boston = load_boston()
X = boston.data
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# fit on the training split only, so the test MSE is not biased by data leakage
modelo = linear_model.LarsCV(cv=5).fit(X_train, y_train)

plt.figure(figsize=(10, 10))
for i in range(modelo.mse_path_.shape[1]):
    plt.plot(modelo.mse_path_[:, i], label=r'Fold: %d' % i)
plt.ylabel("MSE")
plt.xlabel(r"$path$")
plt.legend()
plt.show()

y_pred = modelo.predict(X_test)
print("Coefficients:", modelo.coef_)
print("MSE: %.3f" % mean_squared_error(y_test, y_pred))
# linear_model.LassoCV
from sklearn.datasets import load_boston
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

boston = load_boston()
X = boston.data
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# fit on the training split only, so the test MSE is not biased by data leakage
modelo = linear_model.LassoCV(cv=5).fit(X_train, y_train)

plt.figure(figsize=(10, 10))
for i in range(modelo.mse_path_.shape[1]):
    plt.plot(modelo.mse_path_[:, i], label=r'Fold: %d' % i)
plt.ylabel("MSE")
plt.xlabel(r"$path$")
plt.legend()
plt.show()

y_pred = modelo.predict(X_test)
print("Coefficients:", modelo.coef_)
print("MSE: %.3f" % mean_squared_error(y_test, y_pred))
# linear_model.LassoLarsCV([fit_intercept, …])
from sklearn.datasets import load_boston
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

boston = load_boston()
X = boston.data
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# fit on the training split only, so the test MSE is not biased by data leakage
modelo = linear_model.LassoLarsCV(cv=5).fit(X_train, y_train)

plt.figure(figsize=(10, 10))
for i in range(modelo.mse_path_.shape[1]):
    plt.plot(modelo.mse_path_[:, i], label=r'Fold: %d' % i)
plt.ylabel("MSE")
plt.xlabel(r"$path$")
plt.legend()
plt.show()

y_pred = modelo.predict(X_test)
print("Coefficients:", modelo.coef_)
print("MSE: %.3f" % mean_squared_error(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Likewise, obtain the best parameter and the precision-recall scores for the following methods. Since precision and recall require class labels, use a classification dataset (e.g., breast cancer) for RidgeClassifierCV rather than the "boston house-prices" regression data, which can still be used with RidgeCV:
###Code
#linear_model.RidgeCV([alphas, …])
#linear_model.RidgeClassifierCV([alphas, …])
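# --- Added sketch (not part of the original notebook): an assumed minimal solution
# --- for RidgeCV (regression, boston house-prices) and RidgeClassifierCV
# --- (classification, breast cancer). The alpha grid is an arbitrary assumption.
from sklearn.datasets import load_boston, load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeCV, RidgeClassifierCV
from sklearn.metrics import mean_squared_error, precision_score, recall_score
import numpy as np

# Regression: RidgeCV selects alpha by cross-validation
X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
reg = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=5).fit(X_train, y_train)
print("Best alpha:", reg.alpha_)
print("MSE: %.3f" % mean_squared_error(y_test, reg.predict(X_test)))

# Classification: RidgeClassifierCV on a binary classification dataset
Xc, yc = load_breast_cancer(return_X_y=True)
Xc_train, Xc_test, yc_train, yc_test = train_test_split(Xc, yc, test_size=0.3, random_state=0)
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 13), cv=5).fit(Xc_train, yc_train)
yc_pred = clf.predict(Xc_test)
print("Best alpha:", clf.alpha_)
print("Precision: %.3f  Recall: %.3f" % (precision_score(yc_test, yc_pred),
                                         recall_score(yc_test, yc_pred)))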
###Output
_____no_output_____
###Markdown
1.2. Information criteria: As we have seen, some models can provide information about the optimal regularization parameter based on a closed-form criterion, by computing a regularization path. Exercise: obtain the AIC and BIC curves for the following model. Use the breast_cancer dataset.
###Code
# linear_model.LassoLarsIC([criterion, …])
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LassoCV, LassoLarsCV, LassoLarsIC
from sklearn import datasets

EPSILON = 1e-4

cancer = datasets.load_breast_cancer()
X = cancer.data
y = cancer.target

# add 14 noisy, uninformative features and normalize the columns
rng = np.random.RandomState(42)
X = np.c_[X, rng.randn(X.shape[0], 14)]
X /= np.sqrt(np.sum(X ** 2, axis=0))

model_bic = LassoLarsIC(criterion='bic')
model_bic.fit(X, y)
alpha_bic_ = model_bic.alpha_

model_aic = LassoLarsIC(criterion='aic')
model_aic.fit(X, y)
alpha_aic_ = model_aic.alpha_

def plot_ic_criterion(model, name, color):
    alpha_ = model.alpha_ + EPSILON
    alphas_ = model.alphas_ + EPSILON
    criterion_ = model.criterion_
    plt.plot(-np.log10(alphas_), criterion_, '--', color=color, linewidth=3,
             label='%s criterion' % name)
    plt.axvline(-np.log10(alpha_), color=color, linewidth=3,
                label='alpha: %s estimate' % name)
    plt.xlabel('-log(alpha)')
    plt.ylabel('criterion')

plt.figure()
plot_ic_criterion(model_aic, 'AIC', 'b')
plot_ic_criterion(model_bic, 'BIC', 'r')
plt.legend()
plt.title('Information criteria for model selection')
plt.show()
###Output
_____no_output_____
###Markdown
1.3. Out-of-bag estimates: It is possible to use an ensemble of methods to perform bagging. New training sets are generated by sampling with replacement, so part of each training set remains unused. For each classifier, a different part of the training set is left "out of the bag". This left-out portion can be used to estimate the generalization error without having to rely on a separate validation set. The estimate requires no new data and can be used for model selection. 1.3.1 RandomForestClassifier: ensemble.RandomForestClassifier([…]) A random forest is a meta-estimator that fits a number of decision-tree classifiers on different subsamples of the dataset and averages them to improve predictive accuracy and to control overfitting. The subsample size is always the same as the original sample size, but the samples are drawn with replacement if bootstrap=True. 1.3.2 RandomForestRegressor: ensemble.RandomForestRegressor([…]) The random forest regressor is a meta-estimator that fits a number of regression trees on various subsets of the dataset and averages them to improve predictive accuracy and to control overfitting. The subsample size is the same as the original input size, but the samples are drawn with replacement if "bootstrap=True". 1.3.3 GradientBoostingClassifier: ensemble.GradientBoostingClassifier([loss, …]) This method builds an additive model; it allows the optimization of arbitrary loss functions. At each stage, "n_classes_" regression trees are fit on the gradient of the binomial or multinomial loss function. Binary classification is a special case in which only a single regression tree is induced. The features are always permuted at each split, so the best split may vary, even with the same training set and "max_features=n_features", whenever the improvement of the criterion is identical for several of the splits enumerated during the search for the best split. To obtain deterministic behaviour, random_state can be fixed. A minimal out-of-bag example with a random forest is sketched at the top of the following code cell, before the gradient-boosting example.
###Code
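# --- Added sketch (not from the original notebook): the out-of-bag estimate described
# --- in 1.3.1/1.3.2, read directly from a random forest fitted with oob_score=True.
# --- The dataset and hyperparameters here are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

X_oob, y_oob = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, oob_score=True, bootstrap=True,
                            random_state=0)
rf.fit(X_oob, y_oob)
# oob_score_ estimates generalization accuracy without a separate validation set
print("Random forest OOB score: %.3f" % rf.oob_score_)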
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
from scipy.special import expit
# Generate data (adapted from G. Ridgeway's gbm example)
n_samples = 1000
random_state = np.random.RandomState(13)
x1 = random_state.uniform(size=n_samples)
x2 = random_state.uniform(size=n_samples)
x3 = random_state.randint(0, 4, size=n_samples)
p = expit(np.sin(3 * x1) - 4 * x2 + x3)
y = random_state.binomial(1, p, size=n_samples)
X = np.c_[x1, x2, x3]
X = X.astype(np.float32)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
random_state=9)
# Fit classifier with out-of-bag estimates
params = {'n_estimators': 1200, 'max_depth': 3, 'subsample': 0.5,
'learning_rate': 0.01, 'min_samples_leaf': 1, 'random_state': 3}
clf = ensemble.GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print("Accuracy: {:.4f}".format(acc))
n_estimators = params['n_estimators']
x = np.arange(n_estimators) + 1
def heldout_score(clf, X_test, y_test):
    """compute deviance scores on ``X_test`` and ``y_test``. """
    score = np.zeros((n_estimators,), dtype=np.float64)
    for i, y_pred in enumerate(clf.staged_decision_function(X_test)):
        score[i] = clf.loss_(y_test, y_pred)
    return score

def cv_estimate(n_splits=None):
    cv = KFold(n_splits=n_splits)
    cv_clf = ensemble.GradientBoostingClassifier(**params)
    val_scores = np.zeros((n_estimators,), dtype=np.float64)
    for train, test in cv.split(X_train, y_train):
        cv_clf.fit(X_train[train], y_train[train])
        val_scores += heldout_score(cv_clf, X_train[test], y_train[test])
    val_scores /= n_splits
    return val_scores
# Estimate best n_estimator using cross-validation
cv_score = cv_estimate(2)
# Compute best n_estimator for test data
test_score = heldout_score(clf, X_test, y_test)
# negative cumulative sum of oob improvements
cumsum = -np.cumsum(clf.oob_improvement_)
# min loss according to OOB
oob_best_iter = x[np.argmin(cumsum)]
# min loss according to test (normalize such that first loss is 0)
test_score -= test_score[0]
test_best_iter = x[np.argmin(test_score)]
# min loss according to cv (normalize such that first loss is 0)
cv_score -= cv_score[0]
cv_best_iter = x[np.argmin(cv_score)]
# color brew for the three curves
oob_color = list(map(lambda x: x / 256.0, (190, 174, 212)))
test_color = list(map(lambda x: x / 256.0, (127, 201, 127)))
cv_color = list(map(lambda x: x / 256.0, (253, 192, 134)))
# plot curves and vertical lines for best iterations
plt.figure(figsize=(18,10))
plt.plot(x, cumsum, label='OOB loss', color=oob_color)
plt.plot(x, test_score, label='Test loss', color=test_color)
plt.plot(x, cv_score, label='CV loss', color=cv_color)
plt.axvline(x=oob_best_iter, color=oob_color)
plt.axvline(x=test_best_iter, color=test_color)
plt.axvline(x=cv_best_iter, color=cv_color)
# add three vertical lines to xticks
xticks = plt.xticks()
xticks_pos = np.array(xticks[0].tolist() +
[oob_best_iter, cv_best_iter, test_best_iter])
xticks_label = np.array(list(map(lambda t: int(t), xticks[0])) +
['OOB', 'CV', 'Test'])
ind = np.argsort(xticks_pos)
xticks_pos = xticks_pos[ind]
xticks_label = xticks_label[ind]
plt.xticks(xticks_pos, xticks_label)
plt.legend(loc='upper right')
plt.ylabel('normalized loss')
plt.xlabel('number of iterations')
plt.show()
###Output
Accuracy: 0.6820
###Markdown
1.3.4 GradientBoostingRegressor: ensemble.GradientBoostingRegressor([loss, …]) builds an additive model in a forward stage-wise fashion; it allows the optimization of arbitrary differentiable loss functions. At each stage, a regression tree is fit on the negative gradient of the loss function.
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import datasets
from sklearn.utils import shuffle
from sklearn.metrics import mean_squared_error
# #############################################################################
# Load data
boston = datasets.load_boston()
X, y = shuffle(boston.data, boston.target, random_state=13)
X = X.astype(np.float32)
offset = int(X.shape[0] * 0.9)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
# #############################################################################
# Fit regression model
params = {'n_estimators': 500, 'max_depth': 4, 'min_samples_split': 2,
'learning_rate': 0.01, 'loss': 'ls'}
clf = ensemble.GradientBoostingRegressor(**params)
clf.fit(X_train, y_train)
mse = mean_squared_error(y_test, clf.predict(X_test))
print("MSE: %.4f" % mse)
# #############################################################################
# Plot training deviance
# compute test set deviance
test_score = np.zeros((params['n_estimators'],), dtype=np.float64)
for i, y_pred in enumerate(clf.staged_predict(X_test)):
    test_score[i] = clf.loss_(y_test, y_pred)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, clf.train_score_, 'b-',
label='Training Set Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, test_score, 'r-',
label='Test Set Deviance')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Deviance')
# #############################################################################
# Plot feature importance
feature_importance = clf.feature_importances_
# make importances relative to max importance
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, boston.feature_names[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
###Output
MSE: 6.5323
###Markdown
2. Classification and regression metrics: For the iris dataset, discard one class, obtain the following metrics and compare them. You can use an SVC; remember that to obtain "y_prob" you must ask the object's constructor to return the probabilities associated with each class (probability=True). Do the same for the regression exercises (use house prices there).
###Code
#brier_score_loss(y_true, y_prob)
#matthews_corrcoef(y_true, y_pred)
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.metrics import brier_score_loss,matthews_corrcoef
iris=datasets.load_iris()
X=iris.data
y=iris.target
X_train, X_test, y_train, y_test = train_test_split(X,y , test_size=0.3, random_state=0)
clf = svm.SVC(kernel='linear', C=1 , probability = True)
clf.fit(X_train,y_train)
y_prob=clf.predict_proba(X_test)
y_prob0 = y_prob[:, 0]
y_prob1 = y_prob[:, 1]
y_prob2 = y_prob[:, 2]
# Build one-vs-rest binary labels as copies, so y_test itself is not modified
y_true0 = (y_test == 0).astype(int)
y_true1 = (y_test == 1).astype(int)
y_true2 = (y_test == 2).astype(int)
print("Brier score (class 0):", brier_score_loss(y_true0, y_prob0))
print("Brier score (class 1):", brier_score_loss(y_true1, y_prob1))
print("Brier score (class 2):", brier_score_loss(y_true2, y_prob2))
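# --- Added sketch (not in the original notebook): the Matthews correlation
# --- coefficient mentioned in the comment above, computed on the hard predictions.
y_pred_cls = clf.predict(X_test)
print("Matthews corrcoef:", matthews_corrcoef(y_test, y_pred_cls))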
#mean_squared_error
#r2_score
#mean_squared_log_error
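# --- Added sketch (not in the original notebook): the regression metrics listed
# --- above, evaluated with a simple ridge model on the boston house-prices data.
# --- The choice of model and split is an illustrative assumption.
from sklearn.datasets import load_boston
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_squared_log_error

Xb, yb = load_boston(return_X_y=True)
Xb_train, Xb_test, yb_train, yb_test = train_test_split(Xb, yb, test_size=0.3, random_state=0)
yb_pred = Ridge(alpha=1.0).fit(Xb_train, yb_train).predict(Xb_test)
print("MSE : %.3f" % mean_squared_error(yb_test, yb_pred))
print("R2  : %.3f" % r2_score(yb_test, yb_pred))
# mean_squared_log_error requires non-negative predictions, hence the clip
print("MSLE: %.3f" % mean_squared_log_error(yb_test, np.clip(yb_pred, 0, None)))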
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
class_names = iris.target_names
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Run classifier, using a model that is too regularized (C too low) to see
# the impact on the results
classifier = svm.SVC(kernel='linear', C=0.01)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
###Output
[[13 0 0]
[ 0 10 6]
[ 0 0 9]]
|
Roee/rule-based.ipynb | ###Markdown
Rule-based approach for squats, applied to the bottom df.
###Code
df2 = df.copy()
###Output
_____no_output_____ |
ImageCollection/filtering_by_calendar_range.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API. Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
71
['system:version', 'system:id', 'RADIANCE_MULT_BAND_5', 'RADIANCE_MULT_BAND_6', 'RADIANCE_MULT_BAND_3', 'RADIANCE_MULT_BAND_4', 'RADIANCE_MULT_BAND_1', 'RADIANCE_MULT_BAND_2', 'K2_CONSTANT_BAND_11', 'K2_CONSTANT_BAND_10', 'system:footprint', 'REFLECTIVE_SAMPLES', 'SUN_AZIMUTH', 'CPF_NAME', 'DATE_ACQUIRED', 'ELLIPSOID', 'google:registration_offset_x', 'google:registration_offset_y', 'STATION_ID', 'RESAMPLING_OPTION', 'ORIENTATION', 'WRS_ROW', 'RADIANCE_MULT_BAND_9', 'TARGET_WRS_ROW', 'RADIANCE_MULT_BAND_7', 'RADIANCE_MULT_BAND_8', 'IMAGE_QUALITY_TIRS', 'TRUNCATION_OLI', 'CLOUD_COVER', 'GEOMETRIC_RMSE_VERIFY', 'COLLECTION_CATEGORY', 'GRID_CELL_SIZE_REFLECTIVE', 'CLOUD_COVER_LAND', 'GEOMETRIC_RMSE_MODEL', 'COLLECTION_NUMBER', 'IMAGE_QUALITY_OLI', 'LANDSAT_SCENE_ID', 'WRS_PATH', 'google:registration_count', 'PANCHROMATIC_SAMPLES', 'PANCHROMATIC_LINES', 'GEOMETRIC_RMSE_MODEL_Y', 'REFLECTIVE_LINES', 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE', 'GEOMETRIC_RMSE_MODEL_X', 'system:asset_size', 'system:index', 'REFLECTANCE_ADD_BAND_1', 'REFLECTANCE_ADD_BAND_2', 'DATUM', 'REFLECTANCE_ADD_BAND_3', 'REFLECTANCE_ADD_BAND_4', 'RLUT_FILE_NAME', 'REFLECTANCE_ADD_BAND_5', 'REFLECTANCE_ADD_BAND_6', 'REFLECTANCE_ADD_BAND_7', 'REFLECTANCE_ADD_BAND_8', 'BPF_NAME_TIRS', 'GROUND_CONTROL_POINTS_VERSION', 'DATA_TYPE', 'UTM_ZONE', 'LANDSAT_PRODUCT_ID', 'REFLECTANCE_ADD_BAND_9', 'google:registration_ratio', 'GRID_CELL_SIZE_PANCHROMATIC', 'RADIANCE_ADD_BAND_4', 'REFLECTANCE_MULT_BAND_7', 'system:time_start', 'RADIANCE_ADD_BAND_5', 'REFLECTANCE_MULT_BAND_6', 'RADIANCE_ADD_BAND_6', 'REFLECTANCE_MULT_BAND_9', 'PROCESSING_SOFTWARE_VERSION', 'RADIANCE_ADD_BAND_7', 'REFLECTANCE_MULT_BAND_8', 'RADIANCE_ADD_BAND_1', 'RADIANCE_ADD_BAND_2', 'RADIANCE_ADD_BAND_3', 'REFLECTANCE_MULT_BAND_1', 'RADIANCE_ADD_BAND_8', 'REFLECTANCE_MULT_BAND_3', 'RADIANCE_ADD_BAND_9', 'REFLECTANCE_MULT_BAND_2', 'REFLECTANCE_MULT_BAND_5', 'REFLECTANCE_MULT_BAND_4', 'THERMAL_LINES', 'TIRS_SSM_POSITION_STATUS', 'GRID_CELL_SIZE_THERMAL', 'NADIR_OFFNADIR', 'RADIANCE_ADD_BAND_11', 'REQUEST_ID', 'EARTH_SUN_DISTANCE', 'TIRS_SSM_MODEL', 'FILE_DATE', 'SCENE_CENTER_TIME', 'SUN_ELEVATION', 'BPF_NAME_OLI', 'RADIANCE_ADD_BAND_10', 'ROLL_ANGLE', 'K1_CONSTANT_BAND_10', 'SATURATION_BAND_1', 'SATURATION_BAND_2', 'SATURATION_BAND_3', 'SATURATION_BAND_4', 'SATURATION_BAND_5', 'MAP_PROJECTION', 'SATURATION_BAND_6', 'SENSOR_ID', 'SATURATION_BAND_7', 'K1_CONSTANT_BAND_11', 'SATURATION_BAND_8', 'SATURATION_BAND_9', 'TARGET_WRS_PATH', 'RADIANCE_MULT_BAND_11', 'RADIANCE_MULT_BAND_10', 'GROUND_CONTROL_POINTS_MODEL', 'SPACECRAFT_ID', 'ELEVATION_SOURCE', 'THERMAL_SAMPLES', 'GROUND_CONTROL_POINTS_VERIFY', 'system:bands', 'system:band_names']
2013-06-08
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
69
['system:version', 'system:id', 'RADIANCE_MULT_BAND_5', 'RADIANCE_MULT_BAND_6', 'RADIANCE_MULT_BAND_3', 'RADIANCE_MULT_BAND_4', 'RADIANCE_MULT_BAND_1', 'RADIANCE_MULT_BAND_2', 'K2_CONSTANT_BAND_11', 'K2_CONSTANT_BAND_10', 'system:footprint', 'REFLECTIVE_SAMPLES', 'SUN_AZIMUTH', 'CPF_NAME', 'DATE_ACQUIRED', 'ELLIPSOID', 'google:registration_offset_x', 'google:registration_offset_y', 'STATION_ID', 'RESAMPLING_OPTION', 'ORIENTATION', 'WRS_ROW', 'RADIANCE_MULT_BAND_9', 'TARGET_WRS_ROW', 'RADIANCE_MULT_BAND_7', 'RADIANCE_MULT_BAND_8', 'IMAGE_QUALITY_TIRS', 'TRUNCATION_OLI', 'CLOUD_COVER', 'GEOMETRIC_RMSE_VERIFY', 'COLLECTION_CATEGORY', 'GRID_CELL_SIZE_REFLECTIVE', 'CLOUD_COVER_LAND', 'GEOMETRIC_RMSE_MODEL', 'COLLECTION_NUMBER', 'IMAGE_QUALITY_OLI', 'LANDSAT_SCENE_ID', 'WRS_PATH', 'google:registration_count', 'PANCHROMATIC_SAMPLES', 'PANCHROMATIC_LINES', 'GEOMETRIC_RMSE_MODEL_Y', 'REFLECTIVE_LINES', 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE', 'GEOMETRIC_RMSE_MODEL_X', 'system:asset_size', 'system:index', 'REFLECTANCE_ADD_BAND_1', 'REFLECTANCE_ADD_BAND_2', 'DATUM', 'REFLECTANCE_ADD_BAND_3', 'REFLECTANCE_ADD_BAND_4', 'RLUT_FILE_NAME', 'REFLECTANCE_ADD_BAND_5', 'REFLECTANCE_ADD_BAND_6', 'REFLECTANCE_ADD_BAND_7', 'REFLECTANCE_ADD_BAND_8', 'BPF_NAME_TIRS', 'GROUND_CONTROL_POINTS_VERSION', 'DATA_TYPE', 'UTM_ZONE', 'LANDSAT_PRODUCT_ID', 'REFLECTANCE_ADD_BAND_9', 'google:registration_ratio', 'GRID_CELL_SIZE_PANCHROMATIC', 'RADIANCE_ADD_BAND_4', 'REFLECTANCE_MULT_BAND_7', 'system:time_start', 'RADIANCE_ADD_BAND_5', 'REFLECTANCE_MULT_BAND_6', 'RADIANCE_ADD_BAND_6', 'REFLECTANCE_MULT_BAND_9', 'PROCESSING_SOFTWARE_VERSION', 'RADIANCE_ADD_BAND_7', 'REFLECTANCE_MULT_BAND_8', 'RADIANCE_ADD_BAND_1', 'RADIANCE_ADD_BAND_2', 'RADIANCE_ADD_BAND_3', 'REFLECTANCE_MULT_BAND_1', 'RADIANCE_ADD_BAND_8', 'REFLECTANCE_MULT_BAND_3', 'RADIANCE_ADD_BAND_9', 'REFLECTANCE_MULT_BAND_2', 'REFLECTANCE_MULT_BAND_5', 'REFLECTANCE_MULT_BAND_4', 'THERMAL_LINES', 'TIRS_SSM_POSITION_STATUS', 'GRID_CELL_SIZE_THERMAL', 'NADIR_OFFNADIR', 'RADIANCE_ADD_BAND_11', 'REQUEST_ID', 'EARTH_SUN_DISTANCE', 'TIRS_SSM_MODEL', 'FILE_DATE', 'SCENE_CENTER_TIME', 'SUN_ELEVATION', 'BPF_NAME_OLI', 'RADIANCE_ADD_BAND_10', 'ROLL_ANGLE', 'K1_CONSTANT_BAND_10', 'SATURATION_BAND_1', 'SATURATION_BAND_2', 'SATURATION_BAND_3', 'SATURATION_BAND_4', 'SATURATION_BAND_5', 'MAP_PROJECTION', 'SATURATION_BAND_6', 'SENSOR_ID', 'SATURATION_BAND_7', 'K1_CONSTANT_BAND_11', 'SATURATION_BAND_8', 'SATURATION_BAND_9', 'TARGET_WRS_PATH', 'RADIANCE_MULT_BAND_11', 'RADIANCE_MULT_BAND_10', 'GROUND_CONTROL_POINTS_MODEL', 'SPACECRAFT_ID', 'ELEVATION_SOURCE', 'THERMAL_SAMPLES', 'GROUND_CONTROL_POINTS_VERIFY', 'system:bands', 'system:band_names']
2013-06-08
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
69
['system:version', 'system:id', 'RADIANCE_MULT_BAND_5', 'RADIANCE_MULT_BAND_6', 'RADIANCE_MULT_BAND_3', 'RADIANCE_MULT_BAND_4', 'RADIANCE_MULT_BAND_1', 'RADIANCE_MULT_BAND_2', 'K2_CONSTANT_BAND_11', 'K2_CONSTANT_BAND_10', 'system:footprint', 'REFLECTIVE_SAMPLES', 'SUN_AZIMUTH', 'CPF_NAME', 'DATE_ACQUIRED', 'ELLIPSOID', 'google:registration_offset_x', 'google:registration_offset_y', 'STATION_ID', 'RESAMPLING_OPTION', 'ORIENTATION', 'WRS_ROW', 'RADIANCE_MULT_BAND_9', 'TARGET_WRS_ROW', 'RADIANCE_MULT_BAND_7', 'RADIANCE_MULT_BAND_8', 'IMAGE_QUALITY_TIRS', 'TRUNCATION_OLI', 'CLOUD_COVER', 'GEOMETRIC_RMSE_VERIFY', 'COLLECTION_CATEGORY', 'GRID_CELL_SIZE_REFLECTIVE', 'CLOUD_COVER_LAND', 'GEOMETRIC_RMSE_MODEL', 'COLLECTION_NUMBER', 'IMAGE_QUALITY_OLI', 'LANDSAT_SCENE_ID', 'WRS_PATH', 'google:registration_count', 'PANCHROMATIC_SAMPLES', 'PANCHROMATIC_LINES', 'GEOMETRIC_RMSE_MODEL_Y', 'REFLECTIVE_LINES', 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE', 'GEOMETRIC_RMSE_MODEL_X', 'system:asset_size', 'system:index', 'REFLECTANCE_ADD_BAND_1', 'REFLECTANCE_ADD_BAND_2', 'DATUM', 'REFLECTANCE_ADD_BAND_3', 'REFLECTANCE_ADD_BAND_4', 'RLUT_FILE_NAME', 'REFLECTANCE_ADD_BAND_5', 'REFLECTANCE_ADD_BAND_6', 'REFLECTANCE_ADD_BAND_7', 'REFLECTANCE_ADD_BAND_8', 'BPF_NAME_TIRS', 'GROUND_CONTROL_POINTS_VERSION', 'DATA_TYPE', 'UTM_ZONE', 'LANDSAT_PRODUCT_ID', 'REFLECTANCE_ADD_BAND_9', 'google:registration_ratio', 'GRID_CELL_SIZE_PANCHROMATIC', 'RADIANCE_ADD_BAND_4', 'REFLECTANCE_MULT_BAND_7', 'system:time_start', 'RADIANCE_ADD_BAND_5', 'REFLECTANCE_MULT_BAND_6', 'RADIANCE_ADD_BAND_6', 'REFLECTANCE_MULT_BAND_9', 'PROCESSING_SOFTWARE_VERSION', 'RADIANCE_ADD_BAND_7', 'REFLECTANCE_MULT_BAND_8', 'RADIANCE_ADD_BAND_1', 'RADIANCE_ADD_BAND_2', 'RADIANCE_ADD_BAND_3', 'REFLECTANCE_MULT_BAND_1', 'RADIANCE_ADD_BAND_8', 'REFLECTANCE_MULT_BAND_3', 'RADIANCE_ADD_BAND_9', 'REFLECTANCE_MULT_BAND_2', 'REFLECTANCE_MULT_BAND_5', 'REFLECTANCE_MULT_BAND_4', 'THERMAL_LINES', 'TIRS_SSM_POSITION_STATUS', 'GRID_CELL_SIZE_THERMAL', 'NADIR_OFFNADIR', 'RADIANCE_ADD_BAND_11', 'REQUEST_ID', 'EARTH_SUN_DISTANCE', 'TIRS_SSM_MODEL', 'FILE_DATE', 'SCENE_CENTER_TIME', 'SUN_ELEVATION', 'BPF_NAME_OLI', 'RADIANCE_ADD_BAND_10', 'ROLL_ANGLE', 'K1_CONSTANT_BAND_10', 'SATURATION_BAND_1', 'SATURATION_BAND_2', 'SATURATION_BAND_3', 'SATURATION_BAND_4', 'SATURATION_BAND_5', 'MAP_PROJECTION', 'SATURATION_BAND_6', 'SENSOR_ID', 'SATURATION_BAND_7', 'K1_CONSTANT_BAND_11', 'SATURATION_BAND_8', 'SATURATION_BAND_9', 'TARGET_WRS_PATH', 'RADIANCE_MULT_BAND_11', 'RADIANCE_MULT_BAND_10', 'GROUND_CONTROL_POINTS_MODEL', 'SPACECRAFT_ID', 'ELEVATION_SOURCE', 'THERMAL_SAMPLES', 'GROUND_CONTROL_POINTS_VERIFY', 'system:bands', 'system:band_names']
2013-06-08
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
notebook/sv/bgs_footprint.ipynb | ###Markdown
Let's get the sun and moon coordinates
###Code
sun_ra, sun_dec = [], []
moon_ra, moon_dec = [], []
for year in range(2020, 2025):
    for month in range(1, 13):
        if month == 2: days = 28
        else: days = 30
        for day in range(1, days + 1):
            tt = Time(datetime(year, month, day, 12, 0, 0), scale='utc')
            _sun = get_sun(tt)
            sun_ra.append(_sun.ra.to(u.deg).value)
            sun_dec.append(_sun.dec.to(u.deg).value)
            _moon = get_moon(tt)
            moon_ra.append(_moon.ra.to(u.deg).value)
            moon_dec.append(_moon.dec.to(u.deg).value)
sun_ra = np.array(sun_ra)
sun_dec = np.array(sun_dec)
moon_ra = np.array(moon_ra)
moon_dec = np.array(moon_dec)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.scatter(((tiles['RA'][in_desi] - 80) % 360) + 80, tiles['DEC'][in_desi], s=5, c='k')
sub.scatter(((sun_ra - 80) % 360) + 80, sun_dec, c='C1', s=50)
sub.scatter(((moon_ra - 80) % 360) + 80, moon_dec, c='C0', s=50)
sub.set_xlabel('RA', fontsize=25)
sub.set_xlim(80, 440)
sub.set_ylabel('Dec', fontsize=25)
sub.set_ylim(-25., 90.)
sub.set_title('DESI footprint', fontsize=30)
is_bgs = in_desi & (tiles['PROGRAM'] == 'BRIGHT')
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.scatter(((tiles['RA'][is_bgs] - 80) % 360) + 80, tiles['DEC'][is_bgs], s=5, c='k')
sub.scatter(((sun_ra - 80) % 360) + 80, sun_dec, c='C1', s=1)
sub.scatter(((moon_ra - 80) % 360) + 80, moon_dec, c='C0', s=1)
sub.set_xlabel('RA', fontsize=25)
sub.set_xlim(80, 440)
sub.set_ylabel('Dec', fontsize=25)
sub.set_ylim(-25., 90.)
sub.set_title('BGS footprint: %i tiles' % np.sum(is_bgs), fontsize=30)
###Output
_____no_output_____
###Markdown
Now let's convert to ecliptic coordinates
###Code
sun_coord = SkyCoord(ra=sun_ra * u.deg, dec=sun_dec * u.deg, frame='icrs')
moon_coord = SkyCoord(ra=moon_ra * u.deg, dec=moon_dec * u.deg, frame='icrs')
tile_coord = SkyCoord(ra=tiles['RA'] * u.deg, dec=tiles['DEC'] * u.deg, frame='icrs')
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.scatter(((tile_coord.barycentrictrueecliptic.lon.value[is_bgs]-80) % 360)+80, tile_coord.barycentrictrueecliptic.lat.value[is_bgs], s=5, c='k')
sub.scatter(((sun_coord.barycentrictrueecliptic.lon.value-80) % 360)+80, sun_coord.barycentrictrueecliptic.lat, s=1, c='C1')
sub.scatter(((moon_coord.barycentrictrueecliptic.lon.value-80) % 360)+80, moon_coord.barycentrictrueecliptic.lat, s=1, c='C0')
sub.set_xlabel('Longitude', fontsize=25)
sub.set_xlim(80, 440)
sub.set_ylabel('Latitude', fontsize=25)
sub.set_ylim(-45., 100.)
sub.set_title('BGS footprint: %i tiles, $%.f deg^2$' % (np.sum(is_bgs), 14000), fontsize=30)
is_ngc = (tile_coord.galactic.b.value >= 0.)
is_sgc = (tile_coord.galactic.b.value < 0.)
np.sum(is_bgs & is_ngc & (tile_coord.barycentrictrueecliptic.lat.to(u.deg).value < 8.1))/np.sum(is_bgs) * 14000
foot13000 = ((is_bgs & is_ngc) |
(is_bgs & is_sgc & (np.abs(tile_coord.barycentrictrueecliptic.lat.to(u.deg).value) > 6.5)))
foot12000 = ((is_bgs & is_ngc & (tile_coord.barycentrictrueecliptic.lat.to(u.deg).value > 8.1)) |
(is_bgs & is_sgc))
foot11000 = ((is_bgs & is_ngc & (tile_coord.barycentrictrueecliptic.lat.to(u.deg).value > 8.1)) |
(is_bgs & is_sgc & (np.abs(tile_coord.barycentrictrueecliptic.lat.to(u.deg).value) > 6.5)))
foot10000 = ((is_bgs & is_ngc & (tile_coord.barycentrictrueecliptic.lat.to(u.deg).value > 12.6)) |
(is_bgs & is_sgc & (np.abs(tile_coord.barycentrictrueecliptic.lat.to(u.deg).value) > 9.7)))
for _foot, lbl in zip([foot13000, foot12000, foot11000, foot10000], [13000, 12000, 11000, 10000]):
    foot = is_bgs & _foot
    fig = plt.figure(figsize=(10,5))
    sub = fig.add_subplot(111)
    sub.scatter(((tiles['RA'][foot] - 80) % 360) + 80, tiles['DEC'][foot], s=5, c='k')
    sub.scatter(((sun_ra - 80) % 360) + 80, sun_dec, c='C1', s=1)
    sub.scatter(((moon_ra - 80) % 360) + 80, moon_dec, c='C0', s=1)
    sub.set_xlabel('RA', fontsize=25)
    sub.set_xlim(80, 440)
    sub.set_ylabel('Dec', fontsize=25)
    sub.set_ylim(-25., 90.)
    sub.set_title('%i tiles, $%.f deg^2$' % (np.sum(foot), 14000 * float(np.sum(foot))/float(np.sum(is_bgs))), fontsize=30)
###Output
_____no_output_____ |
generateur.ipynb | ###Markdown
Generators. Generators are very simple to use and very powerful. They let you optimize your code at little cost, so why go without? Suppose I want to extract, from a list of words, the list of words containing the character 'a'. I will write a function.
###Code
def with_a(words):
    """
    Takes a list of words and returns the list of words containing the character 'a'.
    """
    res = []
    for word in words:
        if 'a' in word:
            res.append(word)
    return res

mots = ["le", "petit", "chat", "est", "mort", "ce", "matin"]
mots_a = with_a(mots)
print("\n".join(mots_a))
###Output
_____no_output_____
###Markdown
Nothing nasty so far. Since we are talking about optimization, I will measure the processing time with `timeit`. ipython is full of magic: `%time`, hup hup hup barbatruc, and there you go.
###Code
%time mots_a = with_a(mots)
mots_big = mots * 1000000
%time mots_a = with_a(mots_big)
###Output
_____no_output_____
###Markdown
As expected, the function's execution time grows with the size of the initial list. Let's see what we get with a generator. Building a generator is simple: you replace `return` with `yield` in your function. That's it? That's it. You can still learn more [here](http://intermediatepythonista.com/python-generators) or read [PEP 255](https://www.python.org/dev/peps/pep-0255/) if you like that sort of thing.
###Code
def gen_with_a(words):
    """
    Takes a list of words and yields the words containing the character 'a', as a generator.
    """
    for word in words:
        if 'a' in word:
            yield word

mots_big = mots * 100
%time mots_a = with_a(mots_big)
%time mots_a_gen = gen_with_a(mots_big)
###Output
_____no_output_____
###Markdown
😲 !!!!!!!!! Yes, it's magic. Well, it's actually more like cheating; look:
###Code
print(f"mots_a is a {type(mots_a)}")
print(f"mots_a_gen is a {type(mots_a_gen)}")
import sys
print(f"Taille de mots_a : {sys.getsizeof(mots_a)}")
print(f"Taille de mots_a_gen : {sys.getsizeof(mots_a_gen)}")
###Output
_____no_output_____
###Markdown
`mots_a_gen` is not a list, it is a `generator` object. It stores little or nothing in memory, and you cannot know its size (try `len(mots_a_gen)` to see). But it is an iterable, so you can loop over it like a list. On the other hand, generators cannot be "sliced": you cannot access an element at index `i` as you would with a list. One more difference from lists: you can only iterate over a generator once.
###Code
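# --- Added illustration (not in the original notebook): a generator supports neither
# --- len() nor indexing, and is exhausted after a single pass.
gen = gen_with_a(mots)
print(list(gen))  # first pass: the matching words
print(list(gen))  # second pass: the generator is exhausted, prints []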
%time mots_a_gen = list(gen_with_a(mots_big))
###Output
_____no_output_____
###Markdown
But even without cheating, generators remain very efficient. As you will have understood, you are now warmly encouraged to use them. You can also use generator expressions, in the same way as list comprehensions:
###Code
[mot for mot in mots if 'a' in mot]
(mot for mot in mots if 'a' in mot)
###Output
_____no_output_____
###Markdown
Generators. Generators are very simple to use and very powerful. They let you optimize your code at little cost, so why go without? Suppose I want to extract, from a list of words, the list of words containing the character 'a'. I will write a function.
###Code
def with_a(words):
    """
    Takes a list of words and returns the list of words containing the character 'a'.
    """
    res = []
    for word in words:
        if 'a' in word:
            res.append(word)
    return res

mots = ["le", "petit", "chat", "est", "mort", "ce", "matin"]
mots_a = with_a(mots)
print("\n".join(mots_a))
###Output
_____no_output_____
###Markdown
Nothing nasty so far. Since we are talking about optimization, I will measure the processing time with `timeit`. ipython is full of magic: `%time`, hup hup hup barbatruc, and there you go.
###Code
%time mots_a = with_a(mots)
mots_big = mots * 1000000
%time mots_a = with_a(mots_big)
###Output
_____no_output_____
###Markdown
As expected, the function's execution time grows with the size of the initial list. Let's see what we get with a generator. Building a generator is simple: you replace `return` with `yield` in your function. That's it? That's it. You can still learn more [here](http://intermediatepythonista.com/python-generators) or read [PEP 255](https://www.python.org/dev/peps/pep-0255/) if you like that sort of thing.
###Code
def gen_with_a(words):
    """
    Takes a list of words and yields the words containing the character 'a', as a generator.
    """
    for word in words:
        if 'a' in word:
            yield word

mots_big = mots * 100
%time mots_a = with_a(mots_big)
%time mots_a_gen = gen_with_a(mots_big)
###Output
_____no_output_____
###Markdown
😲 !!!!!!!!! Yes, it's magic. Well, it's actually more like cheating; look:
###Code
print("mots_a is a {}".format(type(mots_a)))
print("mots_a_gen is a {}".format(type(mots_a_gen)))
import sys
print("Taille de mots_a : {}".format(sys.getsizeof(mots_a)))
print("Taille de mots_a_gen : {}".format(sys.getsizeof(mots_a_gen)))
###Output
_____no_output_____
###Markdown
`mots_a_gen` is not a list, it is a `generator` object. It stores little or nothing in memory, and you cannot know its size (try `len(mots_a_gen)` to see). But it is an iterable, so you can loop over it like a list. On the other hand, generators cannot be "sliced": you cannot access an element at index `i` as you would with a list. One more difference from lists: you can only iterate over a generator once.
###Code
%time mots_a_gen = list(gen_with_a(mots_big))
###Output
_____no_output_____
###Markdown
But even without cheating, generators remain very efficient. As you will have understood, you are now warmly encouraged to use them. You can also use generator expressions, in the same way as list comprehensions:
###Code
[mot for mot in mots if 'a' in mot]
(mot for mot in mots if 'a' in mot)
###Output
_____no_output_____ |
notebooks/YPMLNonParametricInversion.ipynb | ###Markdown
**Import modules and set paths.**
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
import sys
## make paths above 'notebooks/' visible for local imports.
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
import json
import numpy as np
import pandas as pd
import scipy.stats
from scipy.sparse import coo_matrix
# from scipy.linalg import lstsq
print(f"using numpy v{np.__version__}.")
print(f"using pandas v{pd.__version__}.")
import matplotlib
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
from ipywidgets import interact
from threadpoolctl import threadpool_limits
from lininvbox.lininvbox.equation import Term
from lininvbox.lininvbox.constructors import *
from lininvbox.lininvbox.operations import *
from lininvbox.lininvbox.inversion import Inversion
from lininvbox.lininvbox.constraints import *
from lininvbox.lininvbox.regularisation import *
from lininvbox.lininvbox.utils import delete_directory
from mlinversion.mlinversion.regtests import *
from magscales.magscales import Richter1958
from plotconf import matsettings
###Output
using numpy v1.21.0.
using pandas v1.3.0.
###Markdown
Yellowstone National Park $M_L$ Non-Parametric Inversion A notebook to perform a non-parametric inversion to recover $M_L$, the distance correction and station corrections using earthquakes from Yellowstone National Park.
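###Markdown
Judging from the design-matrix terms assembled below (an interpolated $\log_{10}A_{0}(R)$ term, one constant per event, and one constant per station entered with a negative sign), the linear system being solved appears to take the usual non-parametric local-magnitude form $\log_{10}A_{ij} = \log_{10}A_{0}(R_{ij}) + M_{L,i} - S_{j}$, where $A_{ij}$ is the amplitude of event $i$ recorded at station $j$, $R_{ij}$ is the hypocentral distance, and $S_{j}$ is the station correction.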
###Code
PDIR = ".."
for_inv = pd.read_csv(f"{PDIR}/catalogs/amplitudes/yellowstone.amps.clean.geobalanced.csv")
sims = pd.read_csv(f"{PDIR}/miscmeta/simulations/data/sim_out.csv")
interp_nodes = np.array([1, 10, 30, 50, 100, 180])
inc = np.geomspace(4, 180, 20)
dr = 5
inc = np.arange(20, 180+dr, dr)
interp_nodes = np.round(np.append([3,6,9,12,15,18,21], inc[inc>=25]), 1)
# interp_nodes=inc
interp_values = np.sort(for_inv.Rhyp.values)
mwcat = pd.read_csv("../catalogs/events/MTCAT.csv")
mwcat["Evid"] = mwcat["Evid"].astype(str)
MLcons = dict(mwcat[["Evid", "UUSSMw"]].dropna().values)
def vis_inv_output(inv):
terms = inv.m.term_map.values.keys()
X, Y = [],[]
for term in terms:
tmp = inv.m.term_map.get_term(term)
if term == "MLi":
X.append(tmp['unique_indices'])
else:
X.append(tmp['unique_labels'])
Y.append(tmp['model_values'])
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, y, x, term, x_lab in zip(axes, Y, X, terms, ["Rhyp [km]", "Ev #", "Sj"]):
ax.plot(x, y, 'ks', mfc='none')
ax.set_ylabel(term)
ax.set_xlabel(x_lab)
if term == "Sj":
ax.tick_params(axis='x', labelrotation = 90)
return fig, axes
def vis_matrices(mats, kind='spy', **kwargs):
fs = (4.5*2, 15)
if type(mats) is list:
assert type(mats[0]) is coo_matrix or type(mats[0]) is np.ndarray,\
f"Matrices in list must be coo_matrix or np.ndarray not {type(mats[0])}."
        fig, axes = plt.subplots(1, len(mats), figsize=fs)
else:
fig, axes = plt.subplots(1, 1, figsize=fs)
axes = [axes,]
mats = [mats,]
for mat, ax in zip(mats, axes):
# spy plot
if kind == 'spy':
ax.spy(mat, aspect='auto', **kwargs)
# matshot
if kind == 'matshow':
a = ax.matshow(mat.toarray(), aspect='auto', **kwargs)
fig.colorbar(a)
fig.tight_layout()
return fig, axes
sta_df = for_inv.copy()
stas = list(sta_df.groupby("Sta")["Rhyp"].max().sort_values().index)
stas_min = dict(sta_df.groupby("Sta")["Rhyp"].min())
stas_max = dict(sta_df.groupby("Sta")["Rhyp"].max())
fig, axes = plt.subplots(len(stas), 1, figsize=(2.5, 6), sharex=True)
for i, sta in enumerate(stas):
tmp = sta_df[sta_df["Sta"]==sta]["Rhyp"]
density = scipy.stats.gaussian_kde(tmp)
# d = np.geomspace(stas_min[sta]*0.5, stas_max[sta]*1.1, 50)
d = np.arange(1, 201,)
pdf = density.pdf(d)
axes[i].plot(d, pdf, '-', lw=1.2)
axes[i].set(
yticks=[pdf.max() / 2, ],
yticklabels=[sta, ],
ylim=[pdf.min() - 0.005, pdf.max() + (0.25 * pdf.max())],
xlim=[0, 200]
)
loc = (0.7, 0.5) if i < 14 else (0.03, 0.5)
horiz = 'left'
axes[i].text(*loc, f"N = {len(tmp)}", horizontalalignment=horiz,
verticalalignment='center', transform=axes[i].transAxes, fontsize=8)
axes[i].tick_params(axis='y', labelsize=9)
axes[i].vlines(80, -1, 1, color='orange', linewidth=1.5, zorder=10)
for j, ax in enumerate(fig.get_axes()):
ss = ax.get_subplotspec()
ax.spines.top.set_visible(ss.is_first_row())
ax.spines.bottom.set_visible(ss.is_last_row())
ax.xaxis.set_visible(ss.is_last_row())
if ss.is_last_row():
ax.tick_params(axis='x', labelsize=10)
ax.set_xlabel(r"$R_{hyp}$ $\mathrm{[km]}$", fontsize=14)
fig.subplots_adjust(hspace=0)
fig.supylabel('Stations', x=-0.175, fontsize=14)
fig.savefig("../figures/STATION-DIST-KDEs.pdf", bbox_inches="tight")
stas = for_inv[["Net", "Sta"]].agg('.'.join, axis=1).values
evs = for_inv['Evid'].apply(str).values
logA0np = Term("logA0n", "LINEAR INTERPOLATION", for_inv.Rhyp.values, unique_labels=interp_nodes)
MLp = Term("MLi", "CONSTANT", evs)
Sjp = Term("Sj", "CONSTANT", stas, sign=-1)
logA0n = LinInterpCoeffs(logA0np)
MLi = ConstantCoeffs(MLp)
Sj = ConstantCoeffs(Sjp)
G = logA0n + MLi + Sj
d = DataArray(for_inv['halfAmpH'].apply(np.log10))
inv = Inversion("YPML")
inv.invert(G, d)
r = Richter1958()
r.epi_to_hypo(av_dep=10.5)
fig, axes = vis_inv_output(inv)
axes[0].plot(r.distances[r.distances<=180], r.logA0[r.distances<=180], 'g^', label="R58", zorder=1, alpha=0.65)
axes[0].plot(18, -1.6, 'r.', mfc='none', label="anchor point")
axes[0].legend()
axes[2].set_title(f"\u03A3 Sj={np.sum(G.term_map.get_term('Sj')['model_values']):.1E}")
fig.tight_layout()
# axes[0].set_xscale("log")
###Output
_____no_output_____
###Markdown
Smoothing parameter ($\alpha$) penalty tests To smooth our inversions we use the Tikhonov formalism () Roughness vs MSE In the first set of tests we plot the roughness ($||\dfrac{\mathrm{d^2}logA_{0}(R)}{\mathrm{d}R^2}||^2$) against the mean squared error of the residuals between predicted and recorded data (MSE).
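###Markdown
The (commented-out) optimiser below relies on `mse` and `roughness` helpers pulled in by the wildcard `regtests` import at the top of the notebook. As a minimal sketch of what such helpers are assumed to compute (named differently here so nothing gets shadowed):
###Code
import numpy as np

def mse_sketch(observed, predicted):
    # mean squared error of the residuals between recorded and predicted data
    residuals = np.asarray(observed) - np.asarray(predicted)
    return float(np.mean(residuals ** 2))

def roughness_sketch(model_values):
    # squared L2 norm of the second finite difference of the logA0(R) nodes,
    # a discrete stand-in for ||d^2 logA0 / dR^2||^2
    second_diff = np.diff(np.asarray(model_values), n=2)
    return float(second_diff @ second_diff)
###Output
_____no_output_____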
###Code
# def reg_test_invert(inv, G, d, alpha, constraints):
# """
# """
# Gamma = Regularisation(G.term_map, regs=OrderedDict(logA0n=dict(kind="FD", alpha=alpha)))
# m = inv.invert(G, d, regularisation=Gamma, constraints=constraints, inplace=False)
# fd = inv.forward(G, m)
# _mse = mse(d.array.A.flatten(), fd.array.A.flatten())
# inds = m.term_map.values["logA0n"]['model_indices']
# _rough = roughness(m.array.A[inds].flatten())
# return _mse, _rough
# def regularisation_optimiser(inv, G, d, alphas, constraints=None):
# """
# """
# _rough = np.zeros(len(alphas))
# _mse = np.zeros(len(alphas))
# with threadpool_limits(limits=2, user_api='blas'):
# for i in range(len(alphas)):
# MSE, ROUGH = reg_test_invert(inv, G, d, alphas[i], constraints)
# _mse[i] = MSE
# _rough[i] = ROUGH
# # obtain turning point of "L-curve" (minimum in this case) ...
# # and take that as the optimal value for alpha.
# # + 2 because you lose two points by differentiating twice from the ...
# # finite difference approximations
# pt = np.abs(np.diff(np.log10(_rough), 2)).argmin() + 2
# return _mse, _rough, pt
# def do_norm_test(inv, alphas, root="../mlinversion/.norm"):
# try:
# a_comp = np.load(f"{root}/alphas.npy")
# if not np.array_equal(a_comp, alphas):
# print("new alphas detected...")
# delete_directory(f"{root}/")
# else:
# print("alphas unchanged...")
# _mse = np.load(f"{root}/mse.npy")
# _rough = np.load(f"{root}/rough.npy")
# best_i = np.load(f"{root}/besti.npy")
# print("loaded local files...")
# except FileNotFoundError:
# print("running inversions...")
# _mse, _rough, best_i = regularisation_optimiser(inv, G, d, alphas)
# os.makedirs(f"{root}/", exist_ok=True)
# np.save(f"{root}/mse.npy", _mse)
# np.save(f"{root}/rough.npy", _rough)
# np.save(f"{root}/besti.npy", best_i)
# np.save(f"{root}/alphas.npy", alphas)
# print(f"Best alpha is {alphas[best_i]:.2f}.")
# return _mse, _rough, best_i
inv = Inversion("YPML")
alphas = np.round(np.geomspace(1E-1, 500, 50), 3)
_mse, _rough, best_i = do_norm_test(inv, G, d, alphas)
best_i = np.abs((alphas-22.8)).argmin()
best_i
###Output
_____no_output_____
###Markdown
We obtain an "L-curve" and choose the value for alpha at the point of maximum curvature (2nd derivative is minimum).
###Code
def plot_regularisation_optimiser(mse, rough, alphas, best_i, save=""):
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
fac = 1
fig, ax = plt.subplots(1,1, figsize=(6*fac, 5*fac))
out = ax.scatter(mse, rough, c=alphas, cmap="jet", norm=matplotlib.cm.colors.LogNorm())
ax.loglog(mse[best_i], rough[best_i], 'ks', ms=10, mfc='none', label=r"$\alpha$={:.2f}".format(alphas[best_i]))
ax.set_xlabel("MSE", fontsize=14)
ax.set_ylabel(r"$||\dfrac{\mathrm{d^2log(A_{0})}}{\mathrm{d}R^2}||^2$", fontsize=14)
ax.legend()
fig.tight_layout()
ax.set_ylim([0.05, 1.1])
ax.set_xlim([0.035, 0.051])
ax.tick_params(labelsize=12)
ax.yaxis.set_ticks([0.05, 0.1, 0.5, 1])
ax.xaxis.set_ticks([0.04, 0.05])
ax.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, _: '{:g}'.format(y)))
ax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, _: '{:g}'.format(x)))
cax = inset_axes(
ax,
height="66%", # width = 10% of parent_bbox width
width="3%", # height : 50%
loc='center right',
bbox_to_anchor=(-0.1, 0, 1, 1),
bbox_transform=ax.transAxes,
borderpad=0.1,
)
# colorbar
cb = fig.colorbar(
out, cax=cax, orientation='vertical'
)
cb.set_label(
r"$\alpha$", size=20, labelpad=10, rotation=-360
)
cb.ax.yaxis.set_ticks([500, 100, 50, 10, 5, 1, 0.5, 0.1])
cb.ax.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, _: '{:g}'.format(x)))
cb.ax.yaxis.set_label_position("left")
cb.ax.tick_params(labelsize=10)
if save:
fig.savefig(save)
plot_regularisation_optimiser(_mse, _rough, alphas, best_i, "../figures/NORM.pdf")
###Output
_____no_output_____
###Markdown
Visualise the effect of changing $\alpha$ on the recovered model.
###Code
C = Constraints(G.term_map, OrderedDict(Sj={"SUM":0},))
@interact
def alpha_vis(alpha=np.round([1E-1, 3, 5, 7.38, 10, 20], 2)):
global C
plt.close()
Gamma = Regularisation(G.term_map, regs=OrderedDict(logA0n=dict(kind="FD", alpha=alpha)))
inv = Inversion("YPML")
inv.invert(G, d, constraints=C, regularisation=Gamma)
r = Richter1958()
r.epi_to_hypo(av_dep=10.5)
fig, axes = vis_inv_output(inv)
axes[0].plot(r.distances[r.distances<=180], r.logA0[r.distances<=180], 'g^', label="R58", zorder=1, alpha=0.65)
axes[0].plot(18, -1.6, 'r.', mfc='none', label="anchor point")
axes[0].legend()
axes[2].set_title(f"\u03A3 Sj={np.sum(G.term_map.get_term('Sj')['model_values']):.1E}")
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Station-amplitude-distance distributions Can we speculate about which stations we might expect to show a significant bias by looking at the KDEs of the distance distributions?
###Code
@interact
def plot_sta_kdes(station=for_inv.Sta.unique(), dist_metric=["Rhyp", "Repi"]):
plt.figure(figsize=(12, 7))
out = for_inv.groupby("Sta")[dist_metric].plot.kde(logx=True)
plt.xlim([5, 250])
plt.xlabel(r"${}$".format(f"{dist_metric[0]}_{'{'}{dist_metric[1:]}{'}'}"))
for_inv[for_inv.Sta.isin([station,])].groupby("Sta")[dist_metric].plot.kde(color='k', lw=5)
###Output
_____no_output_____
###Markdown
Visualise the effect of arbitrary constraints on the recovered model.
###Code
c1 = OrderedDict(#logA0n={18:-1.6},
Sj={"SUM":0},)
# Sj={"SUM":0, "MB.BUT":-0.23, #"US.AHID":-0.43},
# "US.BOZ":0.17}, #"US.BW06":-0.15},)
# MLi=MLcons)
c2 = OrderedDict(logA0n={18:-1.6},
Sj={"SUM":0},)
# Sj={"SUM":0, "MB.BUT":-0.23, #"US.AHID":-0.43},
# "US.BOZ":0.17}, #"US.BW06":-0.15},)
# MLi=MLcons)
c3 = OrderedDict(#logA0n={18:-1.6},
Sj={"SUM":0},
# Sj={"SUM":0, "MB.BUT":-0.23, #"US.AHID":-0.43},
# "US.BOZ":0.17}, #"US.BW06":-0.15},)
MLi=MLcons)
c4 = OrderedDict(Sj={"SUM":0},
# Sj={"SUM":0, "MB.BUT":-0.23, "US.AHID":-0.43},
# "US.BOZ":0.17}, #"US.BW06":-0.15},)
MLi=MLcons)
c5 = OrderedDict(logA0n={18:-1.6},
# Sj={"SUM":0},
Sj={"SUM":0, "MB.BUT":-0.23, "US.AHID":-0.43,
"US.BOZ":0.17, "US.BW06":-0.15},)
# MLi=MLcons)
c6 = OrderedDict(logA0n={18:-1.6},
# Sj={"SUM":0},
Sj={"SUM":0, "MB.BUT":-0.23, "US.AHID":-0.43,
"US.BOZ":0.17, "US.BW06":-0.15},
MLi=MLcons)
c7 = OrderedDict(#logA0n={18:-1.6},
# Sj={"SUM":0},)
Sj={"SUM":0, "MB.BUT":-0.23, "US.AHID":-0.43,
"US.BOZ":0.17, "US.BW06":-0.15},
MLi=MLcons)
c8 = OrderedDict(#logA0n={18:-1.6},
# Sj={"SUM":0},)
Sj={"SUM":0, "MB.BUT":-0.23, "US.AHID":-0.43,
"US.BOZ":0.17, "US.BW06":-0.15},)
# MLi=MLcons)
cons = [c1, c2, c3, c4, c5, c6, c7, c8]
Cs = [None]+[Constraints(G.term_map, constraints=con) for con in cons]
@interact
def constraints_vis(C=Cs, A=-np.round(np.arange(0.3, 1.1, 0.1), 1)):
plt.close()
inv = Inversion("YPML")
Gamma = Regularisation(G.term_map, regs=OrderedDict(logA0n=dict(kind="FD", alpha=alphas[best_i])))
inv.invert(G, d, regularisation=Gamma, constraints=C)
r = Richter1958()
r.epi_to_hypo(av_dep=10.5)
fig, axes = vis_inv_output(inv)
axes[0].plot(r.distances[r.distances<=180], r.logA0[r.distances<=180], 'g^', label="R58", zorder=1, alpha=0.65)
axes[0].hist2d(sims.rhyp, sims.A+A, bins=(30,25), cmap="gnuplot", norm=matplotlib.cm.colors.LogNorm(),)
axes[0].plot(18, -1.6, 'r.', mfc='none', label="anchor point")
axes[0].legend()
axes[2].set_title(f"\u03A3 Sj={np.sum(G.term_map.get_term('Sj')['model_values']):.1E}")
axes[2].plot(["MB.BUT", "US.AHID", "US.BOZ", "US.BW06"], [-0.23, -0.43, 0.17, -0.15], 'ro', label='UUSS')
axes[2].legend()
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Save the chosen model
###Code
logA0np = Term("logA0n", "LINEAR INTERPOLATION", for_inv.Rhyp.values, unique_labels=interp_nodes)
MLp = Term("MLi", "CONSTANT", evs)
Sjp = Term("Sj", "CONSTANT", stas, sign=-1)
logA0n = LinInterpCoeffs(logA0np)
MLi = ConstantCoeffs(MLp)
Sj = ConstantCoeffs(Sjp)
G = logA0n + MLi + Sj
d = DataArray(for_inv['halfAmpH'].apply(np.log10))
C = Constraints(G.term_map, constraints=c4)
inv = Inversion("YPML")
Gamma = Regularisation(G.term_map, regs=OrderedDict(logA0n=dict(kind="FD", alpha=alphas[best_i])))
inv.invert(G, d, regularisation=Gamma, constraints=C)
def build_name(t_m):
name = []
for term, out in t_m.items():
for k, v in out.items():
if k.lower() == 'constraints':
if v:
name.append(term)
name.append(str(len(v)))
return "model-" + "-".join(name)
root = "../mlinversion/.model/"
t_m = inv.m.term_map.values
os.makedirs(f"{root}", exist_ok=True)
fname = os.path.join(root, f"{inv.name}-{build_name(t_m)}.json")
print(f"saved {fname}")
pd.DataFrame(t_m).to_json(fname)
logA0 = inv.G.term_map.get_term("logA0n")['model_values']
R = inv.G.term_map.get_term("logA0n")['unique_labels']
sj = {
n: v for n, v in zip(
inv.G.term_map.get_term("Sj")['unique_labels'],
inv.G.term_map.get_term("Sj")['model_values']
)
}
mws = np.array(list(inv.G.term_map.get_term("MLi")["constraints"].values()), dtype=float)
evids = np.array(list(inv.G.term_map.get_term("MLi")["constraints"].keys()), dtype=int)
As=[];Rhyps=[];Stas=[];
for evid in evids:
As.append(for_inv[for_inv.Evid==evid][["halfAmpH", "Rhyp"]].sort_values("Rhyp")["halfAmpH"].values)
Rhyps.append(for_inv[for_inv.Evid==evid][["halfAmpH", "Rhyp"]].sort_values("Rhyp")["Rhyp"].values)
Stas.append((for_inv[for_inv.Evid==evid][["Net", "Rhyp"]].sort_values("Rhyp")["Net"] + "." + for_inv[for_inv.Evid==evid][["Sta", "Rhyp"]].sort_values("Rhyp")["Sta"]).values)
# warning, the sub-arrays have different lengths
As = np.array(As, dtype='object')
Rhyps = np.array(Rhyps, dtype='object')
Stas = np.array(Stas, dtype='object')
Stacs = np.array([np.vectorize(sj.get)(Sta) for Sta in Stas], dtype='object')
logA0s = np.array([np.interp(Rhyp, R, logA0) for Rhyp in Rhyps], dtype='object')
fac = 1.25
nsub = 2
fig, axes = plt.subplots(2, nsub, figsize=(6*nsub*fac, 5*nsub*fac))
axes=axes.flatten()
axes[3].plot(R, -logA0, 'k', label=r"YP21 $\mathrm{-log_{10}A_0}$")
cols = ['orange', 'blue', 'green', 'red']
MLs = []; MLs_sta = [];
for i, evid in enumerate(evids):
axes[0].plot(
Rhyps[i],
np.log10(As[i]),
'o',
mfc='none',
color=cols[i],
label=f"{str(evid)} | {r'$M_w$'} {mws[i]:.2f}"
)
axes[3].plot(
Rhyps[i],
-logA0s[i],
's',
color=cols[i],
label='predicted',
)
axes[3].plot(
Rhyps[i],
-(np.log10(As[i])+Stacs[i]-mws[i]),
'^',
color=cols[i],
label='actual from inversion'
)
axes[1].plot(Rhyps[i], np.log10(As[i])-logA0s[i], 'o', color=cols[i])
axes[1].plot(Rhyps[i], np.log10(As[i])-logA0s[i]+Stacs[i], 'o', mfc='none', color=cols[i])
mean = (np.log10(As[i]) - logA0s[i]).mean()
means = (np.log10(As[i]) - logA0s[i] + Stacs[i]).mean()
axes[1].plot(Rhyps[i], [mean for _ in Rhyps[i]], '-', color=cols[i], label=f"{r'$M_L$'}={mean:.2f}")
axes[1].plot(Rhyps[i], [means for _ in Rhyps[i]], '--', color=cols[i], label=f"{r'$M_L$ (w/Sj)'}={means:.2f}")
MLs.append(mean)
MLs_sta.append(means)
axes[2].plot(mws[i], MLs[i]-mws[i], 'o', color=cols[i])
axes[2].plot(mws[i], MLs_sta[i]-mws[i], 'o', color=cols[i], mfc='none')
axes[2].hlines(
np.mean(np.array(MLs) - mws),
mws.min(),
mws.max(),
color='k'
)
axes[2].hlines(
np.mean(np.array(MLs_sta) - mws),
mws.min(),
mws.max(),
color='k',
linestyles='dashed'
)
axes[2].text(0.1, 0.7,
f"{r'$L_2$'}={np.linalg.norm(np.array(MLs_sta) - mws, 2):.2f}",
horizontalalignment='center',
verticalalignment='center',
transform=axes[2].transAxes
)
axes[0].legend(loc='lower left')
axes[1].legend()
axes[0].set_xlabel(r"$R_{hyp}$")
axes[0].set_ylabel(r"$\mathrm{log_{10}A}$")
axes[1].set_xlabel(r"$R_{hyp}$")
axes[1].set_ylabel(r"$M_L$")
axes[2].set_xlabel(r"$M_w$")
axes[2].set_ylabel(r"$M_{L} - M_{w}$")
axes[3].set_ylabel(r"$\mathrm{-log_{10}A_0}$")
axes[3].set_xlabel(r"$R_{hyp}$")
axes[3].legend()
fig.savefig("../figures/INV-DISAGREEMENT-EXPLAN.pdf")
###Output
_____no_output_____ |
azure_code/Training_Testing_MultiK_RF.ipynb | ###Markdown
Training classifiers for each value of K using saved features
###Code
import os
import numpy as np
import pickle
import pandas as pd
data_dir = os.path.join(os.getcwd(),'BlobStorage')
train_data_df = pd.read_pickle(data_dir+'/train_data_features_df.pkl')
val_data_df = pd.read_pickle(data_dir+'/val_data_features_df.pkl')
#Combining train and val data
train_val_data_df = pd.concat([train_data_df,val_data_df])
#Training a classifier for each value of K.
from sklearn.ensemble import RandomForestClassifier
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
classesNow = line[1:]
print(modelName)
#Subsetting dataframe for only the classes being used now.
train_now_df = train_val_data_df[train_val_data_df['class_name'].isin(classesNow)]
X_train_val = train_now_df.img_features.apply(pd.Series)
y_train_val = train_now_df['class_name'].astype('category')
#training randomforest
mdl_rf = RandomForestClassifier(n_estimators=1000,random_state=0,verbose=1,n_jobs=-1, min_samples_split= 2, min_samples_leaf= 1, max_features= 'auto', max_depth= 60, bootstrap= False)
clf_fit = mdl_rf.fit(X_train_val, y_train_val)
#Saving baseline model
pickle.dump(clf_fit, open('trained_models/'+ modelName + '.sav', 'wb'))
f.close()
###Output
model10
###Markdown
Using trained classifiers to predict on test data for each K. Saving predictions
###Code
import os
import numpy as np
import pickle
import pandas as pd
data_dir = os.path.join(os.getcwd(),'BlobStorage')
test_data_df = pd.read_pickle(data_dir+'/test_data_features_df.pkl')
X_test = test_data_df.img_features.apply(pd.Series)
y_test = test_data_df['class_name'].astype('category')
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
classesNow = line[1:]
print(modelName)
clf_fit = pickle.load(open('trained_models/'+ modelName + '.sav', 'rb'))
# evaluate the model on test data
yhat_clf = clf_fit.predict(X_test)
pred_df = pd.DataFrame(data=yhat_clf, index=test_data_df['image_paths'], columns=['max_prob'])
pred_df.to_pickle('predictions/'+modelName+'.pkl')
#Finding prob predictions for all classes
yhat_clf_prob = clf_fit.predict_proba(X_test)
pred_df = pd.DataFrame(data=yhat_clf_prob, index=test_data_df['image_paths'], columns=clf_fit.classes_)
pred_df.to_pickle('predictions/all_categories/'+modelName+'.pkl')
f.close()
###Output
_____no_output_____
###Markdown
Generating close word dict from FastText for each K
###Code
#Finding closest words to top predictions on testing set
import math
import pickle
from scipy.spatial import distance
#from itertools import islice
#def take(n, iterable):
# "Return first n items of the iterable as a list"
# return list(islice(iterable, n))
def scipy_distance(v, u):
return distance.euclidean(v, u)
#Reading the fasttext dictionary populated at clustering phase
fastext_dict = pickle.load(open("fasttext/fastext_dict.pkl","rb"))
print(len(fastext_dict))
#print(fastext_dict.keys())
#print(fastext_dict['car'])
#total_classes = 379
dict_keys = list(fastext_dict.keys())
#Generating the close words dictionary for all dictionary keys
closeWords_Count = 6
closeWord_dict = {}
for word in dict_keys:
distance_dict = {}
for fast_word in dict_keys:
dist = scipy_distance(fastext_dict[word],fastext_dict[fast_word])
distance_dict[fast_word] = dist
#sorted_distace_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1],reverse = True)[:closeWords_Count+1]}
closeWords_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1])[:closeWords_Count]}
closeWord_dict[word] = list(closeWords_dict.keys())
pickle.dump(closeWord_dict, open('close_word_dict/closeWord_dict.pkl', 'wb'))
#Generating the close words dictionary for each model
closeWords_Count = 6
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
print(modelName)
classesNow = line[1:]
closeWord_dict = {}
for word in classesNow:
distance_dict = {}
for fast_word in dict_keys:
dist = scipy_distance(fastext_dict[word],fastext_dict[fast_word])
distance_dict[fast_word] = dist
#sorted_distace_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1],reverse = True)[:closeWords_Count+1]}
closeWords_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1])[:closeWords_Count]}
closeWord_dict[word] = list(closeWords_dict.keys())
pickle.dump(closeWord_dict, open('close_word_dict/'+ modelName + '_closeWord_dict.pkl', 'wb'))
#pred_df = pd.read_csv('predictions/'+modelName+'.txt', header=True, index=True, sep=',')
f.close()
###Output
_____no_output_____
###Markdown
Running final predictions from classifier and close word dict
###Code
import os
import numpy as np
import pickle
import pandas as pd
data_dir = os.path.join(os.getcwd(),'BlobStorage')
test_data_df = pd.read_pickle(data_dir+'/test_data_features_df.pkl')
y_test_df = pd.DataFrame(test_data_df.set_index('image_paths').class_name)
closeWord_dict = pickle.load(open('close_word_dict/closeWord_dict.pkl',"rb"))
#Running final predictions for top 3 predictions from classifier
h = open("Kmodels_final_accuracy.txt", "w")
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
print(modelName)
#Reading the predictions for each model
pred_df = pd.read_pickle('predictions/all_categories/'+modelName+'.pkl')
#Finding top 3 predictions
top_n_predictions = np.argsort(pred_df.values, axis = 1)[:,-3:]
#then find the associated code for each prediction
top_class = pred_df.columns[top_n_predictions]
top_class_df = pd.DataFrame(data=top_class,columns=['top1','top2','top3'],index = pred_df.index)
results = pd.merge(y_test_df, top_class_df, left_index=True, right_index=True)
#closeWord_dict = pickle.load(open('close_word_dict/'+ modelName + '_closeWord_dict.pkl',"rb"))
results['guesses_1'] = results['top1'].map(closeWord_dict)
results['guesses_2'] = results['top2'].map(closeWord_dict)
results['guesses_3'] = results['top3'].map(closeWord_dict)
pred_check = []
#pred_df['pred_check'] = np.where(pred_df['actual_label'] in pred_df['guesses'],1,0)
for index,row in results.iterrows():
if (row['class_name'] in row['guesses_1']) or (row['class_name'] in row['guesses_2']) or (row['class_name'] in row['guesses_3']):
pred_check.append(1)
else:
pred_check.append(0)
results['pred_check'] = pred_check
total_right = results['pred_check'].sum()
total_rows = len(pred_df)
accuracy = round(total_right/total_rows,4)
h.write(str(modelName) + ',' + str(accuracy) + '\n')
f.close()
h.close()
#Running final predictions for single predictions
h = open("Kmodels_singlePred_final_accuracy.txt", "w")
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
print(modelName)
#Reading the predictions for each model
pred_df = pd.read_pickle('predictions/'+modelName+'.pkl')
results = pd.merge(y_test_df, pred_df, left_index=True, right_index=True)
closeWord_dict = pickle.load(open('close_word_dict/'+ modelName + '_closeWord_dict.pkl',"rb"))
results['guesses'] = results['max_prob'].map(closeWord_dict)
pred_check = []
#pred_df['pred_check'] = np.where(pred_df['actual_label'] in pred_df['guesses'],1,0)
for index,row in results.iterrows():
if row['class_name'] in row['guesses']:
pred_check.append(1)
else:
pred_check.append(0)
results['pred_check'] = pred_check
total_right = results['pred_check'].sum()
total_rows = len(pred_df)
accuracy = round(total_right/total_rows,4)
h.write(str(modelName) + ',' + str(accuracy) + '\n')
f.close()
h.close()
###Output
_____no_output_____ |
notebooks/from-zero-to-hero-tutorial/06_loggers.ipynb | ###Markdown
---description: "Logging... logging everywhere! \U0001F52E"--- Loggers Welcome to the _"Logging"_ tutorial of the _"From Zero to Hero"_ series. In this part we will present the functionalities offered by the _Avalanche_ `logging` module.
###Code
!pip install git+https://github.com/ContinualAI/avalanche.git
###Output
_____no_output_____
###Markdown
📑 The Logging Module In the previous tutorial we learned how to evaluate a continual learning algorithm in _Avalanche_ through different metrics that can be used _off-the-shelf_ via the _Evaluation Plugin_ or stand-alone. However, computing metrics and collecting results may not be enough at times. While running complex experiments with long _waiting times_, **logging** results over time is fundamental to "_babysit_" your experiments in real-time, or even to understand what went wrong in the aftermath. This is why in Avalanche we decided to put a strong emphasis on logging and **provide a number of loggers** that can be used with any set of metrics! Loggers _Avalanche_ at the moment supports three main Loggers: * **InteractiveLogger**: This logger provides a nice progress bar and displays real-time metric results in an interactive way \(meant for `stdout`\). * **TextLogger**: This logger, mostly intended for file logging, is the plain-text version of the `InteractiveLogger`. Keep in mind that it may be very verbose. * **TensorboardLogger**: It logs all the metrics on [Tensorboard](https://www.tensorflow.org/tensorboard) in real-time. Perfect for real-time plotting. In order to keep track of when each metric value has been logged, we leverage a `global counter`. You can see the `global counter` reported on the x axis of the logged plots. The `global counter` is an ever-increasing value which starts from 0 and is increased by one each time a training or evaluation iteration is performed (i.e. after each training or evaluation minibatch). The `global counter` is updated automatically by the strategy. It should be reset by creating a new instance of the strategy. How to use Them
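###Markdown
Before the full example below, here is a minimal sketch that strips the same API down to its essentials (an illustrative reduction, assuming only the imports already used in this tutorial): one metric attached to one logger through the `EvaluationPlugin`.
###Code
# minimal sketch: accuracy only, written to a plain-text log file
from avalanche.evaluation.metrics import accuracy_metrics
from avalanche.logging import TextLogger
from avalanche.training.plugins import EvaluationPlugin

minimal_plugin = EvaluationPlugin(
    accuracy_metrics(epoch=True, stream=True),
    loggers=[TextLogger(open('minimal_log.txt', 'a'))]
)
###Output
_____no_output_____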
###Code
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive
scenario = SplitMNIST(n_experiences=5)
# MODEL CREATION
model = SimpleMLP(num_classes=scenario.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
# log to Tensorboard
tb_logger = TensorboardLogger()
# log to text file
text_logger = TextLogger(open('log.txt', 'a'))
# print to stdout
interactive_logger = InteractiveLogger()
eval_plugin = EvaluationPlugin(
accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
timing_metrics(epoch=True, epoch_running=True),
cpu_usage_metrics(experience=True),
forgetting_metrics(experience=True, stream=True),
confusion_matrix_metrics(num_classes=scenario.n_classes, save_image=False,
stream=True),
disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loggers=[interactive_logger, text_logger, tb_logger]
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
model, SGD(model.parameters(), lr=0.001, momentum=0.9),
CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
evaluator=eval_plugin)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in scenario.train_stream:
print("Start of experience: ", experience.current_experience)
print("Current Classes: ", experience.classes_in_this_experience)
# train returns a dictionary which contains all the metric values
res = cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
# test also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(scenario.test_stream))
###Output
Starting experiment...
Start of experience: 0
Current Classes: [0, 9]
-- >> Start of training phase << --
-- Starting training on experience 0 (Task 0) from train stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 24/24 [00:14<00:00, 1.61it/s]
Epoch 0 ended.
DiskUsage_Epoch/train_phase/train_stream/Task000 = 720915.6689
DiskUsage_MB/train_phase/train_stream/Task000 = 720915.3457
Loss_Epoch/train_phase/train_stream/Task000 = 1.0546
Loss_MB/train_phase/train_stream/Task000 = 0.2314
RunningTime_Epoch/train_phase/train_stream/Task000 = 0.0334
Time_Epoch/train_phase/train_stream/Task000 = 14.4948
Top1_Acc_Epoch/train_phase/train_stream/Task000 = 0.7566
Top1_Acc_MB/train_phase/train_stream/Task000 = 0.9839
-- >> End of training phase << --
Training completed
Computing accuracy on the whole test set
-- >> Start of eval phase << --
-- Starting eval on experience 0 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.26it/s]
> Eval on experience 0 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 369.0000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 720916.5859
Loss_Exp/eval_phase/test_stream/Task000/Exp000 = 0.1841
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.9693
-- Starting eval on experience 1 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 21/21 [00:01<00:00, 16.39it/s]
> Eval on experience 1 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 364.3000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 720917.2822
Loss_Exp/eval_phase/test_stream/Task000/Exp001 = 4.4039
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0000
-- Starting eval on experience 2 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 19/19 [00:01<00:00, 16.62it/s]
> Eval on experience 2 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 372.0000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 720917.9785
Loss_Exp/eval_phase/test_stream/Task000/Exp002 = 4.9915
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0000
-- Starting eval on experience 3 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.13it/s]
> Eval on experience 3 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 358.5000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 720918.6748
Loss_Exp/eval_phase/test_stream/Task000/Exp003 = 4.7982
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000
-- Starting eval on experience 4 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 22/22 [00:01<00:00, 17.01it/s]
> Eval on experience 4 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 369.6000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 720919.3711
Loss_Exp/eval_phase/test_stream/Task000/Exp004 = 4.1429
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000
-- >> End of eval phase << --
ConfusionMatrix_Stream/eval_phase/test_stream =
tensor([[ 950, 0, 0, 0, 0, 0, 0, 0, 0, 30],
[ 6, 0, 0, 0, 0, 0, 0, 0, 0, 1129],
[ 510, 0, 0, 0, 0, 0, 0, 0, 0, 522],
[ 397, 0, 0, 0, 0, 0, 0, 0, 0, 613],
[ 38, 0, 0, 0, 0, 0, 0, 0, 0, 944],
[ 416, 0, 0, 0, 0, 0, 0, 0, 0, 476],
[ 448, 0, 0, 0, 0, 0, 0, 0, 0, 510],
[ 35, 0, 0, 0, 0, 0, 0, 0, 0, 993],
[ 154, 0, 0, 0, 0, 0, 0, 0, 0, 820],
[ 31, 0, 0, 0, 0, 0, 0, 0, 0, 978]])
DiskUsage_Stream/eval_phase/test_stream = 720920.0059
Loss_Stream/eval_phase/test_stream = 3.6963
Top1_Acc_Stream/eval_phase/test_stream = 0.1928
Start of experience: 1
Current Classes: [2, 7]
-- >> Start of training phase << --
-- Starting training on experience 1 (Task 0) from train stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:15<00:00, 1.66it/s]
Epoch 0 ended.
DiskUsage_Epoch/train_phase/train_stream/Task000 = 720948.1162
DiskUsage_MB/train_phase/train_stream/Task000 = 720947.7930
Loss_Epoch/train_phase/train_stream/Task000 = 1.8814
Loss_MB/train_phase/train_stream/Task000 = 0.3365
RunningTime_Epoch/train_phase/train_stream/Task000 = 0.0324
Time_Epoch/train_phase/train_stream/Task000 = 14.7142
Top1_Acc_Epoch/train_phase/train_stream/Task000 = 0.4811
Top1_Acc_MB/train_phase/train_stream/Task000 = 0.9596
-- >> End of training phase << --
Training completed
Computing accuracy on the whole test set
-- >> Start of eval phase << --
-- Starting eval on experience 0 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.26it/s]
> Eval on experience 0 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 356.7000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 720951.2979
Loss_Exp/eval_phase/test_stream/Task000/Exp000 = 2.6347
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0850
-- Starting eval on experience 1 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 21/21 [00:01<00:00, 16.54it/s]
> Eval on experience 1 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 357.1000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 720952.0020
Loss_Exp/eval_phase/test_stream/Task000/Exp001 = 0.2836
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.9578
-- Starting eval on experience 2 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 19/19 [00:01<00:00, 16.20it/s]
> Eval on experience 2 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 360.1000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 720952.7061
Loss_Exp/eval_phase/test_stream/Task000/Exp002 = 4.8760
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0000
-- Starting eval on experience 3 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.78it/s]
> Eval on experience 3 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 377.6000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 720953.4102
Loss_Exp/eval_phase/test_stream/Task000/Exp003 = 4.9460
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000
-- Starting eval on experience 4 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 22/22 [00:01<00:00, 17.54it/s]
> Eval on experience 4 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 374.7000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 720954.1143
Loss_Exp/eval_phase/test_stream/Task000/Exp004 = 4.0850
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000
-- >> End of eval phase << --
ConfusionMatrix_Stream/eval_phase/test_stream =
tensor([[169, 0, 697, 0, 0, 0, 0, 114, 0, 0],
[ 0, 0, 549, 0, 0, 0, 0, 586, 0, 0],
[ 0, 0, 991, 0, 0, 0, 0, 41, 0, 0],
[ 0, 0, 762, 0, 0, 0, 0, 248, 0, 0],
[ 0, 0, 147, 0, 0, 0, 0, 835, 0, 0],
[ 0, 0, 537, 0, 0, 0, 0, 355, 0, 0],
[ 0, 0, 928, 0, 0, 0, 0, 30, 0, 0],
[ 0, 0, 46, 0, 0, 0, 0, 982, 0, 0],
[ 0, 0, 730, 0, 0, 0, 0, 244, 0, 0],
[ 0, 0, 51, 0, 0, 0, 0, 958, 0, 0]])
###Markdown
---description: "Logging... logging everywhere! \U0001F52E"--- Loggers Welcome to the _"Logging"_ tutorial of the _"From Zero to Hero"_ series. In this part we will present the functionalities offered by the _Avalanche_ `logging` module.
###Code
!pip install avalanche-lib
###Output
_____no_output_____
###Markdown
📑 The Logging Module In the previous tutorial we learned how to evaluate a continual learning algorithm in _Avalanche_ through different metrics that can be used _off-the-shelf_ via the _Evaluation Plugin_ or stand-alone. However, computing metrics and collecting results may not be enough at times. While running complex experiments with long _waiting times_, **logging** results over time is fundamental to "_babysit_" your experiments in real-time, or even to understand what went wrong in the aftermath. This is why in Avalanche we decided to put a strong emphasis on logging and **provide a number of loggers** that can be used with any set of metrics! Loggers _Avalanche_ at the moment supports four main Loggers: * **InteractiveLogger**: This logger provides a nice progress bar and displays real-time metric results in an interactive way \(meant for `stdout`\). * **TextLogger**: This logger, mostly intended for file logging, is the plain-text version of the `InteractiveLogger`. Keep in mind that it may be very verbose. * **TensorboardLogger**: It logs all the metrics on [Tensorboard](https://www.tensorflow.org/tensorboard) in real-time. Perfect for real-time plotting. * **WandBLogger**: It leverages [Weights and Biases](https://wandb.ai/site) tools to log metrics and results on a dashboard. It requires a W&B account. In order to keep track of when each metric value has been logged, we leverage two `global counters`, one for the training phase and one for the evaluation phase. You can see the `global counter` value reported on the x axis of the logged plots. Each `global counter` is an ever-increasing value which starts from 0 and is increased by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation minibatch). The `global counters` are updated automatically by the strategy. How to use loggers
###Code
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger, WandBLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive
benchmark = SplitMNIST(n_experiences=5, return_task_id=False)
# MODEL CREATION
model = SimpleMLP(num_classes=benchmark.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
loggers = []
# log to Tensorboard
loggers.append(TensorboardLogger())
# log to text file
loggers.append(TextLogger(open('log.txt', 'a')))
# print to stdout
loggers.append(InteractiveLogger())
# W&B logger - comment this if you don't have a W&B account
loggers.append(WandBLogger(project_name="avalanche", run_name="test"))
eval_plugin = EvaluationPlugin(
accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
timing_metrics(epoch=True, epoch_running=True),
cpu_usage_metrics(experience=True),
forgetting_metrics(experience=True, stream=True),
confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=True,
stream=True),
disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loggers=loggers,
benchmark=benchmark
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
model, SGD(model.parameters(), lr=0.001, momentum=0.9),
CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
evaluator=eval_plugin)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in benchmark.train_stream:
# train returns a dictionary which contains all the metric values
res = cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
# test also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(benchmark.test_stream))
# need to manually call W&B run end since we are in a notebook
import wandb
wandb.finish()
###Output
_____no_output_____
###Markdown
---description: "Logging... logging everywhere! \U0001F52E"--- Loggers Welcome to the _"Logging"_ tutorial of the _"From Zero to Hero"_ series. In this part we will present the functionalities offered by the _Avalanche_ `logging` module.
###Code
!pip install git+https://github.com/ContinualAI/avalanche.git
###Output
_____no_output_____
###Markdown
📑 The Logging Module In the previous tutorial we learned how to evaluate a continual learning algorithm in _Avalanche_ through different metrics that can be used _off-the-shelf_ via the _Evaluation Plugin_ or stand-alone. However, computing metrics and collecting results may not be enough at times. While running complex experiments with long _waiting times_, **logging** results over time is fundamental to "_babysit_" your experiments in real-time, or even to understand what went wrong in the aftermath. This is why in Avalanche we decided to put a strong emphasis on logging and **provide a number of loggers** that can be used with any set of metrics! Loggers _Avalanche_ at the moment supports four main Loggers: * **InteractiveLogger**: This logger provides a nice progress bar and displays real-time metric results in an interactive way \(meant for `stdout`\). * **TextLogger**: This logger, mostly intended for file logging, is the plain-text version of the `InteractiveLogger`. Keep in mind that it may be very verbose. * **TensorboardLogger**: It logs all the metrics on [Tensorboard](https://www.tensorflow.org/tensorboard) in real-time. Perfect for real-time plotting. * **WandBLogger**: It leverages [Weights and Biases](https://wandb.ai/site) tools to log metrics and results on a dashboard. It requires a W&B account. In order to keep track of when each metric value has been logged, we leverage two `global counters`, one for the training phase and one for the evaluation phase. You can see the `global counter` value reported on the x axis of the logged plots. Each `global counter` is an ever-increasing value which starts from 0 and is increased by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation minibatch). The `global counters` are updated automatically by the strategy. How to use loggers
###Code
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger, WandBLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive
benchmark = SplitMNIST(n_experiences=5, return_task_id=False)
# MODEL CREATION
model = SimpleMLP(num_classes=benchmark.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
loggers = []
# log to Tensorboard
loggers.append(TensorboardLogger())
# log to text file
loggers.append(TextLogger(open('log.txt', 'a')))
# print to stdout
loggers.append(InteractiveLogger())
# W&B logger - comment this if you don't have a W&B account
loggers.append(WandBLogger(project_name="avalanche", run_name="test"))
eval_plugin = EvaluationPlugin(
accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
timing_metrics(epoch=True, epoch_running=True),
cpu_usage_metrics(experience=True),
forgetting_metrics(experience=True, stream=True),
confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=True,
stream=True),
disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loggers=loggers,
benchmark=benchmark
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
model, SGD(model.parameters(), lr=0.001, momentum=0.9),
CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
evaluator=eval_plugin)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in benchmark.train_stream:
# train returns a dictionary which contains all the metric values
res = cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
# test also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(benchmark.test_stream))
# need to manually call W&B run end since we are in a notebook
import wandb
wandb.finish()
###Output
_____no_output_____
###Markdown
---description: "Logging... logging everywhere! \U0001F52E"--- Loggers Welcome to the _"Logging"_ tutorial of the _"From Zero to Hero"_ series. In this part we will present the functionalities offered by the _Avalanche_ `logging` module.
###Code
!pip install git+https://github.com/ContinualAI/avalanche.git
###Output
_____no_output_____
###Markdown
📑 The Logging Module In the previous tutorial we learned how to evaluate a continual learning algorithm in _Avalanche_ through different metrics that can be used _off-the-shelf_ via the _Evaluation Plugin_ or stand-alone. However, computing metrics and collecting results may not be enough at times. While running complex experiments with long _waiting times_, **logging** results over time is fundamental to "_babysit_" your experiments in real-time, or even to understand what went wrong in the aftermath. This is why in Avalanche we decided to put a strong emphasis on logging and **provide a number of loggers** that can be used with any set of metrics! Loggers _Avalanche_ at the moment supports three main Loggers: * **InteractiveLogger**: This logger provides a nice progress bar and displays real-time metric results in an interactive way \(meant for `stdout`\). * **TextLogger**: This logger, mostly intended for file logging, is the plain-text version of the `InteractiveLogger`. Keep in mind that it may be very verbose. * **TensorboardLogger**: It logs all the metrics on [Tensorboard](https://www.tensorflow.org/tensorboard) in real-time. Perfect for real-time plotting. In order to keep track of when each metric value has been logged, we leverage a `global counter`. You can see the `global counter` reported on the x axis of the logged plots. The `global counter` is an ever-increasing value which starts from 0 and is increased by one each time a training or evaluation iteration is performed (i.e. after each training or evaluation minibatch). The `global counter` is updated automatically by the strategy. It should be reset by creating a new instance of the strategy. How to use Them
###Code
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
StreamConfusionMatrix, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive
scenario = SplitMNIST(n_experiences=5)
# MODEL CREATION
model = SimpleMLP(num_classes=scenario.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
# log to Tensorboard
tb_logger = TensorboardLogger()
# log to text file
text_logger = TextLogger(open('log.txt', 'a'))
# print to stdout
interactive_logger = InteractiveLogger()
eval_plugin = EvaluationPlugin(
accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
timing_metrics(epoch=True, epoch_running=True),
cpu_usage_metrics(experience=True),
forgetting_metrics(experience=True, stream=True),
StreamConfusionMatrix(num_classes=scenario.n_classes, save_image=False),
disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loggers=[interactive_logger, text_logger, tb_logger]
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
model, SGD(model.parameters(), lr=0.001, momentum=0.9),
CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
evaluator=eval_plugin)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in scenario.train_stream:
print("Start of experience: ", experience.current_experience)
print("Current Classes: ", experience.classes_in_this_experience)
# train returns a dictionary which contains all the metric values
res = cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
# test also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(scenario.test_stream))
###Output
Starting experiment...
Start of experience: 0
Current Classes: [0, 9]
-- >> Start of training phase << --
-- Starting training on experience 0 (Task 0) from train stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 24/24 [00:14<00:00, 1.61it/s]
Epoch 0 ended.
DiskUsage_Epoch/train_phase/train_stream/Task000 = 720915.6689
DiskUsage_MB/train_phase/train_stream/Task000 = 720915.3457
Loss_Epoch/train_phase/train_stream/Task000 = 1.0546
Loss_MB/train_phase/train_stream/Task000 = 0.2314
RunningTime_Epoch/train_phase/train_stream/Task000 = 0.0334
Time_Epoch/train_phase/train_stream/Task000 = 14.4948
Top1_Acc_Epoch/train_phase/train_stream/Task000 = 0.7566
Top1_Acc_MB/train_phase/train_stream/Task000 = 0.9839
-- >> End of training phase << --
Training completed
Computing accuracy on the whole test set
-- >> Start of eval phase << --
-- Starting eval on experience 0 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.26it/s]
> Eval on experience 0 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 369.0000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 720916.5859
Loss_Exp/eval_phase/test_stream/Task000/Exp000 = 0.1841
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.9693
-- Starting eval on experience 1 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 21/21 [00:01<00:00, 16.39it/s]
> Eval on experience 1 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 364.3000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 720917.2822
Loss_Exp/eval_phase/test_stream/Task000/Exp001 = 4.4039
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0000
-- Starting eval on experience 2 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 19/19 [00:01<00:00, 16.62it/s]
> Eval on experience 2 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 372.0000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 720917.9785
Loss_Exp/eval_phase/test_stream/Task000/Exp002 = 4.9915
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0000
-- Starting eval on experience 3 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.13it/s]
> Eval on experience 3 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 358.5000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 720918.6748
Loss_Exp/eval_phase/test_stream/Task000/Exp003 = 4.7982
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000
-- Starting eval on experience 4 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 22/22 [00:01<00:00, 17.01it/s]
> Eval on experience 4 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 369.6000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 720919.3711
Loss_Exp/eval_phase/test_stream/Task000/Exp004 = 4.1429
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000
-- >> End of eval phase << --
ConfusionMatrix_Stream/eval_phase/test_stream =
tensor([[ 950, 0, 0, 0, 0, 0, 0, 0, 0, 30],
[ 6, 0, 0, 0, 0, 0, 0, 0, 0, 1129],
[ 510, 0, 0, 0, 0, 0, 0, 0, 0, 522],
[ 397, 0, 0, 0, 0, 0, 0, 0, 0, 613],
[ 38, 0, 0, 0, 0, 0, 0, 0, 0, 944],
[ 416, 0, 0, 0, 0, 0, 0, 0, 0, 476],
[ 448, 0, 0, 0, 0, 0, 0, 0, 0, 510],
[ 35, 0, 0, 0, 0, 0, 0, 0, 0, 993],
[ 154, 0, 0, 0, 0, 0, 0, 0, 0, 820],
[ 31, 0, 0, 0, 0, 0, 0, 0, 0, 978]])
DiskUsage_Stream/eval_phase/test_stream = 720920.0059
Loss_Stream/eval_phase/test_stream = 3.6963
Top1_Acc_Stream/eval_phase/test_stream = 0.1928
Start of experience: 1
Current Classes: [2, 7]
-- >> Start of training phase << --
-- Starting training on experience 1 (Task 0) from train stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:15<00:00, 1.66it/s]
Epoch 0 ended.
DiskUsage_Epoch/train_phase/train_stream/Task000 = 720948.1162
DiskUsage_MB/train_phase/train_stream/Task000 = 720947.7930
Loss_Epoch/train_phase/train_stream/Task000 = 1.8814
Loss_MB/train_phase/train_stream/Task000 = 0.3365
RunningTime_Epoch/train_phase/train_stream/Task000 = 0.0324
Time_Epoch/train_phase/train_stream/Task000 = 14.7142
Top1_Acc_Epoch/train_phase/train_stream/Task000 = 0.4811
Top1_Acc_MB/train_phase/train_stream/Task000 = 0.9596
-- >> End of training phase << --
Training completed
Computing accuracy on the whole test set
-- >> Start of eval phase << --
-- Starting eval on experience 0 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.26it/s]
> Eval on experience 0 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 356.7000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp000 = 720951.2979
Loss_Exp/eval_phase/test_stream/Task000/Exp000 = 2.6347
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0850
-- Starting eval on experience 1 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 21/21 [00:01<00:00, 16.54it/s]
> Eval on experience 1 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 357.1000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp001 = 720952.0020
Loss_Exp/eval_phase/test_stream/Task000/Exp001 = 0.2836
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.9578
-- Starting eval on experience 2 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 19/19 [00:01<00:00, 16.20it/s]
> Eval on experience 2 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 360.1000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp002 = 720952.7061
Loss_Exp/eval_phase/test_stream/Task000/Exp002 = 4.8760
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0000
-- Starting eval on experience 3 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 16.78it/s]
> Eval on experience 3 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 377.6000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp003 = 720953.4102
Loss_Exp/eval_phase/test_stream/Task000/Exp003 = 4.9460
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000
-- Starting eval on experience 4 (Task 0) from test stream --
100%|██████████████████████████████████████████████████████████████████████████████████| 22/22 [00:01<00:00, 17.54it/s]
> Eval on experience 4 (Task 0) from test stream ended.
CPUUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 374.7000
DiskUsage_Exp/eval_phase/test_stream/Task000/Exp004 = 720954.1143
Loss_Exp/eval_phase/test_stream/Task000/Exp004 = 4.0850
Top1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000
-- >> End of eval phase << --
ConfusionMatrix_Stream/eval_phase/test_stream =
tensor([[169, 0, 697, 0, 0, 0, 0, 114, 0, 0],
[ 0, 0, 549, 0, 0, 0, 0, 586, 0, 0],
[ 0, 0, 991, 0, 0, 0, 0, 41, 0, 0],
[ 0, 0, 762, 0, 0, 0, 0, 248, 0, 0],
[ 0, 0, 147, 0, 0, 0, 0, 835, 0, 0],
[ 0, 0, 537, 0, 0, 0, 0, 355, 0, 0],
[ 0, 0, 928, 0, 0, 0, 0, 30, 0, 0],
[ 0, 0, 46, 0, 0, 0, 0, 982, 0, 0],
[ 0, 0, 730, 0, 0, 0, 0, 244, 0, 0],
[ 0, 0, 51, 0, 0, 0, 0, 958, 0, 0]])
###Markdown
---description: "Logging... logging everywhere! \U0001F52E"--- Loggers. Welcome to the _"Logging"_ tutorial of the _"From Zero to Hero"_ series. In this part we will present the functionalities offered by the _Avalanche_ `logging` module.
###Code
!pip install avalanche-lib==0.2.0
###Output
_____no_output_____
###Markdown
📑 The Logging Module. In the previous tutorial we have learned how to evaluate a continual learning algorithm in _Avalanche_, through different metrics that can be used _off-the-shelf_ via the _Evaluation Plugin_ or stand-alone. However, computing metrics and collecting results may not be enough at times. While running complex experiments with long _waiting times_, **logging** results over time is fundamental to "_babysit_" your experiments in real-time, or even understand what went wrong in the aftermath. This is why in Avalanche we decided to put a strong emphasis on logging and **provide a number of loggers** that can be used with any set of metrics! Loggers: _Avalanche_ at the moment supports four main Loggers:* **InteractiveLogger**: This logger provides a nice progress bar and displays real-time metrics results in an interactive way \(meant for `stdout`\).* **TextLogger**: This logger, mostly intended for file logging, is the plain text version of the `InteractiveLogger`. Keep in mind that it may be very verbose.* **TensorboardLogger**: It logs all the metrics on [Tensorboard](https://www.tensorflow.org/tensorboard) in real-time. Perfect for real-time plotting.* **WandBLogger**: It leverages [Weights and Biases](https://wandb.ai/site) tools to log metrics and results on a dashboard. It requires a W&B account. In order to keep track of when each metric value has been logged, we leverage two `global counters`, one for the training phase, one for the evaluation phase. You can see the `global counter` value reported in the x axis of the logged plots. Each `global counter` is an ever-increasing value which starts from 0 and is increased by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation minibatch). The `global counters` are updated automatically by the strategy. How to use loggers
###Code
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger, WandBLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training import Naive
benchmark = SplitMNIST(n_experiences=5, return_task_id=False)
# MODEL CREATION
model = SimpleMLP(num_classes=benchmark.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
loggers = []
# log to Tensorboard
loggers.append(TensorboardLogger())
# log to text file
loggers.append(TextLogger(open('log.txt', 'a')))
# print to stdout
loggers.append(InteractiveLogger())
# W&B logger - comment this if you don't have a W&B account
loggers.append(WandBLogger(project_name="avalanche", run_name="test"))
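# NOTE: WandBLogger needs an authenticated Weights & Biases session (e.g. run `wandb login`
# beforehand); drop it from `loggers` if you don't have an account.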
eval_plugin = EvaluationPlugin(
accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
timing_metrics(epoch=True, epoch_running=True),
cpu_usage_metrics(experience=True),
forgetting_metrics(experience=True, stream=True),
confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=True,
stream=True),
disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loggers=loggers,
benchmark=benchmark
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
model, SGD(model.parameters(), lr=0.001, momentum=0.9),
CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
evaluator=eval_plugin)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in benchmark.train_stream:
# train returns a dictionary which contains all the metric values
res = cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
# test also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(benchmark.test_stream))
# need to manually call W&B run end since we are in a notebook
import wandb
wandb.finish()
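# (Optional) peek at the plain-text log written by the TextLogger above; note that the
# handle opened with open('log.txt', 'a') may still be partially buffered at this point.
with open('log.txt') as f:
    print(f.read()[-1000:])  # show the tail of the log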
%load_ext tensorboard
%tensorboard --logdir tb_data --port 6066
###Output
_____no_output_____ |
04_Combined_Cycle_Power_Plant_Data_Set/Combined_Cycle_Power_Plant_Data_Setann_ANN.ipynb | ###Markdown
Link to the Dataset and Description: https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant
###Code
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00294/CCPP.zip
!unzip *.zip
!ls
%cd CCPP
%pwd
###Output
_____no_output_____
###Markdown
Importing the libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
tf.__version__
###Output
_____no_output_____
###Markdown
Part 1 - Data Preprocessing Importing the dataset
###Code
df = pd.read_excel('Folds5x2_pp.xlsx')
print(f"The shape of the dataset is - {df.shape}")
df.head()
df.info()
df.describe()
X = df.iloc[:, :-1].values
Y = df.iloc[:, -1].values
print(X)
print(Y)
###Output
[463.26 444.37 488.56 ... 429.57 435.74 453.28]
###Markdown
Splitting the dataset into the Training set and Test set
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X,
Y,
test_size = 0.2,
random_state = 42)
print(f"Size of X_train : {X_train.shape}")
print(f"Size of Y_train : {Y_train.shape}")
print(f"Size of X_test : {X_test.shape}")
print(f"Size of Y_test : {Y_test.shape}")
###Output
Size of X_train : (7654, 4)
Size of Y_train : (7654,)
Size of X_test : (1914, 4)
Size of Y_test : (1914,)
###Markdown
Part 2 - Building the ANN Initializing the ANN
###Code
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=6, activation='relu'))
model.add(tf.keras.layers.Dense(units=6, activation='relu'))
model.add(tf.keras.layers.Dense(units=1))
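# Two hidden layers with 6 ReLU units each; the single output unit is left linear
# (no activation), the usual choice for a one-target regression problem.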
model.compile(optimizer='adam', loss= 'mean_squared_error')
model.fit(X_train,Y_train,batch_size=32, epochs=100)
model.summary()
Y_pred = model.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((Y_pred.reshape(len(Y_pred), 1), Y_test.reshape(len(Y_test), 1)), 1))
print("The Mean Squared Error is: %.2f" % mean_squared_error(Y_test, Y_pred))
print("Variance Score: %.2f" % r2_score(Y_test, Y_pred))
###Output
The Mean Squared Error is: 26.13
Variance Score: 0.91
|
nbs/01a_datasets.ipynb | ###Markdown
Helpers for Dataset Generation
###Code
# hide
import tempfile
from typing import Tuple, List
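# Imports assumed by the cells in this excerpt (the notebook's original import cell is not
# shown here); modules and names below are a best guess based on how they are used later.
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path
from pandas import DataFrame
from random import randint
from skimage.io import imsave
from skimage.draw import ellipse, line, rectangle, rectangle_perimeter, random_shapes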
tmp_dir = tempfile.TemporaryDirectory()
tmp_dir.name
# hide
f_path = Path(tmp_dir.name) / 'test.csv'
f_path
# hide
import pandas as pd
DataFrame({'a': [4], 'b': 5}).to_csv(f_path, index=False)
pd.read_csv(f_path)
###Output
_____no_output_____
###Markdown
Display image
###Code
# export
def show_img(img):
"""
    Display a numpy array as an image
"""
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(3, 3))
ax.imshow(img)
# export
def save_img(path, name, img):
"""
    Save a numpy array as an image
"""
image = img.astype(np.uint8)
filename = path / (name + ".jpg")
imsave(filename, image, check_contrast=False)
# export
def save_img_annotations(path, annotations, name="annotations"):
"""
    Helper to save the annotations of an image into the desired file
"""
filename = path / (name + ".json")
with open(filename, "w") as file:
json.dump(annotations, file)
###Output
_____no_output_____
###Markdown
Create images
###Code
from skimage.draw import (ellipse_perimeter)
# change from zeros to ones to have a white bg.
img = np.zeros((300, 500, 3), dtype=np.double)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(8, 8))
# draw ellipse with perimeter
rr_ellipse, cc_ellipse = ellipse(150, 100, 100, 50)
rr_ellipse_p, cc_ellipse_p = ellipse_perimeter(150, 100, 100, 50)
img[rr_ellipse, cc_ellipse, :] = (1, 0, 0)
img[rr_ellipse_p, cc_ellipse_p, :] = (0, 0, 0)
# draw square
rr_square, cc_square = rectangle(start=(100, 200), extent=(75, 75))
rr_square_p, cc_square_p = rectangle_perimeter(start=(100, 200), extent=(75, 75))
img[rr_square, cc_square, :] = (0, 0, 1)
img[rr_square_p, cc_square_p, :] = (1, 0, 0)
# draw line
rr_line, cc_line = line(70, 350, 200, 350)
img[rr_line, cc_line, :] = (1, 1, 0)
# display img
ax.imshow(img)
# export
def draw_grid(im=None, size=(100, 100), n_hlines=10, n_vlines=10, black=True):
"""
Will draw the default background with a grid system.
im np.array:
Existing image, if None will create one
size (int, int):
Height and width, respectively
n_hlines int:
Number of horizontal lines
n_vlines int:
        Number of vertical lines
black bool:
If true, the background will be black
"""
height, width = size
img = im
color = (0, 0, 0)
line_color = (1, 1, 1)
if not black:
color = (1, 1, 1)
line_color = (0, 0, 0)
if im is None:
img = np.full((height, width, 3), dtype=np.double, fill_value=color)
for lines in range(n_hlines):
y = height * lines * (1 / n_hlines)
y = int(y)
rr_line, cc_line = line(0, y, width - 1, y)
img[rr_line, cc_line, :] = line_color
for lines in range(n_vlines):
x = width * lines * (1 / n_vlines)
x = int(x)
rr_line, cc_line = line(x, 0, x, height - 1)
img[rr_line, cc_line, :] = line_color
return img
img = draw_grid(size=(200, 200), n_hlines=4, n_vlines=4)
show_img(img)
# export
def draw_bbox(rect, rect_dimensions, im=None, black=True):
"""
Draw a Bounding Box
rect (int, int):
        Beginning point of the rectangle
rect_dimensions (int, int):
        Width and height of the rectangle
im np.array:
        Image on which the bbox will be drawn
black bool:
If true, the bbox will be black
"""
init_x, init_y = rect
height, width = rect_dimensions
img = im
if im is None:
img = np.ones((100, 200, 3), dtype=np.double)
color = (0, 0, 0)
if not black:
color = (255, 255, 255)
rr, cc = rectangle_perimeter(start=(init_x, init_y),
extent=(height, width),
shape=img.shape)
img[rr, cc, :] = color
ex_height = height + 10
ex_width = width + 10
if (ex_height > len(img)):
ex_height = len(img)
if (ex_width > len(im[0])):
ex_width = len(img[0])
rr, cc = rectangle_perimeter(start=(init_x - 5, init_y - 5),
extent=(ex_height, ex_width),
shape=img.shape)
img[rr, cc, :] = color
return img
#img = draw_grid(size=(3400, 400),n_hlines=2, n_vlines=10, black=False)
# draw_bbox((35, 50, 200, 250), im=img, black=False)
img = np.ones((300, 400, 3), dtype=np.double)
draw_bbox((215, 250), (15, 100), im=img, black=True)
show_img(img)
#exporti
def xywh_to_xyxy(boxes):
    """Convert [x y w h] box format to [x1 y1 x2 y2] format."""
    boxes = np.array(boxes)
    return np.hstack((boxes[0:2], boxes[0:2] + boxes[2:4])).tolist()
def xyxy_to_xywh(boxes):
    """Convert [x1 y1 x2 y2] box format to [x y w h] format."""
    boxes = np.array(boxes)
    return np.hstack((boxes[0:2], boxes[2:4] - boxes[0:2])).tolist()
# hide
xyxy_to_xywh([50, 50, 150, 150])
# hide
xywh_to_xyxy([50, 50, 100, 100])
###Output
_____no_output_____
###Markdown
Overlap & Intersection over Union (IOU)
###Code
#exporti
def bbox_intersection(b1_coords, b1_dimensions, b2_coords, b2_dimensions):
"""
determine the (x, y)-coordinates of the intersection rectangle
b1_coords (int, int):
        The origin of bbox one
b2_coords (int, int):
        The origin of bbox two
b1_dimensions (int, int):
        The width and height of bbox one
b2_dimensions (int, int):
        The width and height of bbox two
"""
xA = max(b1_coords[0], b2_coords[0])
yA = max(b1_coords[1], b2_coords[1])
b1_final_x = b1_dimensions[0] + b1_coords[0]
b1_final_y = b1_dimensions[1] + b1_coords[1]
b2_final_x = b2_dimensions[0] + b2_coords[0]
b2_final_y = b2_dimensions[1] + b2_coords[1]
xB = min(b1_final_x, b2_final_x) - xA
yB = min(b1_final_y, b2_final_y) - yA
# compute the area of intersection rectangle
interArea = max(0, xB) * max(0, yB)
# compute the area of both the prediction and ground-truth
# rectangles
b1Area = b1_dimensions[0] * b1_dimensions[1]
b2Area = b2_dimensions[0] * b2_dimensions[1]
return interArea, b1Area, b2Area, (xA, yA, xB, yB)
#exporti
def overlap(boxA, boxA_dimensions, boxB, boxB_dimensions):
"""
Returns the max relative overlap between two bboxs.
"""
interArea, boxAArea, boxBArea, _ = bbox_intersection(boxA, boxA_dimensions,
boxB, boxB_dimensions)
return max(interArea / float(boxAArea), interArea / float(boxBArea))
r1 = (10, 10)
r1_dimensions = (130, 130)
r2 = (50, 50)
r2_dimensions = (90, 90)
assert overlap(r1, r1_dimensions, r2, r2_dimensions) == 1
assert overlap(r2, r2_dimensions, r1, r1_dimensions) == 1
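# Extra sanity check (values chosen here for illustration): two equally sized boxes
# whose intersection is a quarter of their area should give a relative overlap of 0.25.
r3 = (0, 0)
r3_dimensions = (100, 100)
r4 = (50, 50)
r4_dimensions = (100, 100)
assert overlap(r3, r3_dimensions, r4, r4_dimensions) == 0.25
assert overlap(r4, r4_dimensions, r3, r3_dimensions) == 0.25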
# export
def bb_intersection_over_union(boxA, boxA_dimensions, boxB, boxB_dimensions, verbose=False):
interArea, boxAArea, boxBArea, _ = bbox_intersection(boxA, boxA_dimensions,
boxB, boxB_dimensions)
iou = interArea / float(boxAArea + boxBArea - interArea)
if verbose:
print(f"iou: {iou: .2f}, interArea: {interArea: .2f}, "
f"boxAArea {boxAArea: .2f}, box1Area {boxBArea: .2f}")
return iou
r1 = (10, 10)
r2 = (80, 80)
r1_dimensions = (100, 100)
r2_dimensions = (100, 100)
img = np.zeros((300, 200, 3), dtype=np.double)
draw_bbox(r1, r1_dimensions, im=img, black=False)
draw_bbox(r2, r2_dimensions, im=img, black=False)
iou = bb_intersection_over_union(r1, r1_dimensions, r2, r2_dimensions, True)
# iou = bb_intersection_over_union(r1, r2, verbose=True)
_, _, _, union = bbox_intersection(r1, r1_dimensions, r2, r2_dimensions)
init_height, init_widht, final_height, final_widht = union
extent_height = final_height - init_height
extent_width = final_widht - init_widht
rr, cc = rectangle(start=(init_height, init_widht), extent=(final_height, final_widht))
img[rr, cc, :] = (1, 1, 1)
show_img(img)
###Output
_____no_output_____
###Markdown
Sample Random bbox
###Code
#export
def sample_bbox(bboxs=(), canvas_size=(100, 100), diag=(0.3, 0.3), ratio=(1, 1),
max_iou=0.0, max_overlap=0.0,
max_tries=1000, random_seed=None):
"""
bboxs [(x, y, x, y), ... ]:
List of existing bboxs
canvas_size (int, int):
Width and height on which to position the new bbox.
max_iou float [0, 1]:
Maximum acceptable intersection over union between any two bboxs
max_overlap float [0, 1]:
Maximum overlap between any two bboxs
diag (float, float) or float:
        Range of acceptable diagonal length relative to canvas diagonal
ratio (float, float) or float:
        Range of acceptable width / height ratios of the new bbox
max_tries int:
Number of random tries to create a valid bbox
"""
# for v in [diag, ratio]: assert min(v) >= 0 and max(v) <= 1, f"{v} is outside of (0, 1)"
width, height = canvas_size
canvas_diag = np.sqrt(width ** 2 + height**2)
for i in range(max_tries):
s_diag = np.random.uniform(*diag) * canvas_diag
s_ratio = np.random.uniform(*ratio)
# sample position fully inside canvas
s_height = np.sqrt(s_diag ** 2 / (1. + s_ratio ** 2))
s_width = s_ratio * s_height
cx = np.random.randint(s_width / 2, width - s_width / 2)
cy = np.random.randint(s_height / 2, height - s_height / 2)
bbox_x = cx - s_width / 2
bbox_y = cy - s_height / 2
bbox_width = cx + s_width / 2 - bbox_x
bbox_height = cy + s_height / 2 - bbox_y
bbox = (bbox_x, bbox_y, bbox_width, bbox_height)
bbox = tuple(int(v) for v in bbox)
# check if valid iou then return
if len(bboxs) == 0:
return bbox
violation = False
for b in bboxs:
b_x, b_y, b_width, b_heigh = b
iou = bb_intersection_over_union((b_x, b_y), (b_width, b_heigh),
(bbox_x, bbox_y), (bbox_width, bbox_height))
b_overlap = overlap((b_x, b_y), (b_width, b_heigh),
(bbox_x, bbox_y), (bbox_width, bbox_height))
if iou > max_iou or b_overlap > max_overlap:
violation = True
if not violation:
return bbox
return None
img = np.zeros((300, 300, 3), dtype=np.double)
bboxs: List[Tuple[int, int, int, int]] = []
for i in range(10):
bbox: Tuple[int, int, int, int] = sample_bbox(
bboxs=bboxs, canvas_size=(300, 300), diag=(0.1, 0.3), max_iou=0.3,
max_overlap=0.5)
init_x, init_y, width, heigh = bbox
bboxs.append(bbox)
draw_bbox((init_x, init_y), (width, heigh), im=img, black=False, )
show_img(img)
###Output
_____no_output_____
###Markdown
Draw Objects inside bbox
###Code
#export
def draw_rectangle(im, start, dimensions, color):
#draw = ImageDraw.Draw(im)
#draw.rectangle(bbox, fill=color)
rr, cc = rectangle(start=start, extent=dimensions)
im[rr, cc, :] = color
return im
#export
def draw_ellipse(im, start, dimensions, color):
#draw = ImageDraw.Draw(im)
#cx, cy = bbox[0] + bbox[2] / 2, bbox[1] + bbox[3]
#draw.ellipse(bbox, fill=color)
x, y = start
v_radius, h_radius = dimensions
rr, cc = ellipse(x, y, v_radius, h_radius)
im[rr, cc, :] = color
return im
img = np.zeros((200, 200, 3), dtype=np.double)
ractangle_init_point = (25, 25)
rectangle_dimensions = (75, 50)
img = draw_rectangle(img, ractangle_init_point, rectangle_dimensions, (0, 0, 1))
img = draw_bbox(im=img, rect=ractangle_init_point, black=False,
rect_dimensions=rectangle_dimensions)
ellipse_init_point = (150, 65)
ellipse_dimensions = (20, 54)
ellipse_x, ellipse_y = ellipse_init_point
ellipse_v_radius, ellipse_h_radius = ellipse_dimensions
ellipse_bbox_start = (ellipse_x - ellipse_v_radius, ellipse_y - ellipse_h_radius)
ellipse_bbox_dimensions = (ellipse_v_radius * 2, ellipse_h_radius * 2)
img = draw_ellipse(img, ellipse_init_point, ellipse_dimensions, (1, 0, 0))
img = draw_bbox(im=img, rect=ellipse_bbox_start, black=False,
rect_dimensions=ellipse_bbox_dimensions)
show_img(img)
image, shapes = random_shapes((500, 500), 50, multichannel=True)
rr_0, rr_1 = shapes[0][1][0]
cc_0, cc_1 = shapes[0][1][1]
middle_x = int((rr_0 + rr_1) / 2)
middle_y = int((cc_0 + cc_1) / 2)
# Picking up the middle value will guarantee we get the shape color
print(image[middle_x, middle_y])
show_img(image)
###Output
_____no_output_____
###Markdown
Create Object Detection Dataset Generic Dataset
###Code
# exporti
def create_simple_object_detection_dataset(path, n_samples=100, n_objects_max=3, n_objects_min=1,
size=(150, 150), min_size=0.2):
(path / 'images').mkdir(parents=True, exist_ok=True)
(path / 'class_images').mkdir(parents=True, exist_ok=True)
min_dimension = size[0]
if (size[1] < size[0]):
min_dimension = size[1]
# create class labels
cname = ['red', 'green', 'blue']
color = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
for clr, name in zip(color, cname):
img_name = f'{name}'
img = np.ones((50, 50, 3), dtype=np.uint8)
draw_rectangle(img, start=(0, 0), dimensions=(50, 50), color=clr)
save_img(path / 'class_images', img_name, img)
type_shapes = ['rectangle', 'circle', 'ellipse']
# create images + annotations
annotations = {}
images = {}
for i in range(n_samples):
labels = []
bboxs = []
img_name = f'img_{i}'
image, shapes = random_shapes(size, n_objects_max, multichannel=True,
shape=type_shapes[randint(0, 2)],
min_shapes=n_objects_min,
min_size=min_size * min_dimension)
for shape in shapes:
shape_name = shape[0]
rr_0, rr_1 = shape[1][0]
cc_0, cc_1 = shape[1][1]
middle_x = int((rr_0 + rr_1) / 2)
middle_y = int((cc_0 + cc_1) / 2)
label = (image[middle_x, middle_y].tolist(), shape_name)
bbox = (int(cc_0), int(rr_0), int(cc_1), int(rr_1))
labels.append(label)
bboxs.append(bbox)
img_file = img_name + ".jpg"
images[img_file] = image
save_img(path / 'images', img_name, image)
annotations[img_file] = {'labels': labels, 'bboxs': bboxs}
save_img_annotations(path, annotations)
return (images, annotations)
# hide
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
images, annotations = create_simple_object_detection_dataset(path=path, n_samples=5)
show_img(images['img_0.jpg'])
print(annotations['img_0.jpg'])
print(pd.read_json(path / 'annotations.json').T)
###Output
_____no_output_____
###Markdown
Specific Tasks
###Code
#export
def create_color_classification(path, n_samples=10, size=(150, 150)):
"""
Helper function to color classification
"""
images, annotations = create_simple_object_detection_dataset(path=path, n_samples=n_samples,
size=size)
color_img = {}
for img in annotations:
color_arr = []
for shape in annotations[img]['labels']:
color_arr.append(shape[0])
color_img[img] = {'label': color_arr}
save_img_annotations(path, color_img)
return (images, color_img)
# hide
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
images, color_imgs = create_color_classification(path=path, size=(100, 100))
show_img(images['img_0.jpg'])
print(annotations['img_0.jpg'])
#export
def create_shape_color_classification(path, n_samples=10, size=(150, 150)):
"""
Helper function to shape classification
"""
images, annotations = create_simple_object_detection_dataset(
path, n_samples=n_samples, size=size)
label_img = {}
for img in annotations:
label_arr = []
for shape in annotations[img]['labels']:
label_arr.append(shape)
label_img[img] = {'label': label_arr}
save_img_annotations(path, label_img)
return (images, label_img)
# hide
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
images, color_imgs = create_shape_color_classification(path=path, size=(100, 100))
show_img(images['img_0.jpg'])
print(annotations['img_0.jpg'])
print(pd.read_json(path / 'annotations.json').T)
#export
def create_object_detection(path, n_samples=10, n_objects=1, size=(150, 150), multilabel=False):
"""
Helper function to object detection
"""
images, annotations = create_simple_object_detection_dataset(path=path, size=size,
n_samples=n_samples,
n_objects_max=n_objects)
coords_img = {}
for img in annotations:
coords_arr = []
for coord in annotations[img]['bboxs']:
coords_arr.append(coord)
if not multilabel:
coords_arr = coords_arr[0]
coords_img[img] = {'label': coords_arr}
save_img_annotations(path, coords_img)
return (images, coords_img)
# hide
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
images, color_imgs = create_object_detection(path=path, n_samples=50, n_objects=1)
show_img(images['img_0.jpg'])
# Label is wrong
print(color_imgs['img_0.jpg'])
print(pd.read_json(path / 'annotations.json').T)
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
_____no_output_____
###Markdown
Helpers for Dataset Generation Use tmp dir
###Code
import tempfile
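# Assumed imports for this version of the notebook (the original import cell is not shown);
# a best guess based on the names used below.
import json
import numpy as np
import pandas as pd
from pathlib import Path
from pandas import DataFrame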
tmp_dir = tempfile.TemporaryDirectory(); tmp_dir.name
f_path = Path(tmp_dir.name)/'test.csv'; f_path
DataFrame({'a': [4], 'b':5}).to_csv(f_path, index=False); pd.read_csv(f_path)
###Output
_____no_output_____
###Markdown
Create images
###Code
from PIL import Image, ImageDraw
im = Image.new('RGB', (500, 300), (128, 128, 128))
draw = ImageDraw.Draw(im)
draw.ellipse((100, 100, 150, 200), fill=(255, 0, 0), outline=(0, 0, 0))
draw.rectangle((200, 100, 300, 200), fill=(0, 192, 192), outline=(255, 255, 255))
draw.line((350, 200, 450, 100), fill=(255, 255, 0), width=10)
im
def draw_grid(im=None, size=(100, 100), n_hlines=10, n_vlines=10, black=True):
"""
    size: (width, height)
black: bool
draw grid and numbers in black or white
"""
color = (0, 0, 0)
if not black: color = (255, 255, 255)
if im is None:
im = Image.new('RGB', size, color =(211, 211, 211))
width, hight = im.size
draw = ImageDraw.Draw(im)
ln_width = int((max(size) * 0.03) / max(n_hlines, n_vlines))
for h in range(n_hlines):
y = hight * h * (1 / n_hlines)
draw.line((0, y, width, y), fill=color, width=ln_width)
draw.text((width * 0.05, y), text=str((int(y))) + 'y',
fill=color)
draw.text((width * 0.9, y), text=str((int(y))) + 'y', fill=color)
for h in range(n_vlines):
x = width * h * (1 / n_vlines)
draw.line((x, 0, x, hight), fill=color, width=ln_width)
draw.text((x, hight * 0.05), text=str((int(x))) + 'x', fill=color)
draw.text((x, hight * 0.9), text=str((int(x)))+ 'x', fill=color)
return im
draw_grid(size=(200, 200), n_hlines=4, n_vlines=4)
#exporti
def draw_bbox(rect, im=None, values=True, black=True, width=1):
"""
rect: [x, y, x, y]
two points (x, y), (x, y)
values: bool
draw values
black: bool
        draw the bbox and value labels in black or white
"""
color = (0, 0, 0)
if not black: color = (255, 255, 255)
if im is None:
im = Image.new('RGB', (100, 100), color = 'grey')
draw = ImageDraw.Draw(im)
draw.rectangle(rect, outline=color, width=width)
if values:
draw.text((rect[0], rect[1]), text=f"({rect[0]}x, {rect[1]}y)", fill=color)
draw.text((rect[0], rect[3]), text=f"({rect[0]}x, {rect[3]}y)", fill=color)
draw.text((rect[2], rect[1]), text=f"({rect[2]}x, {rect[1]}y)", fill=color)
draw.text((rect[2], rect[3]), text=f"({rect[2]}x, {rect[3]}y)", fill=color)
draw.text(((rect[0] + rect[2]) / 2, (rect[1] + rect[3]) / 2), text=f"{rect}", fill=color)
return im
img = Image.new('RGB', (300, 300), color = 'grey')
# draw_bbox((35, 50, 200, 250), im=img, black=False)
draw_bbox((200, 250, 35, 50), im=img, black=False, width=4)
draw_grid(img, n_hlines=5, n_vlines=5)
#exporti
def xywh_to_xyxy(boxes):
    """Convert [x y w h] box format to [x1 y1 x2 y2] format."""
    boxes = np.array(boxes)
    return np.hstack((boxes[0:2], boxes[0:2] + boxes[2:4])).tolist()
def xyxy_to_xywh(boxes):
    """Convert [x1 y1 x2 y2] box format to [x y w h] format."""
    boxes = np.array(boxes)
    return np.hstack((boxes[0:2], boxes[2:4] - boxes[0:2])).tolist()
xyxy_to_xywh([50,50,150,150])
xywh_to_xyxy([50,50,100,100])
###Output
_____no_output_____
###Markdown
Overlap & Intersection over Union (IOU)
###Code
#exporti
def bbox_intersection(b1, b2):
# determine the (x, y)-coordinates of the intersection rectangle
xA = max(b1[0], b2[0])
yA = max(b1[1], b2[1])
xB = min(b1[2], b2[2])
yB = min(b1[3], b2[3])
# compute the area of intersection rectangle
interArea = max(0, xB - xA) * max(0, yB - yA)
# compute the area of both the prediction and ground-truth
# rectangles
b1Area = (b1[2] - b1[0]) * (b1[3] - b1[1])
b2Area = (b2[2] - b2[0]) * (b2[3] - b2[1])
return interArea, b1Area, b2Area, (xA, yA, xB, yB)
#exporti
def overlap(boxA, boxB, verbose=False):
"""
Returns the max relative overlap between two bboxs.
"""
interArea, boxAArea, boxBArea, _ = bbox_intersection(boxA, boxB)
return max(interArea / float(boxAArea), interArea / float(boxBArea))
r1 = (10, 10, 110, 110)
r2 = (50, 50, 90, 90)
assert overlap(r1, r2) == 1
assert overlap(r2, r1) == 1
r1 = (0, 0, 100, 100)
r2 = (50, 50, 150, 150)
assert overlap(r1, r2) == 0.25
assert overlap(r2, r1) == 0.25
def bb_intersection_over_union(boxA, boxB, verbose=False):
interArea, boxAArea, boxBArea, _ = bbox_intersection(boxA, boxB)
iou = interArea / float(boxAArea + boxBArea - interArea)
if verbose:
print(f"iou: {iou: .2f}, interArea: {interArea: .2f}, boxAArea {boxAArea: .2f}, box1Area {boxBArea: .2f}")
return iou
r1 = (10, 10, 110, 110)
r2 = (80, 80, 180, 180)
img = Image.new('RGB', (300, 200), color = 'grey')
draw_bbox(r1, im=img, black=False, values=True)
draw_bbox(r2, im=img, black=False,values=True)
iou = bb_intersection_over_union(r1, r2, verbose=True)
# iou = bb_intersection_over_union(r1, r2, verbose=True)
_, _, _, union = bbox_intersection(r1, r2)
draw = ImageDraw.Draw(img)
draw.rectangle(union, fill='blue')
img
r1 = (10, 10, 110, 110)
r2 = (20, 20, 120, 120)
img = Image.new('RGB', (200, 150), color = 'grey')
draw_bbox(r1, im=img, black=False, values=True)
draw_bbox(r2, im=img, black=False, values=True)
iou = bb_intersection_over_union(r1, r2, verbose=True)
_, _, _, union = bbox_intersection(r1, r2)
draw = ImageDraw.Draw(img)
draw.rectangle(union, fill='blue')
img
r1 = (10, 10, 110, 110)
img = Image.new('RGB', (200, 150), color = 'grey')
draw_bbox(r1, im=img, black=False, values=True)
draw_bbox(r1, im=img, black=False,values=True)
iou = bb_intersection_over_union(r1, r1, verbose=True)
_, _, _, union = bbox_intersection(r1, r2)
draw = ImageDraw.Draw(img)
draw.rectangle(union, fill='blue')
img
r1 = (10, 10, 110, 110)
r2 = (20, 20, 90, 90)
img = Image.new('RGB', (200, 150), color = 'grey')
draw_bbox(r1, im=img, black=False, values=True)
draw_bbox(r2, im=img, black=False,values=True)
iou = bb_intersection_over_union(r1, r2, verbose=True)
_, _, _, union = bbox_intersection(r1, r2)
draw = ImageDraw.Draw(img)
draw.rectangle(union, fill='blue')
img
###Output
_____no_output_____
###Markdown
Sample Random bbox
###Code
#exporti
def sample_bbox(bboxs=(), canvas_size=(100, 100), diag=(0.3, 0.3), ratio=(1, 1),
max_iou=0.0, max_overlap=0.0,
max_tries=1000, random_seed=None):
"""
bboxs: [(x, y, x, y), ..., (x, y, x, y)]
List of existing bboxs.
canvas_size: (int, int)
Size of the canvas (width, height) on which to position the new bbox.
max_iou: float [0, 1]
Maximum acceptable intersection over union between any two bboxs.
max_overlap: float [0, 1]
Maximum overlap between any two bboxs.
diag: (float, float) or float
        Range of acceptable diagonal length relative to canvas diagonal.
ratio: (float, float) or float
Range of acceptable width / height ratios of the new bbox.
max_tries: int
Number of random tries to create a valid bbox
"""
# for v in [diag, ratio]: assert min(v) >= 0 and max(v) <= 1, f"{v} is outside of (0, 1)"
rng = np.random.RandomState(random_seed)
width, height = canvas_size
canvas_diag = np.sqrt(width ** 2 + height**2)
for i in range(max_tries):
        s_diag = rng.uniform(*diag) * canvas_diag
        s_ratio = rng.uniform(*ratio)
        # sample position fully inside canvas; use the seeded RandomState created above
        # so that random_seed actually controls the sampling
        s_height = np.sqrt(s_diag ** 2 / (1. + s_ratio ** 2))
        s_width = s_ratio * s_height
        cx = rng.randint(s_width / 2, width - s_width / 2)
        cy = rng.randint(s_height / 2, height - s_height / 2)
bbox = (cx - s_width / 2, cy - s_height / 2, cx + s_width / 2 , cy + s_height / 2)
bbox = tuple(int(v) for v in bbox)
# check if valid iou then return
if len(bboxs) == 0:
return bbox
violation = False
for b in bboxs:
iou = bb_intersection_over_union(b, bbox)
b_overlap = overlap(b, bbox)
if iou > max_iou or b_overlap > max_overlap:
violation = True
if not violation:
return bbox
return None
canvas_size = (300, 300)
img = Image.new('RGB', canvas_size, color = 'grey')
bboxs = []
for i in range(10):
bbox = sample_bbox(bboxs=bboxs, canvas_size=canvas_size, diag=(0.1, 0.3),
max_iou=0.3, max_overlap=0.5)
bboxs.append(bbox)
draw_bbox(bbox, im=img, black=False, values=False, width=3)
img
###Output
_____no_output_____
###Markdown
Draw Objects inside bbox
###Code
#exporti
def draw_rectangle(im, bbox, color):
draw = ImageDraw.Draw(im)
draw.rectangle(bbox, fill=color)
#exporti
def draw_ellipse(im, bbox, color):
draw = ImageDraw.Draw(im)
cx, cy = bbox[0] + bbox[2] / 2, bbox[1] + bbox[3]
draw.ellipse(bbox, fill=color)
img = Image.new('RGB', (200, 100), color = 'grey')
bbox1 = (25, 25, 90, 75)
bbox2 = (125, 25, 190, 75)
draw_ellipse(img, bbox1, "blue")
draw_rectangle(img, bbox2, "red")
draw_bbox(bbox1, im=img, black=False, values=False)
draw_bbox(bbox2, im=img, black=False, values=False)
img
###Output
_____no_output_____
###Markdown
Create Object Detection Dataset Generic Dataset
###Code
# exporti
def create_simple_object_detection_dataset(path, n_samples=100, n_shapes=2, n_colors=3, n_objects=(1, 3),
size=(150, 150), label_noise=0):
(path/'images').mkdir(parents=True, exist_ok=True)
red = (255, 0, 0)
blue = (0, 192, 192)
yellow = (255, 255, 0)
color = [red, blue, yellow]
cname = ['red', 'blue', 'yellow']
draw_shape = [draw_ellipse, draw_rectangle]
shape = ['ellipse', 'rectangle']
bg_color = (211, 211, 211) # light grey
assert n_shapes > 0 and n_shapes <= 2, f"n_shapes:{n_shapes} but only max 2 shapes are supported."
    assert n_colors > 0 and n_colors <= 3, f"n_colors:{n_colors} but only max 3 colors are supported."
# create class labels
(path/'class_images').mkdir(parents=True, exist_ok=True)
for clr, name in zip (color, cname):
img_name = f'{name}.jpg'
img = Image.new('RGB', (50,50), color = bg_color)
draw_rectangle(img, bbox=(0, 0, 50, 50), color=clr)
img.save(path/'class_images'/img_name)
# create images + annotations
bbox_sample_failed = 0
annotations = {}
with open(path/'annotations.json', 'w') as f:
for i in range(n_samples):
img_name = f'img_{i}.jpg'
img = Image.new('RGB', size, color = bg_color)
bboxs, labels = [], []
for o in range(np.random.randint(n_objects[0], n_objects[1] + 1)):
# sample bbox
bbox = sample_bbox(bboxs=bboxs, canvas_size=size, diag=(0.2, 0.5), ratio=(0.5, 2.),
max_iou=0.0, max_overlap=0.0, max_tries=1000, random_seed=None)
if bbox is None:
bbox_sample_failed += 1
continue
bboxs.append(bbox)
# sample color
c = np.random.randint(0, n_colors)
# sample shape
s = np.random.randint(0, n_shapes)
draw_shape[s](img, bbox, cname[c])
labels.append((cname[c], shape[s]))
img.save(path/'images'/img_name)
annotations[img_name] = {'labels': labels, 'bboxs': bboxs}
json.dump(annotations, f)
if bbox_sample_failed > 0:
import warnings
        warnings.warn(f"{bbox_sample_failed} bboxes failed to be created." +
" You can increase max_tries or reduce the number and size of objects per image.")
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
create_simple_object_detection_dataset(path=path, n_samples=5)
print(pd.read_json(path/'annotations.json').T)
Image.open(list(path.glob('**/images/*'))[2])
###Output
_____no_output_____
###Markdown
Specific Tasks
###Code
#export
def create_color_classification(path, n_samples=10, n_colors=3, size=(150, 150)):
create_simple_object_detection_dataset(path=path, n_objects=(1, 1), n_samples=n_samples,
n_colors=n_colors, size=size)
with open(path/'annotations.json', 'r') as f:
annotations = json.load(f)
# simplify by dropping mutli-label and bbox
annotations = {k: {'labels': v['labels'][0][0]} for k, v in annotations.items()}
with open(path/'annotations.json', 'w') as f:
json.dump(annotations, f)
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
create_color_classification(path, size=(100, 100))
print(pd.read_json(path/'annotations.json').T)
Image.open(list(path.glob('**/images/*'))[2])
#export
def create_shape_color_classification(path, n_samples=10, n_colors=3, size=(150, 150)):
create_simple_object_detection_dataset(path=path, n_objects=(1, 1), n_samples=n_samples,
n_colors=n_colors, size=size)
with open(path/'annotations.json', 'r') as f:
annotations = json.load(f)
# simplify by dropping mutli-label and bbox
annotations = {k: {'labels': v['labels'][0]} for k, v in annotations.items()}
with open(path/'annotations.json', 'w') as f:
json.dump(annotations, f)
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
create_shape_color_classification(path, size=(100, 100))
print(pd.read_json(path/'annotations.json').T)
Image.open(list(path.glob('**/images/*'))[2])
#export
def create_object_detection(path, n_samples=10, n_objects=(1, 1), n_colors=3, size=(150, 150), multilabel=False):
create_simple_object_detection_dataset(path=path, n_objects=n_objects, n_samples=n_samples,
n_colors=n_colors, size=size)
with open(path/'annotations.json', 'r') as f:
annotations = json.load(f)
# simplify by dropping mutli-label and bbox
if max(n_objects) == 1:
annotations = {k: {'labels': v['labels'][0], 'bbox': v['bboxs'][0]} for k, v in annotations.items()}
if not multilabel:
for k, v in annotations.items():
v['labels'] = v['labels'][0]
else:
if not multilabel:
for k, v in annotations.items():
v['labels'] = v['labels'] = [l[0] for l in v['labels']]
with open(path/'annotations.json', 'w') as f:
json.dump(annotations, f)
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
create_object_detection(path, size=(100, 100), n_objects=(1, 5), multilabel=True)
print(pd.read_json(path/'annotations.json').T)
Image.open(list(path.glob('**/images/*'))[2])
#hide
from nbdev.export import *
notebook2script()
###Output
_____no_output_____ |
02_severity_of_illness.ipynb | ###Markdown
eICU Collaborative Research Database Notebook 2: Severity of illness. This notebook introduces high level admission details relating to a single patient stay, using the following tables:- patient- admissiondx- apacheapsvar- apachepredvar- apachepatientresult Load libraries and connect to the database
###Code
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# authenticate
auth.authenticate_user()
# Set up environment variables
project_id='hack-aotearoa'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
###Output
_____no_output_____
###Markdown
Selecting a single patient stay. As we have seen, the patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/ Questions: Use your knowledge from the previous notebook and the online documentation (http://eicu-crd.mit.edu/) to answer the following questions:- Which column in the patient table is distinct for each stay in the ICU (similar to `icustay_id` in MIMIC-III)?- Which column is unique for each patient (similar to `subject_id` in MIMIC-III)?
###Code
# view distinct ids
%%bigquery
SELECT DISTINCT(patientunitstayid)
FROM `physionet-data.eicu_crd_demo.patient`
# set the where clause to select the stay of interest
%%bigquery patient
SELECT *
FROM `physionet-data.eicu_crd_demo.patient`
WHERE patientunitstayid = <your_id_here>
patient
###Output
_____no_output_____
###Markdown
Questions- Which type of unit was the patient admitted to? Hint: Try `patient['unittype']` or `patient.unittype`- What year was the patient discharged from the ICU? Hint: You can view the table columns with `patient.columns`- What was the status of the patient upon discharge from the unit? The admissiondx table: The `admissiondx` table contains the primary diagnosis for admission to the ICU according to the APACHE scoring criteria. For more detail, see: http://eicu-crd.mit.edu/eicutables/admissiondx/
###Code
# set the where clause to select the stay of interest
%%bigquery admissiondx
SELECT *
FROM `physionet-data.eicu_crd_demo.admissiondx`
WHERE patientunitstayid = <your_id_here>
# View the columns in this data
admissiondx.columns
# View the data
admissiondx.head()
# Set the display options to avoid truncating the text
pd.set_option('display.max_colwidth', -1)
admissiondx.admitdxpath
###Output
_____no_output_____
###Markdown
Questions- What was the primary reason for admission?- How soon after admission to the ICU was the diagnosis recorded in eCareManager? Hint: The `offset` columns indicate the time in minutes after admission to the ICU. The apacheapsvar table: The apacheapsvar table contains the variables used to calculate the Acute Physiology Score (APS) III for patients. APS-III is an established method of summarizing patient severity of illness on admission to the ICU, taking the "worst" observations for a patient in a 24 hour period. The score is part of the Acute Physiology Age Chronic Health Evaluation (APACHE) system of equations for predicting outcomes for ICU patients. See: http://eicu-crd.mit.edu/eicutables/apacheApsVar/
###Code
# set the where clause to select the stay of interest
%%bigquery apacheapsvar
SELECT *
FROM `physionet-data.eicu_crd_demo.apacheapsvar`
WHERE patientunitstayid = <your_id_here>
apacheapsvar.head()
###Output
_____no_output_____
###Markdown
Questions- What was the 'worst' heart rate recorded for the patient during the scoring period?- Was the patient oriented and able to converse normally on the day of admission? (hint: the verbal element refers to the Glasgow Coma Scale). apachepredvar table: The apachepredvar table provides variables underlying the APACHE predictions. Acute Physiology Age Chronic Health Evaluation (APACHE) consists of a group of equations used for predicting outcomes in critically ill patients. See: http://eicu-crd.mit.edu/eicutables/apachePredVar/
###Code
# set the where clause to select the stay of interest
%%bigquery apachepredvar
SELECT *
FROM `physionet-data.eicu_crd_demo.apachepredvar`
WHERE patientunitstayid = <your_id_here>
apachepredvar.columns
###Output
_____no_output_____
###Markdown
Questions- Was the patient ventilated during (APACHE) day 1 of their stay?- Is the patient recorded as having diabetes? `apachepatientresult` table: The `apachepatientresult` table provides predictions made by the APACHE score (versions IV and IVa), including probability of mortality, length of stay, and ventilation days. See: http://eicu-crd.mit.edu/eicutables/apachePatientResult/
###Code
# set the where clause to select the stay of interest
%%bigquery apachepatientresult
SELECT *
FROM `physionet-data.eicu_crd_demo.apachepatientresult`
WHERE patientunitstayid = <your_id_here>
apachepatientresult
###Output
_____no_output_____
###Markdown
eICU Collaborative Research Database Notebook 2: Severity of illness. This notebook introduces high level admission details relating to a single patient stay, using the following tables:- patient- admissiondx- apacheapsvar- apachepredvar- apachepatientresult Load libraries and connect to the database
###Code
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# authenticate
auth.authenticate_user()
# Set up environment variables
project_id='sccm-datathon'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
###Output
_____no_output_____
###Markdown
Selecting a single patient stay. As we have seen, the patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/ Questions: Use your knowledge from the previous notebook and the online documentation (http://eicu-crd.mit.edu/) to answer the following questions:- Which column in the patient table is distinct for each stay in the ICU (similar to `icustay_id` in MIMIC-III)?- Which column is unique for each patient (similar to `subject_id` in MIMIC-III)?
###Code
# view distinct ids
%%bigquery
SELECT DISTINCT(patientunitstayid)
FROM `physionet-data.eicu_crd_demo.patient`
# set the where clause to select the stay of interest
%%bigquery patient
SELECT *
FROM `physionet-data.eicu_crd_demo.patient`
WHERE patientunitstayid = <your_id_here>
patient
###Output
_____no_output_____
###Markdown
Questions- Which type of unit was the patient admitted to? Hint: Try `patient['unittype']` or `patient.unittype`- What year was the patient discharged from the ICU? Hint: You can view the table columns with `patient.columns`- What was the status of the patient upon discharge from the unit? The admissiondx table: The `admissiondx` table contains the primary diagnosis for admission to the ICU according to the APACHE scoring criteria. For more detail, see: http://eicu-crd.mit.edu/eicutables/admissiondx/
###Code
# set the where clause to select the stay of interest
%%bigquery admissiondx
SELECT *
FROM `physionet-data.eicu_crd_demo.admissiondx`
WHERE patientunitstayid = <your_id_here>
# View the columns in this data
admissiondx.columns
# View the data
admissiondx.head()
# Set the display options to avoid truncating the text
pd.set_option('display.max_colwidth', -1)
admissiondx.admitdxpath
###Output
_____no_output_____
###Markdown
Questions- What was the primary reason for admission?- How soon after admission to the ICU was the diagnosis recorded in eCareManager? Hint: The `offset` columns indicate the time in minutes after admission to the ICU. The apacheapsvar table: The apacheapsvar table contains the variables used to calculate the Acute Physiology Score (APS) III for patients. APS-III is an established method of summarizing patient severity of illness on admission to the ICU, taking the "worst" observations for a patient in a 24 hour period. The score is part of the Acute Physiology Age Chronic Health Evaluation (APACHE) system of equations for predicting outcomes for ICU patients. See: http://eicu-crd.mit.edu/eicutables/apacheApsVar/
###Code
# set the where clause to select the stay of interest
%%bigquery apacheapsvar
SELECT *
FROM `physionet-data.eicu_crd_demo.apacheapsvar`
WHERE patientunitstayid = <your_id_here>
apacheapsvar.head()
###Output
_____no_output_____
###Markdown
Questions- What was the 'worst' heart rate recorded for the patient during the scoring period?- Was the patient oriented and able to converse normally on the day of admission? (hint: the verbal element refers to the Glasgow Coma Scale). apachepredvar table: The apachepredvar table provides variables underlying the APACHE predictions. Acute Physiology Age Chronic Health Evaluation (APACHE) consists of a group of equations used for predicting outcomes in critically ill patients. See: http://eicu-crd.mit.edu/eicutables/apachePredVar/
###Code
# set the where clause to select the stay of interest
%%bigquery apachepredvar
SELECT *
FROM `physionet-data.eicu_crd_demo.apachepredvar`
WHERE patientunitstayid = <your_id_here>
apachepredvar.columns
###Output
_____no_output_____
###Markdown
Questions- Was the patient ventilated during (APACHE) day 1 of their stay?- Is the patient recorded as having diabetes? `apachepatientresult` table: The `apachepatientresult` table provides predictions made by the APACHE score (versions IV and IVa), including probability of mortality, length of stay, and ventilation days. See: http://eicu-crd.mit.edu/eicutables/apachePatientResult/
###Code
# set the where clause to select the stay of interest
%%bigquery apachepatientresult
SELECT *
FROM `physionet-data.eicu_crd_demo.apachepatientresult`
WHERE patientunitstayid = <your_id_here>
apachepatientresult
###Output
_____no_output_____
###Markdown
eICU Collaborative Research Database Notebook 2: Severity of illness. This notebook introduces high level admission details relating to a single patient stay, using the following tables:- patient- admissiondx- apacheapsvar- apachepredvar- apachepatientresult Load libraries and connect to the database
###Code
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# authenticate
auth.authenticate_user()
# Set up environment variables
project_id='bidmc-datathon'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
###Output
_____no_output_____
###Markdown
Selecting a single patient stay. As we have seen, the patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/ Questions: Use your knowledge from the previous notebook and the online documentation (http://eicu-crd.mit.edu/) to answer the following questions:- Which column in the patient table is distinct for each stay in the ICU (similar to `icustay_id` in MIMIC-III)?- Which column is unique for each patient (similar to `subject_id` in MIMIC-III)?
###Code
# view distinct ids
%%bigquery
SELECT DISTINCT(patientunitstayid)
FROM `physionet-data.eicu_crd_demo.patient`
# set the where clause to select the stay of interest
%%bigquery patient
SELECT *
FROM `physionet-data.eicu_crd_demo.patient`
WHERE patientunitstayid = <your_id_here>
patient
###Output
_____no_output_____
###Markdown
Questions- Which type of unit was the patient admitted to? Hint: Try `patient['unittype']` or `patient.unittype`- What year was the patient discharged from the ICU? Hint: You can view the table columns with `patient.columns`- What was the status of the patient upon discharge from the unit? The admissiondx table: The `admissiondx` table contains the primary diagnosis for admission to the ICU according to the APACHE scoring criteria. For more detail, see: http://eicu-crd.mit.edu/eicutables/admissiondx/
###Code
# set the where clause to select the stay of interest
%%bigquery admissiondx
SELECT *
FROM `physionet-data.eicu_crd_demo.admissiondx`
WHERE patientunitstayid = <your_id_here>
# View the columns in this data
admissiondx.columns
# View the data
admissiondx.head()
# Set the display options to avoid truncating the text
pd.set_option('display.max_colwidth', -1)
admissiondx.admitdxpath
###Output
_____no_output_____
###Markdown
Questions- What was the primary reason for admission?- How soon after admission to the ICU was the diagnosis recorded in eCareManager? Hint: The `offset` columns indicate the time in minutes after admission to the ICU. The apacheapsvar table: The apacheapsvar table contains the variables used to calculate the Acute Physiology Score (APS) III for patients. APS-III is an established method of summarizing patient severity of illness on admission to the ICU, taking the "worst" observations for a patient in a 24 hour period. The score is part of the Acute Physiology Age Chronic Health Evaluation (APACHE) system of equations for predicting outcomes for ICU patients. See: http://eicu-crd.mit.edu/eicutables/apacheApsVar/
###Code
# set the where clause to select the stay of interest
%%bigquery apacheapsvar
SELECT *
FROM `physionet-data.eicu_crd_demo.apacheapsvar`
WHERE patientunitstayid = <your_id_here>
apacheapsvar.head()
###Output
_____no_output_____
###Markdown
Questions- What was the 'worst' heart rate recorded for the patient during the scoring period?- Was the patient oriented and able to converse normally on the day of admission? (hint: the verbal element refers to the Glasgow Coma Scale). apachepredvar table: The apachepredvar table provides variables underlying the APACHE predictions. Acute Physiology Age Chronic Health Evaluation (APACHE) consists of a group of equations used for predicting outcomes in critically ill patients. See: http://eicu-crd.mit.edu/eicutables/apachePredVar/
###Code
# set the where clause to select the stay of interest
%%bigquery apachepredvar
SELECT *
FROM `physionet-data.eicu_crd_demo.apachepredvar`
WHERE patientunitstayid = <your_id_here>
apachepredvar.columns
###Output
_____no_output_____
###Markdown
Questions- Was the patient ventilated during (APACHE) day 1 of their stay?- Is the patient recorded as having diabetes? `apachepatientresult` table: The `apachepatientresult` table provides predictions made by the APACHE score (versions IV and IVa), including probability of mortality, length of stay, and ventilation days. See: http://eicu-crd.mit.edu/eicutables/apachePatientResult/
###Code
# set the where clause to select the stay of interest
%%bigquery apachepatientresult
SELECT *
FROM `physionet-data.eicu_crd_demo.apachepatientresult`
WHERE patientunitstayid = <your_id_here>
apachepatientresult
###Output
_____no_output_____ |
notebooks/Experimental/Ishan/PublishTest.ipynb | ###Markdown
TODO (QOL): Make sum work directly on Private Syft Tensors, instead of having to call it on Tensor.child
###Code
%load_ext autoreload
%autoreload 2
import syft as sy
import numpy as np
from syft.core.adp.entity import Entity
person1 = Entity(name="1")
person2 = Entity(name="2")
entities = np.random.choice([person1, person2], size=10**6)
tensor = sy.Tensor(np.ones(10**6, dtype=np.int32)).private(min_val=0, max_val=2, entities=entities)
from syft.core.tensor.autodp.ndim_entity_phi import NDimEntityPhiTensor as NDEPT
assert isinstance(tensor.child, NDEPT), f"Please double check that 'ndept' is set to True in the _private() method in the ancestors.py file"
result = tensor.sum()
result = tensor.child.sum()
assert result.value == 10**6
assert result.max_val/10**6 == 2
from nacl.signing import VerifyKey, SigningKey
key = SigningKey.generate()
vk = key.verify_key
from syft.core.adp.adversarial_accountant import AdversarialAccountant
acc = AdversarialAccountant()
result.publish(accountant=acc, sigma=200_000, user_key=vk)
from jax import numpy as jnp
result.max_val
result.min_val
result.max_val - result.min_val
jnp.sum(jnp.square(result.max_val - result.min_val))
jnp.sum(jnp.square(result.value))
result.value
np.square(result.value) # Correct
jnp.square(result.value) # ERROR!!
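# Likely explanation (an assumption, not verified in this notebook): result.value is ~10**6 and
# JAX defaults to 32-bit dtypes, so the squared value (10**12) overflows int32, while NumPy
# keeps a 64-bit dtype. Enabling 64-bit mode at startup, e.g.
# jax.config.update("jax_enable_x64", True), would be one way to confirm this.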
jnp.sqrt(jnp.sum(jnp.square(result.value)))
result.value.mean()
jnp.sqrt(jnp.sum(result.value))
import numpy as np
np.square(result.value)
%%timeit
np.sqrt(np.sum(np.square(result.value)))
%%timeit
jnp.sqrt(jnp.sum(jnp.square(result.value)))
%%time
np.sqrt(np.sum(np.square(result.value)))
%%time
jnp.sqrt(jnp.sum(jnp.square(result.value)))
%%time
np.sum(np.square(result.value))
%%time
np.ones_like(result.value)
%%time
jnp.ones_like(result.value)
###Output
CPU times: user 471 µs, sys: 41 µs, total: 512 µs
Wall time: 526 µs
###Markdown
TODO (QOL): Make sum work directly on Private Syft Tensors, instead of having to call it on Tensor.child
###Code
%load_ext autoreload
%autoreload 2
import syft as sy
import numpy as np
from syft.core.adp.entity import DataSubject
person1 = DataSubject(name="1")
person2 = DataSubject(name="2")
entities = np.random.choice([person1, person2], size=10**6)
tensor = sy.Tensor(np.ones(10**6, dtype=np.int32)).private(min_val=0, max_val=2, entities=entities)
from syft.core.tensor.autodp.ndim_entity_phi import PhiTensor as NDEPT
assert isinstance(tensor.child, NDEPT), f"Please double check that 'ndept' is set to True in the _private() method in the ancestors.py file"
result = tensor.sum()
result = tensor.child.sum()
assert result.value == 10**6
assert result.max_val/10**6 == 2
from nacl.signing import VerifyKey, SigningKey
key = SigningKey.generate()
vk = key.verify_key
from syft.core.adp.adversarial_accountant import AdversarialAccountant
acc = AdversarialAccountant()
result.publish(accountant=acc, sigma=200_000, user_key=vk)
from jax import numpy as jnp
result.max_val
result.min_val
result.max_val - result.min_val
jnp.sum(jnp.square(result.max_val - result.min_val))
jnp.sum(jnp.square(result.value))
result.value
np.square(result.value) # Correct
jnp.square(result.value) # ERROR!!
jnp.sqrt(jnp.sum(jnp.square(result.value)))
result.value.mean()
jnp.sqrt(jnp.sum(result.value))
import numpy as np
np.square(result.value)
%%timeit
np.sqrt(np.sum(np.square(result.value)))
%%timeit
jnp.sqrt(jnp.sum(jnp.square(result.value)))
%%time
np.sqrt(np.sum(np.square(result.value)))
%%time
jnp.sqrt(jnp.sum(jnp.square(result.value)))
%%time
np.sum(np.square(result.value))
%%time
np.ones_like(result.value)
%%time
jnp.ones_like(result.value)
###Output
CPU times: user 471 µs, sys: 41 µs, total: 512 µs
Wall time: 526 µs
|
yds_mapping_2013_ds.ipynb | ###Markdown
This script maps the 2013 Galway traffic data (Bridge 1)
###Code
#python list to store csv data as mapping suggest
#Site No Dataset Survey Company Client Project Reference Method of Survey Address Latitude Longtitude Easting Northing Date From Date To Time From Time To Observations Weather Junction Type Vehicle Type Direction Count
#Site No,Dataset,Survey Company,Client,Project Reference,Method of Survey,Address,Latitude,Longtitude,Easting,Northing,Date From,Date To,Time From,Time To,Observations,Weather,Junction Type,Vehicle Type,Direction,Count
header=["Site No","Dataset","Survey Company","Client","Project Reference","Method of Survey","Address","Latitude","Longtitude","Easting","Northing",
"Date From","Date To","Time From","Time To","Observations","Weather","Junction Type","Vehicle Type","Direction","Count"]
full_data_template = ["","Galway 2013 Br. 1","Idaso Ltd.","Galway City Council","2013 Annual Survey","JTC","Quincentenary Bridge",53.282696,-9.06065,495956.4,5903720.6,"","","","","Nothing to report","Sunny and generally dry but there were some light showers","Link","","",""]
data_template = ["","Galway 2013 Br. 1","","Galway City Council","","","Quincentenary Bridge",53.282696,-9.06065,495956.4,5903720.6,"","","","","","","Link","","",""]
directions_alphabet = ["", "", "", "", "", "", "A TO F", "A TO E", "A TO D", "A TO C", "A TO B", "A TO A", "B TO A", "B TO F", "B TO E", "B TO D", "B TO C", "B TO B", "C TO B", "C TO A", "C TO F", "C TO E", "C TO D", "C TO C", "D TO C", "D TO B", "D TO A", "D TO F", "D TO E", "D TO D", "E TO D", "E TO C", "E TO B", "E TO A", "E TO F", "E TO E", "F TO E", "F TO D", "F TO C", "F TO B", "F TO A", "F TO F"]
#outputfile_name="data/2013/mapped-final/bridge1_2013_eastbound_verified.csv"
outputfile_name="data/2013/mapped-final/bridge1_2013_westbound_verified.csv"
#sourcefile_name='data/2013/refined/Br1_Eastbound_2013.csv'
sourcefile_name='data/2013/refined/Br1_Westbound_2013.csv'
vich_type = ["Motorcycles","Cars","LGV","HGV","Buses",]
"""
mapping of vehicle types:
Class Description Comments
1 Cyclist Some cyclists may not be detected if they cross at the same time as a vehicle or use pavement
2 M/Cycle
changed to "Motorcycles"
3 Car
4 Van Some vans will have same axle with as a car, or a small HGV, so may be classed elsewhere
5 Rigid 2 Axle
6 Rigid 3 Axle
7 Rigid 4 Axle
4 to 7 >> LGV
8 3 Axle HGv
9 4 Axle HGV
10 5+Axle HGV
8 to 10 >> HGV
11 Bus
changed to "Buses"
"""
directions = ["Westbound","Eastbound"]
"""
mapping types routes:
(For the record B to A = Eastbound and A to B = Westbound).
"""
counts_in_rows = [3,5,7,9,11]
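#note: counts_in_rows pairs with vich_type above -- each value is the column index of that
#vehicle class's hourly count in a source row (Motorcycles->3, Cars->5, LGV->7, HGV->9, Buses->11)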
#times_hourly = ["00:00","01:00","02:00","03:00","04:00","05:00","06:00","07:00","08:00","08:00","09:00","10:00","11:00"]
#Read csv file data row by row
#this file wil only fill sections (0,11,12,13,14,19,20,21)
import csv
with open(sourcefile_name, 'rb') as source:
    #write data out again according to the schema
#import csv
with open(outputfile_name, 'w+') as output:
csv_sourcereader = csv.reader(source, delimiter=',', quotechar='\"')
outputwriter = csv.writer(output, delimiter=',', quotechar='\"')
#putting the header
outputwriter.writerow(header)
        #counter used to skip the file header rows
c = 0
#list to get all possible readings
quinque_data = []
#csv reader object to list
sourcereader = list(csv_sourcereader)
for r in xrange (0,len(sourcereader)):
#print ', '.join(row)
print sourcereader[r]
import copy
            #get both possible directions (A-B, B-A)
#data_A_B = copy.deepcopy(data_template)
#data_B_A = copy.deepcopy(data_template)
data = copy.deepcopy(data_template)
#print data
#ignoring headers
if c > 1 :
for x in xrange(0,5):
#a-b
#data_A_B[0]=row[0] # Site NO
#data_A_B[11]=row[2] # date from
#data_A_B[12]=row[2] # date to
#data_A_B[13]=row[3] # time from
#data_A_B[14]=row[4] # time to
#data_A_B[18]=row[5] # Vehicle Type
#b-a
#data_B_A[0]=row[0] # Site NO
#data_B_A[11]=row[2] # date from
#data_B_A[12]=row[2] # date to
#data_B_A[13]=row[3] # time from
#data_B_A[14]=row[4] # time to
#data_B_A[18]=row[5] # Vehicle Type
data[0]="" # Site NO
data[11]=sourcereader[r][0] # date from
data[12]=sourcereader[r][0] # date to
"""
#adding 000
if len(str(sourcereader[r][1]))<4:
if len(str(sourcereader[r][1])) == 1:
data[13]="\'000"+str(sourcereader[r][1]) # time from
if len(str(sourcereader[r][1])) == 2:
data[13]="\'00"+str(sourcereader[r][1]) # time from
if len(str(sourcereader[r][1])) == 3:
data[13]="\'0"+str(sourcereader[r][1]) # time from
else:
data[13]="\'"+str(sourcereader[r][1]) # time from
"""
data[13] = "\'"+ str(sourcereader[r][1]) # time from
                    #last hour of the day: avoid an index-out-of-range on r+1
if sourcereader[r][1] != "23:00":
"""
#adding 000
if len(str(sourcereader[r+1][1])) < 4:
if len(str(sourcereader[r+1][1])) == 1:
data[14]="\'000"+str(sourcereader[r+1][1]) # time to
if len(str(sourcereader[r+1][1])) == 2:
data[14]="\'00"+str(sourcereader[r+1][1]) # time to
if len(str(sourcereader[r+1][1])) == 3:
data[14]="\'0"+str(sourcereader[r+1][1]) # time to
else:
"""
data[14] = "\'"+ str(sourcereader[r+1][1]) # time to
elif sourcereader[r][1] == "23:00":
data[14]="\'24:00" # time to
data[18]=vich_type[x] # Vehicle Type
data[19]=sourcereader[r][13] # direction
data[20]=sourcereader[r][counts_in_rows[x]] # count
#appending data row to the 5 rows batch
quinque_data.append(copy.deepcopy(data))
for data_row in quinque_data:
outputwriter.writerow(data_row)
c = c + 1
#print data
#del data_B_A [:]
#del data_A_B[:]
del data[:]
del quinque_data [:]
###Output
['Date From', 'Time', 'Total', 'Bin 1', 'Bin 1', 'Bin 2', 'Bin 2', 'Bin 3', 'Bin 3', 'Bin 4', 'Bin 4', 'Bin 5', 'Bin 5', 'dir']
['', 'Begin', 'Vol.', 'Motorcycles', '%', 'Cars', '%', 'LGV', '%', 'HGV', '%', 'Buses', '%', 'Westbound']
['25/11/13', '00:00', '115', '0', '0', '109', '94.78', '5', '4.35', '1', '0.87', '0', '0', 'Westbound']
['25/11/13', '01:00', '52', '0', '0', '47', '90.38', '3', '5.77', '2', '3.85', '0', '0', 'Westbound']
['25/11/13', '02:00', '27', '1', '3.7', '25', '92.59', '1', '3.7', '0', '0', '0', '0', 'Westbound']
['25/11/13', '03:00', '28', '0', '0', '24', '85.71', '1', '3.57', '3', '10.71', '0', '0', 'Westbound']
['25/11/13', '04:00', '32', '0', '0', '25', '78.13', '5', '15.63', '1', '3.13', '1', '3.13', 'Westbound']
['25/11/13', '05:00', '72', '1', '1.39', '56', '77.78', '11', '15.28', '3', '4.17', '1', '1.39', 'Westbound']
['25/11/13', '06:00', '215', '0', '0', '160', '74.42', '35', '16.28', '18', '8.37', '2', '0.93', 'Westbound']
['25/11/13', '07:00', '943', '1', '0.11', '799', '84.73', '72', '7.64', '70', '7.42', '1', '0.11', 'Westbound']
['25/11/13', '08:00', '1546', '11', '0.71', '1382', '89.39', '52', '3.36', '100', '6.47', '1', '0.06', 'Westbound']
['25/11/13', '09:00', '1161', '5', '0.43', '995', '85.7', '86', '7.41', '71', '6.12', '4', '0.34', 'Westbound']
['25/11/13', '10:00', '976', '1', '0.1', '837', '85.76', '86', '8.81', '52', '5.33', '0', '0', 'Westbound']
['25/11/13', '11:00', '1036', '2', '0.19', '899', '86.78', '87', '8.4', '47', '4.54', '1', '0.1', 'Westbound']
['25/11/13', '12:00', '1137', '6', '0.53', '1008', '88.65', '77', '6.77', '44', '3.87', '2', '0.18', 'Westbound']
['25/11/13', '13:00', '1223', '1', '0.08', '1098', '89.78', '76', '6.21', '48', '3.92', '0', '0', 'Westbound']
['25/11/13', '14:00', '1154', '2', '0.17', '1008', '87.35', '96', '8.32', '48', '4.16', '0', '0', 'Westbound']
['25/11/13', '15:00', '1186', '2', '0.17', '1028', '86.68', '79', '6.66', '74', '6.24', '3', '0.25', 'Westbound']
['25/11/13', '16:00', '1492', '5', '0.34', '1347', '90.28', '72', '4.83', '68', '4.56', '0', '0', 'Westbound']
['25/11/13', '17:00', '1635', '8', '0.49', '1452', '88.81', '51', '3.12', '123', '7.52', '1', '0.06', 'Westbound']
['25/11/13', '18:00', '1371', '7', '0.51', '1208', '88.11', '68', '4.96', '88', '6.42', '0', '0', 'Westbound']
['25/11/13', '19:00', '991', '3', '0.3', '904', '91.22', '41', '4.14', '43', '4.34', '0', '0', 'Westbound']
['25/11/13', '20:00', '693', '0', '0', '624', '90.04', '40', '5.77', '29', '4.18', '0', '0', 'Westbound']
['25/11/13', '21:00', '535', '2', '0.37', '493', '92.15', '24', '4.49', '16', '2.99', '0', '0', 'Westbound']
['25/11/13', '22:00', '357', '1', '0.28', '334', '93.56', '14', '3.92', '8', '2.24', '0', '0', 'Westbound']
['25/11/13', '23:00', '282', '1', '0.35', '263', '93.26', '13', '4.61', '5', '1.77', '0', '0', 'Westbound']
['26/11/13', '00:00', '146', '0', '0', '138', '94.52', '6', '4.11', '2', '1.37', '0', '0', 'Westbound']
['26/11/13', '01:00', '69', '0', '0', '63', '91.3', '5', '7.25', '1', '1.45', '0', '0', 'Westbound']
['26/11/13', '02:00', '53', '0', '0', '50', '94.34', '2', '3.77', '1', '1.89', '0', '0', 'Westbound']
['26/11/13', '03:00', '18', '0', '0', '18', '100', '0', '0', '0', '0', '0', '0', 'Westbound']
['26/11/13', '04:00', '28', '0', '0', '17', '60.71', '5', '17.86', '4', '14.29', '2', '7.14', 'Westbound']
['26/11/13', '05:00', '81', '0', '0', '66', '81.48', '5', '6.17', '10', '12.35', '0', '0', 'Westbound']
['26/11/13', '06:00', '199', '1', '0.5', '146', '73.37', '31', '15.58', '18', '9.05', '3', '1.51', 'Westbound']
['26/11/13', '07:00', '983', '3', '0.31', '857', '87.18', '70', '7.12', '52', '5.29', '1', '0.1', 'Westbound']
['26/11/13', '08:00', '1460', '6', '0.41', '1303', '89.25', '52', '3.56', '97', '6.64', '2', '0.14', 'Westbound']
['26/11/13', '09:00', '1339', '9', '0.67', '1165', '87.01', '90', '6.72', '72', '5.38', '3', '0.22', 'Westbound']
['26/11/13', '10:00', '1028', '1', '0.1', '878', '85.41', '92', '8.95', '54', '5.25', '3', '0.29', 'Westbound']
['26/11/13', '11:00', '979', '1', '0.1', '828', '84.58', '80', '8.17', '70', '7.15', '0', '0', 'Westbound']
['26/11/13', '12:00', '1071', '2', '0.19', '924', '86.27', '84', '7.84', '59', '5.51', '2', '0.19', 'Westbound']
['26/11/13', '13:00', '1226', '6', '0.49', '1081', '88.17', '72', '5.87', '67', '5.46', '0', '0', 'Westbound']
['26/11/13', '14:00', '1121', '1', '0.09', '956', '85.28', '92', '8.21', '71', '6.33', '1', '0.09', 'Westbound']
['26/11/13', '15:00', '1242', '4', '0.32', '1097', '88.33', '74', '5.96', '63', '5.07', '4', '0.32', 'Westbound']
['26/11/13', '16:00', '1413', '7', '0.5', '1261', '89.24', '51', '3.61', '93', '6.58', '1', '0.07', 'Westbound']
['26/11/13', '17:00', '1365', '9', '0.66', '1182', '86.59', '38', '2.78', '135', '9.89', '1', '0.07', 'Westbound']
['26/11/13', '18:00', '1453', '6', '0.41', '1287', '88.58', '45', '3.1', '110', '7.57', '5', '0.34', 'Westbound']
['26/11/13', '19:00', '1040', '6', '0.58', '956', '91.92', '37', '3.56', '41', '3.94', '0', '0', 'Westbound']
['26/11/13', '20:00', '684', '3', '0.44', '629', '91.96', '29', '4.24', '23', '3.36', '0', '0', 'Westbound']
['26/11/13', '21:00', '555', '1', '0.18', '523', '94.23', '16', '2.88', '15', '2.7', '0', '0', 'Westbound']
['26/11/13', '22:00', '415', '0', '0', '392', '94.46', '13', '3.13', '10', '2.41', '0', '0', 'Westbound']
['26/11/13', '23:00', '293', '0', '0', '279', '95.22', '8', '2.73', '6', '2.05', '0', '0', 'Westbound']
['27/11/13', '00:00', '178', '1', '0.56', '170', '95.51', '5', '2.81', '2', '1.12', '0', '0', 'Westbound']
['27/11/13', '01:00', '65', '0', '0', '55', '84.62', '8', '12.31', '2', '3.08', '0', '0', 'Westbound']
['27/11/13', '02:00', '45', '0', '0', '39', '86.67', '2', '4.44', '3', '6.67', '1', '2.22', 'Westbound']
['27/11/13', '03:00', '18', '1', '5.56', '16', '88.89', '1', '5.56', '0', '0', '0', '0', 'Westbound']
['27/11/13', '04:00', '22', '0', '0', '15', '68.18', '3', '13.64', '2', '9.09', '2', '9.09', 'Westbound']
['27/11/13', '05:00', '92', '0', '0', '70', '76.09', '10', '10.87', '11', '11.96', '1', '1.09', 'Westbound']
['27/11/13', '06:00', '204', '1', '0.49', '160', '78.43', '32', '15.69', '9', '4.41', '2', '0.98', 'Westbound']
['27/11/13', '07:00', '974', '5', '0.51', '847', '86.96', '57', '5.85', '64', '6.57', '1', '0.1', 'Westbound']
['27/11/13', '08:00', '1450', '4', '0.28', '1282', '88.41', '51', '3.52', '108', '7.45', '5', '0.34', 'Westbound']
['27/11/13', '09:00', '1237', '6', '0.49', '1074', '86.82', '90', '7.28', '66', '5.34', '1', '0.08', 'Westbound']
['27/11/13', '10:00', '1070', '3', '0.28', '932', '87.1', '86', '8.04', '47', '4.39', '2', '0.19', 'Westbound']
['27/11/13', '11:00', '1008', '3', '0.3', '880', '87.3', '83', '8.23', '39', '3.87', '3', '0.3', 'Westbound']
['27/11/13', '12:00', '1078', '6', '0.56', '922', '85.53', '90', '8.35', '59', '5.47', '1', '0.09', 'Westbound']
['27/11/13', '13:00', '1183', '7', '0.59', '1054', '89.1', '67', '5.66', '52', '4.4', '3', '0.25', 'Westbound']
['27/11/13', '14:00', '1140', '2', '0.18', '990', '86.84', '83', '7.28', '62', '5.44', '3', '0.26', 'Westbound']
['27/11/13', '15:00', '1184', '2', '0.17', '1045', '88.26', '76', '6.42', '59', '4.98', '2', '0.17', 'Westbound']
['27/11/13', '16:00', '1416', '6', '0.42', '1276', '90.11', '48', '3.39', '85', '6', '1', '0.07', 'Westbound']
['27/11/13', '17:00', '1341', '6', '0.45', '1203', '89.71', '38', '2.83', '92', '6.86', '2', '0.15', 'Westbound']
['27/11/13', '18:00', '1393', '8', '0.57', '1216', '87.29', '55', '3.95', '109', '7.82', '5', '0.36', 'Westbound']
['27/11/13', '19:00', '1082', '5', '0.46', '990', '91.5', '37', '3.42', '50', '4.62', '0', '0', 'Westbound']
['27/11/13', '20:00', '762', '2', '0.26', '698', '91.6', '34', '4.46', '27', '3.54', '1', '0.13', 'Westbound']
['27/11/13', '21:00', '572', '2', '0.35', '531', '92.83', '23', '4.02', '16', '2.8', '0', '0', 'Westbound']
['27/11/13', '22:00', '451', '1', '0.22', '421', '93.35', '18', '3.99', '11', '2.44', '0', '0', 'Westbound']
['27/11/13', '23:00', '313', '0', '0', '297', '94.89', '12', '3.83', '4', '1.28', '0', '0', 'Westbound']
['28/11/13', '00:00', '175', '0', '0', '163', '93.14', '5', '2.86', '6', '3.43', '1', '0.57', 'Westbound']
['28/11/13', '01:00', '57', '0', '0', '52', '91.23', '4', '7.02', '1', '1.75', '0', '0', 'Westbound']
['28/11/13', '02:00', '53', '0', '0', '47', '88.68', '6', '11.32', '0', '0', '0', '0', 'Westbound']
['28/11/13', '03:00', '24', '0', '0', '20', '83.33', '3', '12.5', '1', '4.17', '0', '0', 'Westbound']
['28/11/13', '04:00', '33', '0', '0', '24', '72.73', '4', '12.12', '4', '12.12', '1', '3.03', 'Westbound']
['28/11/13', '05:00', '82', '1', '1.22', '59', '71.95', '11', '13.41', '11', '13.41', '0', '0', 'Westbound']
['28/11/13', '06:00', '193', '1', '0.52', '138', '71.5', '34', '17.62', '17', '8.81', '3', '1.55', 'Westbound']
['28/11/13', '07:00', '1059', '8', '0.76', '921', '86.97', '73', '6.89', '56', '5.29', '1', '0.09', 'Westbound']
['28/11/13', '08:00', '1544', '5', '0.32', '1391', '90.09', '59', '3.82', '88', '5.7', '1', '0.06', 'Westbound']
['28/11/13', '09:00', '1309', '5', '0.38', '1134', '86.63', '96', '7.33', '73', '5.58', '1', '0.08', 'Westbound']
['28/11/13', '10:00', '1106', '7', '0.63', '936', '84.63', '100', '9.04', '59', '5.33', '4', '0.36', 'Westbound']
['28/11/13', '11:00', '1070', '6', '0.56', '925', '86.45', '83', '7.76', '56', '5.23', '0', '0', 'Westbound']
['28/11/13', '12:00', '1179', '4', '0.34', '1041', '88.3', '77', '6.53', '53', '4.5', '4', '0.34', 'Westbound']
['28/11/13', '13:00', '1156', '10', '0.87', '1001', '86.59', '82', '7.09', '62', '5.36', '1', '0.09', 'Westbound']
['28/11/13', '14:00', '1171', '2', '0.17', '1001', '85.48', '91', '7.77', '76', '6.49', '1', '0.09', 'Westbound']
['28/11/13', '15:00', '1137', '2', '0.18', '1014', '89.18', '78', '6.86', '42', '3.69', '1', '0.09', 'Westbound']
['28/11/13', '16:00', '1361', '5', '0.37', '1209', '88.83', '69', '5.07', '77', '5.66', '1', '0.07', 'Westbound']
['28/11/13', '17:00', '1432', '8', '0.56', '1277', '89.18', '33', '2.3', '113', '7.89', '1', '0.07', 'Westbound']
['28/11/13', '18:00', '1487', '6', '0.4', '1318', '88.63', '45', '3.03', '114', '7.67', '4', '0.27', 'Westbound']
['28/11/13', '19:00', '1144', '3', '0.26', '1030', '90.03', '45', '3.93', '64', '5.59', '2', '0.17', 'Westbound']
['28/11/13', '20:00', '855', '2', '0.23', '778', '90.99', '39', '4.56', '35', '4.09', '1', '0.12', 'Westbound']
['28/11/13', '21:00', '690', '1', '0.14', '642', '93.04', '30', '4.35', '17', '2.46', '0', '0', 'Westbound']
['28/11/13', '22:00', '514', '1', '0.19', '479', '93.19', '26', '5.06', '8', '1.56', '0', '0', 'Westbound']
['28/11/13', '23:00', '320', '1', '0.31', '297', '92.81', '18', '5.63', '4', '1.25', '0', '0', 'Westbound']
['29/11/13', '00:00', '211', '2', '0.95', '196', '92.89', '11', '5.21', '2', '0.95', '0', '0', 'Westbound']
['29/11/13', '01:00', '106', '0', '0', '97', '91.51', '7', '6.6', '2', '1.89', '0', '0', 'Westbound']
['29/11/13', '02:00', '51', '0', '0', '48', '94.12', '2', '3.92', '1', '1.96', '0', '0', 'Westbound']
['29/11/13', '03:00', '49', '0', '0', '47', '95.92', '0', '0', '2', '4.08', '0', '0', 'Westbound']
['29/11/13', '04:00', '28', '0', '0', '19', '67.86', '3', '10.71', '5', '17.86', '1', '3.57', 'Westbound']
['29/11/13', '05:00', '78', '0', '0', '53', '67.95', '14', '17.95', '10', '12.82', '1', '1.28', 'Westbound']
['29/11/13', '06:00', '218', '1', '0.46', '160', '73.39', '37', '16.97', '17', '7.8', '3', '1.38', 'Westbound']
['29/11/13', '07:00', '992', '5', '0.5', '866', '87.3', '64', '6.45', '53', '5.34', '4', '0.4', 'Westbound']
['29/11/13', '08:00', '1434', '9', '0.63', '1282', '89.4', '54', '3.77', '86', '6', '3', '0.21', 'Westbound']
['29/11/13', '09:00', '1114', '2', '0.18', '949', '85.19', '98', '8.8', '63', '5.66', '2', '0.18', 'Westbound']
['29/11/13', '10:00', '1061', '0', '0', '914', '86.15', '97', '9.14', '46', '4.34', '4', '0.38', 'Westbound']
['29/11/13', '11:00', '1128', '3', '0.27', '991', '87.85', '80', '7.09', '53', '4.7', '1', '0.09', 'Westbound']
['29/11/13', '12:00', '1160', '5', '0.43', '1026', '88.45', '76', '6.55', '52', '4.48', '1', '0.09', 'Westbound']
['29/11/13', '13:00', '1378', '5', '0.36', '1219', '88.46', '78', '5.66', '74', '5.37', '2', '0.15', 'Westbound']
['29/11/13', '14:00', '1270', '8', '0.63', '1124', '88.5', '78', '6.14', '59', '4.65', '1', '0.08', 'Westbound']
['29/11/13', '15:00', '1478', '4', '0.27', '1339', '90.6', '58', '3.92', '76', '5.14', '1', '0.07', 'Westbound']
['29/11/13', '16:00', '1403', '7', '0.5', '1263', '90.02', '55', '3.92', '78', '5.56', '0', '0', 'Westbound']
['29/11/13', '17:00', '1430', '9', '0.63', '1266', '88.53', '59', '4.13', '94', '6.57', '2', '0.14', 'Westbound']
['29/11/13', '18:00', '1245', '5', '0.4', '1131', '90.84', '50', '4.02', '58', '4.66', '1', '0.08', 'Westbound']
['29/11/13', '19:00', '1101', '1', '0.09', '1003', '91.1', '49', '4.45', '45', '4.09', '3', '0.27', 'Westbound']
['29/11/13', '20:00', '947', '4', '0.42', '877', '92.61', '35', '3.7', '31', '3.27', '0', '0', 'Westbound']
['29/11/13', '21:00', '747', '6', '0.8', '694', '92.9', '29', '3.88', '17', '2.28', '1', '0.13', 'Westbound']
['29/11/13', '22:00', '398', '2', '0.5', '381', '95.73', '10', '2.51', '5', '1.26', '0', '0', 'Westbound']
['29/11/13', '23:00', '280', '0', '0', '262', '93.57', '12', '4.29', '6', '2.14', '0', '0', 'Westbound']
['30/11/13', '00:00', '197', '0', '0', '185', '93.91', '8', '4.06', '4', '2.03', '0', '0', 'Westbound']
['30/11/13', '01:00', '114', '0', '0', '107', '93.86', '4', '3.51', '3', '2.63', '0', '0', 'Westbound']
['30/11/13', '02:00', '65', '0', '0', '58', '89.23', '6', '9.23', '0', '0', '1', '1.54', 'Westbound']
['30/11/13', '03:00', '50', '0', '0', '46', '92', '2', '4', '2', '4', '0', '0', 'Westbound']
['30/11/13', '04:00', '51', '0', '0', '42', '82.35', '3', '5.88', '5', '9.8', '1', '1.96', 'Westbound']
['30/11/13', '05:00', '31', '0', '0', '21', '67.74', '6', '19.35', '4', '12.9', '0', '0', 'Westbound']
['30/11/13', '06:00', '102', '0', '0', '71', '69.61', '25', '24.51', '5', '4.9', '1', '0.98', 'Westbound']
['30/11/13', '07:00', '296', '0', '0', '256', '86.49', '25', '8.45', '13', '4.39', '2', '0.68', 'Westbound']
['30/11/13', '08:00', '500', '3', '0.6', '428', '85.6', '50', '10', '19', '3.8', '0', '0', 'Westbound']
['30/11/13', '09:00', '904', '0', '0', '808', '89.38', '64', '7.08', '32', '3.54', '0', '0', 'Westbound']
['30/11/13', '10:00', '1009', '3', '0.3', '896', '88.8', '63', '6.24', '47', '4.66', '0', '0', 'Westbound']
['30/11/13', '11:00', '1085', '3', '0.28', '975', '89.86', '69', '6.36', '37', '3.41', '1', '0.09', 'Westbound']
['30/11/13', '12:00', '1298', '5', '0.39', '1179', '90.83', '55', '4.24', '59', '4.55', '0', '0', 'Westbound']
['30/11/13', '13:00', '1264', '4', '0.32', '1144', '90.51', '46', '3.64', '70', '5.54', '0', '0', 'Westbound']
['30/11/13', '14:00', '1367', '14', '1.02', '1243', '90.93', '49', '3.58', '61', '4.46', '0', '0', 'Westbound']
['30/11/13', '15:00', '1170', '2', '0.17', '1060', '90.6', '52', '4.44', '55', '4.7', '1', '0.09', 'Westbound']
['30/11/13', '16:00', '1119', '5', '0.45', '1011', '90.35', '49', '4.38', '54', '4.83', '0', '0', 'Westbound']
['30/11/13', '17:00', '1066', '0', '0', '979', '91.84', '35', '3.28', '52', '4.88', '0', '0', 'Westbound']
['30/11/13', '18:00', '964', '6', '0.62', '898', '93.15', '36', '3.73', '24', '2.49', '0', '0', 'Westbound']
['30/11/13', '19:00', '845', '1', '0.12', '786', '93.02', '30', '3.55', '28', '3.31', '0', '0', 'Westbound']
['30/11/13', '20:00', '620', '2', '0.32', '585', '94.35', '24', '3.87', '9', '1.45', '0', '0', 'Westbound']
['30/11/13', '21:00', '457', '1', '0.22', '435', '95.19', '12', '2.63', '9', '1.97', '0', '0', 'Westbound']
['30/11/13', '22:00', '384', '1', '0.26', '355', '92.45', '17', '4.43', '11', '2.86', '0', '0', 'Westbound']
['30/11/13', '23:00', '232', '0', '0', '216', '93.1', '10', '4.31', '6', '2.59', '0', '0', 'Westbound']
['01/12/13', '00:00', '186', '2', '1.08', '173', '93.01', '11', '5.91', '0', '0', '0', '0', 'Westbound']
['01/12/13', '01:00', '143', '1', '0.7', '137', '95.8', '4', '2.8', '1', '0.7', '0', '0', 'Westbound']
['01/12/13', '02:00', '102', '0', '0', '93', '91.18', '9', '8.82', '0', '0', '0', '0', 'Westbound']
['01/12/13', '03:00', '87', '1', '1.15', '81', '93.1', '3', '3.45', '2', '2.3', '0', '0', 'Westbound']
['01/12/13', '04:00', '57', '0', '0', '53', '92.98', '4', '7.02', '0', '0', '0', '0', 'Westbound']
['01/12/13', '05:00', '47', '0', '0', '39', '82.98', '8', '17.02', '0', '0', '0', '0', 'Westbound']
['01/12/13', '06:00', '50', '0', '0', '39', '78', '9', '18', '2', '4', '0', '0', 'Westbound']
['01/12/13', '07:00', '211', '2', '0.95', '198', '93.84', '10', '4.74', '1', '0.47', '0', '0', 'Westbound']
['01/12/13', '08:00', '203', '2', '0.99', '182', '89.66', '14', '6.9', '5', '2.46', '0', '0', 'Westbound']
['01/12/13', '09:00', '328', '0', '0', '303', '92.38', '19', '5.79', '6', '1.83', '0', '0', 'Westbound']
['01/12/13', '10:00', '524', '0', '0', '484', '92.37', '28', '5.34', '12', '2.29', '0', '0', 'Westbound']
['01/12/13', '11:00', '740', '4', '0.54', '697', '94.19', '20', '2.7', '19', '2.57', '0', '0', 'Westbound']
['01/12/13', '12:00', '928', '5', '0.54', '851', '91.7', '37', '3.99', '35', '3.77', '0', '0', 'Westbound']
['01/12/13', '13:00', '1075', '3', '0.28', '995', '92.56', '34', '3.16', '43', '4', '0', '0', 'Westbound']
['01/12/13', '14:00', '1059', '6', '0.57', '966', '91.22', '35', '3.31', '52', '4.91', '0', '0', 'Westbound']
['01/12/13', '15:00', '1030', '5', '0.49', '950', '92.23', '37', '3.59', '37', '3.59', '1', '0.1', 'Westbound']
['01/12/13', '16:00', '1061', '4', '0.38', '991', '93.4', '23', '2.17', '42', '3.96', '1', '0.09', 'Westbound']
['01/12/13', '17:00', '954', '3', '0.31', '897', '94.03', '20', '2.1', '34', '3.56', '0', '0', 'Westbound']
['01/12/13', '18:00', '941', '4', '0.43', '872', '92.67', '26', '2.76', '39', '4.14', '0', '0', 'Westbound']
['01/12/13', '19:00', '858', '4', '0.47', '791', '92.19', '27', '3.15', '36', '4.2', '0', '0', 'Westbound']
['01/12/13', '20:00', '659', '1', '0.15', '613', '93.02', '21', '3.19', '22', '3.34', '2', '0.3', 'Westbound']
['01/12/13', '21:00', '499', '1', '0.2', '471', '94.39', '11', '2.2', '16', '3.21', '0', '0', 'Westbound']
['01/12/13', '22:00', '352', '1', '0.28', '334', '94.89', '12', '3.41', '4', '1.14', '1', '0.28', 'Westbound']
['01/12/13', '23:00', '237', '0', '0', '222', '93.67', '8', '3.38', '7', '2.95', '0', '0', 'Westbound']
|
Chap 1: Think as PYTHON/Chap 3 Find difference between bytes, str, unicode.ipynb | ###Markdown
 Know the difference between bytes, str, and unicode|Python3|Python2||:-|:-||Represents character sequences with the two types bytes and str|Represents character sequences with the two types str and unicode||A bytes instance stores raw 8-bit values|A str instance stores raw 8-bit values||A str instance stores unicode characters|A unicode instance stores unicode characters| How to represent binary data (raw 8-bit)- There are many ways to encode unicode characters as binary data (raw 8-bit), UTF-8 being the most common- Python3 str instances and Python2 unicode instances use the encode method to convert to binary data- Use the decode method to convert binary data back into unicode characters- When programming in Python, encode and decode unicode at the interfaces you expose to the outside- Use the unicode type in the core of the program and make no assumptions about character encodingsThis lets you keep the output text encoding strict while still easily accepting other input text encodings. The separation of character types leads to two situations1. You want to process raw 8-bit values, i.e. already-encoded characters2. You want to process characters that have no encodingTo make the input type exactly match the type you want, two helper functions are needed Python3: a method that takes str or bytes as input and returns str
###Code
def to_str(bytes_or_str):
if isinstance(bytes_or_str, bytes):
value = bytes_or_str.decode('utf-8')
else:
value = bytes_or_str
return value # str instance
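# Example usage (Python 3): both calls return the same str
print(repr(to_str(b'foo')), repr(to_str('bar')))  # 'foo' 'bar'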
###Output
_____no_output_____
###Markdown
 A method that takes str or bytes and returns bytes
###Code
def to_bytes(bytes_or_str):
if isinstance(bytes_or_str, str):
value = bytes_or_str.encode('utf-8')
else:
value = bytes_or_str
return value # bytes instance
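# Example usage (Python 3): both calls return bytes
print(repr(to_bytes(b'foo')), repr(to_bytes('bar')))  # b'foo' b'bar'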
###Output
_____no_output_____
###Markdown
 Python2: a method that takes str or unicode as input and returns unicode
###Code
def to_unicode(unicode_or_str):
if isinstance(unicode_or_str, str):
value = unicode_or_str.decode('utf-8')
else:
value = unicode_or_str
return value # unicode instance
###Output
_____no_output_____
###Markdown
 A method that takes str or unicode and returns str
###Code
def to_str(unicode_or_str):
if isinstance(unicode_or_str, unicode):
value = unicode_or_str.encode('utf-8')
else:
value = unicode_or_str
return value # str instance
###Output
_____no_output_____
###Markdown
 Issues when handling raw 8-bit values and unicode characters in Python 1. In Python2, if a str contains only 7-bit ASCII characters, unicode and str instances look like the same type- str and unicode can be joined with the '+' operator- such str and unicode values can be compared with the equality and inequality operators- unicode instances can be used in format strings such as '%s'This means that, as long as only 7-bit ASCII is handled, passing either a str or a unicode instance to a function that expects str or unicode works without problems. In Python3, bytes and str instances are never equal, not even as empty strings, so the type of the strings passed to functions must be handled more carefully. 2. In Python3, operations that use the file handle returned by the built-in open function default to UTF-8 encoding. In Python2, binary encoding is the default. For example, when writing arbitrary binary data to a file, the code below works in Python2 but does not work in Python3.
###Code
import os
with open('/tmp/random.bin','w') as f:
f.write(os.urandom(10))
###Output
_____no_output_____
###Markdown
 The reason for the problem is that a new encoding argument was added to open in Python3. __Because the parameter defaults to 'utf-8', read and write operations that use the file handle expect str instances containing unicode characters, not bytes instances containing binary data.__ To make the code above work, the file must be opened in __binary write mode ('wb')__ rather than character write mode ('w'). The following is how to make open behave correctly in both Python2 and Python3.
###Code
import os
with open('/tmp/random.bin','wb') as f:
f.write(os.urandom(10))
###Output
_____no_output_____ |
docs/docs/colab-notebook/orca/examples/basic_text_classification.ipynb | ###Markdown
###Code
#
# Copyright 2018 Analytics Zoo Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#
# Copyright 2018 The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# This example classifies movie reviews as positive or negative using the text of the review,
# and is adapted from
# https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/basic_text_classification.ipynb
###Output
_____no_output_____
###Markdown
 **Environment Preparation****Install Java 8**Run the cell on **Google Colab** to install jdk 1.8.**Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up on your computer).
###Code
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
###Output
_____no_output_____
###Markdown
**Install Analytics Zoo**You can install the latest pre-release version using `pip install --pre --upgrade analytics-zoo`.
###Code
# Install latest pre-release version of Analytics Zoo
# Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade analytics-zoo
# Install required dependencies
# The tutorial below only supports TensorFlow 1.15
!pip install tensorflow==1.15 tensorflow-datasets==2.1.0
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
**Distributed Keras using Orca APIs**In this guide we will describe how to scale out Keras programs using Orca in 5 simple steps.
###Code
#import necessary libraries and modules
import argparse
from zoo.orca import init_orca_context, stop_orca_context
from zoo.orca import OrcaContext
###Output
_____no_output_____
###Markdown
**Step 1: Init Orca Context**
###Code
# recommended to set it to True when running Analytics Zoo in Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=1, memory="2g") # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g",
driver_memory="10g", driver_cores=1
) # run on Hadoop YARN cluster
###Output
_____no_output_____
###Markdown
This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details.**Note**: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. **Step 2: Define the Model**You may define your model, loss and optimizer in the same way as in any standard (single node) Keras program.
###Code
from tensorflow import keras
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
###Output
_____no_output_____
###Markdown
**Step 3: Define Train Dataset****Prepare Dataset**The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=vocab_size)
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Transform a list (of length num_samples) of sequences (lists of integers) into a 2D Numpy array of shape (num_samples, num_timesteps). num_timesteps is either the maxlen argument if provided, or the length of the longest sequence in the list.
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
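# Quick sanity check: after padding, the reviews all share the same length
len(train_data[0]), len(train_data[1])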
###Output
_____no_output_____
###Markdown
Define the dataset using standard `tf.data.Dataset`.
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
train_dataset = tf.data.Dataset.from_tensor_slices((partial_x_train, partial_y_train))
validation_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
###Output
_____no_output_____
###Markdown
**Step 4: Fit with Orca Estimator**First, create an Estimator and set backend to `bigdl`.
###Code
from zoo.orca.learn.tf.estimator import Estimator
est = Estimator.from_keras(keras_model=model, backend='bigdl')
# the path of the directory where to save the log files to be parsed by TensorBoard
tensorboard_dir = "runs"
# "bigdl" is the application name for tensorboard to save training and validation results under log path
est.set_tensorboard(tensorboard_dir, "bigdl")
###Output
_____no_output_____
###Markdown
Next, fit the Estimator.
###Code
est.fit(data=train_dataset,
batch_size=32,
epochs=1,
validation_data=validation_dataset)
###Output
_____no_output_____
###Markdown
Finally, evaluate using the Estimator.
###Code
results = est.evaluate(validation_dataset)
print(results)
est.save_keras_model("/tmp/text_classification_keras.h5")
###Output
_____no_output_____
###Markdown
Now, the accuracy of this model has reached around 85%.
###Code
# Stop orca context when your program finishes
stop_orca_context()
###Output
_____no_output_____
###Markdown
**Step 5: Visualization by Tensorboard**TensorBoard is a visualization toolkit for machine learning experimentation. TensorBoard allows tracking and visualizing metrics such as loss and accuracy, visualizing the model graph, viewing histograms, displaying images and much more.
###Code
# Load the TensorBoard notebook extension
%load_ext tensorboard
###Output
_____no_output_____
###Markdown
 A brief overview of the dashboards shown (tabs in top navigation bar):* The **SCALARS** dashboard shows how the loss and metrics change with every epoch. You can also use it to track training speed, learning rate, and other scalar values.Start TensorBoard, specifying the root log directory you used above. The ``logdir`` argument points to the directory where TensorBoard will look for event files that it can display. TensorBoard will recursively walk the directory structure rooted at logdir, looking for .*tfevents.* files.This dashboard shows how the loss and accuracy change with every iteration.
###Code
%tensorboard --logdir "/content/runs/bigdl/train"
###Output
_____no_output_____ |
basic algorithm/basic algorithm/RedBlackTreeWalkthrough.ipynb | ###Markdown
Building a Red-Black Tree In this notebook, we'll walk through how you might build a red-black tree. Remember, we need to follow the red-black tree rules, on top of the binary search tree rules. Our new rules are:* All nodes have a color* All nodes have two children (use NULL nodes) * All NULL nodes are colored black* If a node is red, its children must be black* The root node must be black (optional) * We'll go ahead and implement without this for now* Every path to its descendant NULL nodes must contain the same number of black nodes SketchSimilar to our binary search tree implementation, we will define a class for nodes and a class for the tree itself. The `Node` class will need a couple new attributes. It is no longer enough to only know the children, because we need to ask questions during insertion like, "what color is my parent's sibling?". So we will add a parent link as well as the color.
###Code
class Node(object):
def __init__(self, value, parent, color):
self.value = value
self.left = None
self.right = None
self.parent = parent
self.color = color
###Output
_____no_output_____
###Markdown
For the tree, we can start with a mostly empty implementation. But we know we want to always insert nodes with color red, so let's fill in the constructor to insert the root node.
###Code
class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def insert(self, new_val):
pass
def search(self, find_val):
return False
###Output
_____no_output_____
###Markdown
InsertionNow how would we design our `insert` implementation? We know from our experience with BSTs how most of it will work. We can re-use that portion and augment it to assign colors and parents.
###Code
class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def insert(self, new_val):
self.insert_helper(self.root, new_val)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
                self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val, current, 'red')
else:
if current.left:
                self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val, current, 'red')
###Output
_____no_output_____
###Markdown
RotationsAt this point we are only making a BST, with extra attributes. To make this a red-black tree, we need to add the extra sauce that makes red-black trees awesome. We will sketch out some more code for rebalancing the tree based on the case, and fill them in one at a time.First, we need to change our `insert_helper` to return the node that was inserted so we can interrogate it when rebalancing.
###Code
class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def insert(self, new_val):
new_node = self.insert_helper(self.root, new_val)
self.rebalance(new_node)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
return self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val, current, 'red')
return current.right
else:
if current.left:
return self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val, current, 'red')
return current.left
def rebalance(self, node):
pass
###Output
_____no_output_____
###Markdown
Case 1_We have just inserted the root node_If we're enforcing that the root must be black, we change its color. We are not enforcing this, so we are all done! Four to go.
###Code
def rebalance(node):
if node.parent == None:
return
###Output
_____no_output_____
###Markdown
Case 2_We inserted under a black parent node_Thinking through this, we can observe the following: We inserted a red node beneath a black node. The new children (the NULL nodes) are black by definition, and our red node _replaced_ a black NULL node. So the number of black nodes for any paths from parents is unchanged. Nothing to do in this case, either.
###Code
def rebalance(node):
# Case 1
if node.parent == None:
return
# Case 2
if node.parent.color == 'black':
return
###Output
_____no_output_____
###Markdown
Case 3_The parent and its sibling of the newly inserted node are both red_Okay, we're done with free cases. In this specific case, we can flip the color of the parent and its sibling. We know they're both red in this case, which means the grandparent is black. It will also need to flip. At that point we will have a freshly painted red node at the grandparent. At that point, we need to do the same evaluation! If the grandparent turns red, and its sibling is also red, that's case 3 again. Guess what that means! Time for more recursion.We will define the `grandparent` and `pibling` (a parent's sibling) methods later, for now let's focus on the core logic.
###Code
def rebalance(self, node):
# Case 1
if node.parent == None:
return
# Case 2
if node.parent.color == 'black':
return
# From here, we know parent's color is red
# Case 3
if pibling(node).color == 'red':
pibling(node).color = 'black'
node.parent.color = 'black'
grandparent(node).color = 'red'
self.rebalance(grandparent(node))
###Output
_____no_output_____
###Markdown
Case 4_The newly inserted node has a red parent, but that parent has a black sibling_These last cases get more interesting. The criteria above actually govern case 4 and 5. What separates them is if the newly inserted node is on the _inside_ or the _outside_ of the sub tree. We define _inside_ and _outside_ like this:* inside * _EITHER_ * the new node is a left child of its parent, but its parent is a right child, or * the new node is a right child of its parent, but its parent is a left child* outside * the opposite of inside, the new node and its parent are on the same side of the grandparent Case 4 is to handle the _inside_ scenario. In this case, we need to rotate. As we will see, this will not finish balancing the tree, but will now qualify for Case 5.We rotate against the inside-ness of the new node. If the new node qualifies for case 4, it needs to move into its parent's spot. If it's on the right of the parent, that's a rotate left. If it's on the left of the parent, that's a rotate right.
###Code
def rebalance(self, node):
# ... omitted cases 1-3 ...
# Case 4
gp = grandparent(node)
if gp.left and node == gp.left.right:
        self.rotate_left(node.parent)
elif gp.right and node == gp.right.left:
        self.rotate_right(node.parent)
# TODO: Case 5
###Output
_____no_output_____
###Markdown
To implement `rotate_left` and `rotate_right`, think about what we want to accomplish. We want to take one of the node's children and have it take the place of its parent. The given node will move down to a child of the newly parental node.
###Code
def rotate_left(self, node):
# Save off the parent of the sub-tree we're rotating
p = node.parent
node_moving_up = node.right
# After 'node' moves up, the right child will now be a left child
node.right = node_moving_up.left
# 'node' moves down, to being a left child
node_moving_up.left = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
# 'node' may have been the root
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
def rotate_right(self, node):
p = node.parent
node_moving_up = node.left
node.left = node_moving_up.right
node_moving_up.right = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
###Output
_____no_output_____
###Markdown
Case 5Now that case 4 is resolved, or if we didn't qualify for case 4 and have an outside sub-tree already, we need to rotate again. If our new node is a left child of a left child, we rotate right. If our new node is a right of a right, we rotate left. This is done on the grandparent node.But after this rotation, our colors will be off. Remember that for cases 3, 4, and 5, the parent of the new node is red. But we will have rotated a red node with a red child up, which violates our rule of all red nodes having two black children. So after rotating, we switch the colors of the (original) parent and grandparent nodes.
###Code
def rebalance(self, node):
# ... omitted cases 1-3 ...
# Case 4
gp = grandparent(node)
if node == gp.left.right:
self.rotate_left(node.parent)
elif node == gp.right.left:
self.rotate_right(node.parent)
# Case 5
p = node.parent
gp = p.parent
if node == p.left:
self.rotate_right(gp)
else:
self.rotate_left(gp)
p.color = 'black'
gp.color = 'red'
###Output
_____no_output_____
###Markdown
 ResultCombining all of our efforts, with two small adjustments (after a Case 4 rotation we continue rebalancing from the node that moved down, and cases 4 and 5 are skipped when there is no grandparent), we have the following.
###Code
class Node(object):
def __init__(self, value, parent, color):
self.value = value
self.left = None
self.right = None
self.parent = parent
self.color = color
def __repr__(self):
print_color = 'R' if self.color == 'red' else 'B'
return '%d%s' % (self.value, print_color)
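    def __iter__(self):
        # Added sketch: a simple in-order traversal so that RedBlackTree.__iter__ below,
        # which delegates to self.root.__iter__(), actually works.
        if self.left:
            yield from self.left.__iter__()
        yield self.value
        if self.right:
            yield from self.right.__iter__()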
def grandparent(node):
if node.parent == None:
return None
return node.parent.parent
# Helper for finding the node's parent's sibling
def pibling(node):
p = node.parent
gp = grandparent(node)
if gp == None:
return None
if p == gp.left:
return gp.right
if p == gp.right:
return gp.left
class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def __iter__(self):
yield from self.root.__iter__()
def insert(self, new_val):
new_node = self.insert_helper(self.root, new_val)
self.rebalance(new_node)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
return self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val, current, 'red')
return current.right
else:
if current.left:
return self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val, current, 'red')
return current.left
def rebalance(self, node):
# Case 1
if node.parent == None:
return
# Case 2
if node.parent.color == 'black':
return
# Case 3
if pibling(node) and pibling(node).color == 'red':
pibling(node).color = 'black'
node.parent.color = 'black'
grandparent(node).color = 'red'
return self.rebalance(grandparent(node))
gp = grandparent(node)
# Small change, if there is no grandparent, cases 4 and 5
# won't apply
if gp == None:
return
# Case 4
if gp.left and node == gp.left.right:
self.rotate_left(node.parent)
node = node.left
elif gp.right and node == gp.right.left:
self.rotate_right(node.parent)
node = node.right
# Case 5
p = node.parent
gp = p.parent
if node == p.left:
self.rotate_right(gp)
else:
self.rotate_left(gp)
p.color = 'black'
gp.color = 'red'
def rotate_left(self, node):
# Save off the parent of the sub-tree we're rotating
p = node.parent
node_moving_up = node.right
# After 'node' moves up, the right child will now be a left child
node.right = node_moving_up.left
# 'node' moves down, to being a left child
node_moving_up.left = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
# 'node' may have been the root
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
def rotate_right(self, node):
p = node.parent
node_moving_up = node.left
node.left = node_moving_up.right
node_moving_up.right = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
###Output
_____no_output_____
###Markdown
TestingWe've written a lot of code. Let's see how the tree mutates as we add nodes.First, we'll need a way to visualize the tree. The below will nest, but remember the first child is always the left child.
###Code
def print_tree(node, level=0):
print(' ' * (level - 1) + '+--' * (level > 0) + '%s' % node)
if node.left:
print_tree(node.left, level + 1)
if node.right:
print_tree(node.right, level + 1)
###Output
_____no_output_____
###Markdown
For cases 1 and 2, we can insert the first few nodes and see the tree behaves the same as a BST.
###Code
tree = RedBlackTree(9)
tree.insert(6)
tree.insert(19)
print_tree(tree.root)
###Output
9R
+--6R
+--19R
###Markdown
Inserting 13 should flip 6 and 19 to black, as it hits our Case 3 logic.
###Code
tree.insert(13)
print_tree(tree.root)
###Output
9R
+--6B
+--19B
+--13R
###Markdown
Observe that 13 was inserted as red, and then because of Case 3, 6 and 19 flipped to black. 9 was also assigned red, but that was not a net change. Because we're not enforcing the optional "root is always black rule", this is acceptable.Now let's cause some rotations. When we insert 16, it goes under 13, but 13 does not have a red sibling. 16 rotates into 13's spot, because it's currently an _inside_ sub-tree. Then 16 rotates into 19's spot. After these rotations, the ordering of the BST has been preserved _and_ our tree is balanced.
###Code
tree.insert(16)
print_tree(tree.root)
###Output
9R
+--6B
+--16R
+--13B
+--16R
+--19B
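###Markdown
 As one more quick check (this relies on the in-order `__iter__` sketch added to `Node` above), iterating the tree should yield its values in sorted order: here `list(tree)` would give `[6, 9, 13, 16, 19]`.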
|
optimization/cython.ipynb | ###Markdown
 Cython. References: http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/notebooks/Cython%20Magics.ipynb and http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/tutorials/running_cython.ipynb
###Code
%load_ext cython
a = 10
b = 20
%%cython_inline
return a + b
%%cython
def testfunc():
return 42
testfunc()
%%cython -lm
from libc.math cimport sin
print 'sin(1)=', sin(1)
# TODO: something useful!
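%%cython
# Illustrative sketch (not part of the original notebook): declaring C types with cdef
# is what typically gives Cython its speed-up over the plain-Python equivalent.
def csum(int n):
    cdef long total = 0
    cdef int i
    for i in range(n):
        total += i
    return total
# e.g. csum(10**6) -> 499999500000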
###Output
_____no_output_____ |
src/Taxi_Driver.ipynb | ###Markdown
Description: There are four designated locations in the grid world indicated by R(ed), G(reen), Y(ellow), and B(lue). When the episode starts, the taxi starts off at a random square and the passenger is at a random location. The taxi drives to the passenger's location, picks up the passenger, drives to the passenger's destination (another one of the four specified locations), and then drops off the passenger. Once the passenger is dropped off, the episode ends. Observations: There are 500 discrete states since there are 25 taxi positions, 5 possible locations of the passenger (including the case when the passenger is in the taxi), and 4 destination locations. Note that there are 400 states that can actually be reached during an episode. The missing states correspond to situations in which the passenger is at the same location as their destination, as this typically signals the end of an episode. Four additional states can be observed right after a successful episodes, when both the passenger and the taxi are at the destination. This gives a total of 404 reachable discrete states. Passenger locations: - 0: R(ed) - 1: G(reen) - 2: Y(ellow) - 3: B(lue) - 4: in taxi Destinations: - 0: R(ed) - 1: G(reen) - 2: Y(ellow) - 3: B(lue) Actions: There are 6 discrete deterministic actions: - 0: move south - 1: move north - 2: move east - 3: move west - 4: pickup passenger - 5: drop off passenger Rewards: There is a default per-step reward of -1, except for delivering the passenger, which is +20, or executing "pickup" and "drop-off" actions illegally, which is -10. Rendering: - blue: passenger - magenta: destination - yellow: empty taxi - green: full taxi - other letters (R, G, Y and B): locations for passengers and destinations
###Code
env = gym.make("Taxi-v3")
q_table = np.zeros([env.observation_space.n, env.action_space.n])
env.render()
env.reset()
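# (Added sketch) The 500 discrete states encode (taxi_row, taxi_col, passenger_location, destination).
# Gym's Taxi environment provides encode()/decode() helpers for this mapping; the signature below is
# assumed from the standard Taxi-v3 implementation (use env.unwrapped if your Gym version wraps the env).
example_state = env.encode(3, 1, 2, 0)   # taxi at row 3, col 1, passenger at Y(ellow), destination R(ed)
print(example_state)                     # a single integer in [0, 500)
print(list(env.decode(example_state)))   # recovers [3, 1, 2, 0]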
"""Training the agent"""
import random
from IPython.display import clear_output
# Hyperparameters
alpha = 0.1
gamma = 0.6
epsilon = 0.1
# For plotting metrics
all_epochs = []
all_penalties = []
for i in range(1, 100001):
state = env.reset()
epochs, penalties, reward, = 0, 0, 0
done = False
while not done:
if random.uniform(0, 1) < epsilon:
action = env.action_space.sample() # Explore action space
else:
action = np.argmax(q_table[state]) # Exploit learned values
next_state, reward, done, info = env.step(action)
old_value = q_table[state, action]
next_max = np.max(q_table[next_state])
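        # Q-learning update: Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (reward + gamma * max_a' Q(s', a'))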
new_value = (1 - alpha) * old_value + alpha * (reward + gamma * next_max)
q_table[state, action] = new_value
if reward == -10:
penalties += 1
state = next_state
epochs += 1
if i % 100 == 0:
clear_output(wait=True)
print(f"Episode: {i}")
print("Training finished.\n")
q_table[250]
"""Evaluate agent's performance after Q-learning"""
total_epochs, total_penalties = 0, 0
episodes = 5
for _ in range(episodes):
state = env.reset()
epochs, penalties, reward = 0, 0, 0
done = False
while not done:
action = np.argmax(q_table[state])
state, reward, done, info = env.step(action)
env.render()
if reward == -10:
penalties += 1
epochs += 1
total_penalties += penalties
total_epochs += epochs
print(f"Results after {episodes} episodes:")
print(f"Average timesteps per episode: {total_epochs / episodes}")
print(f"Average penalties per episode: {total_penalties / episodes}")
###Output
_____no_output_____ |
exercices/part4.ipynb | ###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,length):
self.length = length
def area(self):
return self.length**2
def perimeter(self):
return 4*self.length
pass
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,L,l):
self.length = L
self.width = l
super(rectangle,self).__init__(self.length)
def area(self):
return self.length * self.width
def perimeter(self):
return 2*(self.length+self.width)
pass
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
        ## private variable in Python
self.__a = a
def set_a(self):
self.a = self.__a
def get_a(self):
print(self.a)
x = SampleClass(3)
x.set_a()
x.get_a()
print(x.a)
x.a = 23
print(x.a)
###Output
3
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,length):
self.length = length
print("its here")
def s_area(self):
return self.length * self.length
def r_perimeter(self):
return 4 * self.length
#a = square(3)
#print(a.r_perimeter())
#print(a.s_area())
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
#__init__ method only
def __init__(self,largeur, lenght):
self.largeur = largeur
super(rectangle, self).__init__(lenght)
pass
r = rectangle(20, 50)
print(r.r_perimeter())
print(r.s_area())
###Output
its here
200
2500
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
self.a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class rectangle():
#define your methods
def __init__(self, l, w):
self.length = l
self.width = w
def rectangle_area(self):
return self.length * self.width
def rectangle_perimeter(self):
        return (2 * self.length) + (2 * self.width)
pass
newRectangle = rectangle(18, 20)
print(newRectangle.rectangle_area())
###Output
360
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class square(rectangle):
def __init__(self, l):
super().__init__(l, l)
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
        ## private variable in Python
self.__a = a
@property
def meth(self):
        return self.__a
    @meth.setter
    def meth(self, value):
        # setter so that the `x.meth = 23` assignment below updates the private attribute
        self.__a = value
x = SampleClass(3)
print(x.meth)
x.meth = 23
print(x.meth)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,length):
self.length = length
def area(self):
return self.length**2
def perimeter(self):
return 4*self.length
pass
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,lenght,width):
super().__init__(lenght)
self.width=width
self.area=self.length * self.width
self.perimeter=2*(self.length+self.width)
pass
a = rectangle(3,4)
a.area
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
        ## private variable in Python
self.__a = a
@property
def a(self):
return self.__a
@a.setter
def a(self,a):
self.__a=a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self, length):
self.length = length
def area(self):
return self.length**2
def perimeter(self):
return self.length*4
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self, length, width):
self.length = length
self.width = width
###Output
100
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
_____no_output_____
###Markdown
Use python decorators to make the above code works
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
def set_a(self):
self.a = self.__a
def get_a(self):
print(self.a)
x = SampleClass(3)
x.set_a()
x.get_a()
#print(x.a)
#x.a = 23
#print(x.a)
###Output
3
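###Markdown
The exercise asks for decorators, while the solution above relies on explicit set_a/get_a helper methods. For reference, a minimal sketch of the decorator-based approach with @property and a setter (an illustration, not the original author's solution):
###Code
class SampleClass:
    def __init__(self, a):
        # keep the attribute private and expose it through a property
        self.__a = a

    @property
    def a(self):
        return self.__a

    @a.setter
    def a(self, value):
        self.__a = value

x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
3
23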
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self , l):
self.l = l
def area(self):
return self.l * self.l
def perimeter(self):
return 4*(self.l)
h1 = square(4)
print("the area of the square is ->{}".format(h1.area()))
print("the perimeter of the square is ->{}".format(h1.perimeter()))
###Output
the area of the square is ->16
the perimeter of the square is ->16
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self, width , l):
self.width = width
super(rectangle, self).__init__(l)
h2 = rectangle(4,6)
print("the area of the rectangle is ->{}".format(h2.area()))
print("the perimeter of the square is ->{}".format(h2.perimeter()))
###Output
the area of the rectangle is ->36
the perimeter of the square is ->24
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self, lenght):
self.lenght = lenght
def area(self):
print("area of square is " , self.lenght*self.lenght)
def perimeter(self):
print("area of square is " , 4*self.lenght)
pass
s1 = square(2)
s1.area()
s1.perimeter()
###Output
area of square is 4
perimeter of square is 8
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
#__init__ method only
def __init__(self, lenght, width):
super().__init__(lenght)
self.width = width
pass
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
@property
def a(self):
return self.__a
@a.setter
def a(self, value):
self.__a = value
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
_____no_output_____
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self, length):
self.length = length
def area(self):
return (self.length**2)
def perimeter(self):
return (self.length*4)
pass
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self, length, width):
self.length = length
self.width = width
pass
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
_____no_output_____
###Markdown
Use python decorators to make the above code works
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
def set_a(self):
self.a = self.__a
x = SampleClass(3)
x.set_a()
print(x.a)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self, length):
self.length=length
def area(self):
return self.length**2
def perimeter(self):
return 4*self.length
#mysquare=square(10)
#print(mysquare.area())
#print(mysquare.perimeter())
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,length,width):
self.width=width
super(rectangle,self).__init__(length)
def area(self):
return self.length*self.width
def perimeter(self):
return 2*self.length + 2*self.width
mysquare=rectangle(10,2)
print(mysquare.area())
print(mysquare.perimeter())
###Output
20
24
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
_____no_output_____
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
#define your methods
pass
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
#__init__ method only
pass
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
_____no_output_____
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,width,length):
self.width=width
self.length=length
def perimeter(self):
return 2*(self.width+self.length)
def area(self):
return (self.width)*(self.length)
pass
square1=square(4,4)
print(square1.perimeter())
print(square1.area())
###Output
16
16
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,width,length):
super().__init__(width, length)
pass
a=rectangle(2,4)
print(a.perimeter())
print(a.area())
###Output
12
8
###Markdown
Exercice 3:
###Code
class SampleClass:
__a = None
def __init__(self, a):
## private variable in Python
self.a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square:
#define your methods
def __init__(self,lenght):
self.lenght=lenght
def area(self):
return self.lenght**2
def perimeter(self):
return self.lenght*4
pass
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
#__init__ method only
def __init__(self,lenght, width):
super().__init__(lenght)
self.width = width
pass
r=rectangle(7, 2)
print(r.lenght)
###Output
7
###Markdown
Exercice 3: Use python decorators to make the above code works
###Code
class SampleClass:
def __init__(self, a):
# private variable in Python
self.__a = a
def get_a(self):
print(self.__a)
x = SampleClass(3)
x.get_a()
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,width,length):
self.width=width
self.length=length
def perimeter(self):
return 2*(self.width+self.length)
def area(self):
return (self.width)*(self.length)
pass
square1=square(3,2)
print(square1.perimeter())
print(square1.area())
###Output
10
6
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,width,length):
super().__init__(width, length)
pass
a=rectangle(2,4)
print(a.perimeter())
print(a.area())
###Output
12
8
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
self.a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,length):
self.length = length
print("this is the result ")
def s_area(self):
return self.length * self.length
def r_perimeter(self):
return 4 * self.length
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rect(square):
def __init__(self,largeur, lenght):
self.largeur = largeur
super(rect, self).__init__(lenght)
pass
r = rect(30, 70)
print(r.r_perimeter())
print(r.s_area())
###Output
this is the result
280
4900
###Markdown
Exercice 3:
###Code
class SpleClass:
def __init__(self, m):
self.m = m
L = SpleClass(5)
print(L.m)
L.m = 30
print(L.m)
###Output
5
30
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
#define your methods
def __init__(self,longueur):
self.longueur = longueur
def aire_carree(self):
return self.longueur**2
def perimetre(self):
return self.longueur*4
square1 = square(5)
print('Aire est : \n',square1.aire_carree())
print('Perimetre est :\n',square1.perimetre())
###Output
Aire est :
25
Perimetre est :
20
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,longueur,largeur):
self.largeur = largeur
super().__init__(longueur)
square = rectangle(5,2)
print('Aire_R est : \n',square.aire_carree())
print('Perimetre_R est :\n',square.perimetre())
###Output
Aire_R est :
25
Perimetre_R est :
20
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.a = a
@SampleClass
def work(x):
x = SampleClass(3)
return x
print('p1 --->',work.a)
work.a = 23
print('p2--->',work.a)
###Output
p1 ---> <function work at 0x7f93280d45e0>
p2---> 23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,a):
self.length = a
def area(self):
return(self.length**2)
def perimeter(self):
return(self.length*4)
pass
carre = square(2)
print(carre.area())
print(carre.perimeter())
###Output
4
8
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,a,b):
self.longueur=a
self.largeur=b
def area(self):
return(self.longueur*self.largeur)
def perimeter(self):
return(self.longueur*2+self.largeur*2)
pass
rec = rectangle(3,5)
print(rec.area())
print(rec.perimeter())
###Output
15
16
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
@property
def value(self):
return(self.__a)
@value.setter
def value(self,value):
self.__a=value
x = SampleClass(3)
print(x.value)
x.a = 23
print(x.a)
###Output
3
23
###Markdown
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
###Code
class square():
def __init__(self,length):
self.length = length
def area(self):
return self.length**2
def perimeter(self):
return 4*self.length
pass
###Output
_____no_output_____
###Markdown
Exercice 2: Write a python class rectangle that inherits from the square class.
###Code
class rectangle(square):
def __init__(self,L,l):
self.length = L
self.width = l
super(rectangle,self).__init__(self.length)
def area(self):
return self.length * self.width
def perimeter(self):
return 2*(self.length+self.width)
pass
a = rectangle(2,3)
a.area()
###Output
_____no_output_____
###Markdown
Exercice 3:
###Code
class SampleClass:
def __init__(self, a):
## private variable in Python
self.__a = a
x = SampleClass(3)
print(x.a)
x.a = 23
print(x.a)
###Output
_____no_output_____ |
231_Assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 3, Module 1*--- Define ML problemsYou will use your portfolio project dataset for all assignments this sprint. AssignmentComplete these tasks for your project, and document your decisions.- [ ] Choose your target. Which column in your tabular dataset will you predict?- [ ] Is your problem regression or classification?- [ ] How is your target distributed? - Classification: How many classes? Are the classes imbalanced? - Regression: Is the target right-skewed? If so, you may want to log transform the target.- [ ] Choose your evaluation metric(s). - Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy? - Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?- [ ] Choose which observations you will use to train, validate, and test your model. - Are some observations outliers? Will you exclude them? - Will you do a random split or a time-based split?- [ ] Begin to clean and explore your data.- [ ] Begin to choose which features, if any, to exclude. Would some features "leak" future information?If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.Some students worry, ***what if my model isn't “good”?*** Then, [produce a detailed tribute to your wrongness. That is science!](https://twitter.com/nathanwpyle/status/1176860147223867393)
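###Markdown
One item in the checklist above is log-transforming a right-skewed regression target. A minimal sketch of what that looks like (the Series `y` here is hypothetical and only for illustration; this notebook ends up framing the problem as classification, so the transform is not used below):
###Code
import numpy as np
import pandas as pd

# hypothetical right-skewed target
y = pd.Series([1, 2, 2, 3, 5, 8, 40, 300])

# log1p handles zeros safely; expm1 inverts the transform after prediction
y_log = np.log1p(y)
y_back = np.expm1(y_log)

print(y.skew(), y_log.skew())
###Output
_____no_output_____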
###Code
from google.colab import files
uploaded = files.upload()
import pandas as pd
df = pd.read_csv('song_data.csv')
df.head()
df['song_popularity'].nunique()
#drop any row in song popularity that is a 0 since that is equivalent to a null value
index_delete = df.index[df['song_popularity']==0]
df = df.drop(index_delete)
df.shape
df.info()
df.columns
import seaborn as sns
import matplotlib.pyplot as plt
sns.distplot(df['song_popularity']);
#create a new column with a binary classification for song popularity
df['popular'] = [1 if i>=51 else 0 for i in df.song_popularity ]
df.head()
df['tempo'].describe()
###Output
_____no_output_____
###Markdown
there shouldn't be any songs with a tempo of 0, so we will treat that as a null value
###Code
#I will replace the 0's in the tempo column with nan
import numpy as np
df['tempo'].replace(0, np.nan, inplace=True)
df['tempo'].describe()
df['popular'].value_counts(normalize=True)
from sklearn.model_selection import train_test_split
train, validate, test = np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))])
test.shape
###Output
_____no_output_____
###Markdown
I will use accuracy as my evaluation metric, since my majority-class frequency is about 70%
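###Markdown
As a sanity check before modeling, the majority-class baseline is the accuracy any model has to beat. A minimal sketch using the `train` split created above:
###Code
# accuracy of always predicting the most frequent class in the training split
baseline_acc = train['popular'].value_counts(normalize=True).max()
print('Majority-class baseline accuracy:', round(baseline_acc, 3))
###Output
_____no_output_____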
###Code
target = 'popular'
features = df.columns.drop([target, 'song_popularity'])
X_train = train[features]
y_train = train[target]
X_val = validate[features]
y_val = validate[target]
X_test = test[features]
y_test = test[target]
###Output
_____no_output_____ |
Final Project Portfolio.ipynb | ###Markdown
CSYE7245 - Modern Music Genre Classification with a large multi-class dataset Ashutosh Mahala, Xiaosui Zhang Background Classification of Songs Classifying songs can reduce the work of searching for a song. Classification can be done on multiple bases, like the genre, mood or keywords related to them. The most common way to classify a song is through its genre. What is a genre? A genre is any form or type of communication in any mode with socially-agreed-upon conventions developed over time. Genre is the most popular way of categorizing music. Music can be divided into different genres in different ways. Due to the subjective nature of music, it is difficult to place a piece of music in a single genre, and most of the time the genres of a piece of music overlap each other. Expectation from the Project To build a model which automatically classifies songs from their acoustic features. Why did we choose this project? There have been multiple automatic genre detection studies on multiple audio features. However, they were either heavily focused on methods or on features, or the target genre scope was very small. The dataset we are using for this study is more standard and closer to the real world. The dataset $\href{https://github.com/mdeff/fma}{FMA-LTS2}$ has been created from the dump of songs from $\href{http://freemusicarchive.org/}{FMA}$. FMA is a high-quality library which hosts certain types of music for work that would not be available otherwise due to copyright. It has around 106,500 songs with 163 genres and provides features already extracted with the $\href{https://librosa.github.io/librosa/}{LibROSA}$ library. How are we going to measure our performance? We are going to use accuracy as the metric to check the performance of the model. Let's Start with the Real Work Now Data Introduction FMA: A Dataset For Music Analysis is the dataset compiled from the FMA music dump by Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, Xavier Bresson, EPFL LTS2. It consists of:```1. tracks.csv - This csv contains metadata like track_id, title, album_name, release date of 106,574 songs2. genres.csv - This csv contains 163 genres.3. features.csv - This csv contains all the features which can be extracted through the LibROSA library.4. echonest.csv - This csv contains data like danceability and valence provided by Echonest (now part of Spotify)``` In this dataset we are mainly going to deal with the following tables and columns:1. tracks.csv: a. track_id: id of the tracks b. genre_top: this only contains the genre name for songs that have exactly one genre. A song containing multiple genres will be blank for this field2. genres: a. genre_id: id of the genre (This will be our target of prediction for our model) b. title: title of the genre3. features Let's import the libraries needed to run this project
###Code
import numpy as np
import pandas as pd
import theano
import theano.tensor as T
import keras
import urllib.request
import zipfile
import os.path
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# -- Keras Import
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM, GRU, SimpleRNN
from keras.layers import Activation, TimeDistributed, RepeatVector
from keras.callbacks import EarlyStopping, ModelCheckpoint
###Output
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Using TensorFlow backend.
###Markdown
Let's set the global variables for where we are going to download the dataset from and where we are going to save it
###Code
# Global variables
url_loc = 'https://os.unil.cloud.switch.ch/fma/fma_metadata.zip'
file_loc = './data/fma_metadata.zip'
dir_unzip_loc = './data/'
data_length = 5000
###Output
_____no_output_____
###Markdown
Let's start downloading the data
###Code
# Download zipped fma meta data
if not os.path.isfile(file_loc):
urllib.request.urlretrieve(url_loc, file_loc)
###Output
_____no_output_____
###Markdown
and then unzip it
###Code
# Unzip fma meta data
if not os.path.isdir(dir_unzip_loc):
with zipfile.ZipFile(file_loc, 'r') as zip_ref:
zip_ref.extractall(dir_unzip_loc)
###Output
_____no_output_____
###Markdown
Now we are going to load these datasets into pandas dataframes
###Code
# We are going to read the header rows separately and build a list of combined column names
feat_header = pd.read_csv(dir_unzip_loc + "fma_metadata/features.csv", header=None, nrows=3)
feat_header = feat_header.transpose(copy = True)
name = []
for index, row in feat_header.iterrows():
name.append(row[0]+"#"+row[1]+"#"+row[2])
name[0] = "track_id"
#Now read the dataset
df_tracks = pd.read_csv(dir_unzip_loc + "fma_metadata/tracks.csv", skiprows=[0,2]);
df_features = pd.read_csv(dir_unzip_loc + "fma_metadata/features.csv", header = None, names= name ,skiprows=4);
# df_features = pd.read_csv(dir_unzip_loc + "fma_metadata/features.csv", header = None, names= a ,skiprows=4, nrows=data_length);
df_genres = pd.read_csv(dir_unzip_loc + "fma_metadata/genres.csv", skiprows=0, nrows=data_length);
df_tracks = df_tracks.rename(columns={ df_tracks.columns[0]: "track_id" })
df_features = df_features.rename(columns={ df_features.columns[0]: "track_id" })
1
###Output
_____no_output_____
###Markdown
Let's check the structure of df_tracks dataframe
###Code
df_tracks.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 106574 entries, 0 to 106573
Data columns (total 53 columns):
track_id 106574 non-null int64
comments 106574 non-null int64
date_created 103045 non-null object
date_released 70294 non-null object
engineer 15295 non-null object
favorites 106574 non-null int64
id 106574 non-null int64
information 83149 non-null object
listens 106574 non-null int64
producer 18060 non-null object
tags 106574 non-null object
title 105549 non-null object
tracks 106574 non-null int64
type 100066 non-null object
active_year_begin 22711 non-null object
active_year_end 5375 non-null object
associated_labels 14271 non-null object
bio 71156 non-null object
comments.1 106574 non-null int64
date_created.1 105718 non-null object
favorites.1 106574 non-null int64
id.1 106574 non-null int64
latitude 44544 non-null float64
location 70210 non-null object
longitude 44544 non-null float64
members 46849 non-null object
name 106574 non-null object
related_projects 13152 non-null object
tags.1 106574 non-null object
website 79256 non-null object
wikipedia_page 5581 non-null object
split 106574 non-null object
subset 106574 non-null object
bit_rate 106574 non-null int64
comments.2 106574 non-null int64
composer 3670 non-null object
date_created.2 106574 non-null object
date_recorded 6159 non-null object
duration 106574 non-null int64
favorites.2 106574 non-null int64
genre_top 49598 non-null object
genres 106574 non-null object
genres_all 106574 non-null object
information.1 2349 non-null object
interest 106574 non-null int64
language_code 15024 non-null object
license 106487 non-null object
listens.1 106574 non-null int64
lyricist 311 non-null object
number 106574 non-null int64
publisher 1263 non-null object
tags.2 106574 non-null object
title.1 106573 non-null object
dtypes: float64(2), int64(16), object(35)
memory usage: 43.1+ MB
###Markdown
By looking at the info, we can say the following things:1. The track_id column doesn't contain any non-numeric values, otherwise it would have been parsed as object rather than int64. 2. The genre_top column has fewer non-null values, while genres and genres_all have the same count as track_id. Assuming track_id has no missing values, this implies genre_top is only filled for songs that belong to a single genre, while genres lists all genres associated with a particular song. So we are going to focus on genre_top as our classification target. 3. Around 30% of the album release dates (date_released) are missing (about 70% are present), which gives a fair idea of what period of songs was targeted while building the dataset. Let's check if there are any missing values in track_id
###Code
df_tracks.track_id.isnull().sum()
###Output
_____no_output_____
###Markdown
As we can see there are no missing values. It looks like track_id has been parsed correctly. Now let's have a look at the features table
###Code
df_features.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 106574 entries, 0 to 106573
Columns: 519 entries, track_id to zcr#std#01
dtypes: float64(518), int64(1)
memory usage: 422.0 MB
###Markdown
OMG, 519 columns! Looks rather daunting. Allow me to explain: this data is pretty inflated. The actual number of audio features is just 10; they have simply been computed over different statistics and time frames of each song. The dataset's columns are organized at three levels: 1. Audio feature 2. Statistical feature: like mean, median, min, max 3. The window: each song has been divided into 12 parts and the features have been extracted from each of them. We are actually dealing with the following audio features here:1. $\href {http://ismir2011.ismir.net/papers/PS2-8.pdf}{Chroma-features}$: chroma_cens, chroma_cqt, chroma_stft2. $\href{https://en.wikipedia.org/wiki/Mel-frequency_cepstrum} {mfcc}$3. $\href{https://en.wikipedia.org/wiki/Energy_(signal_processing%29)} {rmse}$: Root-Mean-Square Energy4. $\href{https://en.wikipedia.org/wiki/Spectral_centroid} {Spectral-Centroid}$5. $\href {https://ieeexplore.ieee.org/document/1035731} {Spectral-Contrast}$6. $\href {https://ccrma.stanford.edu/~jos/sasp/Spectral_Roll_Off.html} {Spectral-rolloff}$7. $\href {https://en.wikipedia.org/wiki/Tonnetz} {Tonnetz}$8. $\href {https://en.wikipedia.org/wiki/Zero_crossing} {zcr}$: Zero-Crossing Rate Let's check whether any values are missing from the feature extraction
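###Markdown
To make the feature#statistic#window structure easier to see, we can count how many of the 519 columns belong to each audio feature. A quick sketch (assuming df_features is loaded as above):
###Code
# split each column name on '#' and keep the first part (the audio feature name)
feature_counts = pd.Series(df_features.columns[1:]).str.split('#').str[0].value_counts()
print(feature_counts)
###Output
_____no_output_____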
###Code
df_features.isnull().sum().unique()
###Output
_____no_output_____
###Markdown
By looking above we can say that there is no feature whose data has not been extracted. --- Merging the tablesWe take only the data that we need for training the model.
###Code
df_tracks_only_genre = df_tracks[['genre_top','track_id']]
df_tracks_only_genre_with_id = pd.merge(df_tracks_only_genre, df_genres, left_on="genre_top", right_on="title", how='inner')
df_tracks_only_genre_with_id = df_tracks_only_genre_with_id[['genre_id','track_id']]
df_tracks_only_genre_with_id.head()
# Merge
df = pd.merge(df_tracks_only_genre_with_id, df_features, on ="track_id", how='inner')
df.head()
###Output
_____no_output_____
###Markdown
Now it seems we are able to merge the tables properly. Data Analysis Now let's do some data analysis. First we will check how many genres we are actually dealing with after merging, since the merge removes the songs which belong to more than one genre.
###Code
df.genre_id.unique().size
###Output
_____no_output_____
###Markdown
We can see from 163 genres we are down to 16 only. But let's check how it is distributed.
###Code
df["genre_id"].groupby(df["genre_id"]).count().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
It seems we have quite an abundance of genre 12, which is "Rock", by the way the most common genre. We can look at the song language distribution as well, which is pretty skewed: English dominates since the site is English-regional. There will be some other languages as well, since people come from different places or like to hear songs in different languages, but those should be much smaller in number.
###Code
df_tracks["language_code"].groupby(df_tracks["language_code"]).count().sort_values(ascending=False).plot(kind="bar", figsize=[10,10])
###Output
_____no_output_____
###Markdown
Let's check one of the features, RMSE, which can be used to gauge the silence in a signal. A signal with a lower RMSE has more silence in it.
###Code
sns.set(rc={'figure.figsize':(15,12)})
sns.boxplot(x="genre_id", y="rmse#mean#01", data=df);
###Output
_____no_output_____
###Markdown
From the above plot we can see that some songs have more silence in them than others, and that some genres clearly have more silence in them than other genres.
###Code
df[["chroma_cens#kurtosis#06","genre_id"]].groupby(df["genre_id"]).mean().sort_values(by = "chroma_cens#kurtosis#06",ascending=False).head(30).plot(x="genre_id", y="chroma_cens#kurtosis#06", kind="bar", figsize=[10,10])
###Output
_____no_output_____
###Markdown
We are looking at the chroma CENS feature, which relates to the energy in the octave classes, and the statistical feature we are considering here is kurtosis, which measures the tailedness of the distribution. This means that some songs have lower rates of rise and decline in octave energy than other songs, which is pretty common in certain genres and clearly differentiates them. Methods - Let's train some models now We will apply random forest, SVM and a neural network (RNN) to the data we have. A 10-fold cross-validation will be used and the optimization will be based on grid search with multiple hyperparameters. 1. Random Forest Random forest is an ensemble method that trains multiple classification decision trees and combines their results into a single prediction. Let's train our first model, the Random Forest.
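###Markdown
The methods description above mentions grid search with 10-fold cross-validation; the cells below implement the search with explicit loops over a single train/test split instead. For reference, a minimal sketch of the same random-forest search using scikit-learn's GridSearchCV (assuming the X_train and Y_train arrays that are constructed later in this notebook):
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [100, 500],
    'min_samples_leaf': [5, 20],
    'max_features': [3, 10],
}

# 10-fold cross-validated grid search over the same hyperparameter values as below
search = GridSearchCV(RandomForestClassifier(criterion='gini'),
                      param_grid, cv=10, scoring='accuracy')
search.fit(X_train, Y_train.ravel())
print(search.best_params_, search.best_score_)
###Output
_____no_output_____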
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

n_max = 0
msl_max = 0
mf_max = 0
acc_max = 0
print ("Processing FMA data upon Random Forest, hyperparameters are [n_estimators, min_samples_leaf, max_features]")
for n in [100, 500]:
for msl in [5, 20]:
for mf in [3, 10]:
# Build the model
rf = RandomForestClassifier(n_estimators = n,
min_samples_leaf=msl,
max_features=mf,
criterion='gini',)
m = rf.fit(X_train, Y_train)
Y_pred = m.predict(X_test)
acc = metrics.accuracy_score(Y_test, Y_pred)
print("Random forest with [%r, %r, %r] gets accuracy %r " % (
n, msl, mf, acc))
if acc_max < acc:
acc_max = acc
n_max = n
msl_max = msl
mf_max = mf
print("Best random forest with [%r, %r, %r] gets accuracy %r " % (
n_max, msl_max, mf_max, acc_max))
###Output
Processing FMA data upon Random Forest, hyperparameters are [n_estimators, min_samples_leaf, max_features]
Random forest with [100, 5, 3] gets accuracy 0.25263157894736843
Random forest with [100, 5, 10] gets accuracy 0.3157894736842105
Random forest with [100, 20, 3] gets accuracy 0.2
Random forest with [100, 20, 10] gets accuracy 0.21052631578947367
Random forest with [500, 5, 3] gets accuracy 0.23157894736842105
Random forest with [500, 5, 10] gets accuracy 0.3157894736842105
Random forest with [500, 20, 3] gets accuracy 0.22105263157894736
Random forest with [500, 20, 10] gets accuracy 0.22105263157894736
Best random forest with [100, 5, 10] gets accuracy 0.3157894736842105
###Markdown
2. Neural Network - RNN A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a sequence. This allows it to exhibit temporal dynamic behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.* The document is licensed by wiki: https://en.wikipedia.org/wiki/Wikipedia:Copyrights 
###Code
# Reshape the data for RNN
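# Keras recurrent layers expect input of shape (samples, timesteps, features); here each track is treated as a single timestep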
X_train_rnn = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test_rnn = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
Y_train_rnn = np.reshape(Y_train, (Y_train.shape[0], 1, Y_train.shape[1]))
Y_test_rnn = np.reshape(Y_test, (Y_test.shape[0], 1, Y_test.shape[1]))
print(X_train_rnn.shape)
print(X_test_rnn.shape)
print(Y_train_rnn.shape)
print(Y_test_rnn.shape)
do_max = 0
bias_max = ""
act_max = ""
acc_max = 0
batch_size = 50
print ("Processing FMA data upon RNN, hyperparameters are [dropout, bias_initializer, activation]")
for do in [0.3, 0.5, 0.7]:
for bias in ["zeros", "Ones", "RandomNormal"]:
for act in ["sigmoid", "tanh", "relu"]:
model = Sequential()
model.add(SimpleRNN(input_dim=518, output_dim=40, return_sequences=True))
model.add(Dropout(do))
model.add(Dense(40, bias_initializer=bias))
model.add(Activation(act))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# train the data without std output
hist = model.fit(X_train_rnn, Y_train_rnn, batch_size=batch_size, epochs=20,
validation_data=(X_test_rnn, Y_test_rnn), verbose=0)
print("RNN with [%r, %r, %r] gets accuracy %r " % (do, bias, act, acc))
# get last accuracy
acc = hist.history.get('acc')[-1]
if acc_max < acc:
acc_max = acc
do_max = do
bias_max = bias
act_max = act
print("Best FMA with [%r, %r, %r] gets accuracy %r " % (
do_max, bias_max, act_max, acc_max))
###Output
Processing FMA data upon RNN, hyperparameters are [dropout, bias_initializer, activation]
###Markdown
3. SVM In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible.* The document is licensed by wiki: https://en.wikipedia.org/wiki/Wikipedia:Copyrights 
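###Markdown
SVMs are sensitive to the scale of the input features, so standardizing them usually helps. A minimal sketch of a scaled SVM pipeline (assuming the X_train/Y_train and X_test/Y_test arrays built in the next cell; the C value here is just an example):
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# standardize the features, then fit an RBF-kernel SVM
scaled_svm = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=3))
scaled_svm.fit(X_train, Y_train.ravel())
print('Scaled SVM accuracy:', scaled_svm.score(X_test, Y_test.ravel()))
###Output
_____no_output_____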
###Code
from sklearn.svm import SVC

X_train = train.iloc[:, range(2,df.shape[1])].values
Y_train = train.iloc[:, [0]].values
X_test = test.iloc[:, range(2,df.shape[1])].values
Y_test = test.iloc[:, [0]].values
1
kel_max = ""
C_max = 0
tol_max = 0
acc_max = 0
print ("Processing FMA data with SVM, hyperparameters are [kernel, C, tol]")
for kel in ["linear","rbf", "sigmoid"]:
for c in [1, 3, 10]:
for tol in [1e-2, 1e-3, 1e-4]:
svm_model_linear = SVC(kernel = kel, C = c, tol = tol).fit(X_train, Y_train)
acc = svm_model_linear.score(X_test, Y_test)
print("SVM with [%r, %r, %r] gets accuracy %r " % (kel, c, tol, acc))
if acc_max < acc:
acc_max = acc
kel_max = kel
C_max = c
tol_max = tol
print("Best SVM with [%r, %r, %r] gets accuracy %r " % (
kel_max, C_max, tol_max, acc_max))
###Output
Processing FMA data with SVM, hyperparameters are [kernel, C, tol]
|
hcds-a2-bias.ipynb | ###Markdown
Assignment 2: Bias in Data Project OverviewThe goal of this assignment is to explore the concept of 'bias' through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. First, we need to use a machine learning service called [ORES](https://ores.wikimedia.org) to estimate the quality of each article in the Wikipedia dataset. Then, we will combine a dataset of Wikipedia articles with a dataset of country populations. Moreover, we are going to use bar charts to create 4 different visualizations from this combined dataset to address the following analyses:1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country Getting the article and population dataThe first step is getting the data, which lives in several different places. 1. The Wikipedia dataset can be found on [Figshare](https://figshare.com/articles/Untitled_Item/5513449).2. The population data is on the [Population Research Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14). Getting article quality predictionsWe need to get the predicted quality scores for each article in the Wikipedia dataset. For this step, we're using a Wikimedia API endpoint for a machine learning system called $ORES$ ("Objective Revision Evaluation Service"). $ORES$ estimates the quality of an article and assigns a series of probabilities that the article is in one of 6 quality categories. The options are, from best to worst:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class articleSee the documentation [here](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model)Below is a function to get each article's quality by using $ORES$. It will return a list containing all the quality predictions.
###Code
def get_article_quality(revids):
"""
This function takes a list of revision id.
use the Wikimedia API endpoint for a machine learning system called 'ORES' to get the prediction values
and returns a list of all the articles quality prediction
Args:
revids (list): a list of revision ids to score
Return:
a list of string, each string represents a prediction value corresponding to the revision id
"""
quality = []
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
with open('data.json', 'w') as f:
json.dump(response, f)
with open("data.json", 'r') as f:
json_data = json.load(f)
prediction = json_data['enwiki']['scores']
for key in prediction:
if 'error' in prediction[key]['wp10']:
quality.append('NA')
else:
# print(prediction[key]['wp10']['score']['prediction'])
quality.append(prediction[key]['wp10']['score']['prediction'])
return quality
###Output
_____no_output_____
###Markdown
Get the PredictionIn order to get article predictions for each article in the Wikipedia dataset, we need to read page_data.csv into Python and then read through the dataset line by line, using the value of the revision id column ('rev_id') in the API query.I call the get_article_quality function every time I have collected 100 ids; it returns the article quality prediction corresponding to each id. Because $ORES$ allows you to submit multiple revision ids at the same time and returns the same number of predictions, this approach speeds up the process a lot!Once this procedure is done, we can print and see the output as a list of predictions.Note that there are 4 articles that don't have prediction values.
###Code
revids = []
quality = []
header = True
count = 1
with open("page_data.csv", 'r') as csv_file:
csv_reader = csv.reader(csv_file)
for row in csv_reader:
if (header):
header = False
continue
if (count == 100):
quality = quality + get_article_quality(revids)
revids = []
count = 0
# add the id into revids list
revids.append(row[2])
count = count + 1
quality = quality + get_article_quality(revids)
#print(quality)
###Output
_____no_output_____
###Markdown
Getting the article dataNow we need to use pandas to read the Wikipedia data into a data frame. Then, we create a new column called 'article_quality' in page_data which contains every article's prediction. Let's see the dimensions and first 5 rows of the data.
###Code
page_data = pd.read_csv("page_data.csv")
page_data['article_quality'] = quality
print(page_data.count())
page_data.head()
###Output
page 47197
country 47197
rev_id 47197
article_quality 47197
dtype: int64
###Markdown
Let's take a look at the counts of these 6 quality categories.We can see there are 4 'NA' values in the 'article_quality' column, which means there are four rows without article quality predictions. Thus, we need to remove those rows before moving forward.
###Code
print(page_data['article_quality'].value_counts())
page_data = page_data.drop(page_data.index[page_data.article_quality == 'NA'])
print()
print(page_data['article_quality'].value_counts())
print()
print(page_data.count())
###Output
Stub 24666
Start 14873
C 5851
GA 774
B 737
FA 292
NA 4
Name: article_quality, dtype: int64
Stub 24666
Start 14873
C 5851
GA 774
B 737
FA 292
Name: article_quality, dtype: int64
page 47193
country 47193
rev_id 47193
article_quality 47193
dtype: int64
###Markdown
Population DatasetNext, the population data is on the Population Research Bureau website; download the CSV file and use pandas to read it as a data frame. Since the header starts in row 3 of Population Mid-2015.csv, remember to skip the first two rows when reading the csv file. I also change the headers to lowercase.Let's see the dimensions and first 5 rows of the data.
###Code
population = pd.read_csv("Population Mid-2015.csv", skiprows=2)
population.columns = map(str.lower, population.columns)
print(population.count())
population.head()
###Output
location 210
location type 210
timeframe 210
data type 210
data 210
footnotes 0
dtype: int64
###Markdown
The column 'footnotes' is irrelevant here, so we can drop it. Moreover, we need to merge the population data with the Wikipedia data by country name, so it is better to rename the 'location' column as 'country'. We also need to get rid of all the ',' characters in the 'data' column and convert it to int type.Let's see the first 5 rows of the data.
###Code
population.drop('footnotes', axis=1, inplace=True)
population.rename(columns={'location':'country'}, inplace=True)
population['data'] = [int(x.replace(',', '')) for x in population['data']]
print(population.info())
population.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 210 entries, 0 to 209
Data columns (total 5 columns):
country 210 non-null object
location type 210 non-null object
timeframe 210 non-null object
data type 210 non-null object
data 210 non-null int64
dtypes: int64(1), object(4)
memory usage: 8.3+ KB
None
###Markdown
Combining the datasetsNow we can combine these two datasets. We need to merge the Wikipedia data and population data together on their common attribute, and we can see both datasets have fields containing country names for that purpose. After merging the data, we are going to drop the rows that cannot be matched: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.Therefore, we use an inner join to merge these two datasets, keeping only the country names that match. Then, we rename some of the columns and drop the irrelevant ones in order to construct the final dataset. It has the following columns:1. country2. article_name3. revision_id4. article_quality5. populationLet's see the dimensions and first 5 rows of the data, then save the data frame into a csv file named 'final_data.csv'
###Code
final = pd.merge(page_data, population, how='inner', on=['country'])
final.rename(columns={'page':'article_name'}, inplace=True)
final.rename(columns={'rev_id':'revision_id'}, inplace=True)
final.rename(columns={'data':'population'}, inplace=True)
final.drop('location type', axis=1, inplace=True)
final.drop('timeframe', axis=1, inplace=True)
final.drop('data type', axis=1, inplace=True)
print(final.info())
final.to_csv("final_data.csv", index=False)
final.head()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 45795 entries, 0 to 45794
Data columns (total 5 columns):
article_name 45795 non-null object
country 45795 non-null object
revision_id 45795 non-null int64
article_quality 45795 non-null object
population 45795 non-null int64
dtypes: int64(2), object(3)
memory usage: 2.1+ MB
None
###Markdown
AnalysisNow we are going to analyze the proportion (as a percentage) of articles-per-population and high-quality articles for each country. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.* If a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be .1%.Therefore, we will calculate the percentage of articles per population for each country. Let's see the first 5 rows of the data.
###Code
article_count = final[['country', 'article_name']].groupby('country').count().reset_index(level=0)
population_count = final[['country', 'population']].groupby('country').mean().reset_index(level=0)
article_population = article_count.merge(population_count, how='inner', on=['country'])
article_population.rename(columns={'article_name':'article_total'}, inplace=True)
article_population['article_population_percent'] = round(article_population['article_total']
/ article_population['population'] * 100, 6)
article_population.head(5)
###Output
_____no_output_____
###Markdown
* If a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.Therefore, we will find all the high-quality articles from the final dataset and then calculate the total number of those articles by grouping the country names.Note: We need to use left join for the two table in order to keep all the countries for high-quality articles. Otherwise, we will lose some of the countries that have 0 high-quality articles when we only consider FA or GA class articles.Let's see the first 5 rows of the data.
###Code
high_quality_article = final.loc[(final['article_quality'] == 'FA') | (final['article_quality'] == 'GA')]
high_quality_article_count = high_quality_article[['country', 'article_name']].groupby('country').count().reset_index(level=0)
high_quality_article_count.rename(columns={'article_name':'high_quality_total'}, inplace=True)
high_quality = article_count.merge(high_quality_article_count, how='left', on=['country'])
high_quality['high_quality_total'].fillna(0, inplace=True)
high_quality['high_quality_total'] = high_quality['high_quality_total'].astype(int)
high_quality.rename(columns={'article_name':'article_total'}, inplace=True)
high_quality['high_quality_percent'] = round(high_quality['high_quality_total']
/ high_quality['article_total'] * 100, 6)
high_quality.head()
###Output
_____no_output_____
###Markdown
After we have these two new data frames constructed above, we will be able to show the following tables:1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that countryI am going to use the two dataframes *article_population* and *high_quality* to produce the tables above. The idea is to just sort the dataframes either in ascending or descending order and keep the first 10 rows.You can also go to my github and see the tables I have provided [here](https://github.com/lzctony/data-512-a2/blob/master/README.md). 1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
table_1 = article_population.sort_values(['article_population_percent'], ascending=False)[:10]
table_1
###Output
_____no_output_____
###Markdown
2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
table_2 = article_population.sort_values(['article_population_percent'], ascending=True)[:10]
table_2
###Output
_____no_output_____
###Markdown
3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
table_3 = high_quality.sort_values(['high_quality_percent'], ascending=False)[:10]
table_3
###Output
_____no_output_____
###Markdown
4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
table_4 = high_quality.sort_values(['high_quality_percent'], ascending=True)[:10]
table_4
###Output
_____no_output_____
###Markdown
Table 1 Top 10 countries by articles per capita
###Code
df_country.sort_values('PagesPerCapita', ascending=False).head(10)[['Name', 'PagesPerCapita']]
###Output
_____no_output_____
###Markdown
Table 2 Bottom 10 Countries by articles per capita
###Code
df_country.sort_values('PagesPerCapita', ascending=True).head(10)[['Name', 'PagesPerCapita']]
df_country = df_country.merge(df_total[df_total['quality'] == 'FA'].groupby(['country']).count()['page'], how='left', left_on='Name', right_on='country')
df_country
df_country = df_country.merge(df_total[df_total['quality'] == 'GA'].groupby(['country']).count()['page'], how='left', left_on='Name', right_on='country')
df_country
df_country = df_country.fillna(0)
df_country['good_articles'] = df_country['page_y'] + df_country['page']
df_country['good_articles_portion'] = df_country['good_articles'] / df_country['page_x']
df_country = df_country.fillna(0)
df_country
###Output
_____no_output_____
###Markdown
Table 3 Top 10 Countries by Proportion of Articles that are FA or GA.
###Code
df_country.sort_values('good_articles_portion', ascending=False).head(10)[['Name', 'good_articles_portion']]
###Output
_____no_output_____
###Markdown
Table 4 Bottom 10 Countries by Proportion of Articles that are FA or GA
###Code
df_country.sort_values('good_articles_portion', ascending=True).head(10)[['Name','good_articles_portion']]
df_no_match.to_csv(src_dir + "wp_wpds_countries-no_match.csv")
df_total.to_csv(src_dir + "wp_wpds_politicians_by_country.csv")
###Output
_____no_output_____
###Markdown
A2 - Bias in English Wikipedia Articles This assignment is to assess the biases in English Wikipedia. More information on this assignment can be found here: https://wiki.communitydata.cc/Human_Centered_Data_Science_(Fall_2018)/AssignmentsA2:_Bias_in_data
###Code
# Imports for code
import requests
import json
# import csv
import pandas as pd
###Output
_____no_output_____
###Markdown
Gather the Data To use the ORES API, I used the code below. I got this code from the repository here: https://github.com/Ironholds/data-512-a2The code is from this Python Notebook: https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb
###Code
# Customize these with your own information by replacing "hmurph3"
headers = {
'User-Agent': 'https://github.com/hmurph3',
'From': '[email protected]'
}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
#print(json.dumps(response, indent=4, sort_keys=True))
#I decided to comment the print so the function returned something instead of printing it out
return response
# So if we grab some example revision IDs and turn them into a list and then call get_ores_data...
example_ids = [783381498, 807355596, 757539710]
example_call= get_ores_data(example_ids, headers)
print(example_call)
###Output
{'enwiki': {'models': {'wp10': {'version': '0.6.1'}}, 'scores': {'757539710': {'wp10': {'score': {'prediction': 'Start', 'probability': {'B': 0.05635270475191951, 'C': 0.17635417131683803, 'FA': 0.001919869734464717, 'GA': 0.005517075264277984, 'Start': 0.732764644204933, 'Stub': 0.027091534727566813}}}}, '783381498': {'wp10': {'score': {'prediction': 'Start', 'probability': {'B': 0.039498449850621085, 'C': 0.06068466061111685, 'FA': 0.0029057427468351755, 'GA': 0.007477221115409147, 'Start': 0.5674464916024892, 'Stub': 0.3219874340735285}}}}, '807355596': {'wp10': {'score': {'prediction': 'Start', 'probability': {'B': 0.04566408685167919, 'C': 0.10144128886317841, 'FA': 0.002651239009002438, 'GA': 0.006433022662730785, 'Start': 0.7675063182740381, 'Stub': 0.07630404433937113}}}}}}}
###Markdown
I now need to read in the csv files `page_data.csv` and `WPDS_2018_data.csv` as tables. I used the **pandas.read_csv()** function to do this.
###Code
page_data = pd.read_csv('page_data.csv', sep = ',', header = 0)
wpds_2018 = pd.read_csv('WPDS_2018_data.csv', sep = ',', thousands=',', header = 0) # population has commas
page_data.head()
wpds_2018.head()
###Output
_____no_output_____
###Markdown
I now need to convert the *rev_id* column in the `page_data` table into a list in order to use the **ORES API**.
###Code
rev_ids = page_data.iloc[:, 2].tolist()
print(rev_ids[0:10])
###Output
[235107991, 355319463, 391862046, 391862070, 391862409, 391862819, 391863340, 391863361, 391863617, 391863809]
###Markdown
I created a for-loop to gather all *rev_id* since **ORES** only allows 50 to 100 *rev_id* to be passed into the API query at once and saved it as a list of dictionaries.
###Code
# this part will take some time. I had to do ~ 1000 calls to the API.
ores_query = []
inc = 50
start = 0
end = len(rev_ids)
for i in range(int(end/inc)+1):
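# process the rev_ids in batches of 50, since ORES limits how many ids can be scored per request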
if start + inc > end:
temp = get_ores_data(rev_ids[start: start + (end-start)], headers)
else:
temp = get_ores_data(rev_ids[start: start + inc], headers)
ores_query.append(temp)
start += inc
print(ores_query[0])
###Output
{'enwiki': {'models': {'wp10': {'version': '0.6.1'}}, 'scores': {'235107991': {'wp10': {'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:235107991)', 'type': 'RevisionNotFound'}}}, '355319463': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.0037293011286007372, 'C': 0.003856823065973545, 'FA': 0.0005009114577946061, 'GA': 0.0009278080381894021, 'Start': 0.008398482183096077, 'Stub': 0.9825866741263456}}}}, '391862046': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '391862070': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007528602399161758, 'C': 0.011761932099515725, 'FA': 0.0012172194555714589, 'GA': 0.0018269931665054447, 'Start': 0.1270218917625896, 'Stub': 0.8506433611166563}}}}, '391862409': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '391862819': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '391863340': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '391863361': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007957898076082623, 'C': 0.012319461002361975, 'FA': 0.0012796441060789103, 'GA': 0.0019206898367127596, 'Start': 0.1330843583556558, 'Stub': 0.8434379486231081}}}}, '391863617': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007957898076082623, 'C': 0.012319461002361975, 'FA': 0.0012796441060789103, 'GA': 0.0019206898367127596, 'Start': 0.1330843583556558, 'Stub': 0.8434379486231081}}}}, '391863809': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.008024197217194854, 'C': 0.012290690898551187, 'FA': 0.0012836326137212481, 'GA': 0.0019266764122425476, 'Start': 0.13349916742472073, 'Stub': 0.8429756354335695}}}}, '393276188': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.0085673917799792, 'C': 0.008368820789631162, 'FA': 0.0008017130915481524, 'GA': 0.001780386727215207, 'Start': 0.05176560550147401, 'Stub': 0.9287160821101522}}}}, '393298432': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '393822005': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.021488920984905412, 'C': 0.023986338010416, 'FA': 0.0013114385466519716, 'GA': 0.0036291279205695713, 'Start': 0.24392964768627112, 'Stub': 0.7056545268511859}}}}, '394482629': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '394482891': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 
0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '394580295': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '394580630': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00795498623108365, 'C': 0.012680859528777195, 'FA': 0.0012791758762454644, 'GA': 0.0019199870442112546, 'Start': 0.13303566195120142, 'Stub': 0.8431293293684807}}}}, '394580939': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007957898076082623, 'C': 0.012319461002361975, 'FA': 0.0012796441060789103, 'GA': 0.0019206898367127596, 'Start': 0.1330843583556558, 'Stub': 0.8434379486231081}}}}, '394580993': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.008024197217194854, 'C': 0.012290690898551187, 'FA': 0.0012836326137212481, 'GA': 0.0019266764122425476, 'Start': 0.13349916742472073, 'Stub': 0.8429756354335695}}}}, '394581284': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007957898076082623, 'C': 0.012319461002361975, 'FA': 0.0012796441060789103, 'GA': 0.0019206898367127596, 'Start': 0.1330843583556558, 'Stub': 0.8434379486231081}}}}, '394581557': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '394587483': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '394587547': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '395521877': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.006235927440399146, 'C': 0.0064114956306656964, 'FA': 0.0009173142984451422, 'GA': 0.0015725170942118502, 'Start': 0.06796727376778845, 'Stub': 0.9168954717684896}}}}, '395526568': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.006235927440399146, 'C': 0.0064114956306656964, 'FA': 0.0009173142984451422, 'GA': 0.0015725170942118502, 'Start': 0.06796727376778845, 'Stub': 0.9168954717684896}}}}, '401577829': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.003569214193115822, 'C': 0.003333636657404969, 'FA': 0.0004452014778183147, 'GA': 0.0008180453236411899, 'Start': 0.007695736316044267, 'Stub': 0.9841381660319757}}}}, '413885084': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.010079787712809788, 'C': 0.01500203085737421, 'FA': 0.0016003022934902432, 'GA': 0.0024157055688351662, 'Start': 0.28517501516260463, 'Stub': 0.6857271584048861}}}}, '433871129': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '433871165': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007958430970874009, 'C': 0.012253321701537543, 'FA': 0.0012797297965052574, 'GA': 0.0019208184542949486, 'Start': 0.13309327025182544, 'Stub': 0.843494428824963}}}}, '435008715': {'wp10': {'score': {'prediction': 
'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '437454659': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.009767878016849526, 'C': 0.014774483472799908, 'FA': 0.0015953913871969987, 'GA': 0.002356281450596829, 'Start': 0.2743991812989202, 'Stub': 0.6971067843736365}}}}, '437735138': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007581664459717573, 'C': 0.011628167987075888, 'FA': 0.0012211826347900238, 'GA': 0.001838579207025105, 'Start': 0.12743546592904914, 'Stub': 0.8502949397823423}}}}, '438305657': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007580889048209687, 'C': 0.011734890201082999, 'FA': 0.0012210577388554597, 'GA': 0.001832754261843581, 'Start': 0.12742243252097374, 'Stub': 0.8502079762290345}}}}, '439671509': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007552558096302776, 'C': 0.011735225201292339, 'FA': 0.0012210925968380334, 'GA': 0.0018328065821508752, 'Start': 0.1274260700958328, 'Stub': 0.8502322474275833}}}}, '439708117': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007552558096302776, 'C': 0.011735225201292339, 'FA': 0.0012210925968380334, 'GA': 0.0018328065821508752, 'Start': 0.1274260700958328, 'Stub': 0.8502322474275833}}}}, '440397578': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007528602399161758, 'C': 0.011761932099515725, 'FA': 0.0012172194555714589, 'GA': 0.0018269931665054447, 'Start': 0.1270218917625896, 'Stub': 0.8506433611166563}}}}, '440594068': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00758216437419195, 'C': 0.011562997418329331, 'FA': 0.0012212631562742208, 'GA': 0.0018387004379715236, 'Start': 0.12744386867943533, 'Stub': 0.8503510059337979}}}}, '440598656': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007581664459717573, 'C': 0.011628167987075888, 'FA': 0.0012211826347900238, 'GA': 0.001838579207025105, 'Start': 0.12743546592904914, 'Stub': 0.8502949397823423}}}}, '441172886': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '441186581': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '441771813': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00752908372935955, 'C': 0.011698750542107464, 'FA': 0.001217297276719427, 'GA': 0.0018271099726449593, 'Start': 0.12703001272170586, 'Stub': 0.8506977457574628}}}}, '441995465': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007580889048209687, 'C': 0.011734890201082999, 'FA': 0.0012210577388554597, 'GA': 0.001832754261843581, 'Start': 0.12742243252097374, 'Stub': 0.8502079762290345}}}}, '442411422': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.007484512994694413, 'C': 0.011412492243059649, 'FA': 0.0012009723629945405, 'GA': 0.0018081511740405953, 'Start': 0.12769467767768936, 'Stub': 0.8503991935475214}}}}, '442913438': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.004327406123706563, 'C': 0.00462966550439898, 'FA': 0.0009008485378231201, 'GA': 0.0011326304248249336, 
'Start': 0.019890641232998097, 'Stub': 0.9691188081762483}}}}, '442937236': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.0035143531013533897, 'C': 0.0034116082778090115, 'FA': 0.0004504786619134722, 'GA': 0.0008217818871048018, 'Start': 0.008247081760823342, 'Stub': 0.9835546963109959}}}}, '443468553': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.008142377113722343, 'C': 0.007778737928393821, 'FA': 0.0013078486593185129, 'GA': 0.0018460179563454006, 'Start': 0.16606521282499745, 'Stub': 0.8148598055172228}}}}, '443469862': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.008088362645173128, 'C': 0.00787958655266893, 'FA': 0.0012991727223913424, 'GA': 0.0018337719405380176, 'Start': 0.1647409328755845, 'Stub': 0.8161581732636439}}}}, '443470532': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.00808779492593443, 'C': 0.007726593385291659, 'FA': 0.0012990815339294435, 'GA': 0.0018336432287510112, 'Start': 0.16495199950843686, 'Stub': 0.8161008874176564}}}}, '443496992': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.008088362645173128, 'C': 0.00787958655266893, 'FA': 0.0012991727223913424, 'GA': 0.0018337719405380176, 'Start': 0.1647409328755845, 'Stub': 0.8161581732636439}}}}, '443497423': {'wp10': {'score': {'prediction': 'Stub', 'probability': {'B': 0.008088362645173128, 'C': 0.00787958655266893, 'FA': 0.0012991727223913424, 'GA': 0.0018337719405380176, 'Start': 0.1647409328755845, 'Stub': 0.8161581732636439}}}}}}}
###Markdown
Clean the data Here is where I start to break out the dictionaries of dictionaries that the API query gave me. I am interested in the *rev_id* (which is within the **scores** dictionary) and the *prediction* (which is within the **score** dictionary). For this part, I used list comprehension. More information can be found here: https://www.pythonforbeginners.com/basics/list-comprehensions-in-python
###Code
# Here I drill down into the "enwiki" dictionary
new_list = [i["enwiki"] for i in ores_query] #list comprehesion
temp_data_frame = pd.DataFrame(new_list) # create a dataframe of the data becuase I like dataframes better
temp_data_frame.head()
###Output
_____no_output_____
###Markdown
Now I need to drill down into the **scores** column
###Code
# Here I get the first batch of the API query results
scores = pd.DataFrame.from_dict(ores_query[0]['enwiki']['scores']).T
scores.head()
###Output
_____no_output_____
###Markdown
Because my for-loop created a list of dictionaries, I need a way to append each query for only the information I want. I created another for-loop to do this. I start with the first chunk of the API query, and append each chunk as I loop through the list.
###Code
new_table = scores.reset_index() # Start with the first chunk from the API Query
for i in range(1,len(ores_query)):
temp = pd.DataFrame.from_dict(ores_query[i]['enwiki']['scores']).T.reset_index()
new_table = new_table.append(temp, ignore_index = True)
new_table.head()
###Output
_____no_output_____
###Markdown
I now need to create a column that has just the *prediction*, and append it to my current data frame.
###Code
# Create a table of just the "wp10" column, keeping the indices the same. Will need to use this for combining data sets later
new_table_2 = pd.DataFrame(new_table['wp10'])
new_table_2.head()
# split the 'wp10' into columns based on the dictionary key
temp_scores = new_table_2['wp10'].apply(pd.Series)
temp_scores.head()
###Output
_____no_output_____
###Markdown
I am only really interested in the *prediction* value under the *score* column. As you can see above, it will show NaN if there was an error in finding the *rev_id*.
###Code
# split *score* dictionary into its values
pred_list = temp_scores['score'].apply(pd.Series)
pred_list.head()
###Output
_____no_output_____
###Markdown
Now that I have the **ORES** predictions of the quality of the article, I can append it to `new_dataframe` so that I have the *rev_id* and the *predictions*
###Code
new_table['prediction'] = pred_list['prediction']
new_table.head()
###Output
_____no_output_____
###Markdown
Below I create a table of just the values I need (*rev_id* and *prediction*).
###Code
predictions = pd.DataFrame(new_table["index"]) # this creates the rev_id column
predictions['prediction'] = new_table['prediction'] # this creates the prediction column, remember NaN means the rev_id was not found in the query
predictions = predictions.rename(columns = {'index': 'rev_id'}) # rename the 'index' column to its proper title of 'rev_id'
predictions.head()
###Output
_____no_output_____
###Markdown
I now need to merge the three data sources `page_data`, `wpds` and `predictions`. I need to do some more cleanup of column titles in order to merge the tables properly by *country* and *rev_id*. For more information on merging dataframes, see https://pandas.pydata.org/pandas-docs/stable/merging.html
###Code
page_data.head()
wpds_2018.head()
predictions.head()
###Output
_____no_output_____
###Markdown
Here I merge the `predictions` and the `page_data` tables on *rev_id*. First I need to check that the types of the *rev_id* columns are the same.
###Code
type(page_data['rev_id'][0])
type(predictions['rev_id'][0])
###Output
_____no_output_____
###Markdown
Since they are not, I need to change the type of *rev_id*.
###Code
predictions['rev_id'] = predictions['rev_id'].astype('int64')
type(predictions['rev_id'][0])
en_wikipedia_bias_data = pd.merge(page_data, predictions, on = 'rev_id', how = 'outer')
en_wikipedia_bias_data.head()
###Output
_____no_output_____
###Markdown
I need to rename the `wpds_2018` *Geography* column to *country* so I can merge this table with the table created above.
###Code
wpds_2018 = wpds_2018.rename(columns = {'Geography': 'country'})
wpds_2018.head()
###Output
_____no_output_____
###Markdown
Now I can merge the `wpds_2018` table with `en_wikipedia_bias_data` using *country*
###Code
en_wikipedia_bias_data = pd.merge(en_wikipedia_bias_data, wpds_2018, on = 'country', how = 'outer')
en_wikipedia_bias_data.head()
###Output
_____no_output_____
###Markdown
Now that I have the data in one source, I need to remove the rows with *NaN* values.
###Code
en_wikipedia_bias_data = en_wikipedia_bias_data.dropna().reset_index(drop = True)
en_wikipedia_bias_data.head()
###Output
_____no_output_____
###Markdown
Now to match the column title requirements, I need to rename *page* and *prediction*.
###Code
en_wikipedia_bias_data = en_wikipedia_bias_data.rename(columns ={'page' : 'article_name', 'prediction' : 'article_quality'})
en_wikipedia_bias_data.head()
###Output
_____no_output_____
###Markdown
Here I save `en_wikipedia_bias_data` to a **.csv** file.
###Code
en_wikipedia_bias_data.to_csv('en-wikipedia_bias_data.csv')
###Output
_____no_output_____
###Markdown
Calculate the proportions of articles by population of country and proportions of high quality articles by country Here I find the number of politician articles as a proportion of the country's population, and the proportion of each country's politician articles that are high quality. High quality articles are articles that ORES predicted as **FA** or **GA** (*featured article* and *good article* respectively). To do this, I used `DataFrame.groupby`. See for more details: https://www.shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/
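###Markdown
The cells below build these measures step by step. As a compact reference only, the same quantities could be computed in a single cell like the sketch below; this is a sketch, not part of the original workflow, and it reuses the column names already present in `en_wikipedia_bias_data`.
###Code
# Compact sketch of the proportions that the following cells compute step by step.
grouped = en_wikipedia_bias_data.groupby(['country', 'Population mid-2018 (millions)'])
summary = grouped['article_name'].count().reset_index(name='count of articles')
# count only the FA/GA ("high quality") articles per country
hq = en_wikipedia_bias_data[en_wikipedia_bias_data['article_quality'].isin(['FA', 'GA'])]
hq_counts = hq.groupby('country')['article_name'].count().reset_index(name='count of high quality articles')
summary = summary.merge(hq_counts, on='country', how='left').fillna(0)
# articles relative to population (population column is in millions, as in the cells below)
summary['proportion of articles to population (millions)'] = (
    summary['count of articles'] / summary['Population mid-2018 (millions)'].astype('float'))
# share of a country's articles that are high quality
summary['proportion of high quality articles'] = (
    summary['count of high quality articles'] / summary['count of articles'])
###Output
_____no_output_____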
###Code
politician_articles_by_population = en_wikipedia_bias_data.groupby(['country', 'Population mid-2018 (millions)'], as_index=False)[['article_quality']].count()
politician_articles_by_population.head()
###Output
_____no_output_____
###Markdown
Here I calculate the proportion of articles as a function of the population. First I need to rename *article_quality* to *count of articles* because the way I grouped the rows was by the count of articles per country.
###Code
politician_articles_by_population = politician_articles_by_population.rename(columns = {'article_quality' : "count of articles"})
politician_articles_by_population.head()
###Output
_____no_output_____
###Markdown
Now I need to calculate the proportion of articles.
###Code
politician_articles_by_population['proportion of articles to population (millions)'] = politician_articles_by_population['count of articles'] / politician_articles_by_population['Population mid-2018 (millions)'].astype('float')
politician_articles_by_population.head()
###Output
_____no_output_____
###Markdown
Now I need to calculate the proportion of high quality articles. Remember, high quality articles means they have a rating of **FA** or **GA**.
###Code
en_wikipedia_bias_data.head()
###Output
_____no_output_____
###Markdown
I need to get the counts of **FA** and **GA** articles.
###Code
fa = en_wikipedia_bias_data['country'][en_wikipedia_bias_data['article_quality'] == "FA"].value_counts()
ga = en_wikipedia_bias_data['country'][en_wikipedia_bias_data['article_quality'] == "GA"].value_counts()
fa = pd.DataFrame(fa).reset_index()
ga = pd.DataFrame(ga).reset_index()
fa.head()
ga.head()
###Output
_____no_output_____
###Markdown
I now need to rename the column titles of the `fa` and `ga` tables so they make sense.
###Code
fa = fa.rename(columns = {'country' : 'count of FA'})
ga = ga.rename(columns ={'country' : 'count of GA'})
fa = fa.rename(columns = {'index' : 'country'})
ga = ga.rename(columns = {'index' : 'country'})
fa.head()
ga.head()
###Output
_____no_output_____
###Markdown
Now I will create a table of quality articles by country. First I will merge `FA` and `GA` by country.
###Code
merge = pd.merge(fa, ga, on = 'country', how = 'inner')
merge.head()
###Output
_____no_output_____
###Markdown
Now I will create a table of the counts of quality articles by country.
###Code
high_quality_article_counts = pd.DataFrame(merge['country'])
high_quality_article_counts.head()
high_quality_article_counts['count of high quality articles'] = merge['count of FA'] + merge['count of GA']
high_quality_article_counts.head()
###Output
_____no_output_____
###Markdown
Now I need to make a table of countries, populations, and the count of high quality articles. I will do this by merging the `politician_articles_by_population` data with the `high_quality_articles_counts`.
###Code
high_quality_politician_articles_by_population = pd.merge(politician_articles_by_population, high_quality_article_counts, on = 'country', how = 'inner')
high_quality_politician_articles_by_population.head()
###Output
_____no_output_____
###Markdown
I now need to calculate the proportion of high quality articles by country.
###Code
high_quality_politician_articles_by_population['proportion of high quality articles'] = high_quality_politician_articles_by_population['count of high quality articles'] / high_quality_politician_articles_by_population['count of articles']
high_quality_politician_articles_by_population.head()
###Output
_____no_output_____
###Markdown
Final Deliverables Top 10 countries ranked by proportion of articles to population
###Code
politician_articles_by_population.sort_values(by = 'proportion of articles to population (millions)', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries ranked by proportion of articles to population
###Code
politician_articles_by_population.sort_values(by = 'proportion of articles to population (millions)', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries ranked by proportion of high quality articles
###Code
high_quality_politician_articles_by_population.sort_values(by = 'proportion of high quality articles', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries ranked by proportion of high quality articles
###Code
high_quality_politician_articles_by_population.sort_values(by = 'proportion of high quality articles', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
This notebook is used for the Assignment: Bias in Data. The goal of the analysis is to understand bias in the article coverage of politicians across countries. Related data files: the raw inputs are page_data.csv (raw Wikipedia data) and WPDS_2018_data.csv (raw country population data); the output file is wp_wpds_politicians_by_country.csv (combined country population, ORES and Wikipedia data).
###Code
import pandas as pd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
First I load the data from the two source files into data frames. Then I inspect the data frames to see how the data is structured.
###Code
Page_Data = pd.read_csv('page_data.csv', encoding='utf-8')
WPDS = pd.read_csv('WPDS_2018_data.csv', encoding='utf-8') ##after removing the geography related information in excel.
Page_Data.head()
WPDS.head()
###Output
_____no_output_____
###Markdown
Processing of the Data: I used Excel to process the data. For the page data I (1) removed all the page names which start with "Template:", (2) copied the remaining rows and stored them in a CSV file, (3) imported them back into the Jupyter notebook as Clean_Page_Data. For the WPDS data I (1) filtered out all the capital-lettered geography rows and stored them in a separate file, (2) saved the remaining rows as a CSV. A pandas sketch of the same cleaning steps is shown below.
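###Markdown
A minimal pandas sketch of those Excel steps, assuming the raw files and the column names shown elsewhere in this document (`page` in page_data.csv and `Geography` in WPDS_2018_data.csv); the output file names are only illustrative.
###Code
# Sketch only: the same cleaning done in pandas instead of Excel (assumed column names).
raw_pages = pd.read_csv('page_data.csv', encoding='utf-8')
raw_wpds = pd.read_csv('WPDS_2018_data.csv', encoding='utf-8')
# 1. Drop page names that start with "Template:"
clean_pages = raw_pages[~raw_pages['page'].str.startswith('Template:')]
# 2. Split the ALL-CAPS geography rows (regions) from the country rows
region_rows = raw_wpds[raw_wpds['Geography'].str.isupper()]
country_rows = raw_wpds[~raw_wpds['Geography'].str.isupper()]
# 3. Save the cleaned data to be read back in, mirroring the Excel workflow
clean_pages.to_csv('clean_page_data.csv', index=False)
country_rows.to_csv('WPDS_2018_countries_only.csv', index=False)  # illustrative file name
region_rows.to_csv('WPDS_2018_regions_only.csv', index=False)     # illustrative file name
###Output
_____no_output_____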
###Code
Clean_Page_Data = pd.read_csv('clean_page_data.csv', encoding='utf-8')
Clean_Page_Data.head()
###Output
_____no_output_____
###Markdown
In the code below, we call the ORES API to score the quality of the politician Wikipedia pages. Most of this part of the code was taken from the wiki page provided for the course assignment: https://wiki.communitydata.science/Human_Centered_Data_Science_(Fall_2019)/AssignmentsA2:_Bias_in_data
###Code
import requests
import json
headers = {'User-Agent' : 'https://github.com/your_github_RCRK', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = pd.read_json(json.dumps(api_call.json(), indent=4, sort_keys=True))
for id in response['enwiki']['scores']:
try:
response_data.append([id, response['enwiki']['scores'][id]['wp10']['score']['prediction']])
except:
response_fail.append([id])
#print(response)
#return response
##return (response, indent=4, sort_keys=True)
##print(Predict)
###Output
_____no_output_____
###Markdown
In the code below, the rev_ids in Clean_Page_Data are split into 472 chunks so that only about 100 of them are fed to the API at a time for scoring. The results are collected into two lists: response_data holds every rev_id for which ORES gave a prediction, and response_fail captures the rev_ids with no prediction. There were 155 articles for which ORES was not able to provide a prediction.
###Code
response_data = []
response_fail = []
for i in np.array_split(np.asarray(Clean_Page_Data['rev_id']),472):
get_ores_data(i, headers)
print(len(response_data))
print(len(response_fail))
###Output
46546
155
###Markdown
Here response_data is converted into a data frame and written out to a CSV file.
###Code
response_data = pd.DataFrame(data=response_data, columns=['rev_id','prediction'])
response_data.to_csv('response_data.csv')
response_data.head()
###Output
_____no_output_____
###Markdown
The response_data was taken into Excel and joined with the WPDS data to create the final file with the 5 columns. The population column values are in millions. A pandas sketch of the same join is shown below.
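###Markdown
A minimal sketch of how that join could be done in pandas instead of Excel; the column names used for the final file are assumptions based on the columns shown earlier in this notebook and the documented output file.
###Code
# Sketch only: join the ORES predictions to the cleaned page data, then attach populations.
response_df = response_data.copy()
response_df['rev_id'] = response_df['rev_id'].astype('int64')  # JSON keys come back as strings
scored_pages = Clean_Page_Data.merge(response_df, on='rev_id', how='inner')
combined = scored_pages.merge(WPDS, left_on='country', right_on='Geography', how='inner')
combined = combined.rename(columns={'page': 'article_name',
                                    'prediction': 'article_quality',
                                    'Population mid-2018 (millions)': 'population'})
combined = combined[['country', 'article_name', 'rev_id', 'article_quality', 'population']]
combined.to_csv('wp_wpds_politicians_by_country.csv', index=False)
###Output
_____no_output_____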
###Code
final_data = pd.read_csv('wp_wpds_politicians_by_country.csv', encoding='utf-8')
final_data.head()
###Output
_____no_output_____
###Markdown
Analysis: Top countries by coverage. Coverage is defined as the percentage of articles per person. Here I multiplied the population by one million to get the actual population and then used this to compute coverage. I did all of the data grouping and pivoting in Excel and generated a CSV file for each of the different analyses, then loaded those files back into Jupyter. A pandas sketch of the coverage calculation is shown below.
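###Markdown
A minimal pandas sketch of the coverage calculation that was done in Excel, assuming the combined file has a `country` column, a numeric `population` column in millions, and one row per article.
###Code
# Sketch only: coverage = % of articles per person, computed from the combined data.
article_counts = (final_data.groupby(['country', 'population'])
                            .size()
                            .reset_index(name='article_count'))
# population is assumed to be in millions, so convert to persons before dividing
article_counts['CoveragePerPerson'] = (
    article_counts['article_count'] / (article_counts['population'] * 1000000) * 100)
article_counts.sort_values(by='CoveragePerPerson', ascending=False).head(10)
###Output
_____no_output_____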
###Code
coverage = pd.read_csv('coverage.csv', encoding='utf-8')
coverage.head()
from PIL import Image
display(Image.open("Capture.png"))
###Output
_____no_output_____
###Markdown
Top 10 countries ranked highest by coverage
###Code
coverage.sort_values(by = 'CoveragePerPerson',ascending = False)[0:10]
###Output
_____no_output_____
###Markdown
It is interesting to note that Tuvalu has 5.4% coverage of political articles per person. Bottom 10 countries ranked lowest by coverage
###Code
coverage.sort_values(by = 'CoveragePerPerson',ascending = True)[0:10]
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality. Relative quality here refers to the number of articles predicted to be in the FA ("Featured Article") or GA ("Good Article") classes divided by the total number of articles. A pandas sketch of this calculation is shown below.
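###Markdown
A minimal pandas sketch of the relative quality calculation that was done in Excel, assuming the combined file has `country` and `article_quality` columns.
###Code
# Sketch only: relative quality = (FA + GA articles) / total articles, per country.
total_by_country = final_data.groupby('country').size()
hq_by_country = (final_data[final_data['article_quality'].isin(['FA', 'GA'])]
                 .groupby('country').size())
rq = (hq_by_country / total_by_country).fillna(0).reset_index(name='relative quality')
rq.sort_values(by='relative quality', ascending=False).head(10)
###Output
_____no_output_____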
###Code
relativequality = pd.read_csv('relativequality.csv', encoding='utf-8')
relativequality.head()
relativequality.sort_values(by = 'relative quality',ascending = False)[0:10]
###Output
_____no_output_____
###Markdown
Bottom 10 countries by Relative Quality
###Code
relativequality.sort_values(by = 'relative quality',ascending = True)[0:10]
###Output
_____no_output_____
###Markdown
Geographic regions by coverage. This uses the geographic regions that were provided and their populations. A sketch of how this roll-up could be done in pandas is shown below.
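###Markdown
A sketch of how the regional roll-up could be done in pandas, assuming a hypothetical `country_to_region` dictionary (country name to WPDS region) and a hypothetical `region_population` DataFrame with `region` and `population` columns built from the ALL-CAPS rows that were set aside earlier; both objects are illustrative, not part of the original workflow.
###Code
# Sketch only: regional coverage using the hypothetical country_to_region / region_population objects.
regional = final_data.copy()
regional['region'] = regional['country'].map(country_to_region)
region_counts = regional.groupby('region').size().reset_index(name='article_count')
region_cov = region_counts.merge(region_population, on='region', how='inner')
# population here is assumed to already be in persons, not millions
region_cov['Coverage per person'] = region_cov['article_count'] / region_cov['population'] * 100
region_cov.sort_values(by='Coverage per person', ascending=False)
###Output
_____no_output_____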
###Code
geocoverage = pd.read_csv('geocoverage.csv', encoding='utf-8')
geocoverage.head()
geocoverage.sort_values(by = 'Coverage per person',ascending = False)
###Output
_____no_output_____
###Markdown
Geographic region by relative quality.
###Code
georelativequality = pd.read_csv('georelativequality.csv', encoding='utf-8') #reading geographic regions relative quality
georelativequality.head() #visually making sure that the data is accurate
georelativequality.sort_values(by = 'Relative Quality',ascending = False) # sorted geographic regions
###Output
_____no_output_____
###Markdown
A2 - Bias in DataPatrick Peng (ID:2029888) DATA 512 AU 2021
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 1: Getting the article and population data The "Politicians by Country" dataset was downloaded from [Figshare](https://figshare.com/articles/dataset/Untitled_Item/5513449) and is licensed [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).
###Code
pols_by_country_raw = pd.read_csv('page_data.csv')
###Output
_____no_output_____
###Markdown
The world population data is drawn from the [World Population Data Sheet](https://www.prb.org/international/indicator/population/table/) compiled by the Population Reference Bureau.
###Code
country_pop_raw = pd.read_csv('WPDS_2020_data.csv')
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the data The "Politicians by Country" dataset contains pages that are not articles. These include templates (pages that start with the string "Template:") and lists (pages that start with "List of") that we want to remove from the dataset. We'll do that here.
###Code
pols_by_country = pols_by_country_raw[
~pols_by_country_raw['page'].str.startswith('Template:') &
~pols_by_country_raw['page'].str.startswith('List of')
]
pols_by_country
###Output
_____no_output_____
###Markdown
Next, we will separate out the country population counts and sub-regional population counts from the world population data into separate DataFrames. These are distinguished in the dataset by whether or not their name is printed in all caps or not.
###Code
country_pop = country_pop_raw[~country_pop_raw['Name'].str.isupper()]
subregion_pop = country_pop_raw[country_pop_raw['Name'].str.isupper()]
country_pop
subregion_pop
###Output
_____no_output_____
###Markdown
Before we go any further, we need to associate each country with one or more sub-regions, for performing analysis at the regional level later. The `country_pop_raw` dataset is arranged hierarchically. Each row containing a sub-region is followed by rows containing data for the countries within that subregion, repeated for all sub-regions. There is also a higher level of sub-region that contains other sub-regions. Since the `country_pop` and `subregion_pop` dataframes we created preserve the original indices from `country_pop_raw`, we can use the relative position of a country's index to identify its sub-region (basically, the last sub-region entry that appears above the location of the country entry).
###Code
subregion_index = subregion_pop.index # for minor divisions like "Eastern Europe"
subregion2_index = np.array([1,64,67,109,166,216]) # for major divisions like "Europe"
country_index = country_pop.index
country_name = []
country_pop_list = []
subregion_name = []
subregion2_name = []
subregion_pop_list = []
subregion2_pop_list = []
for i in country_index:
j = subregion_index[int(np.sum(i > subregion_index))-1]
k = subregion2_index[int(np.sum(i > subregion2_index))-1]
country_name.append(country_pop['Name'][i])
country_pop_list.append(country_pop['Population'][i])
subregion_name.append(subregion_pop['Name'][j])
subregion_pop_list.append(subregion_pop['Population'][j])
subregion2_name.append(subregion_pop['Name'][k])
subregion2_pop_list.append(subregion_pop['Population'][k])
country_and_subregions = pd.DataFrame(data={'country':country_name,
'country_pop': country_pop_list,
'subregion':subregion_name,
'subregion_pop':subregion_pop_list,
'subregion2':subregion2_name,
'subregion2_pop':subregion2_pop_list})
###Output
_____no_output_____
###Markdown
Now we have a neat table listing each country, its population, and the subregions it belongs to (along with the subregional populations).
###Code
country_and_subregions
###Output
_____no_output_____
###Markdown
Step 3: Getting article quality predictionsWe'll use the REST API endpoint for ORES to get article quality predictions. We'll set it up here.
###Code
import json
import requests
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={revid}'
headers = {
'User-Agent': 'https://github.com/ppeng2',
'From': '[email protected]'
}
###Output
_____no_output_____
###Markdown
The maximum number of revids in a single API request appears to be 50, so we will have to batch our revids and make sequential API calls. I've written a function `batch_revids` below to perform this.
###Code
def batch_revids(batch_sz, revid_input):
batch_list = []
count = 0
while count < len(revid_input):
start_ind = count
if count + batch_sz < len(revid_input):
end_ind = count + batch_sz
else:
end_ind = len(revid_input)
# 0-50 (0-49), 50-100 (50-99), 100-150 (100-149) ... 46200-46250 (46200-46249), 46250-46291
batch_list.append('|'.join(str(x) for x in revid_input[start_ind:end_ind]))
count = end_ind
return batch_list
all_revids = pols_by_country['rev_id'].to_list()
batch_list = batch_revids(50,all_revids)
###Output
_____no_output_____
###Markdown
Next, I wrote some functions to perform the ORES API call (`get_data`) and parse the resulting JSON structure (`parse_json`) to pull out the features of interest, namely the revid and the predicted score. `parse_json` also compiles a list of all revids that ORES couldn't retrieve a score for.
###Code
def get_data(revids):
call = requests.get(endpoint.format(revid = revids), headers=headers)
response = call.json()
return response
def parse_json(response):
revid_list = []
score_list = []
unscored_revids = []
for i in response['enwiki']['scores']:
try:
score_list.append(response['enwiki']['scores'][i]['articlequality']['score']['prediction'])
revid_list.append(i)
except KeyError:
unscored_revids.append(i)
#score_data = pd.DataFrame({'rev_id': revid_list, 'score': score_list})
return (revid_list,score_list,unscored_revids)
###Output
_____no_output_____
###Markdown
Now, we sequentially call `get_data` and `parse_json` on each batch we prepared. This takes a little bit of time. As each batch completes, we'll add its results to a set of `big__list`s. We'll convert them to a DataFrame once all the batches are done running (it's faster to do it this way rather than create a DataFrame for each batch then concatenate them).
###Code
big_revid_list = []
big_score_list = []
big_unscored_revid_list = []
for i, batch in enumerate(batch_list):
response = get_data(batch)
(revid_list,score_list,unscored_revids) = parse_json(response)
big_revid_list.extend(revid_list)
big_score_list.extend(score_list)
big_unscored_revid_list.extend(unscored_revids)
score_data = pd.DataFrame({'rev_id': big_revid_list, 'score': big_score_list})
score_data
###Output
_____no_output_____
###Markdown
Before we continue, let's save a list of the pages that we couldn't retrieve scores for.
###Code
unscored_pages = pols_by_country[pols_by_country['rev_id'].isin(big_unscored_revid_list)]
unscored_pages.to_csv(path_or_buf='unscored_pages.csv',index=False)
###Output
_____no_output_____
###Markdown
Step 4: Combining the datasets We can do a database-style inner join of our `pols_by_country` and `score_data` dataframes using `.merge()` with `rev_id` as the key. Since it's an inner join, pages that we couldn't get scores for will not show up in the resulting dataframe. But before we do this we have to cast the `rev_id` column of `score_data` to int (currently str) so that it's consistent with that of `pols_by_country`.
###Code
score_data['rev_id'] = score_data['rev_id'].astype(int)
combined_dataset = pols_by_country.merge(score_data, on='rev_id')
###Output
_____no_output_____
###Markdown
We still need to add another column, for population. So we need to do another inner join with the `country_and_subregions` dataframe.
###Code
combined_dataset2 = combined_dataset.merge(country_and_subregions, on='country')
###Output
_____no_output_____
###Markdown
This is the size of the dataset after we do the second inner join. In this process we lose about 1,800 pages whose country could not be matched.
###Code
combined_dataset2.shape[0]
###Output
_____no_output_____
###Markdown
Let's take a look at those pages that couldn't get a match for Country and see what countries are causing problems.
###Code
no_match = pols_by_country[~pols_by_country['rev_id'].isin(combined_dataset2['rev_id'])]
no_match2 = no_match[~no_match['rev_id'].isin(unscored_pages['rev_id'])] # remove those that didn't have scores
no_match2['country'].unique()
###Output
_____no_output_____
###Markdown
It looks like there are some typos and errors in the `country` field, most commonly the use of the demonym rather than the country name or the use of an outdated name. Just for the heck of it, I'll try to fix some of them and see if we can reduce the number of no-match pages.
###Code
correction_dict = {'Czech Republic': 'Czechia',
'Salvadoran': 'El Salvador',
'Congo, Dem. Rep. of': 'Congo, Dem. Rep.',
'Samoan': 'Samoa',
'Saint Kitts and Nevis':'St. Kitts-Nevis',
'Ivorian': "Cote d'Ivoire",
'South Korean': 'Korea, South',
'Saint Lucian': 'Saint Lucia',
'Hondura': 'Honduras',
'Jersey': 'Channel Islands',
'Guernsey': 'Channel Islands',
'Macedonia': 'North Macedonia',
'Saint Vincent and the Grenadines': 'St. Vincent and the Grenadines',
'Omani': 'Oman',
'Swaziland': 'eSwatini',
'Palauan': 'Palau'
}
combined_dataset['country'] = combined_dataset['country'].replace(to_replace=correction_dict)
###Output
_____no_output_____
###Markdown
Having done that, we'll do the join again. But this time, we're going to do an outer join, because we also want to get both "countries with no matching articles" and "articles with no matching country" in there, which we can pull out and save to a file later.
###Code
combined_dataset3 = combined_dataset.merge(country_and_subregions, how='outer', on='country')
combined_dataset3
###Output
_____no_output_____
###Markdown
Before we save anything to file, we'll rename and reorder some columns.
###Code
combined_dataset3 = combined_dataset3[['country','page','rev_id','score','country_pop','subregion','subregion_pop','subregion2','subregion2_pop']]
combined_dataset3.rename(columns={'page':'article_name','score':'article_quality_est'},inplace=True)
###Output
_____no_output_____
###Markdown
Let's pull out the rows where no match for either country or article could be found. We can identify these because they have NaN for one or more columns.
###Code
missing_data = combined_dataset3[combined_dataset3.isnull().any(axis=1)]
matched_data = combined_dataset3[~combined_dataset3.isnull().any(axis=1)]
matched_data
missing_data
###Output
_____no_output_____
###Markdown
Finally, we'll save the matched and no-match datasets to file. By manually fixing some of the country encodings we've reduced the number of lost pages to about 500.
###Code
matched_data.to_csv(path_or_buf='wp_wpds_politicians_by_country.csv',index=False)
missing_data.to_csv(path_or_buf='wp_wpds_countries-no_match.csv',index=False)
###Output
_____no_output_____
###Markdown
Step 5: Analysis We will perform some pivots on `matched_data` to obtain our desired insights. First, to get a measure of coverage, or total articles for each country.
###Code
total_articles = pd.pivot_table(data=matched_data,index=['country','country_pop'],values='article_name',aggfunc='count')
total_articles.reset_index(inplace=True)
total_articles.rename(columns={'article_name':'total_articles_count'},inplace=True)
###Output
_____no_output_____
###Markdown
Next, we'll get a count of GA or FA articles for each country.
###Code
# First pivot: for each country, how many articles are in each score class
quality_articles = pd.pivot_table(data=matched_data,index=['country','country_pop','article_quality_est'],values='article_name',aggfunc='count')
quality_articles.reset_index(inplace=True)
quality_articles.rename(columns={'article_name':'total_articles_count'},inplace=True)
# Filter: GA or FA scores only
quality_articles = quality_articles[(quality_articles['article_quality_est']=='GA') | (quality_articles['article_quality_est']=='FA')]
# Second pivot: For each country, sum up the number of GA and FA scores
quality_articles2 = pd.pivot_table(data=quality_articles,index=['country','country_pop'],values='total_articles_count',aggfunc='sum')
quality_articles2.rename(columns={'total_articles_count':'quality_articles_count'},inplace=True)
quality_articles2.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
Now we'll perform a right outer join with `total_articles` to access the `total_articles_count` attribute so we can calculate a proportion. We're doing an outer join because there might be some countries that have no GA or FA articles.
###Code
country_data = quality_articles2.merge(total_articles,on=['country','country_pop'],how='right')
country_data.fillna(value=0,inplace=True)
###Output
_____no_output_____
###Markdown
We now have everything we need to calculate the proportions at the country level. We'll do those calculations now.
###Code
country_data['articles_per_capita'] = country_data['total_articles_count']/country_data['country_pop']
country_data['quality_fraction'] = country_data['quality_articles_count']/country_data['total_articles_count']
country_data
###Output
_____no_output_____
###Markdown
Now we'll repeat the same analysis at the sub-regional level. Since there are actually two levels of sub-regions, but they all have the same tag, we have to do this twice and then take the union of the two tables.
###Code
# For the lower level of subregion e.g "eastern europe"
# coverage (count of all articles for a sub-region)
total_articles = pd.pivot_table(data=matched_data,index=['subregion','subregion_pop'],values='article_name',aggfunc='count')
total_articles.reset_index(inplace=True)
total_articles.rename(columns={'article_name':'total_articles_count'},inplace=True)
# Quality proportion
# First pivot: for each country, how many articles are in each score class
quality_articles = pd.pivot_table(data=matched_data,index=['subregion','subregion_pop','article_quality_est'],values='article_name',aggfunc='count')
quality_articles.reset_index(inplace=True)
quality_articles.rename(columns={'article_name':'total_articles_count'},inplace=True)
# Filter: GA or FA scores only
quality_articles = quality_articles[(quality_articles['article_quality_est']=='GA') | (quality_articles['article_quality_est']=='FA')]
# Second pivot: For each country, sum up the number of GA and FA scores
quality_articles2 = pd.pivot_table(data=quality_articles,index=['subregion','subregion_pop'],values='total_articles_count',aggfunc='sum')
quality_articles2.rename(columns={'total_articles_count':'quality_articles_count'},inplace=True)
quality_articles2.reset_index(inplace=True)
region_data = quality_articles2.merge(total_articles,on=['subregion','subregion_pop'],how='right')
region_data.fillna(value=0,inplace=True)
region_data['articles_per_capita'] = region_data['total_articles_count']/region_data['subregion_pop']
region_data['quality_fraction'] = region_data['quality_articles_count']/region_data['total_articles_count']
region_data
# for the higher level of subregion e.g. "europe"
# coverage (count of all articles for a sub-region)
total_articles = pd.pivot_table(data=matched_data,index=['subregion2','subregion2_pop'],values='article_name',aggfunc='count')
total_articles.reset_index(inplace=True)
total_articles.rename(columns={'article_name':'total_articles_count'},inplace=True)
# Quality proportion
# First pivot: for each country, how many articles are in each score class
quality_articles = pd.pivot_table(data=matched_data,index=['subregion2','subregion2_pop','article_quality_est'],values='article_name',aggfunc='count')
quality_articles.reset_index(inplace=True)
quality_articles.rename(columns={'article_name':'total_articles_count'},inplace=True)
# Filter: GA or FA scores only
quality_articles = quality_articles[(quality_articles['article_quality_est']=='GA') | (quality_articles['article_quality_est']=='FA')]
# Second pivot: For each country, sum up the number of GA and FA scores
quality_articles2 = pd.pivot_table(data=quality_articles,index=['subregion2','subregion2_pop'],values='total_articles_count',aggfunc='sum')
quality_articles2.rename(columns={'total_articles_count':'quality_articles_count'},inplace=True)
quality_articles2.reset_index(inplace=True)
region2_data = quality_articles2.merge(total_articles,on=['subregion2','subregion2_pop'],how='right')
region2_data.fillna(value=0,inplace=True)
region2_data['articles_per_capita'] = region2_data['total_articles_count']/region2_data['subregion2_pop']
region2_data['quality_fraction'] = region2_data['quality_articles_count']/region2_data['total_articles_count']
region2_data.rename(columns={'subregion2':'subregion','subregion2_pop':'subregion_pop'},inplace=True)
region2_data
###Output
_____no_output_____
###Markdown
Now to take the union of the two subregion tables and get them all in one. Since "oceania" and "northern america" occur in both sets, we will make sure to drop duplicates from the final table.
###Code
region_data_all = pd.concat([region_data,region2_data]).drop_duplicates().reset_index(drop=True)
region_data_all
###Output
_____no_output_____
###Markdown
Step 6: Results 6.1 Top 10 countries by coverage
###Code
country_data.nlargest(10,'articles_per_capita')
###Output
_____no_output_____
###Markdown
6.2 Bottom 10 countries by coverage
###Code
country_data.nsmallest(10,'articles_per_capita')
###Output
_____no_output_____
###Markdown
6.3 Top 10 countries by relative quality
###Code
country_data.nlargest(10,'quality_fraction')
###Output
_____no_output_____
###Markdown
6.4 Bottom 10 countries by relative quality
###Code
country_data.nsmallest(10,'quality_fraction')
###Output
_____no_output_____
###Markdown
Note that these are just the first 10 entries in an alphabetized list of all countries with 0 GA or FA ranked articles. Not a particularly interesting result. 6.5 Geographic regions by coverage
###Code
region_data_all.sort_values(by=['articles_per_capita'],ascending=False)
###Output
_____no_output_____
###Markdown
6.6 Geographic regions by relative quality
###Code
region_data_all.sort_values(by=['quality_fraction'],ascending=False)
###Output
_____no_output_____
###Markdown
English Wikipedia Political Figures Articles Coverage and Quality Analysis
###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 1: Set up page and population data **Wikipedia dataset** from Figshare.com: https://figshare.com/articles/Untitled_Item/5513449 This dataset is downloaded from Figshare.com. The dataset is titled "Politicians by Country from the English-language Wikipedia" and contains data extracted from Wikimedia through API calls. Both the dataset and the code used to extract the data are under the CC-BY-SA 4.0 license. It is downloadable as a csv file titled "page_data.csv", and there are three columns and 47,197 rows in the csv file. page: article title of the page for the political figure, not cleaned yet country: cleaned version of the country name taken from the category the political figure is listed under rev_id: unique identifier for revision tracking
###Code
## load page_data.csv into pandas DataFrame and examine first 5 rows
page_data = pd.read_csv('page_data.csv', sep=',')
page_data.head()
###Output
_____no_output_____
###Markdown
**Population dataset** from Dropbox: https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0 This dataset is downloaded from Dropbox. The dataset is originally from the Population Reference Bureau under International Indicators, and it contains mid-2018 population data for all countries, in millions. It is downloadable as a csv file titled "WPDS_2018_data.csv", and there are two columns and 207 rows in the csv file. Geography: country and continent names Population mid-2018 (millions): population data from mid-2018 in millions
###Code
## load WPDS_2018_data.csv into pandas DataFrame and examine first 5 rows
population_data = pd.read_csv('WPDS_2018_data.csv', sep=',', thousands=',')
population_data.head()
###Output
_____no_output_____
###Markdown
Step 2: Set up article quality predictions For the article quality predictions, we will be using ORES API calls by passing in each articles' rev_id and getting their 'prediction' values from the json file. For ORES documentation, please refer to this website: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context For prediction values in ORES, there are 6 quality categories, in later analysis, we will mainly focus on the first two categories for high quality article percentage calculation. FA - Featured article GA - Good article B - B-class article C - C-class article Start - Start-class article Stub - Stub-class article
###Code
## import packages for making API calls to ORES
import requests
import json
## Define hearders and endpoint for API call
headers = {'User-Agent' : 'https://github.com/yd4wh', 'From' : '[email protected]'}
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
## Define a function that will recurse over all rev_ids and output quality predictions
def get_ores_quality_prediction(revids, headers, endpoint):
# define parameters for endpoints
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revids)
}
# use above defined parameters to make API requests
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
# loop thru each revids in the 100 group to get their quality predictions
quality_prediction = []
revid_list = []
# After testing, there are errors with revids that don't have a score associated
# therefore, when looping, also included except to pass the revids that don't have scores associated with them
for revid in revids:
# to iterate thru every revids to pull out the prediction value
try:
quality_prediction.append(response['enwiki']['scores'][str(revid)]['wp10']['score']['prediction'])
revid_list.append(revid)
# use except to pass thru the revids without scores
except:
pass
# this function will return revids and its associated prediction values
return revid_list, quality_prediction
## set up revids in every 100 to be passed thru the API call function
# change revids into list
revids = list(page_data['rev_id'])
# define starting and ending points for first iteration
start = 0
end = 100
# create empty Dataframe for collecting article quality output
article_quality = pd.DataFrame()
# loop over all revids in groups of 100s
while start < len(revids):
# pull out the revids in groups of 100 for each iteration
iter_revids = revids[start:end]
# call the function to get article quality predictions
iter_result = get_ores_quality_prediction(iter_revids,headers,endpoint)
article_quality = article_quality.append(pd.DataFrame(list(iter_result)).T)
# update starting and ending points for next iteration
start += 100
end = min(start+100, len(revids))
# print out the final Dataframe with the revids that don't have scores
article_quality.head()
###Output
_____no_output_____
###Markdown
The ORES article quality prediction dataframe is now saved as **article_quality** with 2 columns and 47,092 rows after removing all articles that doesn't have a article score. revision_id: the revision_id that can be linked back to page_data article_quality: the ORES quality prediction for associated revision_id
###Code
# rename article_quality columns before merging in next step
article_quality.rename(columns={0:'revision_id',1:'article_quality'}, inplace=True)
article_quality.head()
###Output
_____no_output_____
###Markdown
Step 3: Combine page_data, population_data and article_quality This step uses the common columns in page_data (rev_id, country), population_data (Geography) and article_quality (revision_id) to merge all three dataframes together, and in the end builds a combined dataframe with 5 columns and 44,973 rows after removing all data points that don't match. country: country column from page_data article_name: page column from page_data revision_id: revision_id column from article_quality article_quality: article_quality column from article_quality population: Population mid-2018 (millions) column from population_data, which will be in millions
###Code
# make deep copies of the three dataframes as base df for merging
df_page_data = page_data.copy(deep=True)
df_population_data = population_data.copy(deep=True)
df_article_quality = article_quality.copy(deep=True)
# combine page_data and article_quality on rev_id and revision_id columns
combined_data = df_page_data.merge(df_article_quality, how='right', left_on='rev_id', right_on='revision_id')
# combine combined_data with population_data on country and geography columns
combined_data = combined_data.merge(df_population_data, how='inner', left_on='country', right_on='Geography')
combined_data.rename(columns={'page':'article_name','Population mid-2018 (millions)':'population'}, inplace=True)
combined_data.head()
# clean dataframe for combined_data to just keep five colunms documented above
df_combined_data = combined_data[['country',
'article_name',
'revision_id',
'article_quality',
'population']]
df_combined_data.head()
# output final_data.csv for reproducibiilty
df_combined_data.to_csv('final_data.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 4: Analysis of article quality by country and population
###Code
# make a deep copy of the final DataFrame for analysis
final_data = df_combined_data.copy(deep=True)
###Output
_____no_output_____
###Markdown
The **percentage of articles-per-population** for each country: this measure is calculated by taking the total number of articles for a particular country and dividing it by the total population of the corresponding country. This requires us to count the total number of articles by country and to represent the population as an absolute number rather than in millions.
###Code
# count total number of articles in each country using group by
article_by_country = final_data.groupby('country').count()['article_name']
# pass the series into a dataframe for merging with population data
df_article_by_country = article_by_country.to_frame(name='article_count')
# change country into a column instead of index for merging
df_article_by_country['country'] = df_article_by_country.index
df_article_by_country.head()
# merge with population_data df to calculate percentage
articles_per_population = df_article_by_country.merge(df_population_data, how='inner',
left_on='country', right_on='Geography')
# change population number into normal presentation
articles_per_population['population'] = articles_per_population['Population mid-2018 (millions)']*1000000
# calculate the percentage of articles per population by country
articles_per_population['pcnt_articles_per_population'] = 100*(articles_per_population['article_count']/articles_per_population['population'])
###Output
_____no_output_____
###Markdown
The **percentage of high-quality-articles** for each country: this measure is calculated by taking the total number of articles for a particular country that qualify as either "FA" or "GA" and dividing it by the total number of articles about politicians of the corresponding country.
###Code
# limit articles to only "FA" and "GA" qualities
high_quality_articles = final_data.loc[final_data['article_quality'].isin(['FA','GA'])]
# count total number of high quality articles in each country using group by
quality_article_by_country = high_quality_articles.groupby('country').count()['article_name']
# pass the series into a dataframe for merging with population data
df_quality_article_by_country = quality_article_by_country.to_frame(name='high_quality_article_count')
# change country into a column instead of index for later merging
df_quality_article_by_country['country'] = df_quality_article_by_country.index
df_quality_article_by_country.head()
# merge with articles_per_population df to calculate article percentage
analysis_df = df_quality_article_by_country.merge(articles_per_population, how='right',
left_on='country', right_on='country')
# divide total number of high quality articles by total article count
analysis_df['pcnt_high_quality_articles'] = 100*(analysis_df['high_quality_article_count']/analysis_df['article_count'])
analysis_df.head()
###Output
_____no_output_____
###Markdown
The combined analysis DataFrame will include all countries that have population and wikipedia articles regardless of the count of high quality articles.
###Code
# keep neccessary and non-duplicate columns
analysis_df = analysis_df[['country',
'article_count',
'high_quality_article_count',
'population',
'pcnt_articles_per_population',
'pcnt_high_quality_articles']]
analysis_df.head()
###Output
_____no_output_____
###Markdown
Step 5: Tables of highest and lowest ranked countries by *articles_per_population* and *high_quality_articles* This section will display four tables that summarize the 10 highest and 10 lowest ranked countries in terms of their pcnt_articles_per_population and pcnt_high_quality_articles in the order below: 1. 10 highest-ranked countries in terms of pcnt_articles_per_population 2. 10 lowest-ranked countries in terms of pcnt_articles_per_population 3. 10 highest-ranked countries in terms of pcnt_high_quality_articles 4. 10 lowest-ranked countries in terms of pcnt_high_quality_articles
###Code
# 10 highest-ranked countries sorting by 'pcnt_articles_per_population'
analysis_df.sort_values(by='pcnt_articles_per_population', ascending=False).head(10)[['country',
'article_count',
'population',
'pcnt_articles_per_population']]
# 10 lowest-ranked countries sorting by 'pcnt_articles_per_population'
analysis_df.sort_values(by='pcnt_articles_per_population').head(10)[['country',
'article_count',
'population',
'pcnt_articles_per_population']]
# 10 highest-ranked countries sorting by 'pcnt_high_quality_articles'
analysis_df.sort_values(by='pcnt_high_quality_articles', ascending=False).head(10)[['country',
'high_quality_article_count',
'article_count',
'pcnt_high_quality_articles']]
# 10 lowest-ranked countries sorting by 'pcnt_high_quality_articles'
analysis_df.sort_values(by='pcnt_high_quality_articles').head(10)[['country',
'high_quality_article_count',
'article_count',
'pcnt_high_quality_articles']]
###Output
_____no_output_____
###Markdown
One caveat on the lowest-ranked countries in terms of pcnt_high_quality_articles: we only included countries that have at least 1 article rated "GA" or "FA", and excluded countries that don't have any high quality articles about politicians. The countries without any high quality articles about politicians are therefore listed separately below, in alphabetical order. There are 37 countries that don't have any articles rated "GA" or "FA".
###Code
countries_without_high_quality_articles = analysis_df.loc[pd.isnull(analysis_df['pcnt_high_quality_articles'])]
print("There are "+ str(countries_without_high_quality_articles.count()[1]) + " countries that don't have any high quality articles.")
countries_without_high_quality_articles[['country']]
###Output
There are 37 countries that don't have any high quality articles.
###Markdown
A2: Bias In Data The goal of this repository is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. Table of Contents 1. [Data Acquisition](acquisition) 2. [Data Cleaning and Processing](cleaning) 3. [Analysis and Results](analysis)
###Code
import os
import json
import requests
import pandas as pd
from tqdm import tqdm_notebook as tqdm
###Output
_____no_output_____
###Markdown
Data Acquisition We use two local data sources: 1. The Wikipedia English article dataset under the "Category: Politicians by nationality" category 2. The population dataset
###Code
wiki_articles_df = pd.read_csv("./data/page_data.csv")
population_df = pd.read_csv("./data/wikipedia_population_2018_data.csv")
wiki_articles_df.head(2)
population_df.head(2)
###Output
_____no_output_____
###Markdown
Rename the columns, and make the population count more explicit
###Code
population_df.columns = ['country', 'population']
population_df["population"] = population_df["population"].apply(lambda s: s.replace(",", "")).apply(float)*1000000
population_df.head()
###Output
_____no_output_____
###Markdown
Retrieving Article QualityWe also need to generate the quality of each article, for which we use the [ORES API](https://www.mediawiki.org/wiki/ORES)This API returns a prediction which is one of the following categories: 1. FA - Featured article 2. GA - Good article 3. B - B-class article 4. C - C-class article 5. Start - Start-class article 6. Stub - Stub-class article The following code is inspired from the [A2 reference notebook](https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb)
###Code
HEADERS = {'User-Agent': 'https://github.com/havanagrawal', 'From': '[email protected]'}
def get_ores_data(revision_ids, headers=HEADERS):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {
'project': 'enwiki',
'model': 'wp10',
'revids': '|'.join(str(x) for x in revision_ids)
}
json_response = requests.get(endpoint.format(**params)).json()
quality_predictions = []
# Unpack predictions according to the response structure, which can be found in the reference notebook
for key, value in json_response["enwiki"]["scores"].items():
result_dict = value["wp10"]
if "error" not in result_dict:
prediction = {
'rev_id': int(key),
'prediction': result_dict["score"]["prediction"]
}
quality_predictions.append(prediction)
return quality_predictions
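# Illustrative return shape (hypothetical revision ID, kept as a comment so no API call is made):
# get_ores_data([123456789]) -> [{'rev_id': 123456789, 'prediction': '<one of FA/GA/B/C/Start/Stub>'}]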
###Output
_____no_output_____
###Markdown
In order to minimize the number of calls to the API, we can group revision ids into chunks of 50-100, and then call the API once for each group. To enable this, we use a quick recipe to iterate n items at a time from a collection
###Code
def grouper(lst, n):
"""Collect data into fixed-length chunks or blocks
>>> list(grouper('ABCDEFG', 3))
['ABC', 'DEF', 'G']
"""
for i in range(0, len(lst), n):
yield lst[i:i + n]
###Output
_____no_output_____
###Markdown
If you already have the `article_quality.csv` file, then you need not retrieve predictions from ORES, since it can take up to 20 minutes, depending on your internet speed.
###Code
QUALITY_PREDICTION_FILEPATH = "./data/article_quality.csv"
DOWNLOAD_PREDICTIONS = not os.path.exists(QUALITY_PREDICTION_FILEPATH)
DOWNLOAD_PREDICTIONS
###Output
_____no_output_____
###Markdown
Retrieve and concatenate all JSON results into a single pandas DataFrame, and save it, if the output file doesn't already exist
###Code
if DOWNLOAD_PREDICTIONS:
revision_ids = wiki_articles_df.rev_id.tolist()
# Group revision IDs into chunks of 100
grouped_ids = list(grouper(revision_ids, 100))
# Get the article predictions from ORES in JSON format
article_quality_json_data = [get_ores_data(subset) for subset in tqdm(grouped_ids)]
# Convert the JSON data into DataFrames
temp_dfs = [pd.DataFrame.from_records(json_subset) for json_subset in article_quality_json_data]
# Concatenate and save the DataFrames
article_quality_df = pd.concat(temp_dfs)
article_quality_df.to_csv(QUALITY_PREDICTION_FILEPATH, index=False)
else:
article_quality_df = pd.read_csv(QUALITY_PREDICTION_FILEPATH)
article_quality_df.head(2)
###Output
_____no_output_____
###Markdown
Data Processing and Cleaning We want to find and report: 1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population 2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population 3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country 4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country To achieve this, we need to merge the quality prediction with the original dataset. This may lead to some data loss since not all revisions will have a prediction
###Code
print("DataFrame Shape Before Merging\t", wiki_articles_df.shape)
wiki_articles_df = wiki_articles_df.merge(article_quality_df, on='rev_id')
print("DataFrame Shape After Merging\t", wiki_articles_df.shape)
###Output
DataFrame Shape Before Merging (47197, 3)
DataFrame Shape After Merging (47092, 4)
###Markdown
We can now perform a group by country, and count 1. The total number of articles 2. The total number of high-quality articles where high quality is defined as a prediction of either "FA" or "GA"
###Code
def is_high_quality(s):
return s == "FA" or s == "GA"
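# Purely illustrative: is_high_quality('FA') -> True, is_high_quality('Stub') -> False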
high_quality_only = wiki_articles_df[wiki_articles_df.prediction.apply(is_high_quality)]
###Output
_____no_output_____
###Markdown
Counting the total number of articles by country:
###Code
all_article_counts = pd.DataFrame(wiki_articles_df.groupby('country').count()['rev_id'])
all_article_counts = all_article_counts.reset_index()
all_article_counts.columns = ['country', 'all_article_counts']
all_article_counts.head()
###Output
_____no_output_____
###Markdown
Similarly, counting the total number of high-quality (HQ) articles by country
###Code
hq_article_counts = pd.DataFrame(high_quality_only.groupby('country').count()['rev_id'])
hq_article_counts = hq_article_counts.reset_index()
hq_article_counts.columns = ['country', 'hq_article_counts']
hq_article_counts.head()
###Output
_____no_output_____
###Markdown
We can now perform a three-way merge between the population, all article count and high quality article count datasets:
###Code
temp = pd.merge(hq_article_counts, all_article_counts, on='country')
final_df = pd.merge(temp, population_df, on='country')
final_df.head()
###Output
_____no_output_____
###Markdown
We apply a final transformation to get the articles/population counts
###Code
final_df["hq_articles_ratio"] = final_df.hq_article_counts / final_df.all_article_counts
final_df["all_articles_per_pop"] = final_df.all_article_counts / final_df.population
final_df.sample(5, random_state=42)
###Output
_____no_output_____
###Markdown
Save the results to a CSV, so that advanced analysis can be performed independently.
###Code
final_df.to_csv("./data/article_quality_with_population.csv", index=False)
###Output
_____no_output_____
###Markdown
We can now report the desired metrics. Analysis and Results The `.reset_index().drop('index', axis=1)` renumbers the rows from 0 after sorting. 1. 10 Highest Ranked Countries in terms of number of politician articles as a proportion of country population
###Code
final_df.sort_values("all_articles_per_pop", ascending=False).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
final_df.sort_values("all_articles_per_pop", ascending=True).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
In order to investigate if population is playing a stronger role in this metric than the number of articles, we can look at the most and least populated countries in our dataset:
###Code
most_populated = final_df.sort_values('population').country.tail(10).tolist()
least_populated = final_df.sort_values('population').country.head(10).tolist()
print("Most Populated:\t", ", ".join(sorted(most_populated)))
print("Least Populated:", ", ".join(sorted(least_populated)))
###Output
Most Populated: Bangladesh, Brazil, China, India, Indonesia, Mexico, Nigeria, Pakistan, Russia, United States
Least Populated: Dominica, Grenada, Iceland, Luxembourg, Maldives, Montenegro, Suriname, Tonga, Tuvalu, Vanuatu
###Markdown
3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
final_df.sort_values("hq_articles_ratio", ascending=False).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
final_df.sort_values("hq_articles_ratio", ascending=True).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
A2 - Bias in Data The goal of this assignment is to explore the concept of bias in data through an analysis of article quality for political figures on English Wikipedia from a variety of countries. This analysis uses datasets of Wikipedia articles and country populations and employs the ORES machine learning service to estimate article quality. Before we begin, we must import any packages needed:
###Code
import pandas as pd
import numpy as np
import json
import requests
###Output
_____no_output_____
###Markdown
Step 1: Getting the Article and Population Data Data was collected from two different sources: [Politicians by Country](https://figshare.com/articles/Untitled_Item/5513449), from the _page_data.csv_ file on Figshare, and the [attached spreadsheet](https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/edit?usp=sharing), drawn from the [World Population Data Sheet (WPDS)](https://www.prb.org/international/indicator/population/table/) published by the Population Reference Bureau. The data was read into pandas dataframes from the _/raw_data/_ directory and stored below. We also previewed the data and investigated its shape.
###Code
politicians_df = pd.read_csv('raw_data/page_data.csv')
population_df = pd.read_csv('raw_data/WPDS_2020_data.csv')
print('politicians shape:', politicians_df.shape)
print('population shape:', population_df.shape)
# preview Politicians by Country
politicians_df.head()
# preview WPDS population data
population_df.head()
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data First, we will clean `politicians_df` which contains some _pages_ that are NOT Wikipedia article names. These start with the string _'Template:'_ and will be excluded from future analysis.
###Code
politicians_df = politicians_df[~politicians_df['page'].str.startswith('Template:')]
print('politicians shape:', politicians_df.shape)
politicians_df.head()
###Output
politicians shape: (46701, 3)
###Markdown
Next, we will clean `population_df` which has some rows with cumulative (region or world) population counts. These are indicated by ALL CAPS in the _Name_ field.__Note:__ If this cleaning is done by filtering the _Type_ field for `Type = 'Country'` this will exclude the _Channel Islands_. Although the _Channel Islands_ are not a country, they are also not a region and will be treated like a country for this analysis.
###Code
country_df = population_df[~population_df['Name'].str.isupper()]
print('country shape:', country_df.shape)
country_df.head()
###Output
country shape: (210, 6)
###Markdown
Finally, we will create a new dataframe that maps each country to their appropriate region.__Note:__ We made the assumption that each of the 24 regions was at the same level and that the ordering of the WPDS dataset matters (i.e. the nearest region above a given country in the original _WPDS_2020_data.csv_ is its region). - Create a dataframe of all the regional data we discarded to create _country_df_
###Code
region_df = population_df[population_df['Name'].str.isupper()]
print('region shape:', region_df.shape)
region_df.head()
###Output
region shape: (24, 6)
###Markdown
- Get the indices of each region in the original WPDS dataset
###Code
region_index_df = pd.DataFrame(columns = ['region_index','region', 'region_population'])
region_index_df['region'] = region_df['Name']
region_index_df['region_population'] = region_df['Population']
region_index_df['region_index'] = region_df.index
region_indices = region_index_df['region_index'].unique()
print(region_indices)
###Output
[ 0 1 2 10 27 48 58 64 67 68 77 95 109 110 129 135 145 157
166 167 179 189 200 216]
###Markdown
- Initial _country_region_df_ to serve as the mapping between country and region (and their respective populations) by index
###Code
country_region_df = pd.DataFrame(columns = ['country_index' ,'country', 'country_population', 'region_index'])
country_region_df['country'] = country_df['Name']
country_region_df['country_population'] = country_df['Population']
country_region_df['country_index'] = country_df.index
country_region_df.head()
###Output
_____no_output_____
###Markdown
- For each element of _region_indices_, assign that region to every country whose _country_index_ is greater than the region's index; because the loop runs in ascending order, each country ends up mapped to the nearest region above it. Then join to also map populations at country and regional level.__Note:__ If there are no countries in a given region, the region will be dropped here from future analysis.
###Code
country_region_df.shape
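# The loop below iterates over the region indices in ascending order; each pass overwrites 'region_index'
# for every country whose index is greater than that region's index, so after the final pass each
# country is mapped to the nearest region appearing above it in the WPDS ordering.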
for i in region_indices:
country_region_df.loc[country_region_df['country_index'] > i, 'region_index'] = i
country_region_df = pd.merge(country_region_df, region_index_df, how = 'inner', on = 'region_index')
country_region_df = country_region_df.drop(columns = ['country_index', 'region_index'])
country_region_df.head()
###Output
_____no_output_____
###Markdown
Write the 4 cleaned dataframes to _/prelim_data/_ for investigation later.
###Code
politicians_df.to_csv('prelim_data/politicians_prelim.csv', index = False)
country_df.to_csv('prelim_data/country_prelim.csv', index = False)
region_df.to_csv('prelim_data/region_prelim.csv', index = False)
country_region_df.to_csv('prelim_data/country_region_mapping.csv', index = False)
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Predictions As discussed in __Step 1__, the article quality predictions are generated using ORES. This is a machine learning system that learned scoring based on articles in Wikipedia that had been peer-reviewed using the [Wikipedia content assessment](https://en.wikipedia.org/wiki/Wikipedia:Content_assessment) guidelines and grouped into a subset of 6 categories. The 6 categories for article quality (from best-to-worst) are: 1. FA - Featured article 2. GA - Good article 3. B - B-class article 4. C - C-class article 5. Start - Start-class article 6. Stub - Stub-class article There are two options for calling this API (Option 1 uses the _ORES client_, see some demo code under __Step 3: Option 1:__ [here](https://docs.google.com/document/d/11eswL84T-H6bli8aX_-XndCN6tAZ4bIb9Z2ywiIf2fE/edit)). We will be using the second recommended method, the ORES REST API endpoint [documentation](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model). This API expects a project (_enwiki_ for English Wikipedia), a model (we will be using _articlequality_), and a revision ID (aka _{rev_id}_) as parameters.__Note:__ The ORES REST API can take up to 50 rev_ids at a time (format: _{rev_id_1}|{rev_id_2}|...|{rev_id_50}_). It is possible that a given _rev_id_ will NOT have a score; if this occurs, it will be logged in a separate list. - First, we need a function that groups the _rev_ids_ into batches of 50. See [here](https://www.geeksforgeeks.org/break-list-chunks-size-n-python/) for some documentation on how to create the `batches` function.
###Code
def batches(rev_ids, n):
for i in range(0, len(rev_ids), n):
yield rev_ids[i:i + n]
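# Illustrative behavior with a plain list (not executed): list(batches([1, 2, 3, 4, 5], 2)) -> [[1, 2], [3, 4], [5]]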
rev_ids = politicians_df['rev_id']
batch_list = list(batches(rev_ids, 50))
print('batch count:', len(batch_list))
###Output
batch count: 935
###Markdown
- Next, we need a function to call the API that passes in a list of _rev_ids_ as a parameter.
###Code
headers = {
'User-Agent': 'https://github.com/nriggio',
'From': '[email protected]'
}
url = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}'
def api_call(url, rev_ids, headers):
call = requests.get(url.format(rev_ids = rev_ids), headers=headers)
response = call.json()
return response
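# The JSON response is nested as response['enwiki']['scores'][<rev_id>]['articlequality']['score']['prediction'];
# that is how the scores are unpacked in the loop further below.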
###Output
_____no_output_____
###Markdown
- Then, we loop through the `batch_list` and use `api_call` to pass in a list of _rev_ids_.- If a score is found, it is stored in the _page_scores_ list. If there is no score, the _rev_id_ is stored in _page_errors_.__Note:__ It should take ~5-10 minutes to execute all the API calls and store the results.
###Code
# initialize lists
page_scores = []
page_errors = []
# loop through each batch list and call the API
for i in range(len(batch_list)):
batch_ids = '|'.join(str(x) for x in batch_list[i])
scores = api_call(url, batch_ids, headers)
# for each revision_id, append the score (or an indication of no score) to the appropriate list
for rev_id in batch_list[i]:
try:
page_scores.append((rev_id, scores['enwiki']['scores'][str(rev_id)]['articlequality']['score']['prediction']))
except KeyError:
page_errors.append(rev_id)
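# page_scores now holds (rev_id, prediction) tuples; page_errors holds the rev_ids ORES could not score.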
###Output
_____no_output_____
###Markdown
- To check our results, we get a count of the _rev_ids_ in each list and validate the total count.
###Code
print('pages with ORES scores:', len(page_scores))
print('pages without ORES scores:', len(page_errors))
print('total page count:', len(page_scores) + len(page_errors))
###Output
pages with ORES scores: 46425
pages without ORES scores: 276
total page count: 46701
###Markdown
- Finally, the output of each list is turned into a pandas dataframe and stored in a CSV under _/prelim_data/_.
###Code
# page_scores processing
ORES_scores_df = pd.DataFrame(page_scores, columns = ['revision_id', 'article_quality_est.'])
ORES_scores_df.to_csv('prelim_data/ORES_scores.csv', index = False)
print('score types:', ORES_scores_df['article_quality_est.'].unique())
ORES_scores_df.head()
# page_errors processing
no_score_df = pd.DataFrame(page_errors, columns = ['revision_id'])
no_score_df.to_csv('prelim_data/no_ORES_scores.csv', index = False)
no_score_df.head()
###Output
_____no_output_____
###Markdown
Step 4: Combining the Datasets To set us up for analysis, we will need to merge the Wikipedia data and population data together by country name into `final_data/wp_wpds_politicians_by_country.csv`.__Note:__ There will be entries that can't be merged on country name. These rows are removed and stored in `final_data/wp_wpds_countries-no_match.csv` - In order to do this, we must first join `ORES_scores_df` to `politicians_df` on _rev_id_ to get a view of each page and its score.
###Code
politician_ORES_df = pd.merge(politicians_df, ORES_scores_df, how = 'inner', left_on='rev_id', right_on='revision_id')
print(len(politician_ORES_df))
politician_ORES_df.head()
###Output
46425
###Markdown
- Now we can outer join `politician_ORES_df` with our `country_df` on country. If the country is in both lists the _merge_ column produced by `indicator = True` will display _both_.
###Code
final_df = pd.merge(politician_ORES_df, country_df, indicator = True,
how = 'outer', left_on = 'country', right_on = 'Name')
print(len(final_df))
final_df.head()
###Output
46452
###Markdown
- We have to clean up this join a bit to get the desired data format.
###Code
final_df = final_df.drop(columns = ['rev_id', 'FIPS', 'Name', 'Type', 'TimeFrame', 'Data (M)'])
final_df.rename(columns={'page':'article_name', 'Population':'population'}, inplace = True)
final_df = final_df[['country', 'article_name', 'revision_id', 'article_quality_est.', 'population', '_merge']]
final_df['revision_id'] = final_df['revision_id'].astype('Int64')
final_df['population'] = final_df['population'].astype('Int64')
print(final_df['_merge'].unique())
print(len(final_df))
final_df.head()
###Output
['both', 'left_only', 'right_only']
Categories (3, object): ['both', 'left_only', 'right_only']
46452
###Markdown
- If there is a country match, output to `final_data/wp_wpds_politicians_by_country.csv`. If not, log results in `final_data/wp_wpds_countries-no_match.csv`.
###Code
match = final_df[final_df['_merge'] == 'both']
match = match.drop(columns = ['_merge'])
print('match count:', len(match))
match.to_csv('final_data/wp_wpds_politicians_by_country.csv', index = False)
no_match = final_df[final_df['_merge'] != 'both']
no_match = no_match.drop(columns = ['_merge'])
print('no match count:', len(no_match))
no_match.to_csv('final_data/wp_wpds_countries-no_match.csv', index = False)
###Output
match count: 44568
no match count: 1884
###Markdown
Step 5: Analysis We want to calculate the proportion (as a __percent__) of articles-per-population and high-quality-articles by country AND region.__Note:__ "High quality" articles refer to those with an ORES score of 'FA' or 'GA' - Get an article count by country
###Code
article_cnt_c = match.groupby(['country']).size().reset_index(name = 'article_cnt')
print(sum(article_cnt_c['article_cnt']))
print(article_cnt_c.shape)
article_cnt_c.head()
###Output
44568
(183, 2)
###Markdown
- Get a high-quality article count by country
###Code
hq_article_cnt_c = match.groupby(['country', 'article_quality_est.']).size().reset_index(name = 'hq_article_cnt')
hq_article_cnt_c = hq_article_cnt_c[hq_article_cnt_c['article_quality_est.'].str.contains('FA|GA')].reset_index(drop = True)
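# str.contains('FA|GA') treats the pattern as a regex alternation, keeping rows rated either 'FA' or 'GA';
# an isin(['FA', 'GA']) filter would be equivalent for these six quality classes.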
hq_article_cnt_c = hq_article_cnt_c.groupby(['country']).sum()
print(sum(hq_article_cnt_c['hq_article_cnt']))
print(hq_article_cnt_c.shape)
hq_article_cnt_c.head()
###Output
1028
(146, 1)
###Markdown
- Merge by country and calculate the proportions of _articles-per-population_ and _high-quality-articles_ as a __percent____Note__: Left join to keep all countries (even those without any articles or any high-quality articles)
###Code
# merge in article count
by_country = pd.merge(country_region_df, article_cnt_c, how = 'left', on = 'country')
by_country['article_cnt'] = by_country['article_cnt'].astype('Int64')
# merge in high-quality article count (and fill missing values with 0)
by_country = pd.merge(by_country, hq_article_cnt_c, how = 'left', on = 'country')
by_country['hq_article_cnt'] = by_country['hq_article_cnt'].fillna(0)
by_country['hq_article_cnt'] = by_country['hq_article_cnt'].astype('Int64')
# clean up extra columns
by_country = by_country.drop(columns = ['region', 'region_population'])
by_country['articles_per_pop_pct'] = by_country['article_cnt'] / by_country['country_population'] * 100
by_country['hq_articles_pct'] = by_country['hq_article_cnt'] / by_country['article_cnt'] * 100
by_country.head()
###Output
_____no_output_____
###Markdown
Repeat the above analysis for the regional level.
###Code
# article count by region
article_cnt_r = pd.merge(match, country_region_df, how = 'left', on = 'country')
article_cnt_r = article_cnt_r.groupby(['region']).size().reset_index(name = 'article_cnt')
print(sum(article_cnt_r['article_cnt']))
article_cnt_r.head()
# high-quality article count by region
hq_article_cnt_r = pd.merge(match, country_region_df, how = 'left', on = 'country')
hq_article_cnt_r = hq_article_cnt_r.groupby(['region', 'article_quality_est.']).size().reset_index(name = 'hq_article_cnt')
hq_article_cnt_r = hq_article_cnt_r[hq_article_cnt_r['article_quality_est.'].str.contains('FA|GA')].reset_index(drop = True)
hq_article_cnt_r = hq_article_cnt_r.groupby(['region']).sum()
print(sum(hq_article_cnt_r['hq_article_cnt']))
print(hq_article_cnt_r.shape)
hq_article_cnt_r.head()
# Merge by region, calculate articles-per-population and high-quality-articles (as a percent)
# merge in article count
by_region = pd.merge(country_region_df, article_cnt_r, how = 'inner', on = 'region')
by_region['article_cnt'] = by_region['article_cnt'].astype('Int64')
# merge in high-quality article count (and fill missing values with 0)
by_region = pd.merge(by_region, hq_article_cnt_r, how = 'left', on = 'region')
by_region['hq_article_cnt'] = by_region['hq_article_cnt'].fillna(0)
by_region['hq_article_cnt'] = by_region['hq_article_cnt'].astype('Int64')
# clean up extra columns
by_region = by_region.drop(columns = ['country', 'country_population'])
by_region = by_region.drop_duplicates().reset_index(drop = True)
# calculate proportions
by_region['articles_per_pop_pct'] = by_region['article_cnt'] / by_region['region_population'] * 100
by_region['hq_articles_pct'] = by_region['hq_article_cnt'] / by_region['article_cnt'] * 100
print(by_region.shape)
by_region.head()
###Output
(19, 6)
###Markdown
Step 6: ResultsProduce the following 6 data tables:1. __Top 10 countries by coverage:__ 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. __Bottom 10 countries by coverage:__ 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. __Top 10 countries by relative quality:__ 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality4. __Bottom 10 countries by relative quality:__ 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality5. __Geographic regions by coverage:__ Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population6. __Geographic regions by relative quality:__ Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality __Top 10 countries by coverage__
###Code
df_1 = by_country.nlargest(n = 10, columns = 'articles_per_pop_pct').reset_index(drop = True)
df_1[['country', 'country_population', 'article_cnt', 'articles_per_pop_pct']]
###Output
_____no_output_____
###Markdown
__Bottom 10 countries by coverage__
###Code
df_2 = by_country.nsmallest(n = 10, columns = 'articles_per_pop_pct').reset_index(drop = True)
df_2[['country', 'country_population', 'article_cnt', 'articles_per_pop_pct']]
###Output
_____no_output_____
###Markdown
__Top 10 countries by relative quality__
###Code
df_3 = by_country.nlargest(n = 10, columns = 'hq_articles_pct').reset_index(drop = True)
df_3[['country','article_cnt', 'hq_article_cnt', 'hq_articles_pct']]
###Output
_____no_output_____
###Markdown
__Bottom 10 countries by relative quality____Note:__ There were more than 10 countries with no high-quality articles; the table below shows only 10 of them.
###Code
df_4 = by_country.nsmallest(n = 10, columns = 'hq_articles_pct').reset_index(drop = True)
df_4[['country','article_cnt', 'hq_article_cnt', 'hq_articles_pct']]
###Output
_____no_output_____
###Markdown
__Geographic regions by coverage__
###Code
df_5 = by_region.sort_values(by = 'articles_per_pop_pct', ascending = False).reset_index(drop = True)
df_5[['region', 'region_population', 'article_cnt', 'articles_per_pop_pct']]
###Output
_____no_output_____
###Markdown
__Geographic regions by relative quality__
###Code
df_6 = by_region.sort_values(by = 'hq_articles_pct', ascending = False).reset_index(drop = True)
df_6[['region','article_cnt', 'hq_article_cnt', 'hq_articles_pct']]
###Output
_____no_output_____
###Markdown
hcds-a2-bias Purpose This notebook details the steps needed to construct and analyze a dataset which details the articles on politicians of various countries together with the modelled quality of those articles as predicted by a machine learning system called __[ORES](https://www.mediawiki.org/wiki/ORES)__. It is divided into three sections:1. Data acquisition - collecting data from various sources 2. Data processing - preparing the data in (1) for analysis 3. Data analysis - visualizing the dataset created in (2) as a series of bar charts 1. Data acquisition Three types of data were required for the analysis: 1. Population data - A list of countries together with their mid-2015 populations was obtained from the __[Population Research Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14)__ in the form of a csv file (__[Population Mid-2015.csv](http://www.prb.org/RawData.axd?ind=14&fmt=14&tf=76&loc=34235%2c249%2c250%2c251%2c252%2c253%2c254%2c34227%2c255%2c257%2c258%2c259%2c260%2c261%2c262%2c263%2c264%2c265%2c266%2c267%2c268%2c269%2c270%2c271%2c272%2c274%2c275%2c276%2c277%2c278%2c279%2c280%2c281%2c282%2c283%2c284%2c285%2c286%2c287%2c288%2c289%2c290%2c291%2c292%2c294%2c295%2c296%2c297%2c298%2c299%2c300%2c301%2c302%2c304%2c305%2c306%2c307%2c308%2c311%2c312%2c315%2c316%2c317%2c318%2c319%2c320%2c321%2c322%2c324%2c325%2c326%2c327%2c328%2c34234%2c329%2c330%2c331%2c332%2c333%2c334%2c336%2c337%2c338%2c339%2c340%2c342%2c343%2c344%2c345%2c346%2c347%2c348%2c349%2c350%2c351%2c352%2c353%2c354%2c358%2c359%2c360%2c361%2c362%2c363%2c364%2c365%2c366%2c367%2c368%2c369%2c370%2c371%2c372%2c373%2c374%2c375%2c377%2c378%2c379%2c380%2c381%2c382%2c383%2c384%2c385%2c386%2c387%2c388%2c389%2c390%2c392%2c393%2c394%2c395%2c396%2c397%2c398%2c399%2c400%2c401%2c402%2c404%2c405%2c406%2c407%2c408%2c409%2c410%2c411%2c415%2c416%2c417%2c418%2c419%2c420%2c421%2c422%2c423%2c424%2c425%2c427%2c428%2c429%2c430%2c431%2c432%2c433%2c434%2c435%2c437%2c438%2c439%2c440%2c441%2c442%2c443%2c444%2c445%2c446%2c448%2c449%2c450%2c451%2c452%2c453%2c454%2c455%2c456%2c457%2c458%2c459%2c460%2c461%2c462%2c464%2c465%2c466%2c467%2c468%2c469%2c470%2c471%2c472%2c473%2c474%2c475%2c476%2c477%2c478%2c479%2c480)__. 2. Article data - A dataset containing a list of English language Wikipedia articles on politicians mapped to the country of the politician, together with the revision id of the most recent edit to the article. This list was obtained from an existing project on __[Figshare](https://figshare.com/articles/Untitled_Item/5513449)__ in the form of a csv file (page_data.csv). 3. Article quality data - For each article in (2), a prediction of the quality of the article was obtained by calling the __[ORES API](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model)__ and passing it three parameters: i) context (the name of the Wikipedia project, in this case 'enwiki' for English language Wikipedia), ii) the revision id to score as detailed in (2) and iii) the scoring model, in this case wp10. For example: https://ores.wikimedia.org/v3/scores/enwiki/235107991/wp10The API returns a JSON object with a key-value pair "prediction" and one of six quality values (in order of quality from best to worst): - FA Featured article - GA Good article - B B-class article - C C-class article - Start Start-class article - Stub Stub-class article Since steps (1) and (2) can be replicated simply by downloading the relevant csv files, this notebook provides only the code needed to produce (3). 
Article quality data a) Setup Loading packages and setting the working directory. This assumes that the packages 'httr', 'jsonlite', 'data.table', 'plyr', 'dplyr', 'tidyr', 'stringr' and 'ggplot2', which are available from __[CRAN](https://cran.r-project.org/)__, have been installed.
###Code
library (httr)
library(jsonlite)
library(data.table)
library(plyr)
library(dplyr)
library(tidyr)
library(stringr)
library(ggplot2)
###Output
_____no_output_____
###Markdown
b) Set the working directory Set the working directory to match the directory which contains the csv files: page_data.csv and Population Mid-2015.csv as detailed in 1. Data acquisition.
###Code
wd <- "/Users/MB/Desktop/DATA_512/Week 4/Assignment/" # specify working directory here
setwd(wd)
getwd()
###Output
_____no_output_____
###Markdown
c) Read the page_data and Population Mid-2015 csv files into R
###Code
page_data <-read.csv(file="page_data.csv", header=TRUE, sep=",")
population_data <-read.csv(file="Population Mid-2015.csv", header=TRUE, sep=",")
head(page_data)
###Output
_____no_output_____
###Markdown
d) Get quality article assessment using ORES API Since the ORES endpoint used here accepts only one revision id per request, it is necessary to loop through each of the rev_ids in the third column of the page_data table created above. A function, return_category, was created which takes a single rev_id as input and returns an object, edit_category, which contains the rev_id together with the quality_category. Using lapply, this function was applied over all values in the rev_id column of the page_data table above. Note that due to the large number of API requests it can take up to 5 hours to complete this step.
###Code
ores_api <- "https://ores.wikimedia.org/v3/scores/enwiki/"
edit_id <- as.character(page_data[,3])
model <- "/wp10"
return_category <- function(last_edit) {
ores_url <- str_replace_all(paste(ores_api, last_edit, model), fixed(" "), "")
quality_estimate <- GET(ores_url)
quality_estimate.json <- jsonlite::fromJSON(toJSON(content(quality_estimate)))
quality_category <- unlist(quality_estimate.json)[[2]]
edit_category <- cbind(last_edit, quality_category)
return(edit_category)
}
edit_id_quality_list <- lapply(edit_id, return_category)
###Output
_____no_output_____
###Markdown
Then in order to work with this object, it is necessary to convert it to a data.frame using the ldply function in the plyr package.
###Code
edit_quality <- ldply(edit_id_quality_list, data.frame)
###Output
_____no_output_____
###Markdown
The net result is a data.frame consisting of two columns: last_edit (the revision_id of the last edit), quality_category (the predicted quality category of the article). 2. Data processing During this step, a single csv file was created with the following columns: - country - article name - revision_id - article_quality - population In other words, for each article name in the original page_data.csv file, the article quality (as identified in Step 1) and the population of the country was appended. Articles from countries which did not have a corresponding population value in the Population Mid-2015.csv dataset were removed as were articles which did not have a corresponding quality value. a) Prepare data.frames for mergingTo enable merging, new columns were created in the edit_quality data.frame from Step 2 d) and the page_data data.frame from Step 1.
###Code
edit_quality$last_edit_num <- as.numeric(as.character((edit_quality$last_edit)))
page_data$last_edit_num <- as.numeric(as.character(page_data$rev_id))
page_data$country <- as.character(page_data$country)
###Output
_____no_output_____
###Markdown
b) Merge page_data and edit_quality on last_edit_num For each article in page_data, the article quality was appended (where this existed). Any duplicates caused by the merge were removed.
###Code
page_data_quality_dupes <- inner_join(page_data, edit_quality, by= c("last_edit_num" = "last_edit_num"))
page_data_quality <- distinct(page_data_quality_dupes)
rm(page_data_quality_dupes)
###Output
_____no_output_____
###Markdown
c) Remove redundant columns in dataset created in b)
###Code
keep_columns <- c("country", "page", "last_edit_num", "quality_category")
page_data_quality <- subset(page_data_quality, select = keep_columns)
page_data_quality$country <- as.character(page_data_quality$country)
###Output
_____no_output_____
###Markdown
d) Prepare population dataset for mergingTo enable merging, a new dataset was created (country_population) which contained only two columns: country and population.
###Code
keep_columns2 <- c("Location", "Data")
country_population <- subset(population_data, select=keep_columns2)
names(country_population)[1] <- "country"
country_population$country <- as.character(country_population$country)
country_population$Data <- as.character(country_population$Data)
country_population$population <- as.numeric(str_replace_all(country_population$Data, fixed(","), ""))
country_population <- subset(country_population, select=c("country", "population"))
###Output
_____no_output_____
###Markdown
e) Merge country_population and page_data_quality on country For each article in page_data_quality, the country population was appended (where this existed in the country_population dataset).
###Code
page_quality_population <- inner_join(page_data_quality, country_population, by=c("country" = "country"))
###Output
_____no_output_____
###Markdown
f) Rename columns in page_quality_population
###Code
names(page_quality_population)[2] <- "article_name"
names(page_quality_population)[3] <- "revision_id"
names(page_quality_population)[4] <- "article_quality"
###Output
_____no_output_____
###Markdown
g) Export f) to csv without row names A csv file (page_quality_population.csv) was created with the columns as described above in 2. Data processing
###Code
write.csv(page_quality_population, file="page_quality_population.csv", row.names=FALSE)
###Output
_____no_output_____
###Markdown
3. Data analysis During this step, four visualizations (bar charts) were created which showed: 1. The top 10 countries in terms of the number of politician articles as a proportion of country population. 2. The bottom 10 countries in terms of the number of politician articles as a proportion of country population. 3. The top 10 countries in terms of the proportion of articles which are high quality. 4. The bottom 10 countries in terms of the proportion of articles which are high quality. For the purpose of this analysis, high quality articles were defined as those articles which were either: FA (Featured article) or GA (Good article). a) Read page_quality_population.csv created in 2 g) into R and convert to a data.table
###Code
page_quality_population <-read.csv(file="page_quality_population.csv", header=TRUE, sep=",")
head(page_quality_population)
###Output
_____no_output_____
###Markdown
b) For each country in a), calculate number of articles and population
###Code
page_quality_population <- data.table(page_quality_population)
country_articles <- summarise(group_by(page_quality_population, country), count_article = n(), population = mean(population))
head(country_articles)
###Output
_____no_output_____
###Markdown
c) For each country in a), calculate the number of articles which are high quality (i.e. where the article quality is FA or GA).
###Code
country_high_quality <- (page_quality_population[article_quality %in% c("FA", "GA"), .N, by=country])
names(country_high_quality)[2] <- "count_high_quality"
###Output
_____no_output_____
###Markdown
d) Merge country_articles and country_high_quality For each country in b), the count of high quality articles as identified in c) was appended. Where the country did not have any high quality articles the NA value created by the merge was replaced with a zero.
###Code
country_article_quality <- full_join(country_articles, country_high_quality, by=c("country" = "country"))
country_article_quality[("count_high_quality")][is.na(country_article_quality["count_high_quality"])] <- 0
head(country_article_quality)
###Output
_____no_output_____
###Markdown
e) Calculate articles per 1000 population, proportion of articles which are high quality
###Code
country_article_quality$articles_per_population <- ((country_article_quality$count_article)/(country_article_quality$population)*1000)
country_article_quality$proportion_high_quality <- (country_article_quality$count_high_quality)/(country_article_quality$count_article)
###Output
_____no_output_____
###Markdown
f) Identify top/bottom 10 countries in terms of the number of FA/GA articles as a proportion of all articles
###Code
top10_quality <- subset(arrange(top_n(country_article_quality, 10, proportion_high_quality), desc(proportion_high_quality)), select=c("country", "proportion_high_quality"))
bottom10_quality <- subset(arrange(top_n(country_article_quality, 10, -proportion_high_quality), proportion_high_quality), select=c("country", "proportion_high_quality"))
head(top10_quality)
head(bottom10_quality)
###Output
_____no_output_____
###Markdown
g) Identify top/bottom 10 countries in terms of the number of articles as a proportion of country population
###Code
top10_articles <- subset(arrange(top_n(country_article_quality, 10, articles_per_population), desc(articles_per_population)), select=c("country", "articles_per_population"))
bottom10_articles <- subset(arrange(top_n(country_article_quality, 10, -articles_per_population), articles_per_population), select=c("country", "articles_per_population"))
head(top10_articles)
head(bottom10_articles)
###Output
_____no_output_____
###Markdown
h) Create bar charts representing top/bottom 10 countries as identified in f) and g)
###Code
top10_proportion_plot <- ggplot(top10_articles, aes(x=reorder(country, articles_per_population), y=articles_per_population)) + geom_bar(stat='identity', fill="purple") + coord_flip() + scale_y_continuous(expand=c(0,0), limits=c(0, 5)) + labs(title = "Top 10 countries based on articles as a proportion of population", x="Country", y="Politician articles per 1000 population") + theme(plot.title = element_text(size=12, hjust=1))
bottom10_proportion_plot <- ggplot(bottom10_articles, aes(x=reorder(country, articles_per_population), y=articles_per_population)) + geom_bar(stat='identity', fill="purple") + coord_flip() + scale_y_continuous(expand=c(0,0), limits=c(0, 0.005)) + labs(title = "Bottom 10 countries based on articles as a proportion of population", x="Country", y="Politician articles per 1000 population") + theme(plot.title = element_text(size=12, hjust=1))
top10_quality_plot <- ggplot(top10_quality, aes(x=reorder(country, proportion_high_quality), y=proportion_high_quality)) + geom_bar(stat='identity', fill="purple") + coord_flip() + scale_y_continuous(labels = scales::percent, expand=c(0,0), limits=c(0, 0.3)) + labs(title = "Top 10 countries with highest proportion of high quality articles", x="Country", y="% politician articles which are high quality") + theme(plot.title = element_text(size=12, hjust=1))
bottom10_quality_plot <-ggplot(bottom10_quality, aes(x=reorder(country, proportion_high_quality), y=proportion_high_quality)) + geom_bar(stat='identity', fill="purple") + coord_flip() + scale_y_continuous(labels=scales::percent, expand=c(0,0), limits=c(0, 0.3)) + labs(title = "Bottom 10 countries with highest proportion of high quality articles", x="Country", y="% politician articles which are high quality") + theme(plot.title = element_text(size=12, hjust=1))
###Output
_____no_output_____
###Markdown
i) Export charts to png
###Code
png(filename="top10_proportion_plot.png")
plot(top10_proportion_plot)
dev.off()
png(filename="bottom10_proportion_plot.png")
plot(bottom10_proportion_plot)
dev.off()
png(filename="top10_quality_plot.png")
plot(top10_quality_plot)
dev.off()
png(filename="bottom10_quality_plot.png")
plot(bottom10_quality_plot)
dev.off()
###Output
_____no_output_____
###Markdown
Bias in Data
###Code
import pandas as pd
import requests
import json
###Output
_____no_output_____
###Markdown
Data Acquisition Getting data from the CSV files
###Code
#Dataframe stores wikipedia articles' data
wiki_df = pd.read_csv('data/page_data.csv')
wiki_df.head(5)
#Dataframe stores countries' population data
country_df = pd.read_csv('data/WPDS_2018_data.csv')
country_df.rename(columns={"Geography": "country"}, inplace=True)
country_df.head(5)
###Output
_____no_output_____
###Markdown
ORES API The function below is referenced from this [github repository](https://github.com/Ironholds/data-512-a2)
###Code
headers = {'User-Agent' : 'https://github.com/saylidighde', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
""" Function calls ORES API, a machine learning service and returns the quality of the article
>>> get_ores_data([1, 2, 3], headers)
>>> Quality of any article belongs to one of the following 6 categories in the returned JSON
FA - Featured article
GA - Good article
B - B-class article
C - C-class article
Start - Start-class article
Stub - Stub-class article
"""
# Define API endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
def generate_chunkwise_ores_data(revision_ids_list, chunkSize):
""" Function to call ORES API on piecewise chunks of the revision IDs list
Chunking balances request size against the number of requests:
very large chunks make each call heavy, while very small chunks increase the number of API calls
"""
ores_data = []
for i in range(0, len(revision_ids_list), chunkSize):
chunk = revision_ids_list[i:i+chunkSize]
ores_data.append(get_ores_data(chunk, headers))
return ores_data
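# Illustrative usage (hypothetical revision IDs; requires network access):
# generate_chunkwise_ores_data([123, 456, 789], 100) -> a list containing one JSON response dict,
# since all three IDs fit in a single chunk.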
def extract_article_quality(ores_data):
""" Function to extract article quality in a list from the JSON obtained from the ORES API
>>> extract_article_quality(Net ORES data for piecewise chunks of revision IDs)
Returns quality list
"""
quality_list = []
for obj in ores_data:
scores_dict = obj['enwiki']['scores']
for score_key, value in scores_dict.items():
revision_id = score_key
#Mark article quality as NA for revision IDs returning error
if 'error' in value['wp10']:
article_quality = 'NA'
else:
article_quality = value['wp10']['score']['prediction']
quality_list.append(article_quality)
return quality_list
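# The returned list contains one quality label (or 'NA') per score entry, in the order the
# entries appear in each API response.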
# Convert rev_ids column of dataframe to list
revision_ids_list = wiki_df['rev_id'].tolist()
# Call function generate_chunkwise_ores_data with chunksize of 100 IDs
ores_data = generate_chunkwise_ores_data(revision_ids_list, 100)
# Call function to extract article quality in a list
quality_list = extract_article_quality(ores_data)
# Append quality list to wiki articles' dataframe
wiki_df['article_quality'] = quality_list
wiki_df.head(5)
###Output
_____no_output_____
###Markdown
Data Pre-processing
###Code
# Remove rows with 'NA' article qualities
# Articles for which the API returned an error are pruned, causing some data loss
wiki_df = wiki_df[wiki_df.article_quality != 'NA']
print(wiki_df.shape)
# Generate final dataframe by merging the wikipedia and country dataframes on field 'country'
# Rows found in one table and not in the other are pruned
final_df = pd.merge(wiki_df, country_df, on = 'country')
print(final_df.shape)
#Rename column names
final_df.rename(columns={"page": "article_name",
"Population mid-2018 (millions)": "population",
"rev_id": "revision_id"}, inplace = True)
print(final_df.head(5))
#Dump dataframe to a CSV file
final_df.to_csv('analyses_data.csv', index=False)
###Output
article_name country revision_id article_quality population
0 Bir I of Kanem Chad 355319463 Stub 15.4
1 Abdullah II of Kanem Chad 498683267 Stub 15.4
2 Salmama II of Kanem Chad 565745353 Stub 15.4
3 Kuri I of Kanem Chad 565745365 Stub 15.4
4 Mohammed I of Kanem Chad 565745375 Stub 15.4
###Markdown
Data Analysis
###Code
#Proportion (as a percentage) of articles-per-population
def generate_analysis_table(grouped_df):
"""Funtion to generate analyses
>>> Calculates the proportion, as percentage, of articles-per-population
>>> Calculates the proportion, as percentage, of high-quality articles for each country
"""
#Initialize analysis dataframe
analysis_table = pd.DataFrame(columns = ['country', 'total_articles',
'population (in millions)', 'total_quality_articles',
'articles_per_population (%)', 'quality_articles_proportion (%)'])
country_list = []
total_articles_list = []
population_list = []
total_quality_articles_list = []
proportion_list = []
quality_list = []
#Iterate over every country's group
for country, group_value in grouped_df:
no_of_articles = len(group_value)
#Remove commas from population strings
format_population = group_value['population'].iloc[0].replace(",","")
#Convert population strings to corresponding float values in millions
population = float(format_population)*1000000
#Append current country to country names' list
country_list.append(country)
#Append no_of_articles to corresponding list
total_articles_list.append(no_of_articles)
#Append country's population to corresponding list
population_list.append(group_value['population'].iloc[0])
#Append proportion, as percentage, of articles-per-population
proportion_list.append( (no_of_articles / population)*100 )
quality_articles = group_value[group_value.article_quality.isin(['FA', 'GA'])]
#Append number of quality_articles
total_quality_articles_list.append(len(quality_articles))
#Append proportion, as percentage, of high-quality articles for each country
quality_list.append((len(quality_articles)/no_of_articles)*100)
#Assign generated lists to respective columns in the Analysis dataframe
analysis_table['country'] = country_list
analysis_table['total_articles'] = total_articles_list
analysis_table['population (in millions)'] = population_list
analysis_table['total_quality_articles'] = total_quality_articles_list
analysis_table['articles_per_population (%)'] = proportion_list
analysis_table['quality_articles_proportion (%)'] = quality_list
return analysis_table
#Group final datafrme by country
grouped_df = final_df.groupby('country')
#Generate Analysis table
analysis_table = generate_analysis_table(grouped_df)
analysis_table.head(10)
###Output
_____no_output_____
###Markdown
Specialized Tables 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
analysis_table.sort_values('articles_per_population (%)', ascending=False).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
analysis_table.sort_values('articles_per_population (%)', ascending=True).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
analysis_table.sort_values('quality_articles_proportion (%)', ascending=False).head(10).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
analysis_table.sort_values('quality_articles_proportion (%)', ascending=True).head(10).reset_index().drop('index', axis=1)
#All 10 values above have 0% high quality articles; extending the view shows the 37 countries with 0% high quality articles.
analysis_table.sort_values('quality_articles_proportion (%)', ascending=True).head(37).reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
Statistics of the Analysis Table
###Code
analysis_table.describe()
###Output
_____no_output_____
###Markdown
A2 - Bias in Data Assignment
###Code
# import libraries
import pandas as pd
import numpy as np
# import two datasets
page_data = pd.read_csv("page_data.csv")
WPDS_data = pd.read_csv("WPDS_2020_data.csv")
# Check the data has been imported properly
page_data.head()
# Check the WPDS data has been imported properly
WPDS_data.head()
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
# Remove rows where "page" starts with "Template" in page_data
page_data = page_data[~page_data.page.str.contains("Template:")]
page_data
# remove rows where "Name" contains all caps values in WPDS_data
WPDS_clean = WPDS_data[~WPDS_data['Name'].str[:].str.isupper()]
# Note there is still one row left where the Type is specified as sub-region, need to remove this row as well
WPDS_clean = WPDS_clean[WPDS_clean['Type']=='Country']
WPDS_clean
###Output
_____no_output_____
###Markdown
Getting Article Quality Predictions
###Code
# import the libraries
import json
import requests
import os
# Set the endpoint for using the API
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={rev_id}'
# Customize these with your own information
headers = {
'User-Agent': 'https://github.com/Sabrinawang06',
'From': '[email protected]'
}
# Define the API call
def api_call(endpoint, rev_id):
call = requests.get(endpoint.format(rev_id = rev_id), headers=headers)
response = call.json()
return response
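# The response is nested as response['enwiki']['scores'][<rev_id>]['articlequality']['score']['prediction'];
# revisions that could not be scored lack the 'score' key, which the loop below catches as a KeyError.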
# create batches of 50 rev_ids and loop over the whole table of page_data
res = [] # valid API returns
error_log = [] # API returns with error
n= page_data.shape[0]
for i in range(0,n,50):
if i+50 > n:
batch = api_call(endpoint, "|".join(str(x) for x in page_data.rev_id.iloc[i:n]))
else:
batch = api_call(endpoint, "|".join(str(x) for x in page_data.rev_id.iloc[i:i+50]))
for j in batch['enwiki']['scores'].keys():
try:
res.append([j, batch['enwiki']['scores'][j]['articlequality']['score']['prediction']])
except KeyError:
error_log.append([j,batch['enwiki']['scores'][j]])
pass
# Convert the previous output into a dataframe and rename the columns
prediction = pd.DataFrame(res).rename(columns={0: "rev_id", 1: "prediction"})
prediction
# Check the error log
error_log
###Output
_____no_output_____
###Markdown
Combining the Datasets
###Code
# Convert the rev_id type in the prediction dataframe to int64 for merging with page_data
prediction['rev_id'] = prediction['rev_id'].astype('int64')
# Merge the prediction back into the page_data
pred_page = pd.merge(left=page_data, right=prediction, left_on='rev_id', right_on='rev_id')
# Check if all rows are assigned a prediction
pred_page[pred_page['prediction'].isna()]
pred_page
# Combine the WPDS data with the page data with prediction
full_res = pd.merge(left=pred_page, right=WPDS_clean, how='outer', left_on='country', right_on='Name')
full_res
# Extract the rows with a null country or Name (no matching between WPDS and page data)
no_match = full_res[full_res['country'].isna()|full_res['Name'].isna()].reset_index()
# if wanted, the following code can be used to create a single new column in place of Name and country (since the two never overlap)
# no_match['new_country']=[no_match['Name'][i] if pd.isna(no_match['country'][i]) else no_match['country'][i] for i in range(len(no_match))]
# Output the csv file for no matching record
no_match.to_csv('wp_wpds_countries-no_match.csv')
# Extract rows with matching record between WPDS and page data
match_record = pd.merge(left=pred_page, right=WPDS_clean, how='inner', left_on='country', right_on='Name')
# Select and rename the columns of interest
match_record_out = match_record[['country','page','rev_id','prediction','Population']]
match_record_out.columns = ['country','article_name','revision_id','article_quality_est.','population']
match_record_out
# Output the matching record to csv file
match_record_out.to_csv('wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Analysis
###Code
# Count the number of articles for each country
count = pd.DataFrame(match_record_out.groupby('country').size()).reset_index()
count.columns=['country','article_count']
count
# Count the number of high quality articles (FA or GA class) for each country
count_high_quality = pd.DataFrame(match_record_out[match_record_out['article_quality_est.'].isin(['FA','GA'])].groupby('country').size()).reset_index()
count_high_quality.columns=['country','high_quality_article_count']
count_high_quality
# Check the layout of the subregion and the hierarchical structure
WPDS_data[WPDS_data['Type']=='Sub-Region']
# Combine the count tables to the population data
count_combined = pd.merge(left=count, right=count_high_quality, left_on='country', right_on='country')
# Extract the population data for each top-level region (continent)
region = WPDS_data[WPDS_data['Name'].isin(['AFRICA','NORTHERN AMERICA','LATIN AMERICA AND THE CARIBBEAN','SOUTH AMERICA','ASIA','EUROPE','OCEANIA'])]
region
# Create an empty column to store the region information
WPDS_data['region'] = np.nan
# Assign the region for each row based on the row index
index_list = region.index.values.tolist()
for i in range(len(index_list)):
if i < len(index_list)-1:
WPDS_data['region'][index_list[i]:index_list[i+1]] = region['Name'][index_list[i]]
else:
WPDS_data['region'][index_list[i]:] = region['Name'][index_list[i]]
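# (Sketch of an equivalent, more idiomatic alternative, not executed here: assign 'region' only on the
# continent rows and then forward-fill with WPDS_data['region'] = WPDS_data['region'].ffill().)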
# Find the indexes for all sub-regions
temp = WPDS_data[WPDS_data['Type']=='Sub-Region']
subregion = temp[~temp['Name'].isin(['AFRICA','NORTHERN AMERICA','LATIN AMERICA AND THE CARIBBEAN','SOUTH AMERICA','ASIA','EUROPE','OCEANIA','Channel Islands'])]
subregion
# Create an empty column to store the sub-region information
WPDS_data['sub-region'] = np.nan
# Assign the sub-region for each row based on the row index
index_list = subregion.index.values.tolist()
for i in range(len(index_list)):
if i < len(index_list)-1:
WPDS_data['sub-region'][index_list[i]:index_list[i+1]] = subregion['Name'][index_list[i]]
else:
WPDS_data['sub-region'][index_list[i]:216] = subregion['Name'][index_list[i]]
WPDS_data
# Merge all three datasets together to obtain the final table needed for analysis
final_count = pd.merge(left=WPDS_data[['Name','Population','sub-region','region']], right=count_combined, how='right', left_on='Name', right_on='country')
final_count
# Calculate the percentage per country
country_proportion = final_count.copy()
country_proportion['article_per_population']= country_proportion['article_count']/country_proportion['Population']
country_proportion['high_quality_article_per_article']= country_proportion['high_quality_article_count']/country_proportion['article_count']
country_proportion.sort_values(by=['article_per_population'], ascending=False)
# Calculate the percentage per region (using total population from the original WPDS data)
region_proportion = pd.merge(pd.merge(pd.DataFrame(final_count.groupby('region')['article_count'].sum()).reset_index(),
pd.DataFrame(final_count.groupby('region')['high_quality_article_count'].sum()).reset_index(),
left_on='region', right_on='region'),
region[['Name','Population']], left_on='region', right_on='Name').drop(['Name'], axis=1)
region_proportion['article_per_population']= region_proportion['article_count']/region_proportion['Population']
region_proportion['high_quality_article_per_article']= region_proportion['high_quality_article_count']/region_proportion['article_count']
region_proportion.sort_values(by=['article_per_population'], ascending=False)
# Calculate the percentage per sub-region (using total population from the original WPDS data)
subregion_proportion = pd.merge(pd.merge(pd.DataFrame(final_count.groupby('sub-region')['article_count'].sum()).reset_index(),
pd.DataFrame(final_count.groupby('sub-region')['high_quality_article_count'].sum()).reset_index(),
left_on='sub-region', right_on='sub-region'),
subregion[['Name','Population']], left_on='sub-region', right_on='Name').drop(['Name'], axis=1)
subregion_proportion['article_per_population']= subregion_proportion['article_count']/subregion_proportion['Population']
subregion_proportion['high_quality_article_per_article']= subregion_proportion['high_quality_article_count']/subregion_proportion['article_count']
# combine region_proportion and subregion_proportion after renaming the first column to 'region/subregion'
region1 = region_proportion.copy()
region1.rename(columns={'region': 'region/subregion'}, inplace=True)
region2 = subregion_proportion.copy()
region2.rename(columns={'sub-region': 'region/subregion'}, inplace=True)
full_region = pd.concat([region1,region2])
###Output
_____no_output_____
###Markdown
Results Top 10 countries by coverage
###Code
country_proportion.sort_values(by=['article_per_population'], ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage
###Code
country_proportion.sort_values(by=['article_per_population'], ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality
###Code
country_proportion.sort_values(by=['high_quality_article_per_article'], ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality
###Code
country_proportion.sort_values(by=['high_quality_article_per_article'], ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Geographic regions by coverage Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
# continent level
region_proportion.sort_values(by=['article_per_population'], ascending=False)
# sub-region level
subregion_proportion.sort_values(by=['article_per_population'], ascending=False).head(10)
# all regions/subregions together
full_region.sort_values(by=['article_per_population'], ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Geographic regions by relative quality Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
# continent level
region_proportion.sort_values(by=['high_quality_article_per_article'], ascending=False)
# sub-region level
subregion_proportion.sort_values(by=['high_quality_article_per_article'], ascending=False).head(10)
# all regions/subregions together
full_region.sort_values(by=['high_quality_article_per_article'], ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Appendix: Checking if there are countries using "Channel Islands" as country name
###Code
page_data[page_data['country']=='Channel Islands']
###Output
_____no_output_____
###Markdown
DATA 512 A2 - Bias in Data Assignment Madalyn Li Fall 2021 The purpose of this project is to uncover insights about biases through analyzing English Wikipedia articles on political figures from various countries. We will be acquiring and merging three different data sets from various sources to produce 6 total tables displaying proportion of articles per population and proportion of high quality articles by country and geographic region. At the end, we will reflect on the results and discuss issues and potential for future improvements. This notebook is divided into 6 general sections:1. Acquiring Article and Population Data2. Cleaning Data3. Acquiring Article Quality Predictions4. Combining Datasets5. Analyzing Data and Displaying Results6. Reflection
###Code
import json
import requests
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
1. Acquiring Article and Population DataIn the first step, our goal is to obtain and download the data from their sources and add them into the notebook as dataframes for further cleaning and analysis in later steps. The first dataset: *page_data.csv* is acquired from Figshare, and it includes data on English Wikipedia articles within the category: "Politicians by nationality". The documentation and source can be found [here](https://figshare.com/articles/dataset/Untitled_Item/5513449). This file comprises of the following information:1. **page**: Contains the page title of the article2. **country**: Contains the country name extracted from the category3. **rev_id**: Contains the revision ID of the last edit made to the page
###Code
# Add page_data.csv as data frame
df_page = pd.read_csv("page_data.csv")
###Output
_____no_output_____
###Markdown
The second dataset: *WPDS_2020_data.csv* is acquired from the Population Reference Bureau, and it includes data on the estimated world population from mid-2020. The documentation and source of the dataset can be found [here](https://www.prb.org/international/indicator/population/table). This file comprises of the following information:1. **FIPS**: Abbreviation of country name2. **Name**: Name of country or sub-region3. **Type**: Category of Name (i.e country or sub-region)4. **Timeframe**: Year that data was collected5. **Data(M)**: Population in millions6. **Population**: Total population
###Code
# Add WPDS_2020_data.csv as data frame
df_world = pd.read_csv("WPDS_2020_data.csv")
###Output
_____no_output_____
###Markdown
2. Cleaning DataAfter acquiring our data, the next step is to clean and process it to remove any uncessary information not needed for analysis. **Remove non-Wikipedia articles**The *page_data.csv* file contains some page names that start with "Template:". Since these are not considered Wikipedia articles, we have removed these rows in the code below so it is not included in our analysis.
###Code
# Remove rows containing "Template:" from wikipedia page data frame
df_page = df_page[df_page["page"].str.contains("Template:") == False]
###Output
_____no_output_____
###Markdown
**Add geographic region**In our subsequent analysis, we will be looking at article and population proportions grouped by geographic region. For this reason, we will need to add a new column to our world population data frame that includes the corresponding geographic region for each country. The *Name* column of world population dataset differentiates geographic region and country by UPPERCASE lettering for geographic regions. In addition, the default data set is sorted by geographic region and the countries corresponding to that geographic region listed below. The code below adds a new column called 'geographic region' to the world population data frame and parses through each value in the *Name* column. If the string value in that column is all uppercase (i.e. is a geographic region), it saves that value into a variable called *current_upper* and adds that value into the 'geographic region' column. This ensures that all countries are tied to the geographic region listed above them.
###Code
# Add geographic region to world population data frame
df_world['geographic region'] = ""
current_upper = df_world.iloc[0,1]
for i in range(len(df_world)):
if df_world.iloc[i, 1].isupper():
current_upper = df_world.iloc[i,1]
df_world.iloc[i, -1] = current_upper
else:
df_world.iloc[i, -1] = current_upper
###Output
_____no_output_____
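###Markdown
As a side note (not part of the original pipeline), the same uppercase-based tagging can be written more compactly with a boolean mask and a forward-fill. The toy sketch below uses a made-up frame with the same layout purely to illustrate the idea.
###Code
import pandas as pd

# Toy frame mimicking the WPDS layout: all-uppercase rows are geographic regions (made-up data)
toy = pd.DataFrame({'Name': ['AFRICA', 'Algeria', 'Egypt', 'EUROPE', 'France']})
is_region = toy['Name'].str.isupper()
# Keep region names only on region rows, then forward-fill them onto the countries below
toy['geographic region'] = toy['Name'].where(is_region).ffill()
# Dropping the region rows leaves each country tagged with its region
print(toy[~is_region])
###Output
_____no_output_____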
###Markdown
**Remove geographic region rows from Name**Now that we have obtained the geographic region for each country, we no longer need the rows containing the geographic region name. The code below removes these rows from the world population data frame.
###Code
# Remove rows containing geographic regions in the Name column from the world population data frame
df_world = df_world[df_world['Name'].str.isupper() == False]
###Output
_____no_output_____
###Markdown
**Re-sort and re-name columns**The code below narrows the world population data frame to include only the columns needed for analysis later on. Specifically, we select: *Name, geographic region, and Population*. In addition, the column names for both data frames are renamed for cleanliness and consistency purposes and to make future analysis easier to conduct and follow along.
###Code
# Select Name, geographic region, and population from world population data frame
df_world = df_world[["Name", "geographic region", "Population"]]
# Rename columns in both data frames
df_page.columns = ["article_name", "country", "revision_id"]
df_world.columns = ["country", "geographic region", "population"]
###Output
_____no_output_____
###Markdown
**Final cleaned data frames** Below are previews of the finalized clean data frames for reference:
###Code
# Preview the wikipedia page data frame
df_page.head()
# Preview the world population data frame
df_world.head()
###Output
_____no_output_____
###Markdown
3. Acquiring Article Quality PredictionsIn this section, we will be obtaining data on predicted quality scores for each Wikipedia article. We will be utilizing a machine learning tool called ORES (short for Objective Revision Evaluation Service) to retrieve these predictions. Below is a table referencing the predicted scores that ORES will assign to an article; please note that they are listed in order from best to worst:| Score | Description || --- | --- || FA | Featured article || GA | Good article || B | B-class article || C | C-class article || Start | Start-class article || Stub | Stub-class article | **Set up API endpoint, header, and parameters**To obtain the predicted article quality data, we will utilize the ORES REST API. In the code below, we first define the endpoint and headers. Next, we construct a function *api_call* that accepts an endpoint, a list of revision ids, and headers and returns the queried API as a dictionary. The parameters include *context* which we have set as *enwiki* to correspond to English Wikipedia articles, *model* which we have set to *articlequality* which corresponds to the scoring model, and finally the *revid* which corresponds to the revision id of the article. It is important to note that multiple revision ids can be passed through the parameters by separating each value with a "|".The links below were used for reference:[Documentation for ORES REST API](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model)[ORES MediaWiki Page](https://www.mediawiki.org/wiki/ORES)
###Code
# Set endpoint and headers
endpoint = "https://ores.wikimedia.org/v3/scores/{context}?models={model}&revids={revid}"
headers = {'User-Agent': 'https://github.com/madalynli',
'From': '[email protected]'}
# Define api_call function
def api_call(endpoint, revid, headers):
revid_combine = "|".join(str(id) for id in revid)
param = {"context":"enwiki",
"model":"articlequality",
"revid":revid_combine}
call = requests.get(endpoint.format(**param), headers=headers)
response = call.json()
return response
###Output
_____no_output_____
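###Markdown
To make the *revid* parameter concrete, the short sketch below (using made-up revision IDs, purely for illustration) shows the string produced by the same "|"-joining step used in *api_call* above.
###Code
# Made-up revision IDs, purely for illustration
example_revids = [1001, 1002, 1003]
# Same joining step as in api_call: IDs separated by "|"
print("|".join(str(id) for id in example_revids))  # 1001|1002|1003
###Output
_____no_output_____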
###Markdown
**Query API data and obtain article quality predictions**In the code below, we first convert the *revision_id* column in the wikipedia page data frame to a list. This makes it easier to query later when we run *api_call*. Next, since our list has nearly 50,000 different revision ids, we will need to divide this into batches of 50 so that the query will run successfully. To implement this, we run a for loop with *batch_size* = 50 to section off a chunk of the *rev_id* list to run through the *api_call* function. It is important to note that the ORES API will not be able to obtain a predicted score for every article in our list. In these instances, we have set up an if else statement to document and log the revision ids of the articles that return no predicted score. This list is saved to a file in the repository named 'ores_api_no_score.csv'The result from api_call returns a nested dictionary, and to obtain the prediction value, we have to parse through various keys to get there. The final result is an appended list of revision_ids with their corresponding predicted scores. ***Note: the cell below takes an estimated time of 5 minutes to run.***
###Code
# Convert revision_id to list
rev_id = df_page['revision_id'].to_list()
# Set batch size and create empty list score and no_score to store the results
batch_size = 50
score = []
no_score = []
# Query API data and obtain article quality predictions
for i in range(0, len(rev_id), batch_size):
rev_id_chunk = rev_id[i:i+batch_size]
response = api_call(endpoint, rev_id_chunk, headers)
    for res in response['enwiki']['scores']:
        if response['enwiki']['scores'][res]['articlequality'].get('score') is None:
            no_score.append(response['enwiki']['scores'][res]['articlequality']['error']['message'])
        else:
            score.append([res,response['enwiki']['scores'][res]['articlequality']['score']['prediction']])
# Output list of revision_ids with no score to ores_api_no_score.csv
df_no_score = pd.DataFrame(no_score)
df_no_score.to_csv("ores_api_no_score.csv")
###Output
_____no_output_____
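###Markdown
The parsing above can be exercised without any network call. The sketch below builds a mocked response with the same nested shape (made-up revision IDs and predictions) and applies the same score/error branching used in the loop above.
###Code
# Mocked ORES-style response; the structure mirrors the real one, the values are made up
mock_response = {'enwiki': {'scores': {
    '1001': {'articlequality': {'score': {'prediction': 'GA'}}},
    '1002': {'articlequality': {'error': {'message': 'RevisionNotFound: no such revision'}}},
}}}

for res in mock_response['enwiki']['scores']:
    quality = mock_response['enwiki']['scores'][res]['articlequality']
    if quality.get('score') is None:
        print(res, '-> no score:', quality['error']['message'])
    else:
        print(res, '->', quality['score']['prediction'])
###Output
_____no_output_____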
###Markdown
**Clean and standardize score results** To make the results easier to utilize in later analysis, we need to clean and standardize the data. First, we will convert the list of scores to a data frame called *df_pred*. Next, we re-name the columns *revision_id* and *article_quality_est*, respectively. Finally, since values of *revision_id* are objects, we will need to convert these to integers to make merging the data more seamless in the next step.In addition, we have output the list of predictions to a csv file named: *ores_api_scores.csv*
###Code
# Convert list of score results to a data frame
df_pred = pd.DataFrame(score)
# Re-name columns
df_pred.columns = ["revision_id", "article_quality_est"]
# Convert revision_id type from object to integer
df_pred['revision_id'] = df_pred['revision_id'].astype(str).astype(int)
# Output list of score to ores_api_scores.csv
df_pred.to_csv("ores_api_scores.csv")
###Output
_____no_output_____
###Markdown
4. Combining DatasetsIn this section, we will combine all our data sets (wikipedia page data, world population data, and article quality prediction data) into one file to make the analysis in step 5 easier. The final combined data set will be saved to a csv file named *wp_wpds_politicians_by_country.csv*. In addition, since not all countries in the world population data frame will match to the countries in the wikipedia page data frame, we isolate these values into a separate csv file named *wp_wpds_countries-no_match.csv*. **Merge Wikipedia page data with article quality data** First, we will merge *df_pred* (article quality prediction data) with *df_page* (wikipedia page data) together on their common value: *revision_id*.
###Code
# Merge df_pred and df_page
df_pred_page = pd.merge(df_pred, df_page, on = "revision_id", how = "left")
###Output
_____no_output_____
###Markdown
**Merge Wikipedia quality page data with world population data** Next, we will merge the results from the previous step with *df_world* (world population data) together on their common value: *country*. In this scenario, we will use an outer join so we can obtain all matches and non-matches for each country.
###Code
# Merge df_pred_page and df_world
merge_all = pd.merge(df_pred_page, df_world, on = "country", how = "outer", indicator = True)
###Output
_____no_output_____
###Markdown
**Obtain and save values for countries with no matches**In the code below, we filter *merge_all* to values where *_merge* does not equal to *both*. This gives us all the results for countries that had no matches on both data frames. Finally, we output the results to a file named *wp_wpds_countries-no_match.csv*.
###Code
# Filter results to countries with no matches
no_match = merge_all.query('_merge != "both"')
# Output results to csv file
no_match.to_csv('wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
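###Markdown
The *_merge* indicator used above is easiest to see on a tiny example. The sketch below uses two made-up frames; rows flagged *both* are the matched records, while *left_only*/*right_only* rows are the ones that would end up in the no-match file.
###Code
import pandas as pd

# Made-up frames with one deliberate mismatch on each side
left = pd.DataFrame({'country': ['Canada', 'Atlantis'], 'article_name': ['A', 'B']})
right = pd.DataFrame({'country': ['Canada', 'Chile'], 'population': [38000000, 19000000]})

demo = pd.merge(left, right, on='country', how='outer', indicator=True)
print(demo)
print(demo.query('_merge != "both"'))  # analogous to the no-match rows
print(demo.query('_merge == "both"'))  # analogous to the matched rows
###Output
_____no_output_____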
###Markdown
**Obtain and save values for countries with matches**Similar to the code above, this time we filter *merge_all* to values where *_merge* equals *both*, since we want to obtain the results for countries that had matches in both data frames. Next, we remove the *_merge* column from the data frame to clean and finalize the dataset, since this column is no longer needed for the final analysis. Finally, we output the results to a file named *wp_wpds_politicians_by_country.csv*.
###Code
# Filter results to countries with matches
df_all = merge_all.query('_merge == "both"')
# Remove _merge column
df_all = df_all.drop('_merge', axis = 1)
# Output results to csv file
df_all.to_csv('wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
5. Analyzing Data and Displaying ResultsThe goal of this section is to produce a total of 6 tables that display the following information:| Table | Table Name | Description || :--: | :-- | :-- || 1 | Top 10 countries by coverage | 10 highest-ranked countries in terms of number of politician articles as a proportion of country population || 2 | Bottom 10 countries by coverage | 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population || 3 | Top 10 countries by relative quality | 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality || 4 | Bottom 10 countries by relative quality | 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality || 5 | Geographic regions by coverage | Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population || 6 | Geographic regions by relative quality | Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality |In order to produce these tables, we will first need to perform a series of processing steps to calculate the proportion values needed. Specifically, we will need to sum up the total number of articles grouped by country. Then, we will need to sum the total number of high quality articles (where scores are either FA or GA) grouped by country. **Count number of articles grouped by country** In the step below, we calculate the total number of politician articles for each country
###Code
# Calculate total number of politician articles grouped by country
article_count = df_all.groupby(['country']).size().to_frame('article_count').reset_index()
###Output
_____no_output_____
###Markdown
**Count number of high quality articles grouped by country**In the next step, we calculate the total number of high quality articles grouped by country. In this instance, we will define high quality articles as those who have an estimated ranked score of FA or GA. In the code below, we first add an additional column called *highquality_count* that tallies the number of FA or GA scores. If the estimated article quality is equal to any one of these values, it will input a 1 in the new column, otherwise it will input a 0. Next, now that we have the tallies, we can sum the total number of high quality articles grouped by country.
###Code
# Create new column that equals 1 if the article_quality_est is equal to FA or GA and 0 if otherwise
df_all['highquality_count'] = np.where(
df_all['article_quality_est'] == 'FA', 1, np.where(
df_all['article_quality_est'] == 'GA', 1, 0))
# Calculate total number of high quality articles grouped by country
highquality_count = df_all.groupby(['country'])['highquality_count'].sum().reset_index()
###Output
_____no_output_____
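###Markdown
As an equivalent (and arguably more direct) formulation, pandas' *isin* can produce the same 0/1 flag as the nested *np.where*; the toy sketch below is illustrative only and does not change the notebook's logic.
###Code
import pandas as pd

# Made-up quality estimates to illustrate the isin-based flag
toy_quality = pd.Series(['FA', 'C', 'GA', 'Stub'])
flag = toy_quality.isin(['FA', 'GA']).astype(int)
print(flag.tolist())  # [1, 0, 1, 0]
###Output
_____no_output_____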
###Markdown
**Merge tables**Next, we will be merging the values for total article count, total high quality article count, and total population by country into one table.
###Code
# Merge article count value, high quality count value, and population values
merge_all_country = article_count.merge(highquality_count, on='country').merge(df_world, on='country')
###Output
_____no_output_____
###Markdown
**Calculating proportions**Now that we have our count values per country, we can calculate the proportions needed for the final tables. Specifically, we will add two new columns to hold these calculations: *Percentage of articles per population* & *Percentage of high quality articles****Percentage of articles per population*** is calculated by dividing article count by population and multiplying this value by 100. ***Percentage of high quality articles*** is calculated by dividing high quality article count by total article count and multiplying this value by 100.
###Code
# Calculate proportion values
merge_all_country['Percentage of articles per population'] = (merge_all_country['article_count']/merge_all_country['population']) * 100
merge_all_country['Percentage of high quality articles'] = (merge_all_country['highquality_count']/merge_all_country['article_count']) * 100
###Output
_____no_output_____
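###Markdown
A quick worked example of the two formulas, with made-up numbers: a country with 25 articles, 2 of them high quality, and a population of 5,000,000.
###Code
# Made-up numbers, purely to illustrate the two percentage formulas above
article_count = 25
highquality_count = 2
population = 5000000

print(article_count / population * 100)         # 0.0005 -> percentage of articles per population
print(highquality_count / article_count * 100)  # 8.0    -> percentage of high quality articles
###Output
_____no_output_____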
###Markdown
**Sort by percentage of articles per population descending**For the first two tables, we want to find the top and bottom countries sorted by percentage of articles per population. Thus, we sort percentage of articles per population in descending order.
###Code
# Sort by percentage of articles per population
articles_per_pop = merge_all_country.sort_values('Percentage of articles per population',ascending=False)
###Output
_____no_output_____
###Markdown
Table 1: Top 10 countries by coverage
###Code
articles_per_pop.head(10)
###Output
_____no_output_____
###Markdown
Table 2: Bottom 10 countries by coverage
###Code
articles_per_pop.tail(10)
###Output
_____no_output_____
###Markdown
**Sort by percentage of high quality articles descending**For the next two tables, we want to find the top and bottom countries sorted by percentage of high quality articles. Thus, we sort percentage of high quality articles in descending order.
###Code
# Sort by percentage of high quality articles
percent_of_highquality = merge_all_country.sort_values('Percentage of high quality articles',ascending=False)
###Output
_____no_output_____
###Markdown
Table 3: Top 10 countries by relative quality
###Code
percent_of_highquality.head(10)
###Output
_____no_output_____
###Markdown
Table 4: Bottom 10 countries by relative quality
###Code
percent_of_highquality.tail(10)
###Output
_____no_output_____
###Markdown
**Group by geographic region**For the last two tables, we will need to re-sum *article_count*, *highquality_count* and *population values* grouped by geographic region. Then we will need to recalculate the values for *percentage of articles per population* and *percentage of high quality articles* with the new totals.
###Code
# Calculate total article_count, highquality_count, and population by geographic region
groupby_geo = merge_all_country.groupby(['geographic region'])[['article_count', 'highquality_count', 'population']].agg('sum')
# Calculate proportion values by geographic region
groupby_geo['Percentage of articles per population'] = (groupby_geo['article_count']/groupby_geo['population']) * 100
groupby_geo['Percentage of high quality articles'] = (groupby_geo['highquality_count']/groupby_geo['article_count']) * 100
###Output
_____no_output_____
###Markdown
Table 5: Geographic regions by coverage
###Code
groupby_geo.sort_values('Percentage of articles per population',ascending=False)
###Output
_____no_output_____
###Markdown
Table 6: Geographic regions by relative quality
###Code
groupby_geo.sort_values('Percentage of high quality articles',ascending=False)
###Output
_____no_output_____
###Markdown
Bias in Data PurposeThis project explores the concept of *bias* by examining how the number and quality of Wikipedia articles about political figures vary among countries.Several specific questions are addressed:- Which countries have the greatest and the least coverage of politicians on Wikipedia compared to their population?- Which countries have the highest and lowest proportion of high quality articles about politicians?- Which regions have the most articles about politicians, relative to their populations?- Which regions have the highest proportion of high-quality articles about politicians?Article quality is estimated using a machine learning service called ORES. The analysis below is carried out in Python, using the pandas, requests, and json packages. Data Ingestion and Cleaning Data SourcesThe data used in this analysis is drawn from two sources:- The Wikipedia politicians by country dataset, found on Figshare: https://figshare.com/articles/Untitled_Item/5513449- A subset of the world population datasheet published by the Population Reference Bureau Data CleaningThe Wikipedia *Politicians by Country* dataset contains some pages which are not Wikipedia articles. These pages are filtered out before we conduct our analysis by removing all page names that begin with the string "Template:".The Population Reference Bureau *World Population Datasheet* contains some rows relating to regional population counts. These are filtered out prior to country-level analyses performed below, but utilized in the final two tables in the Analysis section and in the Reflection section to address coverage and quality by region.
###Code
# import needed packages
import pandas as pd
# read the csv files in to Pandas data frames
politicos_full = pd.read_csv("page_data.csv")
pops_regions = pd.read_csv("WPDS_2018_data.csv")
# check that the imports have worked correctly
#print(politicos_full.head())
# remove the no-Wikipedia articles by filtering the politicos data frame to remove instances of the string "Template:"
politicos = politicos_full[~politicos_full.page.str.contains("Template:")]
# check that the filtering step has worked correctly
#print(politicos.head())
# remove the regions from the population data frame by removing rows where the geography col is all caps
# first we make a deep copy of the dataframe because we want a dataframe free of regions, but we also want the region data
pops_countries = pops_regions.copy(deep=True)
# drop regions from the new countries dataframe for the upcoming analysis
pops_countries.drop(pops_countries[pops_countries['Geography'].str.isupper()].index, inplace = True)
# drop countries from the regions dataframe so the two will be completely distinct
#pops_regions = pops_regions[pops_regions['Geography'].str.isupper()]
# check that both dataframes are correct
#print(pops_regions.head())
#print(pops_countries.head())
###Output
_____no_output_____
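###Markdown
To illustrate the "Template:" filter on its own, the toy sketch below (made-up page names) shows how the negated *str.contains* mask keeps only real article pages.
###Code
import pandas as pd

# Made-up page names to show the filtering mask used above
toy_pages = pd.DataFrame({'page': ['Template:Politician-stub', 'Some Politician', 'Another Politician']})
print(toy_pages[~toy_pages.page.str.contains("Template:")])
###Output
_____no_output_____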
###Markdown
Quality PredictionsIn the following code we use the ORES API to get json files which contain predictions about the quality of individual articles.ORES documentation: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_contextThere are six total quality categories. The first two categories (FA and GA) are considered high quality.FA - Featured articleGA - Good articleB - B-class articleC - C-class articleStart - Start-class articleStub - Stub-class articleThe first function in the following code get_ores_data is taken from https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb and modified only so that it returns the result (rather than simply printing it).
###Code
# import needed packages
import requests
import json
# this block of code is taken from https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb
# it is modified only so that get_ores_data returns the result response
headers = {'User-Agent' : 'https://github.com/chisquareatops', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
#print(json.dumps(response, indent=4, sort_keys=True
return response
# we need to extract the overall prediction from the above function, which also returns sores for all page types
# make a list of the ids
revids = list(politicos['rev_id'])
# loop through the list of ids in chunks of 100
def get_pred(df, block_size):
start = 0
end = block_size
output_final = list()
while start < len(revids):
revids_temp = revids[start:end]
output_temp = get_ores_data(revids_temp, headers)
for key, item in output_temp['enwiki']['scores'].items():
dict_temp = dict()
dict_temp['rev_id'] = key
if 'error' in item['wp10']:
dict_temp['prediction'] = 'no score'
else:
dict_temp['prediction'] = item['wp10']['score']['prediction']
output_final.append(dict_temp)
start += 100
end += 100
scores = pd.DataFrame(output_final)
return scores
# call the above functions to get the predictions for our data frame; divide the articles into blocks of 100
politicos_preds = get_pred(politicos, 100)
# check that the above step worked correctly
#print(politicos_preds.head())
# save the articles with no score to a csv and then remove them from the data frame
politicos_preds[politicos_preds.prediction == 'no score'][['rev_id']].to_csv('wp_wpds_articles-no_score.csv')
politicos_preds = politicos_preds[~politicos_preds.prediction.str.contains("no score")]
###Output
_____no_output_____
###Markdown
Merge and Output DataIn the following code we merge our data so that the predictions we are interested in are associated with the individual articles in our data set. We then export a csv of this combined data.
###Code
# make copies just in case before merging
politicos_final = politicos.copy(deep=True)
politicos_preds_final = politicos_preds.copy(deep=True)
pops_countries_final = pops_countries.copy(deep=True)
# merge the politcal article data and the quality predictions on the rev_id/revision_id cols
politicos_preds_final = politicos_preds_final.astype({'rev_id': 'int64'})
combined_final = politicos_final.merge(politicos_preds_final, how='right', left_on='rev_id', right_on='rev_id')
# merge the new data frame with the population data on the country/Geography cols
combined_final = combined_final.merge(pops_countries_final, how='right', left_on='country', right_on='Geography')
# check that the above step worked
#print(combined_final.head())
# rename the cols to comply with assignment
combined_final.rename(columns={'page':'article_name','Population mid-2018 (millions)':'population','rev_id':'revision_id','prediction':'article_quality'}, inplace=True)
# save the rows that have no match on the country field to a csv, then drop from the final data frame
combined_final[combined_final.Geography.isnull()].to_csv('wp_wpds_countries-no_match.csv')
combined_final.dropna(inplace=True)
# remove Geography col to comply with assignment (now that rows with no country match are gone)
combined_final = combined_final.drop('Geography', axis=1)
# check that the above step worked
print(combined_final.head())
# change some data types so the following analysis will work
combined_final['population'] = combined_final['population'].str.replace(',', '')
combined_final = combined_final.astype({'population':'float'})
###Output
_____no_output_____
###Markdown
AnalysisIn this section we create the following six individual tables:- Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population- Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population- Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality- Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality- Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population- Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
# select for high quality articles by keeping only the FA and GA designations in the article_quality field
combined_final_2 = combined_final.copy(deep=True)
hq_articles = combined_final_2.loc[combined_final_2['article_quality'].isin(['FA','GA'])]
# count total number of high quality articles in each country using group by
hq_articles_country = hq_articles.groupby('country').count()['article_name']
# make this result into a dataframe with appropriate cols so we can bring back population data and report the proportion
hq_articles_country_df = hq_articles_country.to_frame()
hq_articles_country_df['country'] = hq_articles_country_df.index
hq_articles_country_df.reset_index(drop=True, inplace=True)
hq_articles_country_df = hq_articles_country_df.merge(pops_countries_final, how='inner', left_on='country', right_on='Geography')
# find the actual proprtion: divide number of high quality articles by total population
hq_articles_country_df = hq_articles_country_df.astype({'article_name': 'float'})
hq_articles_country_df['Population mid-2018 (millions)'] = hq_articles_country_df['Population mid-2018 (millions)'].str.replace(',', '')
hq_articles_country_df = hq_articles_country_df.astype({'Population mid-2018 (millions)': 'float'})
hq_articles_country_df['article_proportion'] = hq_articles_country_df['article_name'] / (hq_articles_country_df['Population mid-2018 (millions)'] * 1000000)
###Output
_____no_output_____
###Markdown
Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# sort by proportion and display a table of the top 10
articles_over_pop = hq_articles_country_df[['country','article_proportion']]
articles_over_pop = articles_over_pop.sort_values('article_proportion', ascending=False)
print(articles_over_pop.head(10))
###Output
country article_proportion
130 Tuvalu 0.000500
33 Dominica 0.000014
46 Grenada 0.000010
137 Vanuatu 0.000010
52 Iceland 0.000005
57 Ireland 0.000004
13 Bhutan 0.000004
79 Maldives 0.000003
90 New Zealand 0.000002
58 Israel 0.000002
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
#Bottom 10 countries by coverage
articles_over_pop = articles_over_pop.sort_values('article_proportion', ascending=True)
print(articles_over_pop.head(10))
#Top 10 countries by relative quality
# group the same way we did in previous steps, but this time using all articles
all_articles = combined_final.loc[combined_final['article_quality'].isin(['FA','GA','B','C','Start','Stub'])]
# count total number of high quality articles in each country using group by
all_articles_country = all_articles.groupby('country').count()['article_name']
# make a dataframe with this total number of articles per country so it can be merged with dataframe from prev step
all_articles_country_df = all_articles_country.to_frame()
all_articles_country_df['country'] = all_articles_country_df.index
all_articles_country_df.reset_index(drop=True, inplace=True)
all_articles_country_df = all_articles_country_df.astype({'article_name': 'float'})
all_articles_country_df.rename(columns = {'article_name':'total_articles'}, inplace = True)
all_articles_country_df = all_articles_country_df.merge(hq_articles_country_df, how='right', left_on='country', right_on='country')
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
# find the proprtion: divide number of high quality articles by total articles
all_articles_country_df['quality_to_total'] = all_articles_country_df['article_name'] / all_articles_country_df['total_articles']
hqarticles_over_total = all_articles_country_df[['country','quality_to_total']]
hqarticles_over_total = hqarticles_over_total.sort_values('quality_to_total', ascending=False)
print(hqarticles_over_total.head(10))
###Output
country quality_to_total
64 Korea, North 0.194444
107 Saudi Arabia 0.127119
81 Mauritania 0.125000
23 Central African Republic 0.121212
104 Romania 0.113703
130 Tuvalu 0.092593
13 Bhutan 0.090909
33 Dominica 0.083333
122 Syria 0.078125
12 Benin 0.076923
###Markdown
Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
hqarticles_over_total = hqarticles_over_total.sort_values('quality_to_total', ascending=True)
print(hqarticles_over_total.head(10))
# regions by coverage
# The only data source we have that connects countries to regions is the original WPDS_2018_data.csv data (now pops_regions)
# countries in this data belong to the region that precedes them in the file, so we need to loop through it.
# create an empty dict to hold country/region pairs as we find them
region_dict = {}
# loop through the original data we preserved (as a list) to identify countries vs. regions, then store pairs
for value in pops_regions['Geography'].tolist():
# if the current row is a region, make it the current region (the first row is a region)
if value.isupper():
region = value
# if the current row is a country, add a new country/region pair to the dict
else:
region_dict.update({value:region})
# use a lambda to make a new col in the most recent dataframe and use the dict to insert a region value
all_articles_country_df['region'] = all_articles_country_df['country'].apply(lambda x: region_dict[x])
# test that the above step worked correctly
#print(all_articles_country_df.head())
###Output
_____no_output_____
###Markdown
Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
# add up the total number of articles in each region using group by
all_articles_region = all_articles_country_df.groupby('region').sum()['total_articles']
# turn that result back into a data frame
all_articles_region_df = all_articles_region.to_frame()
all_articles_country_df.reset_index(drop=True, inplace=True)
# add up the total population in each region using group by
pop_region = all_articles_country_df.groupby('region').sum()['Population mid-2018 (millions)']
# turn that result back into a data frame
pop_region_df = pop_region.to_frame()
pop_region_df.reset_index(inplace=True)
#all_articles_region_df = all_articles_region_df.sort_values('article_proportion', ascending=False)
all_articles_over_pop = all_articles_region_df.merge(pop_region_df, how='right', left_on='region', right_on='region')
all_articles_over_pop['total_articles_over_pop'] = all_articles_over_pop['total_articles']/all_articles_over_pop['Population mid-2018 (millions)']
all_articles_over_pop = all_articles_over_pop[['region', 'total_articles_over_pop']]
all_articles_over_pop = all_articles_over_pop.sort_values('total_articles_over_pop', ascending=False)
print(all_articles_over_pop)
###Output
region total_articles_over_pop
5 OCEANIA 72.668561
2 EUROPE 19.907834
3 LATIN AMERICA AND THE CARIBBEAN 7.932139
0 AFRICA 5.867868
4 NORTHERN AMERICA 5.260131
1 ASIA 2.544333
###Markdown
Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
# add up the total number of articles in each region using group by
all_articles_region = all_articles_country_df.groupby('region').sum()['total_articles']
# turn that result back into a data frame
all_articles_region_df = all_articles_region.to_frame()
all_articles_country_df.reset_index(drop=True, inplace=True)
# add up the total number of high quality articles in each region using group by
hq_region = all_articles_country_df.groupby('region').sum()['article_name']
# turn that result back into a data frame
hq_region_df = hq_region.to_frame()
hq_region_df.reset_index(inplace=True)
hq_over_all_articles_df = all_articles_region_df.merge(hq_region_df, how='right', left_on='region', right_on='region')
hq_over_all_articles_df['hq_over_all_articles'] = hq_over_all_articles_df['article_name']/hq_over_all_articles_df['total_articles']
hq_over_all_articles_df = hq_over_all_articles_df[['region', 'hq_over_all_articles']]
hq_over_all_articles_df = hq_over_all_articles_df.sort_values('hq_over_all_articles', ascending=False)
print(hq_over_all_articles_df)
###Output
region hq_over_all_articles
4 NORTHERN AMERICA 0.051536
1 ASIA 0.027143
5 OCEANIA 0.023462
2 EUROPE 0.022587
0 AFRICA 0.021324
3 LATIN AMERICA AND THE CARIBBEAN 0.014002
###Markdown
Bias in Data Kenten Danas An exploration of bias in data on Wikipedia, originally completed for University of Washington's DATA 512 class in Autumn 2018 The purpose of this notebook is to explore bias in data by looking at English Wikipedia pages on politicians from a variety of countries. I combine the article data with data on the countries' populations and a prediction of the quality of the article garnered from ORES (more information on this below), and explore whether these have an impact on the number of articles on politicians overall and the number of higher quality articles.This notebook is used to process the downloaded data, get article quality prediction data from ORES, munge the data from the different sources, and complete an analysis of the combined data. A discussion of the results can be found in the ReadMe of this repository.
###Code
#Import necessary packages and initialize desired notebook settings
import json
import numpy as np
import pandas as pd
import requests
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython.display import display, HTML
###Output
_____no_output_____
###Markdown
Getting Data Two sources of data were used for this analysis; both are described in separate sections below. For both the data is publically available and easily downloaded. Article DataThe first data set contains data about English Wikipedia articles on politicians, by country. It can be found at this link: https://figshare.com/articles/Untitled_Item/5513449This data is released under the CC-BY-SA 4.0 license, and so can be included here in this repo. From this page you can download a zip file containing the data, as well as R code used to generate the data. For the purpose of this analysis, only the page_data.csv is needed, so I have extracted that from the download and included it in the 'raw_data' folder of this repository.
###Code
#Read page data csv from local data and look at first couple of rows
pages = pd.read_csv('data_raw/page_data.csv')
pages.head()
###Output
_____no_output_____
###Markdown
Consistent with what is described on the figshare page for this data, page_data.csv contains the following columns: - page: the name of the English Wikipedia page (not cleaned) - country: the name of the country the politician was from - rev_id: the id of the last revision made to the page Population DataThe second data set used for this analysis contains world population by country, in millions of people, as of 2018. The data set can be found here: https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0The origin of this data is from the Population Reference Bureau (PRB); more information on the data can be found on their website, here: https://www.prb.org/2018-world-population-data-sheet-with-focus-on-changing-age-structures/Since this data is not licensed, and everything on the PRB website is copyrighted I have not included it in this repository. The code below loads the data locally. If you want to replicate this analysis, you can download the data from the link above and update the filepath used to read in the csv in the code below.Note that here I also do some minor data processing to make the analysis below easier, namely renaming the columns and converting the population to the full value rather than millions.
###Code
#Read population data from local csv and review first couple of rows
pop = pd.read_csv('C:/Users/kentd/OneDrive/Documents/School-Grad/DATA_512/WPDS_2018_data.csv')
#Rename columns to make future analysis easier
pop.rename(columns={'Geography': 'country', 'Population mid-2018 (millions)': 'Population'}, inplace=True)
#Convert population to int, and then to raw value
pop['Population'] = pd.to_numeric(pop['Population'], errors='coerce')
pop['Population'] = pop['Population'] * 1e6
pop.head()
###Output
_____no_output_____
###Markdown
Article Quality Predictions Using ORES For this analysis, I get the prediction of article quality from ORES, an API for machine learning developed by Wikimedia. See the following link for more information:https://www.mediawiki.org/wiki/ORESORES takes a Wikipedia article ID and assigns a probability that the quality of the article falls into one of six categories. The highest probability is the category assigned to the article. The categories are (from best to worst quality): - FA - Featured article - GA - Good article - B - B-class article - C - C-class article - Start - Start-class article - Stub - Stub-class article For this analysis, I consider articles classified as 'FA' and 'GA' to be "high quality" articles.To get the predictions, I feed the pages dataset to the ORES system using the code below. Since the API documentation recommends batching 50 revision IDs per request, I have split the data into chunks of 50 rather than sending the entire dataframe. The code for the function 'ores' I used to make the API requests is based on the example provided here: https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb
###Code
#Start with some housekeeping - define headers & API endpoint
headers = {'User-Agent' : 'https://github.com/kentdanas', 'From' : '[email protected]'}
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
#Now define function to make the API call for a given list of revision ids
def ores(rev_ids, headers=headers, endpoint=endpoint):
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in rev_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
#Use the ORES function above to get predictions for each article by looping through 50 articles at a time.
# Note, this chunk will take a few minutes to run; this is a good time to grab a cup of coffee!
#Define start and stop indexes for the first batch of 50 revision ids
index_start = 0
index_stop = 50
#Create empty dataframe to store results
page_quality = pd.DataFrame(columns=('rev_id', 'article_quality'))
while index_start < len(pages):
#Pull chunk of revision ids
rev_ids = list(pages['rev_id'][index_start:index_stop])
#Feed revision ids to ORES and get predictions
response = ores(rev_ids)
#pull out prediction for each rev_id from resulting json dump. Note that if the article was not found by ORES
# it will not have a prediction, hence the try/except statement. For articles with no prediction I filled the
# results with 'nan'
for rev_id in rev_ids:
rev_id = str(rev_id)
try:
prediction = response['enwiki']['scores'][rev_id]['wp10']['score']['prediction']
except:
prediction = np.nan
#append results to dataframe
page_quality = page_quality.append({'rev_id':rev_id, 'article_quality':prediction}, ignore_index=True)
#Redefine indexes
index_start += 50
index_stop = min(index_stop+50, len(pages))
###Output
_____no_output_____
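###Markdown
A small note on the batching pattern: stepping the start index with *range* is an equivalent way to take 50 IDs at a time. The sketch below demonstrates it on a made-up list and is not a change to the cell above.
###Code
# Made-up stand-in for the revision id list, to show the 50-at-a-time slicing
fake_rev_ids = list(range(123))
for start in range(0, len(fake_rev_ids), 50):
    batch = fake_rev_ids[start:start + 50]
    print(len(batch))  # 50, 50, 23
###Output
_____no_output_____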
###Markdown
Combining DatasetsNow that I have the article quality prediction data from ORES, I combine the three data sets into one so they can be used for the analysis. The final dataframe has the following columns: - article_name: the (uncleaned) article name - country: the country of the politician in the article - revision_id: the id of the last revision of the article - article_quality: the ORES prediction of the article quality - population: the population of the country
###Code
#First combine the ORES predictions and the page data (note that I need to convert the rev_id type to an int first)
page_quality['rev_id'] = pd.to_numeric(page_quality['rev_id'], errors='coerce')
final_dataset = pages.merge(page_quality, how='left', on=['rev_id'])
#Next, merge with population data. This is done as an inner join since I only want to keep articles for which
#there is a population, and I don't need the population for any countries without any articles
final_dataset = final_dataset.merge(pop, how='inner', on=['country'])
#Rename a couple of columns
final_dataset.rename(columns={'page': 'article_name', 'rev_id': 'revision_id'}, inplace=True)
#Save to CSV
final_dataset.to_csv('data_clean/final_article_dataset.csv', index = False)
#Take a look at the resulting dataframe
final_dataset.head()
###Output
_____no_output_____
###Markdown
AnalysisThe analysis on this data set consists of a calculation of the proportion of articles per population for each country, and proportion of high quality articles for each country. A detailed discussion of the results of this analysis can be found in the ReadMe of this repository. Politician Articles as a Proportion of Country PopulationFirst, for each country I counted the total number of articles and then calculated the number of articles per capita as a percentage. The final tables show the top 10 and bottom 10 countries for this metric respectively.
###Code
#Create a new dataframe with count of articles by country
articles_per_cap = final_dataset.groupby(['country', 'Population']).count().reset_index()
articles_per_cap.rename(columns={'article_name': 'number_of_articles'}, inplace=True)
articles_per_cap = articles_per_cap.drop(['revision_id', 'article_quality'], 1)
#Calculate number of articles as proportion of country population as a percentage
articles_per_cap['articles_per_capita_percentage'] = articles_per_cap['number_of_articles'] / articles_per_cap['Population'] * 100
###Output
_____no_output_____
###Markdown
Now that I have the needed data, I can show the highest and lowest ranked countries: Lowest
###Code
#Pull first 10 rows of ascending-sorted dataframe
low_articles_per_cap = articles_per_cap.sort_values(by=['articles_per_capita_percentage'])[0:10]
display(HTML(low_articles_per_cap.to_html(index=False)))
###Output
_____no_output_____
###Markdown
It looks like most of the countries with very low percentages of articles per capita do not have low populations, but may be countries where media access is restricted, or dictatorships that may not have as many politicians. A more detailed discussion of these results can be found in the ReadMe. Highest
###Code
#Pull first 10 rows of ascending-sorted dataframe
high_articles_per_cap = articles_per_cap.sort_values(by=['articles_per_capita_percentage'], ascending=False)[0:10]
display(HTML(high_articles_per_cap.to_html(index=False)))
###Output
_____no_output_____
###Markdown
It looks like most of the countries with high percentages of articles per capita have very small populations. Again, a more detailed discussion of this analysis can be found in the ReadMe for this repository. Proportion of High Quality Articles About PoliticiansIn the last portion of this analysis I look at the proportion of articles per country that are considered "high quality". For this analysis, "high quality" is defined as having an ORES prediction of either FA (featured article) or GA (good article).
###Code
#Create new dataframe with number of high quality articles per country
quality_articles = final_dataset[(final_dataset['article_quality']=='FA')|(final_dataset['article_quality']=='GA')]
quality_articles = quality_articles.groupby(['country']).count().reset_index()
quality_articles.rename(columns={'article_name': 'high_quality_articles'}, inplace=True)
#Merge this new dataframe back to the dataframe created in the section above with total articles per country
#Note that this is a left join because some countries do not have any high quality articles and so do not appear
#in the dataframe I created directly before this. I fill those with zeros
percent_quality_articles = articles_per_cap.merge(quality_articles, on=['country'], how='left').fillna(0)
#Remove unneeded columns
percent_quality_articles = percent_quality_articles.drop(['Population_x',
'articles_per_capita_percentage',
'revision_id',
'article_quality',
'Population_y'], 1)
#Calculate percentage of high quality articles
percent_quality_articles['percent_high_quality_articles'] = percent_quality_articles['high_quality_articles']/percent_quality_articles['number_of_articles']*100
#Take a look at resulting dataframe
percent_quality_articles.head()
###Output
_____no_output_____
###Markdown
Now that I have the needed data I can show the highest and lowest ranked countries: Lowest
###Code
#Pull first 10 rows of ascending-sorted dataframe
low_percent_quality = percent_quality_articles.sort_values(by=['percent_high_quality_articles'])[0:10]
display(HTML(low_percent_quality.to_html(index=False)))
###Output
_____no_output_____
###Markdown
All of the bottom 10 countries by percentage of high quality articles had no articles in the "FA" or "GA" category, but these may not be the only countries this is true for. Since this is the case, using a metric of "bottom 10" doesn't really make sense. Therefore, below I also created a table for all countries that had no high quality articles.
###Code
#Create and view new dataframe with all countries that had no high quality articles
no_high_quality = percent_quality_articles[percent_quality_articles['percent_high_quality_articles']==0]
display(HTML(no_high_quality.to_html(index=False)))
###Output
_____no_output_____
###Markdown
It appears there are actually 37 countries that had no articles predicted to be high quality. More discussion of these results can be found in the ReadMe. Highest
###Code
#Pull first 10 rows of descending-sorted dataframe
high_percent_quality = percent_quality_articles.sort_values(by=['percent_high_quality_articles'], ascending=False)[0:10]
display(HTML(high_percent_quality.to_html(index=False)))
###Output
_____no_output_____
###Markdown
Assignment 2 - Bias on Wikipedia Ian Kirkman, 11/1/2017The goal of this assignment is to explore the ramifications of bias in data. Given the known demographics of english Wikipedia editors (see "Nationality" in https://en.wikipedia.org/wiki/Wikipedia:Wikipedians), we anticipate a bias that affects both the scope and quality of english Wikipedia articles for political figures from various countries. We will analyze the coverage and quality metrics of these articles on political figures, and reflect on our findings.*Assignment source: https://wiki.communitydata.cc/HCDS_(Fall_2017)/AssignmentsA2:_Bias_in_data* 1. Data PrerequisitesSection 1 covers all the information needed to gather data, import the required libraries, and set user inputs. We start below with libraries and parameters. Gathering data will be broken into precomputed source data ([Section 1.2](sec1.2)) and data pulled from an API ([Section 1.3](sec1.3)). 1.1. Importing Libraries and Setting ParametersThis package includes the following libraries for processing and analysis: - `requests`: This is used to pull data from the ORES API. - `json`: This is used to format, save, and load raw data after it's pulled from the source. - `csv`: This is used to load raw data from csv files, and to write processed data to a csv. - `math`: The functions `floor` and `ceil` are used to split the Residual IDs for ORES API calls. - `copy`: The `deepcopy` function is used when processing and analyzing data (Sections 2 and 3). - `operator`: The `itemgetter` function is used when sorting a list of lists (Section 3). - `IPython`: The `display` and `markdown` functions are used to embed the final ranking tables in the notebook. User inputs are also set in this section, and referenced throughout the later processing steps. Inputs are split into categories that correspond to later notebook Sections. - [Section 1.2](sec1.2) of this notebook covers the raw data CSV files that need to be uploaded to the project directory. The inputs in this section represent the filepaths of those uploaded CSVs. - [Section 1.3](sec1.3) of this notebook will cover the ORES API calls to collect the raw ORES data. This section of inputs contains the parameters and endpoint used for the ORES API calls, as well as the file location of where to write the raw API call results. - [Section 2](sec2) of this notebook contains the data processing steps for our project. In that section, we create the dataset of merged data from our 3 sources that is required as assignment output. Below, we enter the file location of where to save the merged data as a CSV file.**Notes and Assumptions:**- For all data paths it is assumed that this notebook lives in the project root directory. All paths should be written from the root.- Project folders are currently split into DATA (all raw data) and OUTPUT (all processing and analysis output).- Raw data files use the naming convention: `source_description_accessdate`.- Raw data must be saved in a json format, and our processed data output must be saved as a CSV. Changing the file extensions in the paths will require updating code in the related sections.
###Code
import requests
import json
import csv
import math
import copy
from operator import itemgetter
from IPython.display import display, Markdown
############ BEGIN USER INPUTS ###
github_username = 'iankirkman'
uw_email = '[email protected]'
headers={'User-Agent' : 'https://github.com/%s'%github_username, 'From' : '%s'%uw_email}
# Raw Data Upload Location (from root dir) -- See data source notes in Section 1.2.
raw_wp_data_path = 'DATA/wp_page_data_20171101.csv'
raw_prb_data_path = 'DATA/prb_population_mid2015_20171101.csv'
# ORES Parameters for API calls -- See usage in Section 1.3.
ores_endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
ores_params = {'project' : 'enwiki',
'model' : 'wp10'}
raw_ores_data_path = 'DATA/raw_ores_data_20171101.json'
# Filepaths of Notebook Output (from root dir) - See usage in Section 2.
merged_wp_prb_ores_data_path = 'OUTPUT/processed_wp_prb_ores_data.csv'
############ END USER INPUTS ###
###Output
_____no_output_____
###Markdown
1.2. Uploading Raw Data from Outside SourcesWe have three data sources available for this assignment. This section will cover the first two, which are available publicly in CSV format. To use these datasets in our project, we need to pull the CSV files from the online sources and upload them to our data directory.The [Population Reference Bureau (PRB)](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14) website contains a dataset with population counts by country circa mid-2015. A CSV file can be downloaded directly from the link by clicking the Excel icon in the top right side of the page. The CSV file must then be uploaded to the project data directory at the path specified in the inputs above. The only fields we use from this data are `Location` and `Data`, which correspond to 'Country' and 'Population' on our final dataset, respectively.The english Wikipedia page data for political figures by country was provided by Oliver Keyes on [Figshare](https://figshare.com/articles/Untitled_Item/5513449). The CSV file can be downloaded via the download button on the top left, and then uploaded to the project data directory at the path specified in the inputs above. See code and data notes at the link. We will use each field from this dataset, with the following mapping to our final output: 'country' to 'Country', 'page' to 'Article_Name', and 'rev_id' to 'Revision_ID'.**Notes and Assumptions:**- The column order is assumed to be consistent for any data download from these sources. Passive header checking has been added as print statements in Sections 1.3 and 2.- Note that the Wikipedia page data on Figshare has updated a field name from 'last_edit' to 'rev_id'. This change is not currently reflected in the page data documentation. 1.3. Pulling Raw Data from the ORES APIFor our third data source, we will be accessing the [Objective Revision Evaluation Service (ORES)](https://www.mediawiki.org/wiki/ORES) API to collect quality score predictions by article (matched on Revision ID). We use the endpoint and parameters specified in Section 1.1 to call the API with multiple Revision IDs smushed together with a vertical line delimiter. The user-input parameters simply specify the project and model for the API call. Version 3 is assumed and hard-coded into the calls below. Revision IDs are added to the parameters after some initial processing steps from the Figshare data.API calls return a nested dictionary. To access the score prediction for a given article (using Rev_ID_001 as an example), we pull: `api_results[ores_params['project']]['scores'][Rev_ID_001][ores_params['model']]['score']['prediction']`. ORES score predictions are classified as (ordered from best to worst):- `FA`: Featured article- `GA`: Good article- `B`: B-class article- `C`: C-class article- `Start`: Start-class article- `Stub`: Stub-class articleSee the ORES API linked above for further details.We first build a simple get function to return the API call results for a list of Revision IDs. We then batch groups of 50 Revision IDs at a time from the Wikipedia page data, and add the results of each call to our raw ORES data. The combined raw data is exported to a json file in the project data directory, at the path specified in Section 1.1.**Notes and Assumptions:**- Some of the API calls return an error dictionary instead of returning a score prediction. 
Those error dictionaries are saved in place in the raw data, and dealt with in our processing steps of Section 2.- The pull of Revision IDs from the Figshare data assumes the column ordering is consistent with this download. - See lines marked with `## TEST ##` for passive error checking below.
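For illustration, here is a minimal sketch (not part of the pipeline below) of how a single prediction could be pulled out of the nested response once `ores_data` has been built in the next code cell, following the access path described above; the Revision ID shown is only a placeholder example:

```python
# Minimal sketch: extract one prediction from the nested ORES response.
# Assumes ores_data and ores_params exist as defined in this notebook;
# example_rev_id is a placeholder for any Revision ID present in the response.
example_rev_id = '807367030'
entry = ores_data[ores_params['project']]['scores'][example_rev_id][ores_params['model']]
if 'error' in entry:
    prediction = None  # handled as an exclusion in Section 2.2
else:
    prediction = entry['score']['prediction']
print(example_rev_id, prediction)
```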
###Code
def get_ores_data(revision_ids):
'''
Returns a json-formatted dictionary of ORES API results for list of
(up to 50) Wikipedia article Revision IDs.
DEPENDENCIES:
- Requires Wikipedia page data from figshare uploaded to project
data directory specified in Section 1.1.
- Requires ORES endpoint and parameters specified in Sec 1.1.
INPUTS:
- revision_ids: list of up to 50 revision ids to pull ORES data on
RETURNS:
- json-formatted nested dictionary
- See ORES API documentation: https://www.mediawiki.org/wiki/ORES
'''
params = {'revids' : '|'.join(str(x) for x in revision_ids)}
params.update(ores_params)
    # Include the polite request headers defined in Section 1.1
    return requests.get(ores_endpoint.format(**params), headers=headers).json()
# Read uploaded raw CSV of Wikipedia page data
# This is needed to collect Revision IDs for ORES API calls
# Assumes header row = ['page','country','rev_id']
wp_data = []
with open(raw_wp_data_path) as csvfile:
reader = csv.reader(csvfile)
for row in reader:
wp_data.append([row[0],row[1],row[2]])
## TEST ##
# Check Wikipedia data header row:
print('Check WP data headers: %r'%(wp_data[0]==['page','country','rev_id']))
# Consolidate list of Revision IDs for ORES API calls
rev_ids = [wp_data[i][2] for i in range(1,len(wp_data))]
# Batch groups of 50 Revision IDs for ORES API calls
ores_data = get_ores_data(rev_ids[50*math.floor(len(rev_ids)/50):len(rev_ids)+1])
for i in range(math.ceil(len(rev_ids)/50)-1):
ores_data[ores_params['project']]['scores']. \
update(get_ores_data(rev_ids[i*50:(i+1)*50])[ores_params['project']]['scores'])
## TEST ##
# Check that all rows have been added to ores_data dictionary.
print('Check ores_data for completeness (2):')
# Check total number of rev_ids in ores_data versus wp_data:
print('* 1/2: %r'%(len(ores_data[ores_params['project']]['scores']) == \
len(wp_data)-1))# -1 because wp_data has a header row to ignore
# Check ores_data contains last row of wp_data:
print('* 2/2: %r'%(wp_data[-1][2] in \
ores_data[ores_params['project']]['scores']))
# Save raw ORES data as json file
with open(raw_ores_data_path, 'w') as outfile:
json.dump(ores_data, outfile)
###Output
Check WP data headers: True
Check ores_data for completeness (2):
* 1/2: True
* 2/2: True
###Markdown
2. Processing DataProcessing our data requires two merges, which we have broken into two steps below. 2.1 Merge Wikipedia and Population DataFirst we merge the Wikipedia page data with the PRB Population data. Both datasets have a country feature that we can join on. We remove all countries that do not have an exact match in both datasets.Since we create the merged set by iterating over the Wikipedia page data, the countries in the PRB data that are missing are implicitly removed from our result. However, we added some exclusion tracking below so we can reconcile our data. A list of lists is created, called `dsmerge_wp_prb`, with **ordered** headers that correspond to each source value (where WP represents Wikipedia page data and PRB represents PRB population data):| Column | Value Source || :--- | :--- || Country | WP.country & PRB.Location || Article_Name | WP.page || Revision_ID | WP.rev_id || Article_Quality | '' || Population | PRB.Data |**Notes and Assumptions:**- Note that the `Article_Quality` field is an empty string placeholder for the merge with ORES data in Section 2.2.- The ordering of the list `dsmerge_wp_prb` is assumed in later processing steps.- Passive checks are added in lines marked by ` TEST `.
###Code
# Recall the wikipedia page data was read from CSV in Section 1.2.
# This can be repeated here if necessary:
wp_data = []
with open(raw_wp_data_path) as csvfile:
reader = csv.reader(csvfile)
for row in reader:
wp_data.append([row[0],row[1],row[2]]) # Assumes header row = ['page','country','rev_id']
## TEST ##
# Check Wikipedia data header row:
print('WP Header Check: %r'%(wp_data[0]==['page','country','rev_id']))
# Read uploaded raw CSV of PRB Population data into dictionary pop_data
pop_data = {} # Dict format is: {'country':'population'}
hdr = True # Data has header row
with open(raw_prb_data_path) as csvfile:
reader = csv.reader(csvfile)
for row in reader:
if len(row)==6: # Ignore title rows, include column headers
## TEST ##
if hdr: # Check list order in header row
print('PRB Header Check: %r'%(row[0]=='Location' and row[4]=='Data'))
hdr = False
else: # Add data rows (non-header) to dictionary
pop_data[row[0]] = row[4]
# Merge PRB population data with Wikipedia data
# Note that this leaves an empty column to join Article_Quality from ores_data
dsmerge_wp_prb = [['Country','Article_Name','Revision_ID','Article_Quality','Population']]
# Tracking exclusions:
# By iterating over wp_data below, we are implicitly skipping countries in pop_data
# that are not in wp_data.
# Therefore, to keep track of excluded countries in each source, we use the dictionaries:
wp_excl,wp_incl = {},{} # add keys as wp countries are skipped
pop_excl = copy.deepcopy(pop_data) # remove keys as prb countries are used
# Iterate over wp_data to construct merged list:
for row in wp_data[1:]: # Skip header row
# We must also skip rows with countries that are not included in pop_data:
if row[1] in pop_data:
dsmerge_wp_prb.append([row[1],row[0],row[2],'',int(pop_data[row[1]].replace(',',''))])
pop_excl.pop(row[1],None)
wp_incl.update({row[1] : 1})
else:
wp_excl.update({row[1] : 1})
# Track totals for data reconciliation:
wp_art_incl_ct = len(dsmerge_wp_prb)
wp_art_excl_ct = len(wp_data)-len(dsmerge_wp_prb)
wp_ctry_incl_ct = len(wp_incl.keys())
wp_ctry_excl_ct = len(wp_excl.keys())
prb_pop_incl_ct = sum([int(v.replace(',','')) for v in pop_data.values()]) - \
sum([int(v.replace(',','')) for v in pop_excl.values()])
prb_pop_excl_ct = sum([int(v.replace(',','')) for v in pop_excl.values()])
prb_ctry_incl_ct = len([k for k in pop_data.keys() if k not in pop_excl.keys()])
prb_ctry_excl_ct = len(pop_excl.keys())
## TEST ##
# Check that number of included countries is the same in both sources
print('Merged Country Check: %s; Match: %r'%(format(wp_ctry_incl_ct,','), \
(wp_ctry_incl_ct == prb_ctry_incl_ct)))
###Output
WP Header Check: True
PRB Header Check: True
Merged Country Check: 187; Match: True
###Markdown
Exclusion Reconciliation for WP and PRB Data by CountryWe can use the tracking values computed above to print some information about the total number of excluded countries, articles, and people in each of the applicable datasets. This allows us to confirm that we are not excluding a greater proportion than expected, as well as track our data totals throughout all processing steps.
###Code
## DATA RECONCILIATION ##
print('WIKIPEDIA DATA RECONCILIATION')
print('------------------------------------')
print('* Excluded Articles:')
print('\tNumber: %s'%format(wp_art_excl_ct,","))
print('\tPercent: %s'%format((wp_art_excl_ct/(wp_art_excl_ct+wp_art_incl_ct)),".2%"))
print('* Excluded Countries:')
print('\tNumber: %s'%format(wp_ctry_excl_ct,","))
print('\tPercent: %s'%format((wp_ctry_excl_ct/(wp_ctry_excl_ct+wp_ctry_incl_ct)),".2%"))
print('* Excluded Country List:')
for k in wp_excl.keys():
print('\t%s'%k)
print() # Add whitespace
print('PRB POPULATION DATA RECONCILIATION')
print('------------------------------------')
print('* Excluded Population:')
print('\tRaw Count: %s'%format(prb_pop_excl_ct,","))
print('\tPercent of Total Pop.: %s'%format((prb_pop_excl_ct/(prb_pop_excl_ct+prb_pop_incl_ct)),".2%"))
print('* Excluded Countries:')
print('\tNumber: %s'%format(prb_ctry_excl_ct,","))
print('\tPercent: %s'%format((prb_ctry_excl_ct/(prb_ctry_excl_ct+prb_ctry_incl_ct)),".2%"))
print('* Excluded Country List:')
for k in pop_excl.keys():
print('\t%s'%k)
###Output
WIKIPEDIA DATA RECONCILIATION
------------------------------------
* Excluded Articles:
Number: 1,398
Percent: 2.96%
* Excluded Countries:
Number: 32
Percent: 14.61%
* Excluded Country List:
Hondura
Salvadoran
Saint Kitts and Nevis
Palauan
Ivorian
Saint Vincent and the Grenadines
Rhodesian
Omani
Niuean
East Timorese
Faroese
Cape Colony
South Korean
Samoan
Montserratian
Pitcairn Islands
Abkhazia
Carniolan
Saint Lucian
South African Republic
Incan
Chechen
Jersey
Guernsey
South Ossetian
Cook Island
Tokelauan
Dagestani
Greenlandic
Ossetian
Somaliland
Rojava
PRB POPULATION DATA RECONCILIATION
------------------------------------
* Excluded Population:
Raw Count: 62,366,406
Percent of Total Pop.: 0.85%
* Excluded Countries:
Number: 23
Percent: 10.95%
* Excluded Country List:
Brunei
Channel Islands
Cote d'Ivoire
Curacao
El Salvador
French Polynesia
Georgia
Guam
Honduras
Hong Kong, SAR
Macao, SAR
Mayotte
New Caledonia
Oman
Palau
Puerto Rico
Reunion
Samoa
St. Kitts-Nevis
St. Lucia
St. Vincent & the Grenadines
Timor-Leste
Western Sahara
###Markdown
2.2 Merge ORES Data with the WP/PRB Merged DatasetNow we merge the ORES data with our previously (Section 2.1) merged dataset from the Wikipedia page data and PRB population data. In this step, we will need to remove articles where the Revision ID returned an error dictionary instead of a score prediction in the ORES API call. We will also track our exclusions due to ORES errors to allow for a full data reconciliation.We start with a deep copy of our previously merged data, named `dsmerge_wp_prb_ores`. We match the ORES data on the `Revision_ID` column. If the ORES API returned an error dictionary for that Revision ID, then the row is removed from our list and added to our exclusion tracking. If it returned a score dictionary, then we add the score prediction to the `Article_Quality` column.The final `dsmerge_wp_prb_ores` list of lists dataset has **ordered** headers that correspond to each source value:| Column | Value Source || :--- | :--- || Country | WP.country & PRB.Location || Article_Name | WP.page || Revision_ID | WP.rev_id & ORES.revid || Article_Quality | ORES.prediction || Population | PRB.Data |*__This dataset is a requirement of the assignment, and is output to the location specified in the user inputs of Section 1.1.__***Notes and Assumptions:**- The ordering of the list `dsmerge_wp_prb_ores` is assumed in later processing steps.- Passive checks are added in lines marked by `## TEST ##`.
###Code
# If running apart from Section 1, load ores_data from raw json:
with open('%s'%(raw_ores_data_path), 'r') as infile:
ores_data = json.load(infile)
# We will also need to exclude articles that did not have ORES data.
# The list wp_prb_excl will collect any data from the merged wikipedia and prb sources
# that is excluded for not having ORES data.
wp_prb_excl = []
# Pull raw ORES API data to add quality prediction to merged list:
dsmerge_wp_prb_ores = copy.deepcopy(dsmerge_wp_prb)
for row in dsmerge_wp_prb_ores[1:]: # skip header row
if 'error' in ores_data[ores_params['project']]['scores'][row[2]][ores_params['model']]:
# No quality data for this article-- remove it from the merged set
wp_prb_excl.append(row)
dsmerge_wp_prb_ores.remove(row)
else:
# Add the quality prediction to the merged data
row[3] = ores_data[ores_params['project']]['scores'][row[2]][ores_params['model']]['score']['prediction']
## TEST ##
print('Check merged row totals: %r'%(len(wp_prb_excl)+len(dsmerge_wp_prb_ores)==len(dsmerge_wp_prb)))
# Write Merged Dataset to Output CSV file
with open(merged_wp_prb_ores_data_path,'w') as csvfile:
writer = csv.writer(csvfile)
for row in dsmerge_wp_prb_ores:
writer.writerow(row)
###Output
Check merged row totals: True
###Markdown
Exclusion Reconciliation for ORES data with merged WP/PRB data by Revision IDWe can use the tracking values computed above to print some information about the articles excluded by this merge. This allows us to confirm that we are not excluding a greater proportion than expected, as well as track our data totals throughout all processing steps.
###Code
## DATA RECONCILIATION ##
print('MERGED WP/PRB/ORES DATA RECONCILIATION')
print('------------------------------------')
print('* Articles from WP/PRB merged set excluded by ORES error:')
for k in wp_prb_excl:
print('\t%s (%s, Rev ID: %s, Pop: %s)'%(k[1],k[0],k[2],format(k[4],',')))
###Output
MERGED WP/PRB/ORES DATA RECONCILIATION
------------------------------------
* Articles from WP/PRB merged set excluded by ORES error:
Olajide Awosedo (Nigeria, Rev ID: 806811023, Pop: 181,839,400)
Jalal Movaghar (Iran, Rev ID: 807367030, Pop: 78,483,446)
Mohsen Movaghar (Iran, Rev ID: 807367166, Pop: 78,483,446)
Ajay Kannoujiya (India, Rev ID: 807484325, Pop: 1,314,097,616)
###Markdown
3. Data AnalysisTo analyze the bias in english Wikipedia articles, we compute two metrics for each country in the combined data. To assess coverage of articles in a country, we compute an articles-per-population proportion (reported as a percentage). To assess the quality of articles in a given country, we compute the proportion of articles that are high quality (those that are classified as 'FA' or 'GA', also reported as a percentage). 3.1 Developing Metrics for Country RankingWe use a `countries` dictionary with country names as key. Each value is a dictionary that includes values for the country's population, total articles, and high-quality articles. After counting all the articles from our final merged dataset into the countries dictionary, we can create a simple table containing each of our two (coverage and quality) metrics for each country row. This table is called `countries_all_pcov_pqual` in the code section below. We can use this `countries_all_pcov_pqual` table of combined metrics along with some simple sorts to obtain the following country-ranking visualizations:- Top Ten Countries by Coverage Proportion (Articles-to-Population)- Bottom Ten Countries by Coverage Proportion (Articles-to-Population)- Top Ten Countries by Proportion of High Quality Articles- Bottom Ten Countries by Proportion of High Quality Articles**Notes and Assumptions:**- Tie-breakers for equal proportions will be based on previous data sort.
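As a quick illustration of the two metrics (using made-up numbers, not values from the dataset), the per-country calculations performed in the next cell look like this:

```python
# Illustrative sketch with hypothetical numbers for a single country.
population = 1_000_000   # hypothetical population
tot_articles = 200       # hypothetical total article count
hq_articles = 5          # hypothetical count of articles predicted 'FA' or 'GA'

prop_coverage = tot_articles / population   # articles-per-population
prop_quality = hq_articles / tot_articles   # high quality-per-article
print(format(prop_coverage, '.2%'), format(prop_quality, '.1%'))  # 0.02% 2.5%
```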
###Code
# Compile Article counts in dict of dicts with country names as keys
# e.g.: {'country': {'population': [population],
# 'tot_articles': [article count],
# 'hq_articles': [high-quality article count]}}
countries = {}
for row in dsmerge_wp_prb_ores[1:]:
if row[0] in countries:
# add to article counts only
countries[row[0]]['tot_articles'] += 1
if row[3] in ['GA','FA']:
countries[row[0]]['hq_articles'] += 1
else:
# create new dict entry
countries[row[0]] = {'population' : row[4],
'tot_articles' : 1,
'hq_articles' : int(row[3] in ['GA','FA'])}
# Create table of all countries with article-per-pop and hq-per-article values
countries_all_pcov_pqual = [['country','prop_coverage','prop_quality']] + \
[[c, \
countries[c]['tot_articles']/countries[c]['population'], \
countries[c]['hq_articles']/countries[c]['tot_articles']] \
for c in countries.keys()]
# Pull top/bottom 10 country lists from countries_all_pcov_pqual list
# Reference (use of itemgetter): https://stackoverflow.com/questions/10695139/sort-a-list-of-tuples-by-2nd-item-integer-value
countries_top10_pcov = [r for r in sorted(countries_all_pcov_pqual[1:],key=itemgetter(1),reverse=True)[:10]]
countries_bot10_pcov = [r for r in sorted(countries_all_pcov_pqual[1:],key=itemgetter(1),reverse=False)[:10]]
countries_top10_pqual = [r for r in sorted(countries_all_pcov_pqual[1:],key=itemgetter(2),reverse=True)[:10]]
countries_bot10_pqual = [r for r in sorted(countries_all_pcov_pqual[1:],key=itemgetter(2),reverse=False)[:10]]
###Output
_____no_output_____
###Markdown
Display Rankings VisualizationsWe create a simple get function to construct a string that will work with the IPython `Markdown` and `display` functions. That function is called for each rankings display we wish to show.
###Code
def get_embedstr_ranktab(title,rank_table):
'''
Creates an embedding string for a country-ranking table via
the IPython markdown function.
INPUT:
- title: the name of the table to display
- rank_table: the rankings table to display
RETURNS:
- the string used by IPython display(markdown()) function
to embed the country-rankings table
'''
embstr = '%s\n----\n'%title + \
'|Country|Population|Article-per-Population|High Quality-per-Article\n' + \
'|:-------------|-------------:|-----:|-----:|\n'
for c in rank_table:
embstr += '|%s|%s|%s|%s|\n'%(c[0],format(countries[c[0]]['population'],','),format(c[1],'.2%'),format(c[2],'.1%'))
return embstr + '\n\n\n'
# Display Country Ranking tables in markdown.
# Reference: https://stackoverflow.com/questions/36288670/jupyter-notebook-output-in-markdown
mkdwn_str = get_embedstr_ranktab('Top Ten Countries by Coverage Proportion (Articles-to-Population)', \
countries_top10_pcov) + \
get_embedstr_ranktab('Bottom Ten Countries by Coverage Proportion (Articles-to-Population)', \
countries_bot10_pcov) + \
get_embedstr_ranktab('Top Ten Countries by Proportion of High Quality Articles', \
countries_top10_pqual) + \
get_embedstr_ranktab('Bottom Ten Countries by Proportion of High Quality Articles', \
countries_bot10_pqual)
display(Markdown(mkdwn_str))
###Output
_____no_output_____
###Markdown
Assignment 2: Bias in Data Analyzing Wikipedia articles on political figures from different countries. Vishnu Nandakumar The main objective of this notebook is to provide a detailed walkthrough of the steps involved in analysing the quality of Wikipedia articles on political figures from various countries. The data involved in the analysis are obtained from two sources:- Politicians by Country from the English-language Wikipedia dataset: [Source: Figshare](https://figshare.com/articles/Untitled_Item/5513449) - 2018 World Population Data Sheet by the Population Reference Bureau: [Source: Dropbox](https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0)The data quality is assessed using **ORES (Objective Revision Evaluation Service)**, which is a web service and API that provides machine learning as a service for Wikimedia projects, maintained by the Scoring Platform team.The notebook is organized into the following sections:- Getting Article and Population data- Getting Article quality estimates using ORES- Creating the analytic dataset- Analysis and Results
###Code
# Loading the necessary packages
import json
import pandas as pd
import numpy as np
import requests
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Defining generic functions to read the datasets as well as to access the ORES service.
###Code
def get_data(file):
"""
Function to retrieve the specified file as a dataframe.
Expects the target file to be a .csv file.
Args:
file(str): Path of the target file
Returns:
df(pandas.Dataframe)
Raises:
FileNotFoundError: If file doesn't exist.
"""
try:
df = pd.read_csv(file,thousands=',')
return df
except FileNotFoundError:
raise FileNotFoundError("The file {} does not exist".format(file))
HEADERS = {'User-Agent' : 'https://github.com/vivanvish', 'From' : '[email protected]'}
def get_ores_data(revision_ids):
"""
Function to retrieve the quality scores from ORES API.
Args:
        revision_ids (list): The revision ids of the articles.
            (Request headers are taken from the module-level HEADERS constant.)
Returns:
rev_score_arr(list:dict) : List of dictionaries with revision_id:score pairs.
error_revs(list:int) : List of revision ids for which we didnt get the score.
"""
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params), headers=HEADERS)
response = api_call.json()
# Stripping out the scores in the predictions.
rev_score_arr = []
error_revs = []
for rev_id in revision_ids:
try:
score = response['enwiki']["scores"][str(rev_id)]["wp10"]["score"]["prediction"]
rev_score_arr.append({'rev_id':rev_id,
'score':score})
except:
# Storing the rev_ids for which we couldn't get any score.
error_revs.append(rev_id)
return rev_score_arr, error_revs
###Output
_____no_output_____
###Markdown
Section 1: Getting Article and Population data Population Data The population data used here is obtained from the Population Reference Bureau's 2018 estimates. In order to ensure full reproducibility the data has been saved to Dropbox and can be downloaded freely. The link can be found in the project description given above and also in the readme. For convenience I have downloaded and stored it in the data folder under the name `WPDS_2018_data.csv`.
###Code
population_df = get_data('data/WPDS_2018_data.csv')
# An initial look at the data
population_df.head()
population_df.shape
###Output
_____no_output_____
###Markdown
There are 207 individual locations in the dataset. The individual countries are listed under their parent geographical locations, such as their respective continents. Let's check for the parent-level geographical locations. **We can filter them by searching for location names in all caps.**
###Code
population_df[population_df["Geography"].str.isupper()]
# Saving them for further use.
aggregate_locs = population_df[population_df["Geography"].str.isupper()]["Geography"].tolist()
###Output
_____no_output_____
###Markdown
There are 6 parent level locations. Let's check for any NaNs and unusual values.
###Code
population_df[population_df.isnull().any(axis=1)]
###Output
_____no_output_____
###Markdown
Looks like there are no NaNs, which is good. Checking for unusual values by looking at the max and min values.
###Code
population_df["Population mid-2018 (millions)"].min(), population_df["Population mid-2018 (millions)"].max()
###Output
_____no_output_____
###Markdown
Seems like there are no unusual values that are negative or zero. Wikipedia Article Data The article dataset is basically the metadata of articles on Politicians by country published on English Wikipedia. However, the time span of the data is not clear. The data was published on `28.10.2017`, so for the purpose of the analysis we can assume that as the upper limit on the time span. The link is provided in the description above as well as in the readme. The data is also stored locally as `page_data.csv` in the `data` folder.
###Code
article_df = get_data('data/page_data.csv')
article_df.head()
article_df["country"].unique().shape
###Output
_____no_output_____
###Markdown
Despite lacking the aggregate locations, there seem to be more locations in this dataset than there are in the Population dataset. Let's look a bit deeper to find the extra locations, so that we can be careful while creating the final analytical dataset.
###Code
population_locs = population_df["Geography"].unique().tolist()
article_locs = article_df["country"].unique().tolist()
# Finding set of locations present in article locations and not in population locations.
# Also ignoring the aggregate locations since they are not relevant here.
set(article_locs) - set(population_locs) - set(aggregate_locs)
###Output
_____no_output_____
###Markdown
I checked the individual locations in this list and several of them could be corrected easily by mapping to the respective country names. However, there are several others, such as **Abkhazia, South Ossetia** etc., that are currently in conflict and thus their status is not certain, and several others that are considered overseas territories or independent governing regions. I have left those locations untouched to avoid making any untoward assumptions. Below you can see the new names that will be applied to some of the locations in the article dataset.
###Code
# correcting the names to match those in the population dataset.
corrections = {
'Cape Colony': 'South Africa',
'Congo, Dem. Rep. of':'Congo, Dem. Rep.',
'Czech Republic':'Czechia',
'East Timorese':'Timor-Leste',
'South Korean' : 'Korea, South',
'Swaziland': 'eSwatini',
'Samoan':'Samoa',
'South African Republic':'South Africa',
'Saint Lucian':'Saint Lucia',
'Guadeloupe':'France',
'Hondura':'Honduras',
'Ivorian':"Cote d'Ivoire",
'Omani':'Oman',
'Palauan':'Palau',
'Saint Kitts and Nevis':'St. Kitts-Nevis',
'Saint Vincent and the Grenadines':'St. Vincent and the Grenadines',
'Salvadoran':'El Salvador',
'Rhodesian':'Zimbabwe'
}
# Mapping to new names
article_df["country"] = article_df["country"].map(corrections).fillna(article_df['country'])
# Saving the new updated data
article_df.to_csv('data/page_data_v1.csv',index=False)
###Output
_____no_output_____
###Markdown
Section 2: Getting the article scores using ORES.As mentioned above, ORES is a machine learning service that returns the quality of an article in the form of the following levels from highest to lowest:- FA - Featured article- GA - Good article- B - B-class article- C - C-class article- Start - Start-class article- Stub - Stub-class articleHere the classes FA and GA are assigned to articles that are deemed high quality.In order to retrieve the scores from ORES, we need to provide a revision ID (the third column in page_data.csv) and the machine learning model, which is `wp10`.According to the API docs, it can handle 150 calls, so we need to send the requests in chunks to avoid hitting the rate limiter.
###Code
## Reading in the updated version of article data 'page_data_v1.csv'
article_df = get_data('data/page_data_v1.csv')
## The number of rows ~47000, so dividing the data into 500 chunks of approx. 90 rows.
revision_score_arr = []
revs_with_error = []
for i, chunk in enumerate(np.array_split(article_df, 500)):
if (i+1)%100 == 0:
print("Chunk: {}".format(i))
rev_ids = chunk['rev_id'].tolist()
# Getting the scores and storing the results in arrays.
rev_score_chunk, error_revs = get_ores_data(rev_ids)
revision_score_arr.extend(rev_score_chunk)
revs_with_error.extend(error_revs)
len(revision_score_arr), len(revs_with_error)
###Output
_____no_output_____
###Markdown
We got the score for almost all articles, except 105, which I assume should be okay. We now need to merge this result with the article data and then combine that with the population data to get the final analytical dataset.
###Code
# Creating dataframe containing revision id and scores.
revision_score_df = pd.DataFrame(revision_score_arr)
# Combining this with the article data.
article_df = article_df.merge(revision_score_df, on='rev_id')
article_df.head(5)
# Saving the data.
article_df.to_csv('data/page_data_with_scores.csv', index=False)
###Output
_____no_output_____
###Markdown
Now we need to combine this data with the population data. Before doing that, we need to ensure that the dataframe columns are consistently named.
###Code
population_df.head()
# Uncomment these lines if you want to restart from this point.
# article_df = get_data('data/page_data_with_scores.csv')
# population_df = get_data('data/WPDS_2018_data.csv')
population_df = population_df.rename(columns={'Geography':'country', 'Population mid-2018 (millions)':'population'})
article_df = article_df.rename(columns={'page':'article_name', 'rev_id':'revision_id', 'score':'article_quality'})
# Merging the two dataframes to get the final analytical data.
final_data = article_df.merge(population_df, on='country')
final_data.head(5)
final_data.shape
###Output
_____no_output_____
###Markdown
The final dataset looks fine. We lost a couple hundred articles because we didn't have matching names for several locations in the population dataset, but this should be enough to get a reasonably good result for our analysis. The last step is to save this data to ensure reproducibility.
###Code
final_data.to_csv('data/article_quality_with_population.csv', index=False)
###Output
_____no_output_____
###Markdown
Section 3: Analysis of articles The analysis involves processing the final data to find the articles per person for each country and also examining the proportion of high quality articles. The steps involved are:- Finding the number of articles for each country.- Dividing by the population to get the number of articles per person.- Sorting the values in decreasing order for convenience.Note: we need to ensure that the population is on the right scale (the source values are in millions).
###Code
# Reading the analytical dataset.
score_with_population_data = get_data('data/article_quality_with_population.csv')
# Converting population to original scale.
score_with_population_data['population'] = score_with_population_data['population']*1000000
# Finding the number of articles by country
articles_by_country = score_with_population_data.groupby(['country','population']).agg('count')['article_name'].reset_index()
# Renaming the column to reflect the value it contains.
articles_by_country = articles_by_country.rename(columns={'article_name':'num_article'})
# Finding the number of articles per person
articles_by_country["article_per_person"] = articles_by_country['num_article'] / articles_by_country['population']
articles_by_country = articles_by_country.sort_values(by='article_per_person', ascending=False)
###Output
_____no_output_____
###Markdown
Q1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
print(articles_by_country.head(10))
ax = articles_by_country.head(10).plot.bar(
x='country', y='article_per_person',
title='Highest-ranked countries in terms of number of politician articles as a proportion of country population')
ax.set_xlabel("Country")
ax.set_ylabel("Proportion")
###Output
country population num_article article_per_person
178 Tuvalu 10000.0 55 0.005500
120 Nauru 10000.0 53 0.005300
144 San Marino 30000.0 82 0.002733
130 Palau 20000.0 23 0.001150
113 Monaco 40000.0 40 0.001000
98 Liechtenstein 40000.0 29 0.000725
160 St. Kitts-Nevis 50000.0 32 0.000640
173 Tonga 100000.0 63 0.000630
108 Marshall Islands 60000.0 37 0.000617
73 Iceland 400000.0 206 0.000515
###Markdown
Q.2 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population.
###Code
print(articles_by_country.tail(10)[::-1])
ax = articles_by_country.tail(10)[::-1].plot.bar(  # assign the axes so the labels below apply to this plot
x='country', y='article_per_person',
title='Lowest-ranked countries in terms of number of politician articles as a proportion of country population')
ax.set_xlabel("Country")
ax.set_ylabel("Proportion")
###Output
country population num_article article_per_person
74 India 1.371300e+09 986 7.190257e-07
75 Indonesia 2.652000e+08 214 8.069382e-07
34 China 1.393800e+09 1135 8.143206e-07
185 Uzbekistan 3.290000e+07 29 8.814590e-07
55 Ethiopia 1.075000e+08 105 9.767442e-07
190 Zambia 1.770000e+07 25 1.412429e-06
87 Korea, North 2.560000e+07 39 1.523437e-06
38 Congo, Dem. Rep. 8.430000e+07 142 1.684460e-06
170 Thailand 6.620000e+07 112 1.691843e-06
13 Bangladesh 1.664000e+08 323 1.941106e-06
###Markdown
Q.3 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# Finding number of High quality papers and total number of articles by country.
def find_article_counts_by_type(group):
"""
Function to find the number of articles by type, i.e, high quality or not.
Args:
group(pandas.Dataframe): Dataframe for each country.
Returns:
df(pandas.Dataframe): Dataframe with counts by article type.
"""
high_quality_articles = group.query('article_quality == "FA" or article_quality == "GA"')
return pd.DataFrame([{'hq_article_counts':high_quality_articles.shape[0],
'total_article_counts':group.shape[0]}])
article_type_by_country = score_with_population_data.groupby(
'country').apply(find_article_counts_by_type).reset_index(level=1, drop=True)
article_type_by_country.head(5)
# Finding proportion of hq articles and sorting them.
article_type_by_country['proportion_hq_articles'] = (article_type_by_country['hq_article_counts'] /
article_type_by_country['total_article_counts'])
article_type_by_country.sort_values(by='proportion_hq_articles', ascending=False, inplace=True)
# Dropping all locations that haven't published any hq articles so far.
article_type_by_country_hq = article_type_by_country.query('proportion_hq_articles != 0')
article_type_by_country_hq.head(10)
ax = article_type_by_country_hq['proportion_hq_articles'].head(10).plot.bar(
title='highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country')
ax.set_xlabel("Country")
ax.set_ylabel("Proportion")
###Output
_____no_output_____
###Markdown
Q.4 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
article_type_by_country_hq.tail(10)[::-1]
ax = article_type_by_country_hq['proportion_hq_articles'].tail(10)[::-1].plot.bar(  # assign the axes so the labels below apply to this plot
title='Lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country')
ax.set_xlabel("Country")
ax.set_ylabel("Proportion")
###Output
_____no_output_____
###Markdown
These are the countries for which at least one article has been classified as a high quality article. There are several countries for which there are no high quality articles at all. Let's see which countries those are.
###Code
article_type_by_country_nohq = article_type_by_country.query('proportion_hq_articles == 0')
article_type_by_country_nohq.index.values
###Output
_____no_output_____
###Markdown
A2: Bias in data The notebook attempts to do some basic exploration of biases that exist in English Wikipedia pages. We use the publicly available Wikipedia dataset about politicians from various countries along with [ORES](https://www.mediawiki.org/wiki/ORES), a machine learning web service that 'rates' the articles on certain parameters and assigns each article a list of probabilities as a score reflective of its [quality](https://en.wikipedia.org/wiki/Wikipedia:Content_assessmentGrades), per the API. We use this together with another publicly available data set, the [world population datasheet](https://www.prb.org/international/indicator/population/table/) published by the Population Reference Bureau, for better evaluation. Notebook Workflow:1. Data Acquisition and preparation2. Getting article quality predictions3. Fetching quality score predictions4. Data Merge5. Analysis6. Reflection ---- Data Acquisition and preparation The Wikipedia politicians by country dataset is gathered from [Figshare](https://figshare.com/articles/Untitled_Item/5513449). Originally, the data was extracted via the Wikimedia API using the associated code. The fields in the data are:1. "country", containing the sanitised country name, extracted from the category name;2. "page", the unsanitised page title.3. "rev_id", a unique identifier that refers to the edit ID of the last edit to the page.
###Code
# Imports used throughout this notebook (pandas, requests, and csv are needed below)
import pandas as pd
import requests
import csv

page_data = pd.read_csv('source-data/page_data.csv')
page_data.head()
# Dropping the wiki pages starting with 'Template:', as they are not real article pages.
page_data = page_data[~page_data.page.str.startswith('Template:')].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Next we grab the Population dataset, which is drawn from the [world population datasheet](https://www.prb.org/international/indicator/population/table/) published by the Population Reference Bureau.
###Code
WPDS_2018_data = pd.read_csv('source-data/WPDS_2018_data.csv')
WPDS_2018_data.head()
# Since the dataset contains regions/continents as well, we separate Countries and region data from WPDS data
WPDS_2018_data_region = WPDS_2018_data[WPDS_2018_data.Geography.str.isupper()].reset_index(drop=True)
# Converting values to float
WPDS_2018_data_region['Population mid-2018 (millions)'] = \
WPDS_2018_data_region['Population mid-2018 (millions)'].str.replace(',', '').astype(float)
WPDS_2018_data_country = WPDS_2018_data[~WPDS_2018_data.Geography.str.isupper()].reset_index(drop=True)
# Since the values for population are in string, we need to convert them to float for later use
WPDS_2018_data_country['Population mid-2018 (millions)'] = \
WPDS_2018_data_country['Population mid-2018 (millions)'].str.replace(',', '').astype(float)
###Output
_____no_output_____
###Markdown
--- Getting article quality predictions The article quality predictions are gathered through [ORES](https://www.mediawiki.org/wiki/ORES), a machine learning system that estimates the quality of an article (at a particular point in time) and assigns a series of probabilities that the article is in one of 6 quality categories:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class articleThe exact assessment details can be read [here](https://en.wikipedia.org/wiki/Wikipedia:Content_assessmentGrades). The 'rev_id' from the page data contains the unique identifier used to fetch the quality score from the API.
###Code
# Sample data fetch:
revision_ids = page_data.rev_id[-1890:-1888]
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
response
HEADERS = {'User-Agent' : 'https://github.com/nmnshrma', 'From' : '[email protected]'}
def fetch_ores_response(revision_ids, headers):
"""
fetches ORES response for the ORES API, for a set of revision IDs
:param revision_ids: list of ids to be fetched
:param headers: HEADERS for the API call
:returns: nested dict object with ORES API response
"""
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)  # use the headers passed in
response = api_call.json()
return response
def store_and_read(store_loc, action, dict_={}):
"""
Helper function to read and write dict objects to a location
NOTE: The following ONLY performs a 'write' if there is non-empty dict passed
:param store_loc: location for the file store/read
:param action: list of ids to be fetched
:param dict_: key-value dict to be stored
:returns: (for read) dict objects
"""
if action not in ['read', 'write']:
raise ValueError("action value must be read/write")
if action == 'read':
try:
with open(store_loc, 'r+') as csv_file:
reader = csv.reader(csv_file)
read_dict = dict(reader)
csv_file.close()
return read_dict
        except FileNotFoundError:
            # No cached results yet: return an empty mapping so callers can safely iterate
            return {}
if dict_ and action=='write':
# dict_.update(read_dict)
with open(store_loc, 'a+') as csv_file:
writer = csv.writer(csv_file)
for key, value in dict_.items():
writer.writerow([key, value])
csv_file.close()
return None
def fetch_ores_response_batchwise(revision_ids, headers, perc_split, store_loc):
"""
splits a large set of revision IDs to gather and clean pertinent responses
from the ORES API
Calls fetch_ores_response method for a fraction of revision IDs.
Fraction of revision IDs to be sent are decided through perc_split
:param revision_ids: list of ids to be fetched
:param headers: HEADERS for the API call
:param perc_split: fraction of revision_ids to be used
:param store_loc: the store loc for the dict object to be written/read
:returns: nested dict object with key-value pair
"""
init_ids=len(revision_ids)
# Shorten the list that have already been read and stored
ignore_list = store_and_read(store_loc=store_loc, action='read')
revision_ids = [i for i in revision_ids if str(i) not in ignore_list]
print(f"IDs list shortened by {init_ids-len(revision_ids)}")
if len(revision_ids) == 0:
return ignore_list
# helper values for the API calls
# Batch size decides the chunk size for the API call
n = len(revision_ids)
batch_size = n//perc_split
data_dict = {}
for i in range(0, n, batch_size):
# sends a batch at once
data = fetch_ores_response(revision_ids[i:i+batch_size], headers=headers)
for key, val in data['enwiki']['scores'].items():
data_dict[key] = 'NA' if 'error' in val.get('wp10') else val.get('wp10', 'NA').get('score', 'NA').get('prediction', 'NA')
store_and_read(store_loc=store_loc, action='write', dict_={key:data_dict[key]})
# Return dict object contains: {Rev_id: Prediction}
return data_dict
score_map= fetch_ores_response_batchwise(revision_ids=page_data.rev_id, headers=HEADERS, perc_split=500,
store_loc='results-data/quality-map.csv')
## Maps the quality score from 'score_map'
page_data['quality_score'] = page_data.rev_id.map(lambda x: score_map.get(str(x), 'NA'))
###Output
_____no_output_____
###Markdown
---- Data merge
###Code
# STEPWISE Preparation for the data
# Only take articles who have a legitimate quality score
final_page_data = page_data[page_data.quality_score != 'NA']
# Inner join to merge file with country, so as to attach populations
final_page_data = final_page_data.merge(WPDS_2018_data_country, how='inner',
left_on='country', right_on='Geography')
# Remove redundant columns
final_page_data = final_page_data[['page', 'country', 'rev_id', 'quality_score','Population mid-2018 (millions)']]
# Column rename and reshuffle as per the instructions
final_page_data.rename(columns={"page": "article_name",
"quality_score": "article_quality",
"rev_id": "revision_id",
"Population mid-2018 (millions)": "population"},
inplace = True)
final_page_data = final_page_data[['country', 'article_name', 'revision_id', 'article_quality', 'population']]
final_page_data.to_csv('results-data/wiki_page_merged.csv', index=False)
final_page_data.head()
###Output
_____no_output_____
###Markdown
---- AnalysisYour analysis will consist of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes. Examples:1. if a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be .1%.2. if a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.
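As a sanity check on the arithmetic, the two worked examples above can be reproduced with the same scaling used later in this notebook (population is stored in millions in the merged data); this is just an illustration, not part of the analysis pipeline:

```python
# Example 1: 10 articles, population of 10,000 people (0.01 million)
articles, pop_millions = 10, 0.01
coverage_pct = (articles / pop_millions / 1e6) * 100
print(coverage_pct)        # 0.1 -> 0.1% articles-per-population

# Example 2: 2 high-quality articles out of 10
hq, total = 2, 10
print(hq / total * 100)    # 20.0 -> 20% high-quality articles
```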
###Code
# Generating a helper column that has 1 for good quality articles and 0 for bad
final_page_data.loc[:,'high_quality'] = final_page_data.article_quality.map(lambda x:
1 if x in ['GA', 'FA'] else 0)
final_page_data.head(n=2)
###Output
_____no_output_____
###Markdown
--- **Results format**Your results from this analysis will be published in the form of data tables. You are being asked to produce six tables in total, which show:1. **Top 10 countries by coverage**: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. **Bottom 10 countries by coverage**: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population To answer the questions about 'coverage', we need to prepare aggregated values of the final_page_data dataframe prepared in the last step.
###Code
country_group_data = final_page_data.groupby('country').agg({'revision_id':'count',
'high_quality':'mean',
# min of population will just mean the actual population
'population': 'min'}).reset_index()
# Since aggregated 'revision_id' represent count of those IDs, we rename the column for clarity
country_group_data.rename(columns = {'revision_id':'articles'},
inplace=True)
country_group_data.head(n=2)
# Coverage here is defined as per instructions
country_group_data.loc[:, 'coverage'] = \
(country_group_data.articles/country_group_data.population/1e6)*100
###Output
_____no_output_____
###Markdown
1. **Top 10 countries by coverage:**
###Code
country_group_data.sort_values(by='coverage', ascending=False).head(n=10)
###Output
_____no_output_____
###Markdown
2. **Bottom 10 countries by coverage:**
###Code
country_group_data.sort_values(by='coverage', ascending=True).head(n=10)
###Output
_____no_output_____
###Markdown
--- 3. **Top 10 countries by relative quality**: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality 4. **Bottom 10 countries by relative quality**: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
# We can utilize the high_quality parameter to measure the relative quality
country_group_data.loc[:, 'high_quality'] = country_group_data.loc[:, 'high_quality']*100
###Output
_____no_output_____
###Markdown
3. **Top 10 countries by relative quality**:
###Code
country_group_data.sort_values(by='high_quality', ascending=False).head(n=10)
###Output
_____no_output_____
###Markdown
4. **Bottom 10 countries by relative quality**:
###Code
country_group_data.sort_values(by='high_quality', ascending=True).head(n=10)
###Output
_____no_output_____
###Markdown
--- 5. **Geographic regions by coverage**: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population 6. **Geographic regions by relative quality**: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality To answer the above questions we have to extract the country-region mapping from the WPDS_2018_data dataframe
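As a cross-check on the mapping built in the next cell, a shorter equivalent sketch (assuming WPDS_2018_data is loaded as above, with each ALL-CAPS region row preceding its countries) would be to forward-fill the region labels down the table:

```python
# Alternative sketch of the country-to-region mapping (illustration only).
# Keeps the Geography value on ALL-CAPS region rows and forward-fills it
# down to the country rows that follow each region.
wpds = WPDS_2018_data.copy()
wpds['geo_region_alt'] = wpds['Geography'].where(wpds['Geography'].str.isupper()).ffill()
country_region_map_alt = dict(zip(wpds['Geography'], wpds['geo_region_alt']))
```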
###Code
# Declares a new column to be filled with region names
WPDS_2018_data.loc[:,'geo_region'] = None
# Extracting list of regions to be mapped to country
geo_regions = list(WPDS_2018_data_region.Geography)
# We extract the region by mapping the first occurrence of each region in the WPDS_2018_data dataframe
for idx, geo_region in enumerate(geo_regions):
i = int(WPDS_2018_data.index[WPDS_2018_data.Geography== geo_region][0])
if geo_region != geo_regions[-1]:
region_next = geo_regions[idx+1]
i_next = int(WPDS_2018_data.index[WPDS_2018_data.Geography== region_next][0])
WPDS_2018_data.loc[i:i_next, 'geo_region'] = geo_region
else:
WPDS_2018_data.loc[i:, 'geo_region'] = geo_region
WPDS_2018_data.head(n=2)
# For convenience, we can create a dictionary of the map
country_region_map = dict(list(zip(WPDS_2018_data.Geography,
WPDS_2018_data.geo_region)))
final_page_data.loc[:, 'region'] = final_page_data.country.map(lambda x:
country_region_map.get(x, 'None'))
final_page_data.head(n=2)
###Output
_____no_output_____
###Markdown
To figure out region-level data, we need to prepare aggregated values of the data prepared in the last step:
###Code
region_group_data = final_page_data.groupby('region').\
agg({'revision_id':'count',
'high_quality':'mean'}).reset_index()
# Merge region population
region_group_data = \
region_group_data.merge(WPDS_2018_data_region, how='left', left_on='region', right_on='Geography')
# Rename column for clarity:
region_group_data.rename(columns={'Population mid-2018 (millions)':'population'}, inplace=True)
region_group_data.rename(columns = {'revision_id':'articles'}, inplace =True)
# Drop redundant columns
region_group_data.drop(labels="Geography", axis=1,inplace=True)
region_group_data.head(n=2)
region_group_data.loc[:, 'coverage'] = \
(region_group_data.articles/region_group_data.population/1e6)*100
###Output
_____no_output_____
###Markdown
5. **Geographic regions by coverage**: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
region_group_data.sort_values(by='coverage', ascending=False).head(n=10)
###Output
_____no_output_____
###Markdown
6. **Geographic regions by relative quality**: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
region_group_data.sort_values(by='high_quality', ascending=False).head(n=10)
###Output
_____no_output_____
###Markdown
--- Reflections Writeup: reflections and implications Write a few paragraphs, either in the README or at the end of the notebook, reflecting on what you have learned, what you found, what (if anything) surprised you about your findings, and/or what theories you have about why any biases might exist (if you find they exist). You can also include any questions this assignment raised for you about bias, Wikipedia, or machine learning.In addition to any reflections you want to share about the process of the assignment, please respond to the questions below:**What biases did you expect to find in the data (before you started working with it), and why?**My inherent assumption was that the English-speaking, western part of the world would come out on top, especially with respect to the high quality articles. This assumption was based on the bias that the western/English-speaking world is generally more developed and, with a larger pool of resources, will have higher quality wiki articles. Furthermore, I wrongly believed that the higher education level in said countries would inevitably place them at the top of the list of countries with the highest quality coverage.I also believed the number of articles on politicians would be much higher overall. This was largely on account of the news cycle and the coverage political personalities receive; I believed the number would be at least an order of magnitude over what it came out to be. Consequently, I did not expect the 'high quality' article percentage to hover between 2-5 percent, but instead to be higher than that, on account of personally having never seen a shoddy wiki page about a political entity.**What (potential) sources of bias did you discover in the course of your data processing and analysis?**The biggest potential source of bias is the ORES API and its claims about the quality of the articles. While Wikimedia has comprehensive guidelines in place, there are still gaps that may be introduced by subjectivity from the reviewers.**What might your results suggest about (English) Wikipedia as a data source?**The coverage of the English Wiki is truly profound; with roughly 200 articles per country the resource is vast. However, finding out that over 87% of the articles are of 'Stub' or 'Start' quality is very disappointing and has shaken my belief in the repository as a database. The biggest shock in the whole process was certainly the appearance of North Korea on the 'highest quality' list.
###Code
###Output
_____no_output_____
###Markdown
Import Packages and Data We first import the relevant packages as described in the README.
###Code
import requests
import json
import pandas as pd
import os
import csv
import numpy as np
# The following code is motivated from the Stack Overflow post by user emunsing:
# https://stackoverflow.com/a/29665452/3905509
from IPython.display import display, HTML
###Output
_____no_output_____
###Markdown
Next, we import the two datasets as pandas dataframes:* one representing population estimates from many nations (*data_population*)* one representing political Wikipedia articles from many nations (*data_wiki*)Downstream in this notebook, we will use the ORES API from Wikipedia. To prevent access issues (and more generally to be polite), we partition *data_wiki* into chunks of 100 articles in the list of dataframes *data_wiki_partition*.
###Code
# Import WPDS_2018_data.csv
# The following dataset comes from https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0
data_population = pd.read_csv('WPDS_2018_data.csv', thousands = ',')
# Import page_data.csv
# The following dataset is produced by the R project found here: https://figshare.com/articles/Untitled_Item/5513449
data_wiki = pd.read_csv('page_data.csv')
# Partition data_wiki into small enough chunks to not get blocked by ORES
# The following creates a list of data_wiki partitions. Each list element has 100 records.
data_wiki_partition = np.array_split(data_wiki, data_wiki.shape[0] // 100)
###Output
_____no_output_____
###Markdown
Scoring Wikipedia Articles with ORES In order to estimate the quality of Wikipedia articles, we use the ORES API. In the following code block, relevant headers and an API call function, *get_ores_data*, are defined. The scored ORES data is stored in the list of JSON objects called *data_wiki_ores_json*.To make *data_wiki_ores_json* more usable, we restate it as a dataframe in the object *data_wiki_ores_df*. Lastly, the predicted article qualities are appended to the original *data_wiki* object to form the dataframe *data_wiki_ores*.
###Code
# The code in this block is reused & adapted from Os Keyes:
# https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb
headers = {'User-Agent' : 'https://github.com/OO00OO00', 'From' : '[email protected]'}
# Score Wiki articles with ORES
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
# Score the chunks of 100 articles in ORES. Return as a list of json output for each partition.
data_wiki_ores_json = [get_ores_data(i['rev_id'].tolist(), headers) for i in data_wiki_partition]
# For each batch and each revision id contained therein (i.e., the keys of batch['enwiki']['scores']):
# 1) Restate the wp10 dictionary value as a pandas dataframe
# 2) Add a column reflecting the revision id
# 3) Extract the relevant prediction field, i.e., the 1st row of the resulting dataframe
# 4) Lastly, concatenate the list of dataframes as a new dataframe and transpose the result
data_wiki_ores_df = pd.concat([pd.DataFrame.from_dict(batch['enwiki']['scores'][revID]['wp10']).assign(rev_id = int(revID)).iloc[0,:]
for batch in data_wiki_ores_json
for revID in batch['enwiki']['scores'].keys()
], axis = 1).transpose()
# Append ORES prediction to data_wiki
data_wiki_ores = pd.merge(data_wiki, data_wiki_ores_df, left_on = 'rev_id', right_on = 'rev_id', how = 'left')
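# Optional sanity check (sketch): at this point the predicted class lives in the 'score' column
# (it is renamed to 'article_quality' further down); rev_ids that ORES could not score appear as NaN.
print(data_wiki_ores['score'].value_counts(dropna=False).head(10))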
###Output
_____no_output_____
###Markdown
Combine DataNext, we combine the ORES-scored article dataframe with the population dataframe into a new dataframe called *data*. Some minor changes are performed on it to clarify its operations and analysis downstream in this notebook. Lastly, its results are exported to CSV in the file *wikipedia_political_article_bias_2018.csv*
###Code
# Merge dataframes
data = pd.merge(data_population, data_wiki_ores, left_on = 'Geography', right_on = 'country', how = 'inner')
# Rename columns:
data.rename(columns = {'page': 'article_name'}, inplace = True)
data.rename(columns = {'rev_id': 'revision_id'}, inplace = True)
data.rename(columns = {'score': 'article_quality'}, inplace = True)
data.rename(columns = {'Population mid-2018 (millions)': 'population'}, inplace = True)
# Remove duplicate column
data = data.drop(columns = 'Geography')
# Reorder columns:
data = data.reindex(columns = ['country', 'article_name', 'revision_id', 'article_quality', 'population'])
# Export data to CSV
data.to_csv('wikipedia_political_article_bias_2018.csv', index = False)
###Output
_____no_output_____
###Markdown
Since we want to calculate the proportion of high quality political articles via the FA and GA ORES values, the following code reshapes the *data* dataframe to expand these levels as new columns and stores the result in the dataframe *data_pivot*. Next, *data_pivot* is aggregated at the country level and additional summary data is calculated. The result is called *data_aggregated*. Lastly, *data_aggregated* is modified to more easily create derived fields downstream in this notebook.
###Code
# Expand levels of article_quality column into new columns themselves
# Motivated by the Stack Overflow response from user DYZ:
# https://stackoverflow.com/a/42708606
# 1) Use pivot_table on data to dcast levels of article_quality as new columns
# 2) Restate pivot to numpy record array
# 3) Restate numpy record array back to a pandas dataframe
data_pivot = pd.DataFrame(data.pivot_table(index = ['country', 'article_name', 'revision_id', 'population'], columns = 'article_quality', aggfunc=len).to_records())
# Aggregate data to country-level and include relevant summary data
data_aggregated = data_pivot.groupby('country').agg({
'population': {'max'},
'revision_id': {'count'},
'B': {'count'},
'C': {'count'},
'FA': {'count'},
'GA': {'count'},
'Start': {'count'},
'Stub': {'count'}
})
# Flatten the multi-index dataframe for ease in deriving new columns
# The following code is motivated by the Stack Overflow response from user Andy Hayden:
# https://stackoverflow.com/a/14508355
data_aggregated.columns = data_aggregated.columns.get_level_values(0)
# For clarity, renaming revision_id to article_count
data_aggregated.rename(columns = {'revision_id' : 'article_count'}, inplace = True)
###Output
_____no_output_____
###Markdown
Two fields are derived:* articles_per_population: the proportion of articles-per-population for each nation* pct_high_quality_articles: the proportion of high quality articles for each nation, where "high quality" means proportion of articles that have the ORES prediction quality of *FA* or *GA*.
###Code
# Create derived fields
data_aggregated['articles_per_population'] = data_aggregated['article_count'] / (data_aggregated['population'] * 1e6)
data_aggregated['pct_high_quality_articles'] = (data_aggregated['FA'] + data_aggregated['GA']) / data_aggregated['article_count']
###Output
_____no_output_____
###Markdown
AnalysisLastly, for this notebook, we create 4 tables:* 10 highest-ranked countries in terms of number of politician articles as a proportion of country population* 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population* 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country* 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that countryNote: The wording for the tables is taken from here: https://wiki.communitydata.cc/Human_Centered_Data_Science_(Fall_2018)/AssignmentsA2:_Bias_in_data
###Code
print('10 highest-ranked countries in terms of number of politician articles as a proportion of country population:')
display(data_aggregated.sort_values(by = ['articles_per_population'], ascending = False).iloc[0:10,[0, 1, -2]])
print('10 lowest-ranked countries in terms of number of politician articles as a proportion of country population:')
display(data_aggregated.sort_values(by = ['articles_per_population'], ascending = True).iloc[0:10,[0, 1, -2]])
print('10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country:')
display(data_aggregated.sort_values(by = ['pct_high_quality_articles'], ascending = False).iloc[0:10,[1, 4, 5, -1]])
print('10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country:')
display(data_aggregated.sort_values(by = ['pct_high_quality_articles', 'article_count', 'population'], ascending = True).iloc[0:10,[0, 1, 4, 5, -1]])
###Output
10 highest-ranked countries in terms of number of politician articles as a proportion of country population:
###Markdown
A2: Bias in Data___
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import requests
import json
import time
###Output
_____no_output_____
###Markdown
Step 1: Getting the Article and Population Data___The first step is getting the data, which lives in several different places. The Wikipedia politicians by country dataset can be found on Figshare. Read through the documentation for this repository, then download and unzip it to extract the data file, which is called `page_data.csv`.The population data is available in CSV format as `WPDS_2020_data.csv`. This dataset is drawn from the world population data sheet published by the Population Reference Bureau.
###Code
page_data = pd.read_csv('data_raw/page_data.csv')
WPDS_data = pd.read_csv('data_raw/WPDS_2020_data - WPDS_2020_data.csv.csv')
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data___Both `page_data.csv` and `WPDS_2020_data.csv` contain some rows that you will need to filter out and/or ignore when you combine the datasets in the next step. In the case of `page_data.csv`, the dataset contains some page names that start with the string "Template:". These pages are not Wikipedia articles, and should not be included in your analysis.Similarly, `WPDS_2020_data.csv` contains some rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS values in the 'geography' field (e.g. AFRICA, OCEANIA). These rows won't match the country values in `page_data.csv`, but you will want to retain them (either in the original file, or a separate file) so that you can report coverage and quality by region in the analysis section.
###Code
page_data.head()
page_data[page_data['country'].str.contains('Hondura')]
# standardizes the country names in page_data so that we can merge with WPDS_data, if necessary
def standardize_countries(country):
if country == 'Salvadoran':
return 'El Salvador'
elif country == 'East Timorese':
return 'Timor Leste'
elif country == 'Hondura':
return 'Honduras'
elif country == 'Rhodesian':
return 'Zimbabwe'
elif country == 'Samoan':
        return 'Samoa'
elif country == 'São Tomé and Príncipe':
return "Sao Tome and Principe"
elif country == 'South African Republic':
return 'South Africa'
elif country == 'South Korean':
return 'South Korea'
else:
return country
# cleaning page_data
page_data_clean = page_data[~page_data['page'].str.contains('Template:')]
page_data_clean['country'] = page_data_clean['country'].apply(standardize_countries)
page_data_clean.head()
WPDS_data.head()
WPDS_data_clean = WPDS_data[WPDS_data['Type'] == 'Country']
WPDS_data_clean.head()
page_data_clean.to_csv('data_clean/page_data.csv', sep=',')
WPDS_data_clean.to_csv('data_clean/WPDS_data.csv', sep=',')
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Predictions___Now you need to get the predicted quality scores for each article in the Wikipedia dataset. We're using a machine learning system called ORES. This was originally an acronym for "Objective Revision Evaluation Service" but was simply renamed “ORES”. ORES is a machine learning tool that can provide estimates of Wikipedia article quality. The article quality estimates are, from best to worst:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class article
###Code
def pred_page_quality_scores(revids):
ret_scores = []
ret_revs = []
ore_endpoint = "https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}"
rev_ids = list(map(str, revids))
params = {'project': 'enwiki',
'model':'wp10',
'revids':'|'.join(rev_ids)}
api_call = requests.get(ore_endpoint.format(**params))
response = api_call.json()
# print(response)
for rev_id in rev_ids:
try:
ret_scores.append(response["enwiki"]["scores"][rev_id]["wp10"]["score"]["prediction"])
ret_revs.append(rev_id)
except:
# continue
print(f"Could not use rev_id={rev_id}")
return ret_revs, ret_scores
try:
page_scores_df = pd.read_csv("page_scores.csv")
if page_scores_df["Unnamed: 0"].any():
page_scores_df.drop(["Unnamed: 0"], axis=1, inplace=True)
except:
revid_lst = []
scores_lst = []
batch_size = 50
start = time.time()
for i in range(0, len(page_data_clean), batch_size):
revid_batch = page_data_clean["rev_id"][i:(i+batch_size)]
rev_ids, scores = pred_page_quality_scores(revid_batch)
# print(rev_ids, scores)
# break
# for rev_id in rev_ids:
# revid_lst.append(rev_id)
revid_lst.extend(rev_ids)
# for score in scores:
# scores_lst.append(score)
scores_lst.extend(scores)
if i % 10000 == 0:
print(f"Iter {i} after {time.time() - start} seconds")
print(f"Finished after {(time.time()-start) / 60} minutes")
# scores_lst.extend(scores)
# print(len(revid_lst))
page_scores_df = pd.DataFrame({"rev_id": revid_lst, "article_quality_est":scores_lst})
page_scores_df['rev_id'] = page_scores_df['rev_id'].astype(int)
page_scores_df.to_csv("page_scores.csv")
page_scores_df.head()
# print(len(revid_lst), len(scores_lst))
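# Sketch of a quick check on the scored data (not in the original notebook):
# distribution of the predicted article-quality classes returned by ORES.
print(page_scores_df['article_quality_est'].value_counts())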
###Output
_____no_output_____
###Markdown
Step 4: Combining the Datasets___Some processing of the data will be necessary! In particular, you'll need to - after retrieving and including the ORES data for each article - merge the wikipedia data and population data together. Both have fields containing country names for just that purpose. After merging the data, you'll invariably run into entries which cannot be merged. Either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.Please remove any rows that do not have matching data, and output them to a CSV file called:`wp_wpds_countries-no_match.csv`Consolidate the remaining data into a single CSV file called:`wp_wpds_politicians_by_country.csv`
###Code
wp_wpds_countries = page_scores_df.merge(page_data_clean, on='rev_id', how='outer')
wp_wpds_countries = wp_wpds_countries.merge(WPDS_data_clean, left_on='country', right_on='Name', how='outer')
# Rows that could not be fully matched (or scored) have a missing value somewhere; they go into the no_match file
wp_wpds_countries_no_match = wp_wpds_countries[wp_wpds_countries.isnull().any(axis=1)]
wp_wpds_countries_no_match.to_csv('data_clean/wp_wpds_countries-no_match.csv', sep=',')
# Consolidate the matched rows into wp_wpds_politicians_by_country
wp_wpds_countries_matched = wp_wpds_countries.dropna().copy()
column_map = {'rev_id':'revision_id', 'page':'article_name', 'Population':'population'}
wp_wpds_countries_matched.rename(columns=column_map, inplace=True)
wp_wpds_politicians_by_country = wp_wpds_countries_matched.loc[:, ['country',
                                                                   'article_name',
                                                                   'revision_id',
                                                                   'article_quality_est',
                                                                   'population']]
wp_wpds_politicians_by_country.to_csv('data_clean/wp_wpds_politicians_by_country.csv')
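# Quick sanity check (a minimal sketch, not in the original notebook): row counts on each side of the split.
print(len(wp_wpds_politicians_by_country), 'matched rows;', len(wp_wpds_countries_no_match), 'unmatched rows')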
###Output
/Users/stiwari/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py:4449: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
Step 5: Analysis___Your analysis will consist of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.Examples:- if a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be .1%.- if a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.
###Code
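# Minimal sketch of the two example calculations above (toy numbers, not assignment data):
# 10 articles for a population of 10,000 people -> 0.1% coverage; 2 of 10 articles rated FA/GA -> 20% high quality.
toy_articles, toy_population, toy_high_quality = 10, 10_000, 2
print(toy_articles / toy_population * 100)     # 0.1
print(toy_high_quality / toy_articles * 100)   # 20.0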
country_article_cnts = wp_wpds_politicians_by_country.loc[:, ['country', 'article_name']].groupby('country').count().reset_index().rename(columns={'article_name':'article_count'})
country_article_cnts.head()
# wp_wpds_politicians_by_country[(wp_wpds_politicians_by_country['article_quality_est'] == 'FA') | wp_wpds_politicians_by_country['article_quality_est'] == 'GA']
hq_articles = wp_wpds_politicians_by_country[(wp_wpds_politicians_by_country['article_quality_est'] == 'FA')
| (wp_wpds_politicians_by_country['article_quality_est'] == 'GA')]
hq_article_counts = hq_articles.loc[:, ['country', 'article_quality_est']].groupby('country').count().reset_index().rename(columns={'article_quality_est':'hq_article_count'})
hq_article_counts.head()
article_pop_df = country_article_cnts.merge(hq_article_counts, on=['country'])
article_pop_df = article_pop_df.merge(wp_wpds_politicians_by_country.loc[:, ['country', 'population']].groupby('country').mean().reset_index(), how='left', on='country')
article_pop_df.head()
article_pop_df['article_per_pop'] = article_pop_df['article_count'] / article_pop_df['population']
article_pop_df['hq_per_article'] = article_pop_df['hq_article_count'] / article_pop_df['article_count']
article_pop_df.head()
###Output
_____no_output_____
###Markdown
Here we will also map the countries to their respective regions
###Code
# For each row, carry forward the most recent non-country name as that country's region
regions = []
for i in range(len(WPDS_data)):
    curr = WPDS_data.iloc[i, :]
    if curr["Type"] != "Country":
        regions.append(curr["Name"])
    else:
        regions.append(regions[-1])
WPDS_data["region"] = regions
region_mapping = WPDS_data[["Name", "region"]]
# #regional
region_pop_df = article_pop_df.merge(region_mapping, how="left", left_on="country", right_on="Name").drop(columns="Name")
region_pop_df = region_pop_df.groupby("region")[["article_count","population"]].sum().reset_index()
region_pop_df["regional_prop"] = region_pop_df["article_count"] / region_pop_df["population"] * 100
region_pop_df
region_quality_df = article_pop_df.merge(region_mapping, left_on='country', right_on='Name', how='left')
region_quality_df = region_quality_df.groupby("region")[['hq_article_count', 'article_count']].sum().reset_index()
region_quality_df["hq_per_article"] = region_quality_df["hq_article_count"] / region_quality_df["article_count"]
region_quality_df
###Output
_____no_output_____
###Markdown
Step 6: Results___Your results from this analysis will be published in the form of data tables. You are being asked to produce six total tables, that show:1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population6. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-qualityEmbed these tables in your Jupyter notebook. You do not need to graph or otherwise visualize the data for this assignment, although you are welcome to do so in addition to generating the data tables described above, if you wish. 1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
article_pop_df.sort_values('article_per_pop', ascending=False).loc[:, ['country', 'article_count', 'population', 'article_per_pop']].head(10)
###Output
_____no_output_____
###Markdown
2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
article_pop_df.sort_values('article_per_pop').loc[:, ['country', 'article_count', 'population', 'article_per_pop']].head(10)
###Output
_____no_output_____
###Markdown
3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
article_pop_df.sort_values('hq_per_article', ascending=False).loc[:, ['country', 'hq_per_article']].head(10)
###Output
_____no_output_____
###Markdown
4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
article_pop_df.sort_values('hq_per_article').head(10).loc[:, ['country', 'hq_per_article']]
###Output
_____no_output_____
###Markdown
5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
region_pop_df.sort_values('regional_prop', ascending=False)
###Output
_____no_output_____
###Markdown
6. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
region_quality_df.sort_values('hq_per_article', ascending=False)
###Output
_____no_output_____
###Markdown
Step 1: Getting the Article and Population DataIn this step, two data sets are downloaded.The Wikipedia politicians by country dataset is downloaded to page_data.csvThe population data is available in CSV format as WPDS_2020_data.csv. This dataset is drawn from the world population data sheet published by the Population Reference Bureau. Step 2: Cleaning the DataClean up the two data sets we got in Step 1- page_data.csv contains some page names that start with the string "Template:", which should be removed- WPDS_2020_data.csv contains some rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS values in the 'geography' field (e.g. AFRICA, OCEANIA). We will separate the country and sub-region level data into different tables, while adding a new column to the country-level data for the sub-region name
###Code
import pandas as pd
page_data = pd.read_csv('page_data.csv')
page_data
page_data_clean = page_data.loc[(page_data['page'].apply(lambda r: (r.startswith('Template:') == False)))]
page_data_clean
wpds_data = pd.read_csv('WPDS_2020_data.csv')
wpds_data
wpds_data_clean = wpds_data[['Name', 'Type', 'Population']]
wpds_data_clean
wpds_data_clean_country = wpds_data_clean.loc[(wpds_data_clean['Type'].apply(lambda r: r =='Country'))]
wpds_data_clean_country
# Process WPDS data to get Sub-region to country map list
def get_country_region_map():
country_region_map = {}
region_name = None
for index, row in wpds_data_clean.iterrows():
t = row['Type']
name = row['Name']
# skip world
if (name == 'WORLD'):
continue
if (t == 'Sub-Region' and name.isupper()):
if region_name is None or region_name != name:
region_name = name
continue
country_region_map[name] = region_name
return country_region_map
country_region_map = get_country_region_map()
country_region_map
wpds_data_clean_country_with_region = wpds_data_clean_country.copy()
wpds_data_clean_country_with_region['sub_region'] = wpds_data_clean_country_with_region['Name'].map(country_region_map)
wpds_data_clean_country_with_region
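# Quick sanity check (sketch, not part of the original notebook): every country row should now
# carry a sub-region label from the map above.
print(wpds_data_clean_country_with_region['sub_region'].isna().sum(), 'countries without a sub-region')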
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality PredictionsIn this step, we will try to get the predicted quality category for each article in the Wikipedia dataset.To support easy reproduction and avoid installing the ORES client, we will call the API directly to get the page quality prediction results.Pages for which no prediction result could be obtained are saved in the page_data_with_no_quality.csv file
###Code
# Use the batch API to get the page quality prediction to speed up
api_endpoint = "https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}"
headers = {
'User-Agent': 'https://github.com/IvyLinMS',
'From': '[email protected]'
}
def api_call(rev_ids):
call = requests.get(api_endpoint.format(rev_ids = rev_ids), headers=headers)
response = call.json()
return response
def get_page_quality_prediction():
page_quality_results = {}
# split list into even size chunks for speedy process
def chunks(lst, n):
for i in range(0, len(lst), n):
yield lst[i:i + n]
# API allow max 50 in a batch
batch_size = 50
for rev_id_chunks in chunks(page_data_clean['rev_id'], batch_size):
rev_ids = '|'.join(str(rev_id) for rev_id in rev_id_chunks)
print('.', end =' ')
result = api_call(rev_ids)
for key in result['enwiki']['scores']:
if 'score' in result['enwiki']['scores'][key]['articlequality']:
page_quality_results[key] = result['enwiki']['scores'][key]['articlequality']['score']['prediction']
else:
page_quality_results[key] = 'N/A'
return page_quality_results
page_quality_results = get_page_quality_prediction()
page_data_with_quality = page_data_clean.copy()
page_data_with_quality['article_quality_est'] = page_data_with_quality['rev_id'].astype(str).map(page_quality_results)
page_data_with_quality
page_data_with_quality_cleaned = page_data_with_quality.loc[page_data_with_quality['article_quality_est'].apply(lambda r: (r != 'N/A'))]
page_data_with_quality_cleaned
# Pages that cannot get prediction result are saved in page_data_with_no_quality.csv file
page_data_with_no_quality = page_data_with_quality.loc[page_data_with_quality['article_quality_est'].apply(lambda r: (r == 'N/A'))]
page_data_with_no_quality.to_csv('page_data_with_no_quality.csv', index=False)
page_data_with_no_quality
###Output
_____no_output_____
###Markdown
Step 4: Combining the DatasetsIn this step, we will merge the wikipedia data and population data together using the country name as the key. - Rows that do not have matching data are output to a CSV file called: wp_wpds_countries-no_match.csv - The remaining data is consolidated into a single CSV file called: wp_wpds_politicians_by_country.csv with columns country, article_name, revision_id, article_quality_est, population
###Code
merged_data = page_data_with_quality_cleaned.merge(wpds_data_clean_country_with_region,how='outer',left_on=['country'],right_on=['Name'])
merged_data
# get the unmatched data
unmatched_data = merged_data.loc[merged_data['country'].isna() | merged_data['Name'].isna()]
unmatched_data.to_csv('wp_wpds_countries-no_match.csv', index=False)
unmatched_data
matched_data = merged_data.loc[(merged_data['country'].notna()) & (merged_data['Name'].notna())]
matched_data
matched_data_renamed = matched_data.rename(columns={'page': 'article_name', 'rev_id': 'revision_id', 'Population': 'population'})
matched_data_renamed
# generate the data in required format
politicians_by_country_data = pd.concat( [matched_data_renamed['country'],
matched_data_renamed['article_name'],
matched_data_renamed['revision_id'],
matched_data_renamed['article_quality_est'],
matched_data_renamed['population'],
], axis=1)
politicians_by_country_data
politicians_by_country_data.to_csv('wp_wpds_politicians_by_country.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 5: AnalysisIn this step, we will compute the country-level and sub-region-level articles-per-population and high-quality-article percentages
###Code
# get aggregate data per country
country_article_count_temp = matched_data_renamed.groupby('country').agg({'article_name':'count'})
country_article_count = country_article_count_temp.rename(columns={'article_name': 'article_count'})
country_article_count
country_population = matched_data_renamed.groupby('country').agg({'population':'mean'})
country_population
high_quality_article_data = matched_data_renamed.loc[(matched_data_renamed['article_quality_est'].apply(lambda r: r == 'FA' or r =='GA'))]
high_quality_article_data
country_high_quality_article_count_temp = high_quality_article_data.groupby('country').agg({'article_name':'count'})
country_high_quality_article_count = country_high_quality_article_count_temp.rename(columns={'article_name': 'high_quality_article_count'})
country_high_quality_article_count
# join these data
country_merged_aggregate_data_intermediate = country_article_count.merge(country_population,how='left',left_on=['country'],right_on=['country'])
country_merged_aggregate_data_intermediate
country_merged_aggregate_data = country_merged_aggregate_data_intermediate.merge(country_high_quality_article_count,how='left',left_on=['country'],right_on=['country'])
country_merged_aggregate_data
country_merged_aggregate_data.fillna(0, inplace=True)
# Calculate the percentage
country_merged_aggregate_data['articles_per_population'] = country_merged_aggregate_data['article_count'].astype('int64') / country_merged_aggregate_data['population'].astype('int64') * 100
country_merged_aggregate_data['high_quality_article_percentage'] = country_merged_aggregate_data['high_quality_article_count'].astype('int64') / country_merged_aggregate_data['article_count'].astype('int64') * 100
country_merged_aggregate_data
# region level data
region_article_count_temp = matched_data_renamed.groupby('sub_region').agg({'article_name':'count'})
region_article_count = region_article_count_temp.rename(columns={'article_name': 'article_count'})
region_article_count
region_high_quality_article_count_temp = high_quality_article_data.groupby('sub_region').agg({'article_name':'count'})
region_high_quality_article_count = region_high_quality_article_count_temp.rename(columns={'article_name': 'high_quality_article_count'})
region_high_quality_article_count
region_population_temp= wpds_data_clean.loc[(wpds_data_clean['Type'].apply(lambda r: r == 'Sub-Region'))]
region_population = region_population_temp.rename(columns={'Population': 'population', 'Name': 'sub_region'})
region_population
# join region level data
region_merged_aggregate_data_intermediate = region_article_count.merge(region_population,how='left',left_on=['sub_region'],right_on=['sub_region'])
region_merged_aggregate_data_intermediate
region_merged_aggregate_data = region_merged_aggregate_data_intermediate.merge(region_high_quality_article_count,how='left',left_on=['sub_region'],right_on=['sub_region'])
region_merged_aggregate_data
# Calculate the percentage for region level data
region_merged_aggregate_data['articles_per_population'] = region_merged_aggregate_data['article_count'].astype('int64') / region_merged_aggregate_data['population'].astype('int64') * 100
region_merged_aggregate_data['high_quality_article_percentage'] = region_merged_aggregate_data['high_quality_article_count'].astype('int64')/ region_merged_aggregate_data['article_count'].astype('int64') * 100
region_merged_aggregate_data
###Output
_____no_output_____
###Markdown
Step 6: Results+ Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population+ Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population+ Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality+ Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality+ Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population+ Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
# Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
country_merged_aggregate_data.sort_values('articles_per_population', ascending=False).head(10)
# Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
country_merged_aggregate_data.sort_values('articles_per_population', ascending=True).head(10)
# Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
country_merged_aggregate_data.sort_values('high_quality_article_percentage', ascending=False).head(10)
# Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
country_merged_aggregate_data.sort_values('high_quality_article_percentage', ascending=True).head(10)
# Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
region_merged_aggregate_data.sort_values('articles_per_population', ascending=False)
# Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
region_merged_aggregate_data.sort_values('high_quality_article_percentage', ascending=False)
###Output
_____no_output_____
###Markdown
512hw2 Bias
###Code
# Import the libraries
import json
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Step 1 - data acquisitionThere are two initial data files downloaded in csv format.1. The Wikipedia politicians by country dataset called 'page_data.csv'. (https://figshare.com/articles/dataset/Untitled_Item/5513449)2. The population data is available in CSV format as 'WPDS_2020_data.csv'. (https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/edit) Step 2 - data cleaningWe need to filter the page_data by removing the page names that start with the string 'Template' in 'page_data.csv'.There is no cleaning for 'WPDS_2020_data.csv'. For convenience, I created a reduced version of that data (full data used in step 6!).
###Code
page_data = pd.read_csv('page_data.csv')
WPDS_data = pd.read_csv('WPDS_2020_data.csv')
print(page_data.shape)
page_data_cleaned = page_data.loc[(page_data['page'].apply(lambda x: (x[0:8] != 'Template')))]
print(page_data_cleaned.shape)
page_data_cleaned
WPDS_data_reduced = WPDS_data[['Name', 'Type', 'Population']]
WPDS_data_reduced
###Output
_____no_output_____
###Markdown
Step 3 - get article quality prediction I used the ORES machine learning system to get the quality prediction for each article.ORES was originally an acronym for "Objective Revision Evaluation Service" but was simply renamed "ORES". ORES is a machine learning tool that can provide estimates of Wikipedia article quality. The article quality estimates are, from best to worst:FA - Featured articleGA - Good articleB - B-class articleC - C-class articleStart - Start-class articleStub - Stub-class articleI chose to use the REST API endpoint to get the prediction results. The API documentation can be found here. (https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model)For this assignment, I only extracted the 'prediction' from the API response.
###Code
# api header
headers = {
'User-Agent': 'https://github.com/tommycqy',
'From': '[email protected]'
}
# api endpoint
endpoint = "https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}"
#
def api_call(endpoint, rev_ids):
call = requests.get(endpoint.format(rev_ids = rev_ids), headers=headers)
response = call.json()
return response
def call_api(score_dict):
for i in range(0, len(page_data_cleaned), 50):
if i + 50 > len(page_data_cleaned):
batch_ids = page_data_cleaned.rev_id.iloc[i:]
else:
batch_ids = page_data_cleaned.rev_id.iloc[i:i+50]
res = api_call(endpoint, '|'.join(str(s) for s in batch_ids))
for key in res['enwiki']['scores']:
if 'score' in res['enwiki']['scores'][key]['articlequality']:
curr_score = res['enwiki']['scores'][key]['articlequality']['score']['prediction']
score_dict[key] = curr_score
else:
score_dict[key] = 'NA'
score_dict = dict()
call_api(score_dict)
print(len(score_dict))
###Output
46701
###Markdown
After we got all the predictions (some of them are NAs), we need to filter out the NAs and keep a separate log.There are 276 articles for which we could not get a prediction score!
###Code
page_data_cleaned['article_quality_est.'] = page_data_cleaned['rev_id'].astype(str).map(score_dict)
no_score_log = page_data_cleaned.loc[page_data_cleaned['article_quality_est.']=='NA']
print(len(no_score_log))
no_score_log
page_data_reduced = page_data_cleaned.loc[page_data_cleaned['article_quality_est.']!='NA']
print(len(page_data_reduced))
###Output
46425
###Markdown
Step 4 - combine the datasets Remove any rows that do not have matching data, and output them to a CSV file called: wp_wpds_countries-no_match.csvConsolidate the remaining data into a single CSV file called: wp_wpds_politicians_by_country.csvThe schema for that file is the following columns: country, article_name, revision_id, article_quality_est., population.Note: revision_id here is the same thing as rev_id, which you used to get scores from ORES.
###Code
df_merge = page_data_reduced.merge(WPDS_data_reduced,how='outer',left_on=['country'],right_on=['Name'])
print(len(df_merge))
no_match_df = df_merge.loc[df_merge['country'].isna() | df_merge['Name'].isna()]
no_match_index = no_match_df.index
no_match_df
no_match_df.to_csv('wp_wpds_countries-no_match.csv', index=False)
filter_df = df_merge[~df_merge.index.isin(no_match_index)]
filter_df.rev_id = filter_df.rev_id.astype('int64')
filter_df.Population = filter_df.Population.astype('int64')
filter_df_reduced = filter_df[['page','country', 'rev_id', 'article_quality_est.','Population']]
filter_df_reduced.rename(columns={'page':'article_name','rev_id':'revision_id','Population':'population'}, inplace=True)
filter_df_reduced
filter_df_reduced.to_csv('wp_wpds_politicians_by_country.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 5 - analysisThe following are the code for generating the tables in step 6.
###Code
# for tasks 1 and 2
article_count_by_country = filter_df_reduced.groupby('country').agg({'article_name':'count'})
population_by_country = filter_df_reduced.groupby('country').agg({'population':'mean'})
article_proportion_by_country = article_count_by_country['article_name'].astype('int64') / population_by_country['population'].astype('int64') * 100
# for tasks 3 and 4
groups = filter_df_reduced.groupby('country')
group = groups.apply(lambda g: g[(g['article_quality_est.'] == 'FA') | (g['article_quality_est.'] == 'GA')])
high_quality_article_count = group.groupby(level=[0]).size()
df = filter_df_reduced.set_index('country')
article_count_by_country_with_hqa = df.loc[high_quality_article_count.index].groupby('country').agg({'article_name':'count'})
hqa_proportion_by_country = high_quality_article_count / article_count_by_country_with_hqa['article_name'].astype('int64') * 100
# for tasks 5 and 6
region_population_temp= WPDS_data.loc[(WPDS_data['Type'].apply(lambda x: x == 'Sub-Region'))]
region_population = region_population_temp.rename(columns={'Population': 'population', 'Name': 'sub_region'})
population_by_region = region_population[['sub_region', 'population']]
article_pop_by_country = article_count_by_country.merge(population_by_country,how='left',left_on=['country'],right_on=['country'])
hqa_count_by_country = pd.DataFrame(high_quality_article_count, columns={'hqa_count'})
article_pop_hqa_by_country = article_pop_by_country.merge(hqa_count_by_country,how='left',left_on=['country'],right_on=['country'])
article_pop_hqa_by_country.fillna(0, inplace=True)
article_pop_hqa_by_country.rename(columns={'article_name':'article_count'}, inplace=True)
article_pop_hqa_by_country.reset_index(inplace=True)
article_pop_hqa_by_country
# get the dictionary for country region by iterating the pandas dataframe
def get_country_region_dict():
country_region_dict = {}
region_name = None
for index, row in WPDS_data.iterrows():
type_ = row['Type']
name = row['Name']
if (name == 'WORLD'):
continue
if (type_ == 'Sub-Region' and name.isupper()):
if region_name is None or region_name != name:
region_name = name
continue
country_region_dict[name] = region_name
return country_region_dict
country_region_dict = get_country_region_dict()
print(len(country_region_dict))
article_pop_hqa_by_country['sub_region'] = article_pop_hqa_by_country['country'].map(country_region_dict)
article_pop_hqa_by_country
article_count_by_region = article_pop_hqa_by_country.groupby('sub_region').agg({'article_count':'sum'})
article_count_by_region.reset_index(inplace=True)
article_count_by_region
region_df = article_count_by_region.merge(population_by_region,how='left',left_on=['sub_region'],right_on=['sub_region'])
region_df['article_proportion'] = region_df['article_count']/region_df['population'] * 100
region_df
hqa_count_by_region = article_pop_hqa_by_country.groupby('sub_region').agg({'hqa_count':'sum'})
hqa_count_by_region.reset_index(inplace=True)
hqa_count_by_region
final_region_df = region_df.merge(hqa_count_by_region,how='left',left_on=['sub_region'],right_on=['sub_region'])
final_region_df['hqa_proportion'] = final_region_df['hqa_count']/final_region_df['article_count'] * 100
final_region_df
###Output
_____no_output_____
###Markdown
Step 6 - analysis resultsThere are six analysis tables below, corresponding to the six analysis questions. First Analysis Task: Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
print(article_proportion_by_country.sort_values(ascending=False).head(10))
###Output
country
Tuvalu 0.540000
Nauru 0.472727
San Marino 0.238235
Monaco 0.105263
Liechtenstein 0.071795
Marshall Islands 0.064912
Tonga 0.063636
Iceland 0.054620
Andorra 0.041463
Federated States of Micronesia 0.033962
dtype: float64
###Markdown
Second Analysis Task: Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
print(article_proportion_by_country.sort_values(ascending=True).head(10))
###Output
country
India 0.000069
Indonesia 0.000077
China 0.000081
Uzbekistan 0.000082
Ethiopia 0.000088
Zambia 0.000136
Korea, North 0.000140
Thailand 0.000168
Mozambique 0.000186
Bangladesh 0.000187
dtype: float64
###Markdown
Third Analysis Task: Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
print(hqa_proportion_by_country.sort_values(ascending=False).head(10))
###Output
country
Korea, North 22.222222
Saudi Arabia 12.820513
Romania 12.244898
Central African Republic 12.121212
Uzbekistan 10.714286
Mauritania 10.416667
Guatemala 8.433735
Dominica 8.333333
Syria 7.812500
Benin 7.692308
dtype: float64
###Markdown
Fourth Analysis Task: Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality(Note: I only output the countries with at least one 'GA' or 'FA' quality article in this task; if done the other way, the bottom ten countries all have 0% high quality articles!)
###Code
print(hqa_proportion_by_country.sort_values(ascending=True).head(10))
###Output
country
Belgium 0.192678
Tanzania 0.247525
Switzerland 0.248756
Nepal 0.280899
Peru 0.285714
Nigeria 0.295858
Portugal 0.314465
Colombia 0.350877
Lithuania 0.409836
Morocco 0.485437
dtype: float64
###Markdown
Fifth Analysis Task: Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
final_region_df.sort_values('article_proportion', ascending=False)
###Output
_____no_output_____
###Markdown
Sixth Analysis Task: Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
final_region_df.sort_values('hqa_proportion', ascending=False)
###Output
_____no_output_____
###Markdown
Data Analysis
###Code
# calcualte articles-per-population rate for each country
num_articles_by_country = data_cleaned.groupby(['country'])['page'].count().reset_index()
num_articles_by_country.rename(columns={'page':'num_articles'}, inplace=True)
population_by_country = data_cleaned[['country','Population']].drop_duplicates().reset_index()
population_by_country.drop(columns=['index'],inplace=True)
articles_per_population = num_articles_by_country.merge(population_by_country, on='country', how='right')
articles_per_population['articles_per_pop'] = articles_per_population['num_articles'] * 100.0/articles_per_population['Population']
articles_per_population
# calcualte good-articles rate for each country
high_quality_article = data_cleaned[data_cleaned['prediction'].isin(['FA','GA'])].reset_index()
good_articles_by_country = high_quality_article.groupby(['country'])['page'].count().reset_index()
percent_good_articles = num_articles_by_country.merge(good_articles_by_country, on='country', how='left')
percent_good_articles.rename(columns={'page':'num_good_articles'},inplace=True)
percent_good_articles['num_good_articles']= percent_good_articles['num_good_articles'].fillna(0)
percent_good_articles['percent_good_articles'] = percent_good_articles['num_good_articles'] * 100.0/percent_good_articles['num_articles']
percent_good_articles
# Identify which country belongs to which region
cur_region = ''
pop_data_copy = pop_data.copy()
pop_data_copy['region'] = np.nan
for row_num in range(len(pop_data_copy)):
if pop_data_copy.iloc[row_num,1].isupper():
cur_region = pop_data_copy.iloc[row_num,1]
pop_data_copy.loc[row_num,'region'] = cur_region
pop_data_copy
# calculate the articles-per-population rate for each sub-region
region=region[region['Name']!= 'WORLD'].reset_index()
region.drop(columns='index',inplace=True)
population_by_region = region[['Name','Population']]
data_cleaned_region_all = data_cleaned.merge(pop_data_copy, left_on='country', right_on='Name',how='left')
data_cleaned_region_all = data_cleaned_region_all[['page','country','region']]
num_articles_by_region = data_cleaned_region_all.groupby(['region'])['page'].count().reset_index()
num_articles_by_region.rename(columns={'page':'num_articles'}, inplace=True)
articles_per_population_region = num_articles_by_region.merge(population_by_region, left_on='region',right_on='Name', how='right')
articles_per_population_region.drop(columns='region',inplace=True)
articles_per_population_region.rename(columns={'Name':'region','page':'num_articles'},inplace=True)
# calculate the articles-per-population rate for each continent by summing up numbers of sub-regions
articles_per_population_region.loc[0,'num_articles'] = articles_per_population_region.loc[1:6,'num_articles'].sum()
articles_per_population_region.loc[7,'num_articles'] = articles_per_population_region.loc[8:10,'num_articles'].sum()
articles_per_population_region.loc[11,'num_articles'] = articles_per_population_region.loc[12:16,'num_articles'].sum()
articles_per_population_region.loc[17,'num_articles'] = articles_per_population_region.loc[18:21,'num_articles'].sum()
articles_per_population_region['articles_per_pop'] = articles_per_population_region['num_articles'] * 100.0/articles_per_population_region['Population']
articles_per_population_region
# calculate the high-quality-articles rate for each sub-region
data_cleaned_region = high_quality_article.merge(pop_data_copy, left_on='country', right_on='Name',how='left')
data_cleaned_region = data_cleaned_region[['page','country','region']]
good_articles_by_region = data_cleaned_region.groupby(['region'])['page'].count().reset_index()
good_articles_by_region.rename(columns={'page':'num_good_articles'}, inplace=True)
percent_good_articles_region = num_articles_by_region.merge(good_articles_by_region, on='region')
percent_good_articles_region = percent_good_articles_region.merge(region[['Name']], left_on='region',right_on='Name',how='right')
percent_good_articles_region.drop(columns='region',inplace=True)
percent_good_articles_region.rename(columns={'page':'num_good_articles','Name':'region'}, inplace=True)
# calculate the high-quality-articles rate for each continent by summing up numbers of sub-regions
percent_good_articles_region.loc[0,'num_good_articles'] = percent_good_articles_region.loc[1:6,'num_good_articles'].sum()
percent_good_articles_region.loc[7,'num_good_articles'] = percent_good_articles_region.loc[8:10,'num_good_articles'].sum()
percent_good_articles_region.loc[11,'num_good_articles'] = percent_good_articles_region.loc[12:16,'num_good_articles'].sum()
percent_good_articles_region.loc[17,'num_good_articles'] = percent_good_articles_region.loc[18:21,'num_good_articles'].sum()
percent_good_articles_region.loc[0,'num_articles'] = percent_good_articles_region.loc[1:6,'num_articles'].sum()
percent_good_articles_region.loc[7,'num_articles'] = percent_good_articles_region.loc[8:10,'num_articles'].sum()
percent_good_articles_region.loc[11,'num_articles'] = percent_good_articles_region.loc[12:16,'num_articles'].sum()
percent_good_articles_region.loc[17,'num_articles'] = percent_good_articles_region.loc[18:21,'num_articles'].sum()
percent_good_articles_region['percent_good_articles'] = percent_good_articles_region['num_good_articles']/percent_good_articles_region['num_articles']
percent_good_articles_region
###Output
_____no_output_____
###Markdown
Results
###Code
# Top 10 countries by coverage aka articles-per-population rate
articles_per_population.sort_values('articles_per_pop', ascending=False, ignore_index=True)[:10]
# Bottom 10 countries by coverage
articles_per_population['Population'] = articles_per_population['Population'].map(lambda x: '%.1f' % x)
articles_per_population.sort_values('articles_per_pop', ascending=True, ignore_index=True)[:10]
# Top 10 countries by relative quality aka high-quailty-articles rate
percent_good_articles.sort_values('percent_good_articles', ascending=False, ignore_index=True)[:10]
# Bottom 10 countries by relative quality
percent_good_articles.sort_values('percent_good_articles', ascending=True, ignore_index=True)[:10]
# Geographic regions by coverage (sort numerically before formatting for display, so string sorting cannot reorder rows)
articles_per_population_region = articles_per_population_region.sort_values('articles_per_pop', ascending=False, ignore_index=True)
articles_per_population_region['articles_per_pop'] = articles_per_population_region['articles_per_pop'].map(lambda x: '%.6f' % x)
articles_per_population_region
# Geographic regions by relative quality (again sort numerically before formatting for display)
percent_good_articles_region = percent_good_articles_region.sort_values('percent_good_articles', ascending=False, ignore_index=True)
percent_good_articles_region['percent_good_articles'] = percent_good_articles_region['percent_good_articles'].map(lambda x: '%.6f' % x)
percent_good_articles_region
###Output
_____no_output_____
###Markdown
DATA 512 - A2: Bias in Data**Corey Christopherson****10/17/2019** The purpose of this project is to investigate the quality of Wikipedia articles on political figures from different countries and explore the concept of any suspected bias based on the results.Project information and data can be found in the Github repo https://github.com/chrico7/data-512-a2Three data sets were used to conduct this analysis:1. page_data.csv - Wikipedia Politicians by Country Data https://figshare.com/articles/Untitled_Item/55134492. WPDS_2018_data.csv - Population Reference Bureau World Population Datasheet https://canvas.uw.edu/courses/1319253/files/3. Objective Revision Evaluation Service (ORES) Article Quality Scores API https://www.mediawiki.org/wiki/ORES
###Code
import numpy as np
import pandas as pd
import requests
import json
import time
###Output
_____no_output_____
###Markdown
Data AcquisitionThe two data sets in the list above were downloaded to a local directory as csv files and read in using Pandas read_csv. These raw files can be viewed in the Github repo noted above.
###Code
#
### ACQUIRE DATA ###
#
# Define data paths
path = r'C:/Users/chrico7/Documents/__Corey Christopherson/MS Data Science/Courses/HCDE 512/Week 2/Homework/'
polPath = r'C:/Users/chrico7/Documents/__Corey Christopherson/MS Data Science/Courses/HCDE 512/Week 2/Homework/Data/country/data/'
popPath = r'C:/Users/chrico7/Documents/__Corey Christopherson/MS Data Science/Courses/HCDE 512/Week 2/Homework/Data/'
# Read in csv data
polData_raw = pd.read_csv(r'{}page_data.csv'.format(polPath))
popData_raw = pd.read_csv(r'{}WPDS_2018_data.csv'.format(popPath))
###Output
_____no_output_____
###Markdown
Data CleaningBoth data sets were cleaned to ensure the quality of the final data by executing the following actions:1. polData Page names that begin with the string 'Template' were removed since these are not Wikipedia pages2. popData Records in the 'geography' field that were ALL CAPS were broken out to a separate table because these are aggregates and not countries
###Code
#
### CLEAN DATA ###
#
# Politician Data (polData)
polData = polData_raw.copy()
# Remove page names that start with the string 'Template'
polData = polData[~polData['page'].str.contains('Template',regex=True)].reset_index(drop=True)
# Population Data (popData)
popData = popData_raw.copy()
# Break out records with ALL CAP records in 'geography' field
popData_agg = popData[popData['Geography'].str.isupper()].reset_index(drop=True)
popData = popData[~popData['Geography'].str.isupper()].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
A separate table was then generated from the aggregate fields extracted from the population data to map regions to the corresponding countries
###Code
# Derive map between Geogrpahy and Country
geoMap = pd.DataFrame()
for i in range(popData_agg['Geography'].shape[0]):
    # derive the start and stop points for each region
    start = popData_raw.loc[popData_raw['Geography']==popData_agg['Geography'][i],:].index[0]
    if i == popData_agg['Geography'].shape[0] - 1:
        # the last region runs to the end of the raw data
        stop = popData_raw.shape[0]
    else:
        # every other region stops at the next region's header row
        stop = popData_raw.loc[popData_raw['Geography']==popData_agg['Geography'][i+1],:].index[0]
    # extract countries for Region i
    temp = popData_raw.iloc[start:stop,:].copy()
    temp.loc[:,'Region'] = popData_agg['Geography'][i]
    # add country list to geoMap frame
    geoMap = geoMap.append(temp[['Region','Geography']], ignore_index=True, sort=False)
# trim out Region rows from geoMap frame
geoMap = geoMap[geoMap['Region']!=geoMap['Geography']]
###Output
_____no_output_____
###Markdown
Data ProcessingFirst, the quality score for each article was obtained from the ORES API based on the rev_id article identifier.
###Code
#
### PROCESS DATA ###
#
# Get article quality predictions
###Output
_____no_output_____
###Markdown
A function to use the ORES API was obtained with permission from https://github.com/Ironholds/data-512-a2 and modified to better work with Pandas.The function passes a list of rev_ids to the API using the python requests library and then converts the response to json
###Code
def get_ores_data(revision_ids):
"""
Function to get ores data when passed a list of revision ids
"""
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
# Call the API and convert to json
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
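# Hedged usage sketch (not part of the original notebook): score a tiny sample of rev_ids and print
# each predicted class, assuming the response keeps the enwiki -> scores -> rev_id -> wp10 structure
# that the batch loop below relies on.
sample_response = get_ores_data(polData['rev_id'].head(3).tolist())
for rid, result in sample_response['enwiki']['scores'].items():
    print(rid, result.get('wp10', {}).get('score', {}).get('prediction', 'no score returned'))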
###Output
_____no_output_____
###Markdown
Exploration of the API revealed that the maximum number of rev_ids it could process in one pull was 100. The data frame was therefore divided into 100 element chunks and fed into the API in order.
###Code
# Loop through 100 line chunks and compile cumulative ores data frame
start = time.time()
oresData_raw = pd.DataFrame()
for chunk in np.array_split(polData.iloc[:,2],np.ceil(polData.iloc[:,:].shape[0] / 100)):
temp_dict = get_ores_data(chunk)
temp_revid = pd.Series(list(temp_dict['enwiki']['scores'].keys()))
temp_df = pd.io.json.json_normalize(temp_dict['enwiki']['scores'].values())
temp_df.loc[:,'rev_id.rev_id'] = temp_revid
oresData_raw = oresData_raw.append(temp_df, ignore_index=True, sort=False)
print(time.time() - start)
###Output
198.28700017929077
###Markdown
The frame columns were then renamed to more readable values.
###Code
# Rename columns to common terms and set data types
colDict = dict(zip(pd.Series(oresData_raw.columns),
pd.Series(oresData_raw.columns).str.rsplit('.',n=1,expand=True)[1]))
oresData = oresData_raw.rename(colDict, axis='columns')
oresData.rename({'message':'error.message','type':'error.type'},axis=1,inplace=True)
oresData.loc[:,'rev_id'] = oresData['rev_id'].astype(int)
oresData.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 46701 entries, 0 to 46700
Data columns (total 10 columns):
prediction 46546 non-null object
B 46546 non-null float64
C 46546 non-null float64
FA 46546 non-null float64
GA 46546 non-null float64
Start 46546 non-null float64
Stub 46546 non-null float64
error.message 155 non-null object
error.type 155 non-null object
rev_id 46701 non-null int32
dtypes: float64(6), int32(1), object(3)
memory usage: 3.4+ MB
###Markdown
Some of the rev_ids returned errors instead of scores. The records were extracted and placed in a separate frame for future reference if needed.
###Code
# Extract good and bad data
oresData_good = oresData[oresData['error.type'].isnull()]
oresData_bad = oresData[~oresData['error.type'].isnull()]
###Output
_____no_output_____
###Markdown
The article score was extracted and added on to the polData frame. Unmatched rows were then removed and stored.
###Code
# Add article score to polData and extract bad rows
polData_score = pd.merge(polData,
oresData_good[['rev_id','prediction']],
how='outer',on='rev_id')
polData_noScore = polData_score[polData_score['prediction'].isnull()]
polData_score = polData_score[~polData_score['prediction'].isnull()]
###Output
_____no_output_____
###Markdown
The population data was then added to the data to form a common frame. Again, unmatched rows were then removed and stored.
###Code
# Add popData to pol data and extract bad rows
allData_raw = pd.merge(polData_score,
popData,how='outer',left_on='country',right_on='Geography')
allData_raw_noGeo = allData_raw[allData_raw['Geography'].isnull()]
allData_raw_noPage = allData_raw[allData_raw['page'].isnull()]
allData_raw_good = allData_raw[(~allData_raw['Geography'].isnull())&
(~allData_raw['page'].isnull())]
###Output
_____no_output_____
###Markdown
Finally, the columns were renamed to more user-friendly terms and reordered, and the population counts were converted from millions to raw counts.
###Code
# Format columns
finalCols = ['country','article_name','revision_id','article_quality','population']
allData = allData_raw_good.rename({'page':'article_name',
'rev_id':'revision_id',
'prediction':'article_quality',
'Population mid-2018 (millions)':'population'},axis=1)
allData = allData[finalCols]
# Format data types
allData.loc[:,'population'] = allData['population'].str.replace(',','',regex=True).astype(float)*1000000
allData.info()
# Output unmatched rows
badData = pd.concat([polData_noScore,allData_raw_noGeo,allData_raw_noPage], sort=False)
badData.to_csv(r'{}wp_wpds_countries-no_match.csv'.format(path),header=True,index=False)
# Output final data
allData.to_csv(r'{}wp_wpds_politicians_by_country.csv'.format(path),header=True,index=False)
###Output
_____no_output_____
###Markdown
Data Analysis
The final data was then split into several different frames to calculate the following metrics:
1. Country article count
2. Country population total
3. Country article count for good articles (FA and GA)
4. Region article count
5. Region population sum
6. Region article count for good articles (FA and GA)

These were then used to generate the following tables:
1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
6. Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
#
### ANALYZE DATA ###
#
allData.info()
# Make a frame with Region information
allData_region = pd.merge(allData, geoMap, how='left',left_on='country',right_on='Geography').drop('Geography',axis=1)
# Calculate metrics
country_article_count = (allData.groupby('country')['article_name'].count()
.reset_index().rename({'article_name':'article_count'},axis=1))
country_pop_sum = (allData.groupby('country')['population'].mean()
.reset_index().rename({'population':'population_sum'},axis=1))
country_article_good = (allData[(allData['article_quality']=='GA')|
(allData['article_quality']=='FA')]
.groupby('country')['article_name'].count()
.reset_index().rename({'article_name':'article_count_good'},axis=1))
region_article_count = (allData_region.groupby('Region')['article_name'].count()
.reset_index().rename({'article_name':'article_count'},axis=1))
region_pop_sum = (allData_region.groupby('Region')['population'].mean()
.reset_index().rename({'population':'population_sum'},axis=1))
region_article_good = (allData_region[(allData_region['article_quality']=='GA')|
(allData_region['article_quality']=='FA')]
.groupby('Region')['article_name'].count()
.reset_index().rename({'article_name':'article_count_good'},axis=1))
# Calculate data tables
country_cov = pd.merge(country_article_count,country_pop_sum,how='left',on='country')
country_cov.loc[:,'coverage%'] = country_cov['article_count']/country_cov['population_sum']*100
country_qual = pd.merge(country_article_good,country_article_count,how='left',on='country')
country_qual.loc[:,'quality%'] = country_qual['article_count_good']/country_qual['article_count']*100
region_cov = pd.merge(region_article_count,region_pop_sum,how='left',on='Region')
region_cov.loc[:,'coverage%'] = region_cov['article_count']/region_cov['population_sum']*100
region_qual = pd.merge(region_article_good,region_article_count,how='left',on='Region')
region_qual.loc[:,'quality%'] = region_qual['article_count_good']/region_qual['article_count']*100
# Output data tables
# Top 10 Countries by Coverage
country_cov.sort_values('coverage%',ascending=False)[0:10].reset_index(drop=True)
# Bottom 10 Countries by Coverage
country_cov.sort_values('coverage%',ascending=True)[0:10].reset_index(drop=True)
# Top 10 Countries by Relative Quality
country_qual.sort_values('quality%',ascending=False)[0:10].reset_index(drop=True)
# Bottom 10 Countries by Relative Quality
country_qual.sort_values('quality%',ascending=True)[0:10].reset_index(drop=True)
# Geographic Regions by Coverage
region_cov.sort_values('coverage%',ascending=False).reset_index(drop=True)
# Geographic Regions by Relative Quality
region_qual.sort_values('quality%',ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
A2: Bias in data
Step 1: Getting the Article and Population Data
###Code
import pandas as pd
from tqdm import tqdm
article_data = pd.read_csv('page_data.csv')
population_data = pd.read_csv('WPDS_2020_data.csv')
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data
###Code
# cleaning article_data
article_data_cleaned = article_data[~article_data['page'].str.startswith('Template:')]
print(article_data_cleaned.head())
# cleaning population_data
population_data_cleaned = population_data[~population_data['Name'].str.isupper()]
print(population_data_cleaned.head())
population_data_regional = population_data[population_data['Name'].str.isupper()]
###Output
FIPS Name Type TimeFrame Data (M) Population
3 DZ Algeria Country 2019 44.357 44357000
4 EG Egypt Country 2019 100.803 100803000
5 LY Libya Country 2019 6.891 6891000
6 MA Morocco Country 2019 35.952 35952000
7 SD Sudan Country 2019 43.849 43849000
###Markdown
Step 3: Getting Article Quality Predictions
###Code
import json
import requests
headers = {
'User-Agent': 'https://github.com/zhangjunhao0',
'From': '[email protected]'
}
def ores_data(revision_ids):
revids = '|'.join(str(x) for x in revision_ids)
ores_api = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {
'project': 'enwiki',
'model': 'articlequality',
'revids': revids
}
return requests.get(ores_api.format(**params), headers=headers).json()
print(article_data_cleaned.shape)
# create batches and use batch processing, 50 per batch
revision_ids = list(article_data_cleaned['rev_id'])
#revision_ids = revision_ids[:1001]
total_batch = (len(revision_ids)-1)//50+1
scores_list = []
rev_ids_no_scores = []
for i in tqdm(range(total_batch)):
# Call the API
ores_data_json = None
if i == total_batch-1:
ores_data_json = ores_data(revision_ids[50*i:])
else:
ores_data_json = ores_data(revision_ids[50*i:50*i+50])
# Extract the key value pair
ores_data_scores = ores_data_json['enwiki']['scores']
for rev_id in ores_data_scores:
if 'score' in ores_data_scores[rev_id]['articlequality']:
scores_list.append([rev_id, ores_data_scores[rev_id]['articlequality']['score']['prediction']])
else:
rev_ids_no_scores.append(rev_id)
df_article_score = pd.DataFrame(scores_list, columns=['rev_id','article_quality_est.'])
df_article_score['rev_id'] = df_article_score['rev_id'].astype('int')
df_article_with_quality = article_data_cleaned.merge(df_article_score, how='inner', on='rev_id')
print(df_article_with_quality.head())
with open('articles_with_no_ores_scores', 'w') as f:
for item in rev_ids_no_scores:
f.write("%s\n" % item)
###Output
_____no_output_____
###Markdown
Step 4: Combining the Datasets
###Code
df_merged = df_article_with_quality.merge(population_data_cleaned, how='outer', left_on='country', right_on='Name')
print(df_merged.head(20))
# keep selected column
df_merged['article_name'] = df_merged['page']
df_merged['revision_id'] = df_merged['rev_id']
df_merged['population'] = df_merged['Population']
df_merged = df_merged[['country','article_name','revision_id','article_quality_est.','population']]
# rows with unmatched data
df_unmatched = df_merged[df_merged.isnull().any(axis=1)]
df_unmatched.to_csv('wp_wpds_countries-no_match.csv', index=False)
# rows with matched data
df_matched = df_merged.dropna()
df_matched.to_csv('wp_wpds_politicians_by_country.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 5: Analysis
###Code
# load data
article_with_population = pd.read_csv('wp_wpds_politicians_by_country.csv')
# articles-per-population for each country
high_quality_article = article_with_population[(article_with_population['article_quality_est.']=='FA')|(article_with_population['article_quality_est.']=='GA')]
articles_per_population = high_quality_article.groupby(['country','population'])['revision_id'].count().to_frame().reset_index()
articles_per_population['articles-per-population'] = (articles_per_population['revision_id']/articles_per_population['population']) * 100
articles_per_population = articles_per_population[['country', 'articles-per-population']]
print(articles_per_population.head())
# percent high quality articles for each country
high_quality_count_by_country = high_quality_article.groupby('country')['revision_id'].count().to_frame().reset_index()
total_count_by_country = article_with_population.groupby('country')['revision_id'].count().to_frame().reset_index()
percent_high_quality_by_country = total_count_by_country.merge(high_quality_count_by_country, how='inner', on='country')
percent_high_quality_by_country['percent_high_quality'] = percent_high_quality_by_country['revision_id_y']/percent_high_quality_by_country['revision_id_x']*100
percent_high_quality_by_country = percent_high_quality_by_country[['country', 'percent_high_quality']]
percent_high_quality_by_country.head()
population_data_regional.head(30)
# articles-per-population for each smaller region
# find the corresponding region for each country
country_to_subregion_mapping = population_data
current_region = None
country_names = population_data['Name']
res_region = []
for i in range(len(country_names)):
if country_names[i].isupper():
current_region = country_names[i]
res_region.append(current_region)
country_to_subregion_mapping['region'] = res_region
country_to_subregion_mapping = country_to_subregion_mapping[~country_to_subregion_mapping['Name'].str.isupper()]
country_to_subregion_mapping = country_to_subregion_mapping[['Name','region']]
article_with_population_and_region = article_with_population.merge(country_to_subregion_mapping, how='inner', left_on='country', right_on='Name')
# analysis
high_quality_article_region = article_with_population_and_region[(article_with_population_and_region['article_quality_est.']=='FA')|(article_with_population_and_region['article_quality_est.']=='GA')]
articles_per_population_region = high_quality_article_region.groupby('region').agg({'revision_id':'count'}).reset_index()
articles_per_population_region = articles_per_population_region.merge(population_data[['Name','Population']], left_on='region',right_on='Name')
articles_per_population_region['articles-per-population'] = (articles_per_population_region['revision_id']/articles_per_population_region['Population']) * 100
articles_per_population_region = articles_per_population_region[['region', 'articles-per-population']]
print(articles_per_population_region)
# articles-per-population for each larger subregion (AFRICA, LATIN AMERICA AND THE CARIBBEAN, ASIA, EUROPE)
# find the corresponding larger region for each country
country_to_large_subregion_mapping = population_data
current_region = None
country_names = population_data['Name']
res_region = []
large_regions = ['AFRICA', 'LATIN AMERICA AND THE CARIBBEAN', 'ASIA', 'EUROPE']
for i in range(len(country_names)):
if country_names[i] in large_regions:
current_region = country_names[i]
elif country_names[i] == 'NORTHERN AMERICA': # since northern america does not belong to any larger subregion
current_region = None
res_region.append(current_region)
country_to_large_subregion_mapping['region'] = res_region
country_to_large_subregion_mapping = country_to_large_subregion_mapping[~country_to_large_subregion_mapping['Name'].str.isupper()]
country_to_large_subregion_mapping = country_to_large_subregion_mapping[['Name','region']]
country_to_large_subregion_mapping = country_to_large_subregion_mapping.dropna()
article_with_population_and_large_region = article_with_population.merge(country_to_large_subregion_mapping, how='inner', left_on='country', right_on='Name')
# analysis
high_quality_article_large_region = article_with_population_and_large_region[(article_with_population_and_large_region['article_quality_est.']=='FA')|(article_with_population_and_large_region['article_quality_est.']=='GA')]
articles_per_population_large_region = high_quality_article_large_region.groupby('region').agg({'revision_id':'count'}).reset_index()
articles_per_population_large_region = articles_per_population_large_region.merge(population_data[['Name','Population']], left_on='region',right_on='Name')
articles_per_population_large_region['articles-per-population'] = (articles_per_population_large_region['revision_id']/articles_per_population_large_region['Population']) * 100
articles_per_population_large_region = articles_per_population_large_region[['region', 'articles-per-population']]
print(articles_per_population_large_region)
# concatenate both smaller and large regions for final result
articles_per_population_all_region = pd.concat([articles_per_population_region, articles_per_population_large_region], ignore_index=True)
print(articles_per_population_all_region)
# percent high quality articles for each small region
high_quality_count_by_region = high_quality_article_region.groupby('region')['revision_id'].count().to_frame().reset_index()
total_count_by_region = article_with_population_and_region.groupby('region')['revision_id'].count().to_frame().reset_index()
percent_high_quality_by_region = total_count_by_region.merge(high_quality_count_by_region, how='inner', on='region')
percent_high_quality_by_region['percent_high_quality'] = percent_high_quality_by_region['revision_id_y']/percent_high_quality_by_region['revision_id_x']*100
percent_high_quality_by_region = percent_high_quality_by_region[['region', 'percent_high_quality']]
# percent high quality articles for each large region
high_quality_count_by_large_region = high_quality_article_large_region.groupby('region')['revision_id'].count().to_frame().reset_index()
total_count_by_large_region = article_with_population_and_large_region.groupby('region')['revision_id'].count().to_frame().reset_index()
percent_high_quality_by_large_region = total_count_by_large_region.merge(high_quality_count_by_large_region, how='inner', on='region')
percent_high_quality_by_large_region['percent_high_quality'] = percent_high_quality_by_large_region['revision_id_y']/percent_high_quality_by_large_region['revision_id_x']*100
percent_high_quality_by_large_region = percent_high_quality_by_large_region[['region', 'percent_high_quality']]
# concatenate both smaller and large regions for final result
percent_high_quality_by_all_region = pd.concat([percent_high_quality_by_region, percent_high_quality_by_large_region], ignore_index=True)
print(percent_high_quality_by_all_region)
###Output
region percent_high_quality
0 CARIBBEAN 1.870504
1 CENTRAL AMERICA 1.490603
2 CENTRAL ASIA 2.857143
3 EAST ASIA 3.073190
4 EASTERN AFRICA 1.398881
5 EASTERN EUROPE 3.161844
6 MIDDLE AFRICA 2.406015
7 NORTHERN AFRICA 2.113459
8 NORTHERN AMERICA 5.470805
9 NORTHERN EUROPE 2.710603
10 OCEANIA 2.015355
11 SOUTH AMERICA 1.319261
12 SOUTH ASIA 1.626202
13 SOUTHEAST ASIA 3.613861
14 SOUTHERN AFRICA 1.419558
15 SOUTHERN EUROPE 1.994609
16 WESTERN AFRICA 1.870033
17 WESTERN ASIA 3.472493
18 WESTERN EUROPE 1.228070
19 AFRICA 1.740020
20 ASIA 2.708494
21 EUROPE 2.186226
22 LATIN AMERICA AND THE CARIBBEAN 1.442125
###Markdown
Step 6: Results Top 10 countries by coverage
###Code
articles_per_population = articles_per_population.sort_values(by='articles-per-population', ascending=False)
print(articles_per_population.head(10))
###Output
country articles-per-population
133 Tuvalu 0.040000
33 Dominica 0.001389
141 Vanuatu 0.000935
51 Iceland 0.000543
56 Ireland 0.000500
86 Montenegro 0.000322
81 Martinique 0.000281
12 Bhutan 0.000274
91 New Zealand 0.000261
106 Romania 0.000218
###Markdown
Bottom 10 countries by coverage
###Code
articles_per_population = articles_per_population.sort_values(by='articles-per-population', ascending=True)
print(articles_per_population.head(10))
###Output
country articles-per-population
52 India 9.285051e-07
94 Nigeria 9.702144e-07
128 Tanzania 1.674088e-06
38 Ethiopia 1.740402e-06
8 Bangladesh 1.766691e-06
27 Colombia 2.022490e-06
134 Uganda 2.186222e-06
87 Morocco 2.781486e-06
16 Brazil 2.832701e-06
26 China 2.852284e-06
###Markdown
Top 10 countries by relative quality
###Code
percent_high_quality_by_country = percent_high_quality_by_country.sort_values(by='percent_high_quality', ascending=False)
print(percent_high_quality_by_country.head(10))
###Output
country percent_high_quality
63 Korea, North 22.222222
109 Saudi Arabia 12.820513
106 Romania 12.244898
23 Central African Republic 12.121212
140 Uzbekistan 10.714286
82 Mauritania 10.416667
46 Guatemala 8.433735
33 Dominica 8.333333
125 Syria 7.812500
11 Benin 7.692308
###Markdown
Bottom 10 countries by relative quality
###Code
percent_high_quality_by_country = percent_high_quality_by_country.sort_values(by='percent_high_quality', ascending=True)
print(percent_high_quality_by_country.head(10))
###Output
country percent_high_quality
10 Belgium 0.192678
128 Tanzania 0.247525
124 Switzerland 0.248756
89 Nepal 0.280899
101 Peru 0.285714
94 Nigeria 0.295858
104 Portugal 0.314465
27 Colombia 0.350877
73 Lithuania 0.409836
87 Morocco 0.485437
###Markdown
Geographic regions by coverage
###Code
print(articles_per_population_all_region.sort_values(by='articles-per-population', ascending=False))
###Output
region articles-per-population
10 OCEANIA 0.000146
9 NORTHERN EUROPE 0.000096
21 EUROPE 0.000055
15 SOUTHERN EUROPE 0.000048
5 EASTERN EUROPE 0.000040
17 WESTERN ASIA 0.000032
0 CARIBBEAN 0.000030
18 WESTERN EUROPE 0.000029
8 NORTHERN AMERICA 0.000028
14 SOUTHERN AFRICA 0.000013
1 CENTRAL AMERICA 0.000013
22 LATIN AMERICA AND THE CARIBBEAN 0.000012
13 SOUTHEAST ASIA 0.000011
16 WESTERN AFRICA 0.000010
2 CENTRAL ASIA 0.000009
11 SOUTH AMERICA 0.000009
6 MIDDLE AFRICA 0.000009
19 AFRICA 0.000009
4 EASTERN AFRICA 0.000008
7 NORTHERN AFRICA 0.000008
20 ASIA 0.000007
3 EAST ASIA 0.000005
12 SOUTH ASIA 0.000004
###Markdown
Geographic regions by quality
###Code
print(percent_high_quality_by_all_region.sort_values(by='percent_high_quality', ascending=False))
###Output
region percent_high_quality
8 NORTHERN AMERICA 5.470805
13 SOUTHEAST ASIA 3.613861
17 WESTERN ASIA 3.472493
5 EASTERN EUROPE 3.161844
3 EAST ASIA 3.073190
2 CENTRAL ASIA 2.857143
9 NORTHERN EUROPE 2.710603
20 ASIA 2.708494
6 MIDDLE AFRICA 2.406015
21 EUROPE 2.186226
7 NORTHERN AFRICA 2.113459
10 OCEANIA 2.015355
15 SOUTHERN EUROPE 1.994609
0 CARIBBEAN 1.870504
16 WESTERN AFRICA 1.870033
19 AFRICA 1.740020
12 SOUTH ASIA 1.626202
1 CENTRAL AMERICA 1.490603
22 LATIN AMERICA AND THE CARIBBEAN 1.442125
14 SOUTHERN AFRICA 1.419558
4 EASTERN AFRICA 1.398881
11 SOUTH AMERICA 1.319261
18 WESTERN EUROPE 1.228070
###Markdown
DATA512 A2: Bias in data
Wikipedia is a free and openly editable online encyclopedia. Despite its nature of open collaboration, [critics note](https://en.wikipedia.org/wiki/Criticism_of_Wikipedia) that there is bias in its coverage. In this notebook, we examine bias in English Wikipedia's coverage of politicians and the quality of these articles.

Data sources
For Wikipedia data about politicians, we use data processed by Os Keyes from [Politicians by Country from the English-language Wikipedia](https://figshare.com/articles/Untitled_Item/5513449), which is available as a Fileset on figshare under the CC-BY-SA 4.0 license. The file named `page_data.csv` was extracted from the Fileset and saved to the `data_raw` folder. Format: {page, country, rev_id}

Population data is from the Population Reference Bureau's 2018 [World Population Data Sheet](http://www.worldpopdata.org/table), using the "Population mid-2018 (millions)" indicator with the geography filter set to the regions Africa, Asia, Europe, Latin America And The Caribbean, Northern America and Oceania, plus all countries selected. Instead of using the PRB website directly, we used a cached copy of the WPDS 2018 CSV file hosted at [this Dropbox location](https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0), which we saved to the `data_raw` folder. The license for this dataset is unknown. Format: {geography, population in millions}

To measure quality, we will use Wikimedia's web service called [Objective Revision Evaluation Service (ORES)](https://ores.wikimedia.org/) to make predictions about articles' quality rating according to the English Wikipedia 1.0 (wp10) assessment scale.

Data acquisition
###Code
import os, errno
import json
import requests
def create_folder(path):
"""Creates a folder if it doesn't already exist."""
created = False
try:
os.makedirs(path)
created = True
except OSError as e:
if e.errno != errno.EEXIST:
raise
return created
HEADERS = {
'Api-User-Agent': 'https://github.com/EdmundTse/data-512-a2'
}
def get_ores_data(revision_ids):
"""Uses ORES API to get wp10 quality scores for the given revision IDs."""
ores_endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
revids = '|'.join(str(x) for x in revision_ids)
params = {
'project': 'enwiki',
'model': 'wp10',
'revids': revids
}
api_call = requests.get(ores_endpoint.format(**params), headers=HEADERS)
response = api_call.json()
return response
###Output
_____no_output_____
###Markdown
First, load the Wikipedia data to find the revision IDs we need to request quality scores for. The CSV file has a header row and has the following columns:

| Column Name | Description                | Format  |
|-------------|----------------------------|---------|
| page        | Wikipedia page name        | text    |
| country     | Country name               | text    |
| rev_id      | Wikipedia page revision ID | integer |
###Code
import csv
RAW_DATA_DIR = 'data_raw'
page_data_path = os.path.join(RAW_DATA_DIR, 'page_data.csv')
with open(page_data_path, encoding='utf-8') as page_data_file:
reader = csv.reader(page_data_file)
# Skip the header row
next(reader)
page_data = [row for row in reader]
###Output
_____no_output_____
###Markdown
Next, batch the revision IDs, as recommended by the API usage guidelines, to retrieve quality scores using ORES. If the saved responses already exist, use that instead.
###Code
# If present, load the saved API responses from file.
ORES_FILENAME = 'ores_responses.json'
path = os.path.join(RAW_DATA_DIR, ORES_FILENAME)
try:
with open(path) as f:
results = json.load(f)
except FileNotFoundError:
results = []
###Output
_____no_output_____
###Markdown
%%time
# This variant makes sequential API calls to ORES
if not results:
    rev_ids_batch = []
    MAX_BATCH_SIZE = 50
    for page in page_data:
        # Add this page revision ID to batch
        rev_id = int(page[2])
        rev_ids_batch.append(rev_id)
        # If the batch is filled, then start a new request for this batch
        if len(rev_ids_batch) == MAX_BATCH_SIZE:
            response = get_ores_data(rev_ids_batch)
            results.append(response)
            # Start a new batch
            rev_ids_batch = []
    # Flush any remaining revision IDs that don't fill the batch
    if rev_ids_batch:
        response = get_ores_data(rev_ids_batch)
        results.append(response)
###Code
%%time
# This variant uses 4 workers to make parallel requests to ORES for faster completion
import threading, queue
if not results:
NUM_THREADS = 4
MAX_BATCH_SIZE = 50
def ores_worker():
"""Worker thread takes a batchs of rev_ids and makes requests to ORES."""
while True:
item = q.get()
if item is None:
break
serial_num, rev_ids = item
response = get_ores_data(rev_ids)
results.append((serial_num, response))
q.task_done()
# Start the ORES worker threads
q = queue.Queue()
threads = []
for i in range(NUM_THREADS):
t = threading.Thread(target=ores_worker)
t.start()
threads.append(t)
# Batch and queue revision IDs for ORES requests
serial_num = 0
rev_ids_batch = []
for page in page_data:
# Add this page revision ID to batch
rev_id = int(page[2])
rev_ids_batch.append(rev_id)
# When the batch is filled, enqueue a new request
if len(rev_ids_batch) == MAX_BATCH_SIZE:
serial_num += 1
q.put((serial_num, rev_ids_batch))
rev_ids_batch = []
# Flush any remaining revision IDs that didn't fill a batch
if rev_ids_batch:
serial_num += 1
q.put((serial_num, rev_ids_batch))
# Wait for queue, workers and their threads to complete
q.join()
for i in range(NUM_THREADS):
q.put(None)
for t in threads:
t.join()
# Put the responses back into the original order, then discard the order index
results.sort()
results = [x[1] for x in results]
###Output
Wall time: 1min 29s
###Markdown
Save the raw ORES API responses to a file.
###Code
# Output all of the API responses into one file
path = os.path.join(RAW_DATA_DIR, ORES_FILENAME)
with open(path, 'w') as f:
json.dump(results, f)
###Output
_____no_output_____
###Markdown
Data processing
First, process the ORES API responses to extract the results. Revision IDs that did not produce a quality score, possibly due to the revision being deleted, will be recorded with a null value.
###Code
ores = []
for response in results:
scores = response['enwiki']['scores']
for rev_id, result in scores.items():
prediction = None
try:
prediction = result['wp10']['score']['prediction']
except KeyError:
pass
ores.append((rev_id, prediction))
###Output
_____no_output_____
###Markdown
Load the population data from the CSV file. This is a quoted CSV with a header row with these columns:

| Column Name | Description                         | Format  |
|-------------|-------------------------------------|---------|
| Geography   | Continent or country name           | text    |
| Population  | Population in mid-2018, in millions | decimal |
###Code
pop_data_path = os.path.join(RAW_DATA_DIR, 'WPDS_2018_data.csv')
pop_data = []
with open(pop_data_path, encoding='utf-8') as pop_data_file:
reader = csv.reader(pop_data_file)
# Skip the header row
next(reader)
for row in reader:
geo = row[0]
# Parse decimal value formatted with comma separator
population = float(row[1].replace(',', ''))
pop_data.append((geo, population))
###Output
_____no_output_____
###Markdown
Combine the data sources into one table.
###Code
import pandas as pd
df_page = pd.DataFrame(page_data, columns=['article_name', 'country', 'revision_id'])
df_page.describe()
df_ores = pd.DataFrame(ores, columns=['revision_id', 'article_quality'])
df_ores.describe()
df_pop = pd.DataFrame(pop_data, columns=['country', 'population'])
df_pop.shape
###Output
_____no_output_____
###Markdown
Finally, combine the Wikipedia page data, world population data and article quality data from ORES into one table. Merging the page data with the article quality data introduced several None values for articles for which we were unable to get a score; these are then excluded from the dataset. On the second merge, with the population data, an inner join drops rows where there is no match on the 'country' values.
###Code
df = df_page.merge(df_ores).merge(df_pop).dropna(subset=['article_quality'])
df.describe(include='all').head()
###Output
_____no_output_____
###Markdown
Output the cleaned data to a CSV in the required format.
###Code
CLEAN_DATA_DIR = 'data_clean'
CLEANED_FILENAME = 'combined.csv'
create_folder(CLEAN_DATA_DIR)
# Output the cleaned data in the required format
cleaned_filepath = os.path.join(CLEAN_DATA_DIR, CLEANED_FILENAME)
OUTPUT_COLUMNS = [
'country',
'article_name',
'revision_id',
'article_quality',
'population'
]
df.to_csv(cleaned_filepath, columns=OUTPUT_COLUMNS, index=False)
df.head()
###Output
_____no_output_____
###Markdown
There are articles included in the dataset that are purely templates without content. These articles have names starting with "Template:". Should we filter these out, since they are not intended to be articles that contain content? We will leave them in place for now, as each is still an article for that country albeit not about any one politician, though it might be worthwhile revisiting this decision in the future (a sketch of such a filter is included after the list below).

Analysis
We examine these Wikipedia articles for bias, by looking at:
* The proportion of articles-per-population and high-quality articles for each country. We define high-quality as having received an ORES prediction of either "featured article" (FA) or "good article" (GA).
* The number of politician articles per capita for each country.
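As mentioned above, a minimal sketch of the filter we could apply if we later decide to exclude the template pages (not applied in this notebook):

```python
# Drop rows whose article name carries the "Template:" prefix (sketch only).
df_without_templates = df[~df['article_name'].str.startswith('Template:')]
print(len(df), len(df_without_templates))
```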
###Code
articles_by_country = df_page.groupby('country').size().rename('articles').reset_index()
articles_by_country.shape
df_country = df_pop.merge(articles_by_country).set_index('country')
df_country.shape
###Output
_____no_output_____
###Markdown
Let's examine the result of this join operation. First, there were 219 countries represented in the Wikipedia page data. From the WPDS data, there were 207 geographies represented, comprising 6 regions and 201 countries. After the inner join operation, only the 180 country names common to both data sets remain.

In this notebook, we will not be examining which countries were unable to be joined, although that could be an interesting exercise for future work; a quick sketch of how one might list them is included below. We then tabulate the number of articles per capita and show the countries with the highest and lowest rates:
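A quick sketch (not part of the pipeline) of how the unmatched names could be listed, should we want to follow up on this later:

```python
# Set difference of country names between the two sources (sketch only).
wiki_countries = set(df_page['country'])
wpds_names = set(df_pop['country'])
print(sorted(wiki_countries - wpds_names))   # present only in the Wikipedia page data
print(sorted(wpds_names - wiki_countries))   # present only in WPDS (includes the 6 regions)
```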
###Code
RESULTS_DIR = 'results'
_ = create_folder(RESULTS_DIR)
###Output
_____no_output_____
###Markdown
The 10 highest-ranked countries by number of politician articles relative to population
###Code
df_country.loc[:, 'articles_per_million'] = df_country.articles / df_country.population
articles_per_million_highest = df_country.sort_values(by='articles_per_million', ascending=False)[:10]
# Output table of results
path = os.path.join(RESULTS_DIR, 'articles_per_million_highest.csv')
articles_per_million_highest.to_csv(path)
articles_per_million_highest
###Output
_____no_output_____
###Markdown
The 10 lowest-ranked countries by number of politician articles relative to population
###Code
articles_per_million_lowest = df_country.sort_values(by='articles_per_million', ascending=True)[:10]
# Output table of results
path = os.path.join(RESULTS_DIR, 'articles_per_million_lowest.csv')
articles_per_million_lowest.to_csv(path)
articles_per_million_lowest
###Output
_____no_output_____
###Markdown
The 10 highest-ranked countries by proportion of high-quality articles about its politicians
###Code
# Tabulate by country the proportion of articles that are high quality
df.loc[:, 'high_quality'] = df.dropna(subset=['article_quality']).article_quality.isin(['FA', 'GA'])
high_quality_prop = df.groupby('country').high_quality.mean()
high_quality_prop_highest = pd.DataFrame(high_quality_prop.sort_values(ascending=False)[:10])
# Output table of results
path = os.path.join(RESULTS_DIR, 'high_quality_prop_highest.csv')
high_quality_prop_highest.to_csv(path)
high_quality_prop_highest
###Output
_____no_output_____
###Markdown
The 10 lowest-ranked countries by proportion of high-quality articles about its politicians
###Code
high_quality_prop_lowest = pd.DataFrame(high_quality_prop.sort_values(ascending=True)[:10])
# Output table of results
path = os.path.join(RESULTS_DIR, 'high_quality_prop_lowest.csv')
high_quality_prop_lowest.to_csv(path)
high_quality_prop_lowest
###Output
_____no_output_____
###Markdown
Actually, there are many more countries with zero high quality articles about politicians. Since the order was arbitrary within the same value, it would be better to list all such countries:
###Code
high_quality_prop_zero = high_quality_prop[high_quality_prop == 0]
# Output table of results
path = os.path.join(RESULTS_DIR, 'high_quality_prop_zero.csv')
high_quality_prop_zero.to_csv(path)
high_quality_prop_zero.index.values
###Output
_____no_output_____
###Markdown
Goal:
The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries.
To do this, we will combine a data set of wiki articles with country populations, and then use an ML service to estimate the quality of each article.
###Code
import pandas as pd
import requests
###Output
_____no_output_____
###Markdown
Step 1: Getting the article & population data
###Code
#Population data
#This dataset is drawn from the world population data sheet published by the Population Reference Bureau: https://www.prb.org/international/indicator/population/table/
WPDS_data = pd.read_csv("data_raw/WPDS_2020_data.csv")
#Article data
#This data set is available on figshare https://figshare.com/articles/dataset/Untitled_Item/5513449
#I downloaded the data & it can be found in this github repo
page_data = pd.read_csv("data_raw/page_data.csv")
###Output
_____no_output_____
###Markdown
Step 2: Data Cleaning
The dataset contains some page names that start with the string "Template:". These pages are not Wikipedia articles, and should not be included in the analysis.
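As a side note, here is a sketch of an equivalent filter that anchors on the page-name prefix; the cell below uses str.contains, which would also match "Template:" appearing mid-string:

```python
# Prefix-based variant of the same cleaning step (sketch, not used below).
page_data_alt = page_data[~page_data["page"].str.startswith("Template:")].reset_index(drop=True)
```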
###Code
page_data = page_data[page_data["page"].str.contains("Template:")!=True].reset_index()
###Output
_____no_output_____
###Markdown
WPDS_2020_data.csv contains some rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS values in the 'geography' field (e.g. AFRICA, OCEANIA). These rows won't match the country values in page_data.csv, but you will want to retain them (either in the original file, or a separate file) so that you can report coverage and quality by region in the analysis section.
###Code
WPDS_data_country = WPDS_data[WPDS_data["Type"]=="Country"].reset_index()
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Prediction
###Code
page_data
# iterate through each rev_id, and call API to get prediction of article quality
rev_ids = []
predictions =[]
log_rev_ids_missing_predictions=[]
for batch in range(len(page_data["rev_id"])//50+1):
batch_ids = page_data["rev_id"][50*batch:50*batch+50]
rev_id = "|".join(str(x) for x in batch_ids)
url = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_id}'
call = requests.get(url.format(rev_id=rev_id))
response = call.json()
for i in batch_ids:
try:
prediction = response["enwiki"]["scores"][str(i)]["articlequality"]["score"]["prediction"]
rev_ids.append(i)
predictions.append(prediction)
except KeyError:
log_rev_ids_missing_predictions.append(i)
article_quality= pd.DataFrame({'rev_id':rev_ids,
'article_quality_estimate':predictions})
###Output
_____no_output_____
###Markdown
Step 4: Combining the datasets
###Code
page_data["rev_id"] = page_data["rev_id"].apply(str)
article_quality["rev_id"] = article_quality["rev_id"].apply(str)
page_data = page_data.merge(article_quality,
how = "inner",
on = "rev_id"
)
joined = page_data.merge(WPDS_data_country,
how = "outer",
left_on = "country",
right_on = "Name"
)
###Output
_____no_output_____
###Markdown
There are a couple of edge cases: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.
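One hedged way to see which side each unmatched row came from is pandas' merge indicator; a sketch for reference (the cells below check for nulls instead):

```python
# indicator=True adds a '_merge' column with left_only / right_only / both.
joined_ind = page_data.merge(WPDS_data_country, how="outer",
                             left_on="country", right_on="Name", indicator=True)
print(joined_ind["_merge"].value_counts())
```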
###Code
joined
wp_wpds_countries_no_match = joined[(joined["country"].isnull())|(joined["Name"].isnull())]
wp_wpds_countries_no_match.to_csv("data_clean/wp_wpds_countries-no_match.csv")
wp_wpds_politicians_by_country = joined[(joined["country"].isnull()==False)&(joined["Name"].isnull()==False)]
page_data
wp_wpds_politicians_by_country.columns = ['index_x', 'article_name', 'country', 'revision_id', 'article_quality_est.',
'index_y', 'FIPS', 'Name', 'Type', 'TimeFrame', 'Data (M)',
'population']
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country[["country","article_name","revision_id","article_quality_est.","population"]]
wp_wpds_politicians_by_country.to_csv("wp_wpds_politicians_by_country.csv")
wp_wpds_politicians_by_country
###Output
_____no_output_____
###Markdown
Step 5: Analysis
We calculate articles-per-population and high-quality articles for each country AND for each geographic region. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.
###Code
wp_wpds_politicians_by_country["article_quality_est."].value_counts()
###Output
_____no_output_____
###Markdown
Articles per population
Number of articles in each country divided by population in that country
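An equivalent way to arrive at the same coverage figure without groupby().apply(), shown only as a sketch for comparison; the cell below uses the apply version:

```python
# Article count per country divided by that country's population, as a percentage.
counts = wp_wpds_politicians_by_country.groupby('country')['article_name'].count()
pops = wp_wpds_politicians_by_country.groupby('country')['population'].max()
coverage_pct = counts / pops * 100
```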
###Code
articles_per_population = wp_wpds_politicians_by_country.groupby(["country"]).apply(lambda s: (s.article_name.count()/s.population.max())*100)
articles_per_population
###Output
_____no_output_____
###Markdown
High Quality Articles
###Code
wp_wpds_politicians_by_country["hq"]= (wp_wpds_politicians_by_country["article_quality_est."]=="FA")|(wp_wpds_politicians_by_country["article_quality_est."]=="GA")
hq_articles = wp_wpds_politicians_by_country.groupby(["country"]).apply(lambda s: (s.hq.sum()/s.article_name.count())*100)
hq_articles
###Output
_____no_output_____
###Markdown
Step 6: 1) Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
articles_per_population.to_frame("articles_per_pop_pct").reset_index().sort_values("articles_per_pop_pct",ascending = False).head(10)
###Output
_____no_output_____
###Markdown
2) Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
articles_per_population.to_frame("articles_per_pop_pct").reset_index().sort_values("articles_per_pop_pct",ascending = True).head(10)
###Output
_____no_output_____
###Markdown
3) Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
hq_articles.to_frame("hq_articles_pct").reset_index().sort_values("hq_articles_pct",ascending = False).head(10)
###Output
_____no_output_____
###Markdown
4) Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
hq_articles.to_frame("hq_articles_pct").reset_index().sort_values("hq_articles_pct",ascending = True).head(10)
###Output
_____no_output_____
###Markdown
A lot of the countries have 0 high-quality articles, so we can also look at the bottom 10 that have at least 1 high-quality article.
###Code
temp = hq_articles.to_frame("hq_articles_pct").reset_index()
temp[temp["hq_articles_pct"]>0].sort_values("hq_articles_pct",ascending = True).head(10)
#To answer the next 2 questions, we need a country region mapping
WPDS_data = pd.read_csv("data_raw/WPDS_2020_data.csv")
#create region - country mapping
region = WPDS_data["Type"]
name = WPDS_data["Name"]
population = WPDS_data["Population"]
regions_country = {}
regions_population ={}
#hacky way to create region country mapping
for r,n,p in zip(region,name,population):
if r=="Sub-Region":
regions_country[n]=[]
current_region = n
regions_population[n]=p
if r=="Country":
if current_region!= None:
regions_country[current_region].append(n)
# create country -> region mapping
country_region ={}
for region,countries in regions_country.items():
for country in countries:
country_region[country]=region
#create a column for region
wp_wpds_politicians_by_country["region"] = wp_wpds_politicians_by_country["country"].replace(country_region)
#create a column for region population
wp_wpds_politicians_by_country["region_population"] = wp_wpds_politicians_by_country["region"].replace(regions_population)
###Output
_____no_output_____
###Markdown
5) Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
wp_wpds_politicians_by_country.groupby(["region"]).apply(lambda s: (s.article_name.count()/s.population.max())*100).to_frame("articles_per_regional_pop_pct").sort_values("articles_per_regional_pop_pct",ascending = False)
###Output
_____no_output_____
###Markdown
6) Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
wp_wpds_politicians_by_country.groupby(["region"]).apply(lambda s: (s.hq.sum()/s.article_name.count())*100).to_frame("hq_articles_regional_pct").sort_values("hq_articles_regional_pct",ascending = False)
###Output
_____no_output_____
###Markdown
Cleaning the data
- In the case of page_data.csv, the dataset contains some page names that start with the string "Template:". These pages are not Wikipedia articles, and should not be included in analysis.
- WPDS_2018_data contains some rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS values in the 'geography' field (e.g. AFRICA, OCEANIA). These rows won't match the country values in page_data, but you will want to retain them (either in the original file, or a separate file) so that you can report coverage and quality by region in the analysis section.
###Code
# Removing the "template"
page_data_clean = page_data[~page_data.page.str.contains("Template")]
print( "After Data Cleaning page_data: ")
print ("Rows : " ,page_data_clean.shape[0])
print ("Columns : " ,page_data_clean.shape[1])
print ("\nFeatures : \n" ,page_data_clean.columns.tolist())
print ("\nMissing values : ", page_data_clean.isnull().sum().values.sum())
print ("\nUnique values : \n",page_data_clean.nunique())
#Changing column names
WPDS_data = WPDS_data.rename(columns={"Geography": "country"})
# Loading the separate WPDS file with a new region column which can be accessed as needed
WPDS_new = pd.read_csv("WPDS_2018_seperate_data.csv")
###Output
_____no_output_____
###Markdown
Getting article quality predictions
- We obtain predicted quality scores for each article in the Wikipedia dataset. We're using a machine learning system called ORES ("Objective Revision Evaluation Service"). ORES estimates the quality of an article (at a particular point in time), and assigns a series of probabilities that the article is in one of 6 quality categories.
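For reference, a small sketch listing the six classes; the best-to-worst ordering follows the standard English Wikipedia 1.0 scale and is an assumption here, since only the labels themselves appear in the data:

```python
# Six ORES article-quality classes, best to worst (ordering assumed).
QUALITY_CLASSES = ["FA", "GA", "B", "C", "Start", "Stub"]
```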
###Code
# Getting the revision IDs
rev_ids = page_data_clean.rev_id.tolist()
# pip install ores in your local notebook environment
from ores import api  # client library for the ORES scoring service
# Send all ~50k articles in the page_data.csv in a single batch
ores_session = api.Session("https://ores.wikimedia.org", "Class project <[email protected]>")
results = ores_session.score("enwiki", ["articlequality"], rev_ids)
probDF = pd.DataFrame()
probList = []
for prediction, rev_id in zip(results, rev_ids):
if list(prediction['articlequality'].keys()) != ['error']:
temp = pd.DataFrame(prediction['articlequality']['score'])
#print(str(rev_id) + " " + str(temp['prediction'][0]))
probList.append([rev_id, temp['prediction'][0]])
# Get data in a dataframe
probDF = pd.DataFrame(probList)
# Dataframe with the predictions
probDF = probDF.rename(columns={1: "article_quality", 0: "rev_id"})
###Output
_____no_output_____
###Markdown
Combining the datasets
- After retrieving and including the ORES data for each article, merge the wikipedia data and population data together. Both have fields containing country names for just that purpose.
- After merging the data, you'll invariably run into entries which cannot be merged: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.
- Any rows that do not have matching data are output to a CSV file called wp_wpds_countries-no_match.csv
- The remaining data goes into a single CSV file called wp_wpds_politicians_by_country.csv
###Code
# Combining the probability from probDF with page_data
new_page_data = pd.merge(probDF , page_data_clean, how='left', on=['rev_id'])
# Combining WPDS data with the data above
combined_data = pd.merge(new_page_data , WPDS_data, how='left', on=['country'])
combined_data = combined_data.dropna()
# Data we could not find probability for (they did not have a match)
def anti_join(x, y, on):
"""
Anti-join of two data frames. Return rows in x which are not present in y
"""
ans = pd.merge(left=x, right=y, how='left', indicator=True, on=on)
ans = ans.loc[ans._merge == 'left_only', :].drop(columns='_merge')
return ans
# Dataframe with no results
no_class = anti_join(WPDS_data, combined_data, ['country'])
no_class = no_class.drop(["rev_id", "article_quality", "page", "Population mid-2018 (millions)_x", "Population mid-2018 (millions)_y"], 1)
# Saving these to a CSV file
no_class.to_csv('wp_wpds_countries-no_match.csv')
# Getting the right format
combined_data = combined_data.rename(columns={"rev_id": "revision_id"})
# Saving this to a csv
combined_data.to_csv('wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Analysis
- Analysis will consist of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region.
- By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.

Analysis 1: articles-per-population
###Code
combined_data_new = combined_data
# data frame gives us the frequency of the times a country had an article i.e the number of articles for a country
combined_data_new["frequency"] = combined_data_new.groupby('country')["country"].transform('count')
# Converting the population column to float value
combined_data_new["Population mid-2018 (millions)"] = combined_data_new["Population mid-2018 (millions)"].str.replace(',', '').astype(float)
# Finding the articles-per-population-percentage
combined_data_new["articles-per-population-percentage"] = ((combined_data_new["frequency"])/(combined_data_new["Population mid-2018 (millions)"]*1000000))*100
# Dropping rows from the table that we do not need
articles_population = combined_data_new.drop(["revision_id", "article_quality", "page", "frequency"], 1)
# Dropping Duplicates
articles_population = articles_population.drop_duplicates(subset=['country'], keep="first")
# Dataframe with the articles per population
articles_population = articles_population.sort_values('articles-per-population-percentage', ascending=False)
articles_population.head()
###Output
_____no_output_____
###Markdown
Analysis 2: high-quality articles
- By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.
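An equivalent, vectorized way to flag FA/GA rows, included only as a sketch; the notebook uses the HQ() helper defined in the next cell instead:

```python
# 1 where the predicted class is FA or GA, 0 otherwise (sketch only).
ga_fa_flag = combined_data_new["article_quality"].isin(["FA", "GA"]).astype(int)
```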
###Code
def HQ(x):
"""
Function to categorise 1 if the prediction is FA or GA, and 0 otherwise
"""
if "Stub" in x:
return 0
elif "B" in x:
return 0
elif "C" in x:
return 0
elif "Start" in x:
return 0
else: return 1
# Categorising the data set
combined_data_new["GA_FA"] = combined_data_new["article_quality"].apply(HQ)
# Keeping only the required rows
high_quality = combined_data_new.drop(["revision_id", "article_quality", "page", "Population mid-2018 (millions)", "articles-per-population-percentage", "frequency"], 1)
# Removing articles which are not high quality
high_quality = high_quality[high_quality.GA_FA != 0]
# Dropping the column not needed
combined_data_new = combined_data_new.drop(["GA_FA"], 1)
# Grouping by country to find number of GA and FA per country
high_quality = high_quality.groupby(['country']).sum().reset_index()
# Combining this with the data we already have to get the frequency of articles for a country
GA_FA_data = pd.merge(high_quality , combined_data_new, how='left', on=['country'])
# Finding the percentage of high quality articles = (high quality articles/ total articles)*100
GA_FA_data["high-quality-percentage"] = ((GA_FA_data["GA_FA"])/(GA_FA_data["frequency"]))*100
# Dropping unnecessary columns and rows
GA_FA_data = GA_FA_data.drop(["GA_FA", "revision_id", "article_quality", "page", "articles-per-population-percentage", "frequency"], 1)
GA_FA_data = GA_FA_data.drop_duplicates(subset=['country'], keep="first")
# Sorting the data
GA_FA_data = GA_FA_data.sort_values('high-quality-percentage', ascending=False)
GA_FA_data.head()
#Saving all data into csv
# Creating a file with all data combined
final_data = pd.merge(high_quality , combined_data_new, how='left', on=['country'])
# adding high_quality column
final_data["high-quality-percentage"] = ((final_data["GA_FA"])/(final_data["frequency"]))*100
final_data.to_csv("Final_combined_data.csv")
###Output
_____no_output_____
###Markdown
Analysis based on Geographic regions
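The cells below locate the ALL-CAPS region rows by index and slice the frame between them. A sketch of an alternative, index-free approach (not used here) that forward-fills the region headers onto the country rows:

```python
# Attach a region label to each country by forward-filling ALL-CAPS header rows.
wpds_tmp = WPDS_data.copy()
wpds_tmp['region'] = wpds_tmp['country'].where(wpds_tmp['country'].str.isupper()).ffill()
country_region_map = wpds_tmp.loc[~wpds_tmp['country'].str.isupper(), ['country', 'region']]
```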
###Code
# Let us see what geographic regions we have
Geographic_Region = WPDS_data[WPDS_data['country'].str.isupper()]
Geographic_Region
# Finding indices at which these region names begin to dissect the data into regions
america_index = WPDS_data[WPDS_data['country']=="NORTHERN AMERICA"].index.values.astype(int)[0]
africa_index = WPDS_data[WPDS_data['country']=="AFRICA"].index.values.astype(int)[0]
latin_index = WPDS_data[WPDS_data['country']=="LATIN AMERICA AND THE CARIBBEAN"].index.values.astype(int)[0]
asia_index = WPDS_data[WPDS_data['country']=="ASIA"].index.values.astype(int)[0]
europe_index = WPDS_data[WPDS_data['country']=="EUROPE"].index.values.astype(int)[0]
oceania_index = WPDS_data[WPDS_data['country']=="OCEANIA"].index.values.astype(int)[0]
# Getting data for seperate regions
africa_WPDS_data = WPDS_data.iloc[(africa_index+1):america_index,:]
america_WPDS_data = WPDS_data.iloc[(america_index+1):latin_index,:]
latin_WPDS_data = WPDS_data.iloc[(latin_index+1):asia_index,:]
asia_WPDS_data = WPDS_data.iloc[(asia_index+1):europe_index,:]
europe_WPDS_data = WPDS_data.iloc[(europe_index+1):oceania_index,:]
oceania_WPDS_data = WPDS_data.iloc[(oceania_index+1):,:]
###Output
_____no_output_____
###Markdown
Analysis 3: Geographic Region based articles per population
AFRICA
###Code
print("Number of countries in Africa:", africa_WPDS_data.shape[0])
# Merging geographical data with total article numbers (frequency)
# to get the number of articles in each of these countries
africa_combined_data = pd.merge(africa_WPDS_data , combined_data_new, how='left', on=['country'])
#Removing columns not required and duplicate rows
africa_combined_data = africa_combined_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
africa_combined_data = africa_combined_data.drop_duplicates(subset=['country'], keep="first")
# frequency = total number of articles in the country as described above.
# We take the sum of articles published by all countries in that geography
africa_frequency = africa_combined_data['frequency'].sum()
print("Total number of articles published in Africa", africa_frequency)
###Output
Total number of articles published in Africa 6851.0
###Markdown
NORTHERN AMERICA
###Code
print("Number of countries in Northern America:", america_WPDS_data.shape[0])
# Merging geographical data with total article numbers (frequency)
# to get the number of articles in each of these countries
america_combined_data = pd.merge(america_WPDS_data , combined_data_new, how='left', on=['country'])
#Removing columns not required and duplicate rows
america_combined_data = america_combined_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
america_combined_data = america_combined_data.drop_duplicates(subset=['country'], keep="first")
# frequency = total number of articles in the country as described above.
# We take the sum of articles published by all countries in that geography
america_frequency = america_combined_data['frequency'].sum()
print("Total number of articles published in america", america_frequency)
###Output
Total number of articles published in america 1921
###Markdown
LATIN AMERICA AND THE CARIBBEAN
###Code
print("Number of countries in LATIN AMERICA AND THE CARIBBEAN:", latin_WPDS_data.shape[0])
# Merging geographical data with total article numbers (frequency)
# to get the number of articles in each of these countries
latin_combined_data = pd.merge(latin_WPDS_data , combined_data_new, how='left', on=['country'])
#Removing columns not required and duplicate rows
latin_combined_data = latin_combined_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
latin_combined_data = latin_combined_data.drop_duplicates(subset=['country'], keep="first")
# frequency = total number of articles in the country as described above.
# We take the sum of articles published by all countries in that geography
latin_frequency = latin_combined_data['frequency'].sum()
print("Total number of articles published in latin", latin_frequency)
###Output
Total number of articles published in latin 5169.0
###Markdown
ASIA
###Code
print("Number of countries in ASIA:", asia_WPDS_data.shape[0])
# Merging geographical data with total article numbers (frequency)
# to get the number of articles in each of these countries
asia_combined_data = pd.merge(asia_WPDS_data , combined_data_new, how='left', on=['country'])
#Removing columns not required and duplicate rows
asia_combined_data = asia_combined_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
asia_combined_data = asia_combined_data.drop_duplicates(subset=['country'], keep="first")
# frequency = total number of articles in the country as described above.
# We take the sum of articles published by all countries in that geography
asia_frequency = asia_combined_data['frequency'].sum()
print("Total number of articles published in asia", asia_frequency)
###Output
Total number of articles published in asia 11531.0
###Markdown
EUROPE
###Code
print("Number of countries in Europe:", europe_WPDS_data.shape[0])
# Merging geographical data with total article numbers (frequency)
# to get the number of articles in each of these countries
europe_combined_data = pd.merge(europe_WPDS_data , combined_data_new, how='left', on=['country'])
#Removing columns not required and duplicate rows
europe_combined_data = europe_combined_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
europe_combined_data = europe_combined_data.drop_duplicates(subset=['country'], keep="first")
# frequency = total number of articles in the country as described above.
# We take the sum of articles published by all countries in that geography
europe_frequency = europe_combined_data['frequency'].sum()
print("Total number of articles published in europe", europe_frequency)
###Output
Total number of articles published in europe 15864.0
###Markdown
OCEANIA
###Code
print("Number of countries in Oceania:", oceania_WPDS_data.shape[0])
# Merging geographical data with total article numbers (frequency)
# to get the number of articles in each of these countries
oceania_combined_data = pd.merge(oceania_WPDS_data , combined_data_new, how='left', on=['country'])
#Removing columns not required and duplicate rows
oceania_combined_data = oceania_combined_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
oceania_combined_data = oceania_combined_data.drop_duplicates(subset=['country'], keep="first")
# frequency = total number of articles in the country as described above.
# We take the sum of articles published by all countries in that geography
oceania_frequency = oceania_combined_data['frequency'].sum()
print("Total number of articles published in oceania", oceania_frequency)
###Output
Total number of articles published in oceania 3128.0
###Markdown
Getting the dataframe together
###Code
# Adding the article frequency per country
Geographic_Region["frequency"] = [africa_frequency,america_frequency,latin_frequency,asia_frequency,europe_frequency, oceania_frequency]
# Converting the population column to float value
Geographic_Region["Population mid-2018 (millions)"] = Geographic_Region["Population mid-2018 (millions)"].str.replace(',', '').astype(float)
# Calculating the articles by population percentage
Geographic_Region["articles-per-population-percentage"] = ((Geographic_Region["frequency"])/(Geographic_Region["Population mid-2018 (millions)"]*1000000))*100
###Output
/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
###Markdown
Analysis 4: Geographical region high quality article analysis
Africa
###Code
# merging geography specific data with the data frame "high_quality" to get the frequency of FA and GA articles
africa_quality_data = pd.merge(africa_WPDS_data , high_quality, how='left', on=['country'])
# merging this data with the dataframe to get the frequency or number of articles published by each country
africa_quality_data = pd.merge(africa_quality_data , combined_data_new, how='left', on=['country'])
# removing unwanted rows and columns
africa_quality_data = africa_quality_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
africa_quality_data = africa_quality_data.drop_duplicates(subset=['country'], keep="first")
# Total FA and GA articles in all geography specific countries
africa_FA_GA = africa_quality_data['GA_FA'].sum()
# Total articles in all geography specific countries
africa_frequency = africa_quality_data['frequency'].sum()
# The percentage of high quality articles in the geography
africa_high_quality = (africa_FA_GA/africa_frequency)*100
print("FA GA article percentage:", africa_high_quality)
###Output
FA GA article percentage: 1.824551160414538
###Markdown
Northern America
###Code
# merging geography specific data with the data frame "high_quality" to get the frequency of FA and GA articles
america_quality_data = pd.merge(america_WPDS_data , high_quality, how='left', on=['country'])
# merging this data with the dataframe to get the frequency or number of articles published by each country
america_quality_data = pd.merge(america_quality_data , combined_data_new, how='left', on=['country'])
# removing unwanted rows and columns
america_quality_data = america_quality_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
america_quality_data = america_quality_data.drop_duplicates(subset=['country'], keep="first")
# Total FA and GA articles in all geography specific countries
america_FA_GA = america_quality_data['GA_FA'].sum()
# Total articles in all geography specific countries
america_frequency = america_quality_data['frequency'].sum()
# The percentage of high quality articles in the geography
america_high_quality = (america_FA_GA/america_frequency)*100
print("FA GA article percentage:", america_high_quality)
###Output
FA GA article percentage: 5.153565851119208
###Markdown
Latin America and Caribbean islands
###Code
# merging geography specific data with the data frame "high_quality" to get the frequency of FA and GA articles
latin_quality_data = pd.merge(latin_WPDS_data , high_quality, how='left', on=['country'])
# merging this data with the dataframe to get the frequency or number of articles published by each country
latin_quality_data = pd.merge(latin_quality_data , combined_data_new, how='left', on=['country'])
# removing unwanted rows and columns
latin_quality_data = latin_quality_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
latin_quality_data = latin_quality_data.drop_duplicates(subset=['country'], keep="first")
# Total FA and GA articles in all geography specific countries
latin_FA_GA = latin_quality_data['GA_FA'].sum()
# Total articles in all geography specific countries
latin_frequency = latin_quality_data['frequency'].sum()
# The percentage of high quality articles in the geography
latin_high_quality = (latin_FA_GA/latin_frequency)*100
print("FA GA article percentage:", latin_high_quality)
###Output
FA GA article percentage: 1.3348810214741729
###Markdown
Asia
###Code
# merging geography specific data with the data frame "high_quality" to get the frequency of FA and GA articles
asia_quality_data = pd.merge(asia_WPDS_data , high_quality, how='left', on=['country'])
# merging this data with the dataframe to get the frequency or number of articles published by each country
asia_quality_data = pd.merge(asia_quality_data , combined_data_new, how='left', on=['country'])
# removing unwanted rows and columns
asia_quality_data = asia_quality_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
asia_quality_data = asia_quality_data.drop_duplicates(subset=['country'], keep="first")
# Total FA and GA articles in all geography specific countries
asia_FA_GA = asia_quality_data['GA_FA'].sum()
# Total articles in all geography specific countries
asia_frequency = asia_quality_data['frequency'].sum()
# The percentage of high quality articles in the geography
asia_high_quality = (asia_FA_GA/asia_frequency)*100
print("FA GA article percentage:", asia_high_quality)
###Output
FA GA article percentage: 2.688405168675744
###Markdown
Europe
###Code
# merging geography specific data with the data frame "high_quality" to get the frequency of FA and GA articles
europe_quality_data = pd.merge(europe_WPDS_data , high_quality, how='left', on=['country'])
# merging this data with the dataframe to get the frequency or number of articles published by each country
europe_quality_data = pd.merge(europe_quality_data , combined_data_new, how='left', on=['country'])
# removing unwanted rows and columns
europe_quality_data = europe_quality_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
europe_quality_data = europe_quality_data.drop_duplicates(subset=['country'], keep="first")
# Total FA and GA articles in all geography specific countries
europe_FA_GA = europe_quality_data['GA_FA'].sum()
# Total articles in all geography specific countries
europe_frequency = europe_quality_data['frequency'].sum()
# The percentage of high quality articles in the geography
europe_high_quality = (europe_FA_GA/europe_frequency)*100
print("FA GA article percentage:", europe_high_quality)
###Output
FA GA article percentage: 2.0297528996469993
###Markdown
Oceania
###Code
# merging geography specific data with the data frame "high_quality" to get the frequency of FA and GA articles
oceania_quality_data = pd.merge(oceania_WPDS_data , high_quality, how='left', on=['country'])
# merging this data with the dataframe to get the frequency or number of articles published by each country
oceania_quality_data = pd.merge(oceania_quality_data , combined_data_new, how='left', on=['country'])
# removing unwanted rows and columns
oceania_quality_data = oceania_quality_data.drop(["Population mid-2018 (millions)_y","Population mid-2018 (millions)_x", "revision_id", "article_quality", "page", "articles-per-population-percentage"], 1)
oceania_quality_data = oceania_quality_data.drop_duplicates(subset=['country'], keep="first")
# Total FA and GA articles in all geography specific countries
oceania_FA_GA = oceania_quality_data['GA_FA'].sum()
# Total articles in all geography specific countries
oceania_frequency = oceania_quality_data['frequency'].sum()
# The percentage of high quality articles in the geography
oceania_high_quality = (oceania_FA_GA/oceania_frequency)*100
print("FA GA article percentage:", oceania_high_quality)
###Output
FA GA article percentage: 2.1099744245524295
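###Markdown
The six per-region blocks above repeat the same merge-and-aggregate steps. As a sketch (not part of the original analysis, and assuming the same high_quality and combined_data_new dataframes are in scope), a small helper could compute the FA/GA percentage for any region in one call:
###Code
# Hypothetical helper mirroring the per-region steps above
def region_high_quality_pct(region_WPDS_data):
    quality_data = pd.merge(region_WPDS_data, high_quality, how='left', on=['country'])
    quality_data = pd.merge(quality_data, combined_data_new, how='left', on=['country'])
    # keep one row per country, matching the drop_duplicates step used above
    quality_data = quality_data.drop_duplicates(subset=['country'], keep="first")
    return (quality_data['GA_FA'].sum() / quality_data['frequency'].sum()) * 100
# e.g. region_high_quality_pct(oceania_WPDS_data) should reproduce the Oceania value printed above
###Output
_____no_output_____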
###Markdown
Merging Results to a dataframe
###Code
Geographic_Region["FA_GA_total"] = [africa_high_quality, america_high_quality, latin_high_quality, asia_high_quality, europe_high_quality, oceania_high_quality]
# Calculating high quality article percentage
Geographic_Region["high-quality-percentage"] = ((Geographic_Region["FA_GA_total"])/(Geographic_Region["frequency"]))*100
Geographic_Region = Geographic_Region.drop(["frequency", "FA_GA_total"], 1)
###Output
/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Results
Table 1: Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
top_10_art_per_pop = articles_population.head(10)
top_10_art_per_pop
top_10_art_per_pop.plot.bar(x='country', y='articles-per-population-percentage', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
top_10_art_per_pop.plot.bar(x='country', y='Population mid-2018 (millions)', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
Table 2: Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
bottom_10_art_per_pop = articles_population.tail(10)
bottom_10_art_per_pop
bottom_10_art_per_pop.plot.bar(x='country', y='articles-per-population-percentage', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
bottom_10_art_per_pop.plot.bar(x='country', y='Population mid-2018 (millions)', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
Table 3: Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
top_10_high_quality = GA_FA_data.head(10)
top_10_high_quality
top_10_high_quality.plot.bar(x='country', y='high-quality-percentage', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
top_10_high_quality.plot.bar(x='country', y='Population mid-2018 (millions)', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
Table 4: Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
bottom_10_high_quality = GA_FA_data.tail(10)
bottom_10_high_quality
bottom_10_high_quality.plot.bar(x='country', y='high-quality-percentage', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
bottom_10_high_quality.plot.bar(x='country', y='Population mid-2018 (millions)', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
Table 5: Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
regional_top_art_pop = Geographic_Region.drop(["high-quality-percentage"], 1)
regional_top_art_pop = regional_top_art_pop.sort_values('articles-per-population-percentage', ascending=False)
regional_top_art_pop.plot.bar(x='country', y='articles-per-population-percentage', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
regional_top_art_pop.plot.bar(x='country', y='Population mid-2018 (millions)', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
Table 6: Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
regional_high_quality = Geographic_Region.drop(["articles-per-population-percentage"], 1)
regional_high_quality = regional_high_quality.sort_values('high-quality-percentage', ascending=False)
regional_high_quality.plot.bar(x='country', y='high-quality-percentage', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
regional_high_quality.plot.bar(x='country', y='Population mid-2018 (millions)', rot=45, subplots=True, layout=(2,1), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
DATA512 Homework A2 Ryan Williams
###Code
import pandas as pd
import numpy as np
import math
import json
import requests
###Output
_____no_output_____
###Markdown
Step 1: Acquire article and population data
In this step we read in our data on articles and population sizes (see README for documentation on this data)
###Code
# read in CSVs
art_df = pd.read_csv("page_data.csv")
pop_df = pd.read_csv("WPDS_2020_data - WPDS_2020_data.csv")
art_df.head()
pop_df.head()
###Output
_____no_output_____
###Markdown
Step 2: Acquire article quality predictions
In this step we pull data on Wikipedia article quality predictions using the ORES REST API (see README for details):
* The ORES API can only pull batches of 50 articles (identified by rev ID) at once, so we start by creating a dataframe of rev IDs grouped into batches of 50
* Then we pull from the API one batch at a time, writing the results to article_quality_ratings.json
###Code
## set up the rev ids to iterate over
# get rev ids
rev_ids = pd.DataFrame(art_df['rev_id'])
# add a column for batch number
# we want to pull batches of no more than 50 responses at a time
rev_ids['batch_no'] = rev_ids.index % math.ceil(len(rev_ids) / 50)
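# taking index % ceil(N/50) spreads the rev ids across that many batch numbers,
# so every batch_no value maps to at most 50 rev ids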
# verify the batch size
print("max batch size: {0}".format(len(rev_ids[rev_ids['batch_no'] == 0])))
## set up the API call
# define headers
headers = {
'User-Agent': 'https://github.com/lawrywill',
'From': '[email protected]'
}
# set url endpoint
url = "https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}"
# define a function for calling API for a batch of revids
def call_api(url, batch):
# get the list of rev ids in the batch and concatenate them together
batch_ids = rev_ids[rev_ids['batch_no'] == batch]['rev_id']
batch = "|".join(str(x) for x in batch_ids)
# call the api
call = requests.get(url.format(rev_ids=batch), headers=headers)
response = call.json()
return response
# function for calling api and writing results to Json
def call_and_write(url, batch, filename):
# call api to get data
data = call_api(url, batch)
# if this isn't the first batch, we want to append to the existing data rather than overwrite
if batch != 0:
with open(filename) as data_file:
old_data = json.load(data_file)
data['enwiki']['scores'].update(old_data['enwiki']['scores'])
jsonString = json.dumps(data, indent = 2)
jsonFile = open(filename, "w")
jsonFile.write(jsonString)
jsonFile.close()
# loop through all batches collecting data
for i in range(max(rev_ids['batch_no']) + 1):  # + 1 so the final batch number is also processed
call_and_write(url, i, 'article_quality_ratings.json')
print("batch {0} complete".format(i))
###Output
batch 0 complete
batch 1 complete
batch 2 complete
... (batches 3 through 434 complete) ...
batch 435 complete
batch 436 complete
###Markdown
Step 3: Clean the article, population, and quality data
In this step we prepare the article and population data for joining. See below for further information on how each of the 3 datasets (article, population, and quality predictions) is manipulated.
Article data
To clean the article data, we remove rows where the 'page' column starts with "Template:" since these are not Wikipedia articles
###Code
# remove all rows that start with "Template:" and reset the index
art_df = art_df[~art_df['page'].str.contains("Template:")]
art_df = art_df.reset_index(drop = True)
# verify that this worked
art_df.head()
###Output
_____no_output_____
###Markdown
Population data
The population data has a few columns we don't need, so we remove them now to make the data cleaner when joined later
###Code
# select only the Name and Population columns
pop_df = pop_df[['Name', 'Population']]
# verify the new format
pop_df.head()
###Output
_____no_output_____
###Markdown
Quality prediction data
To clean the quality data, we read in the JSON file and manipulate it so that we end up with a table that only has two columns: the rev ID and the quality prediction
###Code
# read in quality dataset
with open('article_quality_ratings.json','r') as f:
data = json.loads(f.read())
qua_df = pd.json_normalize(data['enwiki']['scores'])
# find the columns in the quality list which contain quality predictions
pred_cols = [col for col in qua_df.columns if 'prediction' in col]
# filter the dataframe to have only the columns with predictions, then convert from wide to long
qua_df = qua_df[pred_cols].melt()
# rename columns
qua_df = qua_df.rename(columns = {'variable':'rev_id','value':'quality'})
# change the raw names, leaving only the rev id
qua_df['rev_id'] = qua_df['rev_id'].str.slice(0,9).astype('int')
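# note: slicing the first 9 characters assumes every rev id is exactly 9 digits long;
# splitting the column name on '.' and taking the first piece would be more robust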
# inspect the new dataframe
qua_df.head()
###Output
_____no_output_____
###Markdown
Step 4: Combine the datasets
In this step we combine our data together into a final data set for analysis by doing the following:
* Merge the article data and quality predictions together, using revision ID
* Keep a record of the articles with no quality prediction in wp_wpds-no_quality_prediction.csv
* Merge the combined Wikipedia data together with the population data, using country name
* Keep a record of the Wikipedia data that doesn't merge with the population data, and vice versa, in wp_wpds_countries-no_match.csv
* Drop the unmerged results from the final combined data, then fix the name, order, and data type of the columns
* Save the final data as wp_wpds_politicians_by_country.csv
###Code
# merge the article data with the quality predictions pulled from ORES
df = art_df.merge(qua_df, on = ['rev_id'], how = 'left')
# keep a record of the articles that didn't have a quality prediction
no_qua = df[df['quality'].isna()]
# now merge the wikipedia and population data together on country
df = df.merge(pop_df, left_on = ['country'], right_on = ['Name'], how = 'outer')
# keep a record of the rows that failed to merge on either side
unmatched = df[df['rev_id'].isna() | df['Population'].isna()]
# create a final df without the unmerged rows
wp_df = df.dropna()
# fix the names, orders, and data types of columns
wp_df = wp_df.rename(columns={'page':'article_name','rev_id':'revision_id','quality':'article_quality_est.','Population':'population'})
wp_df = wp_df[['country','article_name','revision_id','article_quality_est.','population']]
wp_df['revision_id'] = wp_df['revision_id'].astype('int')
wp_df['population'] = wp_df['population'].astype('int')
# write all records to csv
no_qua.to_csv('wp_wpds-no_quality_prediction.csv', index = False)
unmatched.to_csv('wp_wpds_countries-no_match.csv', index = False)
wp_df.to_csv('wp_wpds_politicians_by_country.csv', index = False)
# inspect our final dataset
wp_df.head()
###Output
_____no_output_____
###Markdown
Step 5: Analysis
In this step, for each country and region, we calculate the coverage (number of high quality articles per population) and the relative quality (proportion of high quality articles to total articles).
Definitions of these metrics:
* __Coverage__: High quality articles per population = (Number of FA or GA articles) / (Population)
* __Relative Quality__: Proportion of high quality articles = (Number of FA or GA articles) / (Number of total articles)
To do this we follow these steps:
* Add the regional populations to our data, which currently only shows country populations
* Manipulate the data so that we have the count of high quality articles vs. other articles
* Calculate the two metrics mentioned above at the country and region level
###Code
# read in the combined Wikipedia data
wp_df = pd.read_csv('wp_wpds_politicians_by_country.csv')
wp_df.head()
###Output
_____no_output_____
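###Markdown
As a quick illustration of the two metrics defined above, here is a minimal sketch with made-up numbers (not data from this analysis):
###Code
# Hypothetical example: 5 FA/GA articles, 200 total articles, population of 1,000,000
hq_articles, total_articles, population = 5, 200, 1_000_000
coverage = hq_articles / population              # high quality articles per person
relative_quality = hq_articles / total_articles  # share of articles that are FA or GA
print(coverage, relative_quality)                # 5e-06 0.025
###Output
_____no_output_____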
###Markdown
Since we don't have regional populations included in the current dataset, we need to start by mapping each country to its region and getting the total population for that region
###Code
#create a dataset that will map individual countries to regions and sub-regions
reg_df = pop_df
reg_df['region'] = np.nan
reg_df['region_population'] = np.nan
name = reg_df['Name'][0]
pop = reg_df['Population'][0]
# loop through the dataframe, setting the region name & population
for i in range(len(reg_df)):
# set the value based on the region name
reg_df['region'][i] = name
reg_df['region_population'][i] = pop
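    # note: chained indexing like reg_df['region'][i] = name can trigger SettingWithCopyWarning;
    # reg_df.loc[i, 'region'] = name is the more idiomatic way to write this assignment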
# update the name if we've come to a new region heading
# we do this after setting the value so that regions get correctly assigned
# to higher level regions - e.g. 'North Africa' assigned to 'Africa'
if reg_df['Name'][i].isupper():
name = reg_df['Name'][i]
pop = reg_df['Population'][i]
# turn these results into a dataset mapping countries to regions
reg_df = reg_df.rename(columns={'Name':'country'})
reg_df['region_population'] = reg_df['region_population'].astype('int')
reg_df = reg_df[['country','region','region_population']]
# merge our wikipedia data with the region data
df = wp_df.merge(reg_df, on=['country'], how='left')
df.head()
###Output
_____no_output_____
###Markdown
Now we can manipulate the data, using separate country and region level DFs, to get the count of high quality (and total) articles
###Code
# define articles as 'high quality' or not
# 'high quality' articles are defined as FA or GA
df['hq_article'] = df['article_quality_est.'].str.contains('FA') | df['article_quality_est.'].str.contains('GA')
# create a table summarizing how many good vs. bad articles there are by country
agg = df.groupby(
['country','population','hq_article']
).agg(
{'revision_id':'count'}
).reset_index()
# pivot so that we get columns for good/bad articles
agg = agg.pivot(
index = ['country','population'],
columns = 'hq_article',
values = 'revision_id'
).fillna(0).reset_index()
# set column names after aggregation
agg.columns = ['country','population','other_articles','hq_articles']
# create a column for total articles
agg['total_articles'] = agg['hq_articles'] + agg['other_articles']
# specify this for country
co_agg = agg
# inspect final df
co_agg.head()
# do the same aggregation as above, but at a region level
agg = df.groupby(
['region','region_population','hq_article']
).agg(
{'revision_id':'count'}
).reset_index()
# pivot so that we get columns for good/bad articles
agg = agg.pivot(
index = ['region','region_population'],
columns = 'hq_article',
values = 'revision_id'
).fillna(0).reset_index()
# set column names after aggregation
agg.columns = ['region','region_population','other_articles','hq_articles']
# create a column for total articles
agg['total_articles'] = agg['hq_articles'] + agg['other_articles']
# specify this for region
rg_agg = agg
# inspect final df
rg_agg.head()
###Output
_____no_output_____
###Markdown
Finally we calculate our metrics: high quality articles per population, and proportion of high quality articles
###Code
# create the calculated columns we need for high quality articles per population, and proportion of good articles
co_agg['coverage'] = co_agg['hq_articles'] / co_agg['population']
co_agg['relative_quality'] = co_agg['hq_articles'] / co_agg['total_articles']
rg_agg['coverage'] = rg_agg['hq_articles'] / rg_agg['region_population']
rg_agg['relative_quality'] = rg_agg['hq_articles'] / rg_agg['total_articles']
co_agg.head()
rg_agg.head()
###Output
_____no_output_____
###Markdown
Step 6: Results
In this step we show the results of our analysis, in the form of 6 tables: the top and bottom 10 countries by coverage and relative quality, and the top and bottom 10 regions by coverage. See the README for a writeup of these results.
Top 10 Countries by Coverage
###Code
co_agg.sort_values(by='coverage', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 Countries by Coverage
###Code
co_agg.sort_values(by='coverage', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 Countries by Relative Quality
###Code
co_agg.sort_values(by='relative_quality', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 Countries by Relative Quality
###Code
co_agg.sort_values(by='relative_quality', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 Regions by Coverage
###Code
rg_agg.sort_values(by='coverage', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 Regions by Coverage
###Code
rg_agg.sort_values(by='coverage', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
A2: Bias in data
The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. This notebook walks through the data transformations and steps necessary to perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries. The analysis will consist of a series of tables that show:
1. The countries with the greatest and least coverage of politicians on Wikipedia compared to their population.
2. The countries with the highest and lowest proportion of high quality articles about politicians.
3. A ranking of geographic regions by articles-per-person and proportion of high quality articles.
[https://wiki.communitydata.science/Human_Centered_Data_Science_(Fall_2019)/AssignmentsA2:_Bias_in_data]
###Code
# import necessary packages
import pandas as pd
from functools import reduce
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Getting the article and population data
We use two datasets:
1. The Wikipedia politicians by country dataset - ./data_files/page_data.csv
2. The population dataset - ./data_files/WPDS_2018_data.csv
Cleaning the data
###Code
# Load necessary data files
page_data_path = './data_files/page_data.csv'
wpds_data_path = './data_files/WPDS_2018_data.csv'
page_data = pd.read_csv(page_data_path)
raw_wpds_data = pd.read_csv(wpds_data_path)
# The dataset contains some page names that start with the string "Template:".
# Since these pages are not Wikipedia articles, they are not included in the analysis.
page_data = page_data[~page_data['page'].str.contains(r'Template')]
page_data.head()
raw_wpds_data.head()
# Filter out countries that are all CAPS and rename the column
wpds_data = raw_wpds_data[~raw_wpds_data['Geography'].str.isupper()]
wpds_data.rename(columns={'Geography':'country', 'Population mid-2018 (millions)': 'population'}, inplace=True)
wpds_data['population'] = wpds_data['population'].apply(lambda x: float(x.replace(",", ""))*10e5)
wpds_data.head()
# Merge wpds data with page_data
merged_df = pd.merge(page_data, wpds_data, on='country')
merged_df.rename(columns={'Population mid-2018 (millions)': 'population'}, inplace=True)
merged_df.head()
len(set(page_data.country.unique()) & set(wpds_data['country'].unique()))
###Output
_____no_output_____
###Markdown
Getting article quality predictions
We use the [ORES](https://www.mediawiki.org/wiki/ORES) API to estimate the quality of an article. This API returns a probability value for each of these categories:
1. FA - Featured article
2. GA - Good article
3. B - B-class article
4. C - C-class article
5. Start - Start-class article
6. Stub - Stub-class article
The following function is derived from the [sample notebook](https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb)
###Code
import requests
import json
headers = {'User-Agent' : 'https://github.com/deepthimhegde', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
def extract_class_labels(response):
for rev_id, val in response.items():
try:
rev_scores[rev_id] = val["wp10"]["score"]["prediction"]
except:
pass
page_data_rev_ids = page_data["rev_id"].tolist()
print(len(page_data_rev_ids))
rev_scores = {}
step = 50
for i in range(0, len(page_data_rev_ids), step):
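    # each iteration scores one batch of at most 50 rev ids and adds the predictions to rev_scores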
response = get_ores_data(page_data_rev_ids[i: i+step], headers)
extract_class_labels(response["enwiki"]["scores"])
article_quality_df = pd.DataFrame(list(rev_scores.items()), columns=['rev_id', 'article_quality'])
article_quality_df.head()
article_quality_df.to_csv("./data_files/article_quality.csv")
###Output
_____no_output_____
###Markdown
Combining the datasets
###Code
article_quality_df['rev_id'] = article_quality_df['rev_id'].astype('int')
merged_quality_df = pd.merge(page_data, article_quality_df, on='rev_id', how='left')
print(len(merged_quality_df))
(merged_quality_df).head()
wpds_data_no_match = merged_quality_df[merged_quality_df['article_quality'].isnull()]
len(wpds_data_no_match)
wpds_data_no_match.head()
wpds_output_file_path = './data_files/wp_wpds_countries-no_match.csv'
wpds_data_no_match.to_csv(wpds_output_file_path)
merged_quality_df = merged_quality_df[~merged_quality_df['article_quality'].isnull()]
len(merged_quality_df)
merged_quality_df.to_csv('./data_files/wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Analysis of countries with the greatest and least coverage of politicians on Wikipedia compared to their population.
###Code
all_articles_count = merged_quality_df.groupby('country').size()
all_articles_count = all_articles_count.reset_index()
all_articles_count.columns = ['country', 'all_articles_count']
print(len(all_articles_count))
all_articles_count.head()
high_qual_articles_count = merged_quality_df[(merged_quality_df['article_quality']=='FA') | (merged_quality_df['article_quality']=='GA')].groupby('country').size()
high_qual_articles_count = high_qual_articles_count.reset_index()
high_qual_articles_count.columns = ['country', 'high_qual_articles_count']
print(len(high_qual_articles_count))
high_qual_articles_count.head()
_ = pd.merge(all_articles_count, high_qual_articles_count, on='country', how='left')
coverage_df = pd.merge(_, wpds_data, on='country')
coverage_df = coverage_df.fillna(0)
coverage_df.head()
coverage_df['coverage'] = coverage_df['all_articles_count']/coverage_df['population']
coverage_df['quality'] = coverage_df['high_qual_articles_count']/coverage_df['all_articles_count']
len(coverage_df)
coverage_df.to_csv('./data_files/coverage.csv')
country_region_mapper = {}
region = ''
for country in raw_wpds_data['Geography']:
if country.isupper():
region = country
continue
country_region_mapper[country] = region
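    # all-caps rows in the WPDS sheet are region headers, so each country row is mapped to the most recent region seen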
country_region_df = pd.DataFrame(list(country_region_mapper.items()), columns=['country', 'region'])
region_df = pd.merge(coverage_df, country_region_df)
region_df = region_df.drop(columns = ['coverage', 'quality'])
region_df = region_df.groupby('region').sum()
region_df['coverage'] = region_df['all_articles_count']/region_df['population']
region_df['quality'] = region_df['high_qual_articles_count']/region_df['all_articles_count']
region_df
###Output
_____no_output_____
###Markdown
Analysis
- Analysis of countries with the greatest and least coverage of politicians on Wikipedia compared to their population.
###Code
# 1.
coverage_df.sort_values(by=['coverage'], ascending=False)[0:10].reset_index().drop('index', axis=1)
# 2.
coverage_df.sort_values(by=['coverage'])[0:10].reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
- Analysis of countries with the highest and lowest proportion of high quality articles about politicians.
###Code
# 3.
coverage_df.sort_values(by=['quality'], ascending=False)[0:10].reset_index().drop('index', axis=1)
# 4.
coverage_df.sort_values(by=['quality'])[0:10].reset_index().drop('index', axis=1)
###Output
_____no_output_____
###Markdown
- Ranking of geographic regions (in descending order) in terms of articles-per-population, and in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
# 5.
region_df.sort_values(by=['coverage'], ascending=False).reset_index()
# 6.
region_df.sort_values(by=['quality'], ascending=False).reset_index()
###Output
_____no_output_____
###Markdown
**A2 - Bias in Data Assignment**
###Code
import pandas as pd
import csv
import numpy as np
###Output
_____no_output_____
###Markdown
**Getting and Cleaning Data** Download the raw dataset from the Wikipedia politicians by country dataset (https://figshare.com/articles/dataset/Untitled_Item/5513449) and the population data from the world population data sheet published by the Population Reference Bureau (https://www.prb.org/international/indicator/population/table/).
###Code
page_data = pd.read_csv('page_data.csv')
# remove rows whose 'page' value contains 'Template'
temp = page_data[page_data['page'].str.contains('Template')]
list_template = list(temp.page)
list_all_page = list(page_data.page)
rest_page = list(set(list_all_page) ^ set(list_template))
page = page_data[page_data.page.isin(rest_page)]
page
population_data = pd.read_csv('WPDS_2020_data.csv')
#separate region and country values
region_population = population_data.loc[population_data['Type'] == 'Sub-Region']
country_population1 = population_data.loc[population_data['Type'] == 'Country']
country_population1
region_population
# Because this project needs to analyze the population and ORES score of each region,
#it is necessary to match each country with the sub-region it belongs to.
country_index = country_population1.index
region_index = np.array([1,64,67,109,166,216])
#AFRICA, NORTHERN AMERICA , LATIN AMERICA AND THE CARIBBEAN, ASIA, EUROPE, OCEANIA
sub_region_index = region_population.index
countryName = []
countryPopulation = []
sub_regionName = []
sub_regionPopulation =[]
regionName = []
regionPopulation =[]
for x in country_index:
y = sub_region_index[int(np.sum(x >sub_region_index ))-1]
z = region_index[int(np.sum(x > region_index ))-1]
countryName.append(country_population1['Name'][x])
countryPopulation .append(country_population1['Population'][x])
sub_regionName.append(region_population['Name'][y])
sub_regionPopulation.append(region_population['Population'][y])
regionName.append(region_population['Name'][z])
regionPopulation.append(region_population['Population'][z])
country_population = pd.DataFrame(data={'country': countryName,
'country_population': countryPopulation,
'subRegion':sub_regionName,
'subRegion_population':sub_regionPopulation,
'region': regionName,
'region_population': regionPopulation})
country_population
###Output
_____no_output_____
###Markdown
**Getting Article Quality Predictions** Use the [REST API endpoint](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model) to get the predicted quality scores for each article in the Wikipedia dataset.
FA - Featured article
GA - Good article
B - B-class article
C - C-class article
Start - Start-class article
Stub - Stub-class article
###Code
import json
import requests
import pandas as pd
from pandas import json_normalize
endpoint_pred = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={rev_id}'
headers = {
'User-Agent': 'https://github.com/SherryL-star',
'From': '[email protected]'
}
def api_call(endpoint, rev_id):
call = requests.get(endpoint.format(rev_id = rev_id), headers=headers)
response = call.json()
return response
# convert the 'rev_id' column of 'page' to a list
rev_id_list = page['rev_id'].to_list()
# divide revid list into batches, 50 ids each batch
def revid_divide (size, revid_list):
batch = []
count = 0
while count < len(revid_list):
start_number = count
if size + count < len(revid_list):
end_number = size + count
else:
end_number = len(revid_list)
batch.append('|'.join(str(x) for x in revid_list[start_number:end_number]))
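        # ORES accepts multiple rev ids in a single request when they are joined with '|'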
count = end_number
return batch
batches = revid_divide(50, rev_id_list)
# Get the rev ids and their predicted quality score. Rev ids without a prediction score still need to be stored
def page_score(json_file):
rev_ids = []
pred_score = []
noscore_revids =[]
for x in json_file ['enwiki']['scores']:
try:
pred_score.append(json_file['enwiki']['scores'][x]['articlequality']['score']['prediction'])
rev_ids.append(x)
except KeyError:
noscore_revids.append(x)
return (rev_ids,pred_score,noscore_revids)
revids_all = []
predScore_all = []
noScore_all =[]
for x, b in enumerate (batches):
json_file = api_call(endpoint_pred, b)
(rev_ids,pred_score,noscore_revids) = page_score(json_file)
revids_all.extend(rev_ids)
predScore_all.extend(pred_score)
noScore_all.extend(noscore_revids)
page_pred_score = pd.DataFrame({'rev_id':revids_all, 'Score':predScore_all})
page_noScore = page[page['rev_id'].isin(noScore_all)]
page_pred_score
page_noScore
###Output
_____no_output_____
###Markdown
**Combining the Datasets** Merge the wikipedia data and population data together, and generate .csv files.
Columns:
- country
- article_name
- revision_id
- article_quality_est
- population
###Code
page_pred_score['rev_id'].dtypes
page['rev_id'].dtypes
page_pred_score['rev_id'] = page_pred_score['rev_id'].astype(int)
# first, combine country, page, and their relevant prediction score together
Country_page_score = page.merge(page_pred_score, on = 'rev_id')
Country_page_score
# second, combine the country page info and their population together
page_population = Country_page_score.merge(country_population, on = 'country')
page_population.shape[0]
# we need to store any rows that do not have matching data and double check whether there are any typos in the country names
# 1. find the rev_ids that are not in 'page_population'
dismatch_1 = page[~page['rev_id'].isin(page_population['rev_id'])]
# 2. double check whether these rev_ids are in the list of page_noScore
dismatch_2 = dismatch_1[~dismatch_1['rev_id'].isin(page_noScore['rev_id'])]
# show the country names to check for typos
dismatch_2['country'].unique()
country_population['country']
#correct country name in Country_page_score based on the name in country_population.
#'Rhodesia'(1965-1979); Montserratian, Pitcairn Islands and Cape Colony belong to Uk;
#not found in country_population:Ivorian,Chechen,Incan,Jersey,Niuean,Guernsey,
#South Ossetian,Cook Island,Tokelauan,Dagestani,Greenlandic,Ossetian,Somaliland and Rojava.
countryName_correction = {'Czech Republic': 'Czechia',
'Salvadoran': 'El Salvador',
'Congo, Dem. Rep. of': 'Congo, Dem. Rep.',
'East Timorese': 'Timor-Leste',
'South Korean': 'Korea, South',
'Samoan': 'Samoa',
'Saint Kitts and Nevis':'St. Kitts-Nevis',
'Macedonia': 'North Macedonia',
'Saint Lucian': 'Saint Lucia',
'Hondura': 'Honduras',
'Saint Vincent and the Grenadines': 'St. Vincent and the Grenadines',
'Omani': 'Oman',
'Swaziland': 'eSwatini',
'Palauan': 'Palau'
}
#replace country name
Country_page_score['country'] = Country_page_score['country'].replace(to_replace = countryName_correction)
new_page_population = Country_page_score.merge(country_population, on = 'country')
new_page_population
# update the mismatch data
# 1. find the rev_ids that are not in 'new_page_population'
dismatch_3 = page[~page['rev_id'].isin(new_page_population['rev_id'])]
# 2. double check whether these rev_ids are in the list of page_noScore
dismatch_4 = dismatch_3[~dismatch_3['rev_id'].isin(page_noScore['rev_id'])]
dismatch_4
###Output
_____no_output_____
###Markdown
rename columns:
- country
- article_name
- revision_id
- article_quality_est
- population
###Code
new_page_population = new_page_population[['country', 'page', 'rev_id', 'Score', 'country_population', 'subRegion', 'subRegion_population', 'region', 'region_population']]
dismatch_4 = dismatch_4[['country', 'page', 'rev_id']]
new_page_population.rename(columns={'page':'article_name','rev_id':'revision_id','Score':'article_quality_est'}, inplace = True)
new_page_population
dismatch_4.rename(columns={'page':'article_name','rev_id':'revision_id'}, inplace = True)
dismatch_4
# save to .csv file
wpds_politicians_by_country = new_page_population[['country','article_name','revision_id','article_quality_est','country_population']]
wpds_politicians_by_country.to_csv('wp_wpds_politicians_by_country.csv')
dismatch_4.to_csv('wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
**Analysis** Calculate the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region.
High Quality: ORES predicted score "FA" or "GA"
###Code
# calculate the total page (article) number for each country
page_number = pd.pivot_table(data=new_page_population, index=['country', 'country_population'],
values='article_name', aggfunc='count')
page_number.reset_index(inplace=True)
# calculate the total page (article) number for each sub-region
page_number2 = pd.pivot_table(data=new_page_population, index=['subRegion', 'subRegion_population'],
values='article_name', aggfunc='count')
page_number2.reset_index(inplace=True)
# calculate the total page (article) number for each region
page_number3 = pd.pivot_table(data=new_page_population, index=['region', 'region_population'],
values='article_name', aggfunc='count')
page_number3.reset_index(inplace=True)
page_number
page_number.rename(columns={'article_name': 'totalNum_article'}, inplace = True)
page_number2.rename(columns={'article_name': 'totalNum_article'}, inplace = True)
page_number3.rename(columns={'article_name': 'totalNum_article'}, inplace = True)
page_number
###Output
_____no_output_____
###Markdown
FA - Featured article
GA - Good article
B - B-class article
C - C-class article
Start - Start-class article
Stub - Stub-class article
###Code
# count articles by predicted score category for each country
score_category = pd.pivot_table(data=new_page_population, index=['country', 'country_population','article_quality_est'],
values='article_name', aggfunc='count')
score_category.reset_index(inplace=True)
# count articles by predicted score category for each sub-region
score_category2 = pd.pivot_table(data=new_page_population, index=['subRegion', 'subRegion_population','article_quality_est'],
values='article_name', aggfunc='count')
score_category2.reset_index(inplace=True)
# count articles by predicted score category for each region
score_category3 = pd.pivot_table(data=new_page_population, index=['region', 'region_population','article_quality_est'],
values='article_name', aggfunc='count')
score_category3.reset_index(inplace=True)
score_category
score_category.rename(columns={'article_name': 'totalNum'}, inplace = True)
score_category2.rename(columns={'article_name': 'totalNum'}, inplace = True)
score_category3.rename(columns={'article_name': 'totalNum'}, inplace = True)
score_category
# sum up 'FA' and 'GA' article for each country
high_quality = score_category[(score_category['article_quality_est']=='FA') |
(score_category['article_quality_est']=='GA')]
num_high_quality = pd.pivot_table(data=high_quality,
index=['country', 'country_population'],
values='totalNum', aggfunc='sum')
num_high_quality.reset_index(inplace=True)
num_high_quality.rename(columns={'totalNum': 'num_highQuality_article'}, inplace = True)
num_high_quality
# sum up 'FA' and 'GA' article for each sub-region
high_quality2 = score_category2[(score_category2['article_quality_est']=='FA') |
(score_category2['article_quality_est']=='GA')]
num_high_quality2 = pd.pivot_table(data=high_quality2,
index=['subRegion', 'subRegion_population'],
values='totalNum', aggfunc='sum')
num_high_quality2.reset_index(inplace=True)
num_high_quality2.rename(columns={'totalNum': 'num_highQuality_article'}, inplace = True)
num_high_quality2
# sum up 'FA' and 'GA' article for each region
high_quality3 = score_category3[(score_category3['article_quality_est']=='FA') |
(score_category3['article_quality_est']=='GA')]
num_high_quality3 = pd.pivot_table(data=high_quality3,
index=['region', 'region_population'],
values='totalNum', aggfunc='sum')
num_high_quality3.reset_index(inplace=True)
num_high_quality3.rename(columns={'totalNum': 'num_highQuality_article'}, inplace = True)
num_high_quality3
#Combine 'page_number' and 'num_high_quality' together, replace Nan with 0
aggregate_table = page_number.merge(num_high_quality, on=['country', 'country_population'],how = 'left')
aggregate_table = aggregate_table.fillna(0)
aggregate_table
#Combine 'page_number2' and 'num_high_quality2' together, replace Nan with 0
aggregate_subRegion = page_number2.merge(num_high_quality2, on=['subRegion', 'subRegion_population'],how = 'left')
aggregate_subRegion = aggregate_subRegion.fillna(0)
aggregate_subRegion
#Combine 'page_number3' and 'num_high_quality3' together, replace Nan with 0
aggregate_region = page_number3.merge(num_high_quality3, on=['region', 'region_population'],how = 'left')
aggregate_region = aggregate_region.fillna(0)
aggregate_region
# Calculate the percentage of articles-per-population for each country
aggregate_table['articleNum_perCapita'] = aggregate_table['totalNum_article']/aggregate_table['country_population']
aggregate_table['quality_level'] = aggregate_table['num_highQuality_article']/aggregate_table['totalNum_article']
aggregate_table
# Calculate the percentage of articles-per-population for each sub-region
aggregate_subRegion['articleNum_perCapita'] = aggregate_subRegion['totalNum_article']/aggregate_subRegion['subRegion_population']
aggregate_subRegion['quality_level'] = aggregate_subRegion['num_highQuality_article']/aggregate_subRegion['totalNum_article']
aggregate_subRegion
# Calculate the percentage of articles-per-population for each region
aggregate_region['articleNum_perCapita'] = aggregate_region['totalNum_article']/aggregate_region['region_population']
aggregate_region['quality_level'] = aggregate_region['num_highQuality_article']/aggregate_region['totalNum_article']
aggregate_region
###Output
_____no_output_____
###Markdown
**Results** 1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
aggregate_table.nlargest(10,'articleNum_perCapita')
###Output
_____no_output_____
###Markdown
2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
aggregate_table.nsmallest(10,'articleNum_perCapita')
###Output
_____no_output_____
###Markdown
3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
aggregate_table.nlargest(10,'quality_level')
###Output
_____no_output_____
###Markdown
4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
aggregate_table.nsmallest(10,'quality_level')
###Output
_____no_output_____
###Markdown
5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
aggregate_subRegion.sort_values(by=['articleNum_perCapita'], ascending=False)
aggregate_region.sort_values(by=['articleNum_perCapita'], ascending=False)
###Output
_____no_output_____
###Markdown
6. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
aggregate_subRegion.sort_values(by=['quality_level'], ascending=False)
aggregate_region.sort_values(by=['quality_level'], ascending=False)
###Output
_____no_output_____
###Markdown
First we will read in the page data and the wpds data from csv files
###Code
filename_pagedata = "./page_data.csv"
filename_wpds = "./WPDS_2020_data.csv"
df_pagedata = pd.read_csv(filename_pagedata)
df_wpds =pd.read_csv(filename_wpds)
###Output
_____no_output_____
###Markdown
Next we will filter the data so that the page data does not contain any "Template" articles, and so that the WPDS data only contains data at a country granularity
###Code
df_pagedata_filtered = df_pagedata[~df_pagedata['page'].str.startswith('Template')]
df_wpds_cumulative = df_wpds[df_wpds['Name'].str.isupper()]
df_wpds_filtered = df_wpds[~df_wpds['Name'].str.isupper()]
###Output
_____no_output_____
###Markdown
Here we are defining the constants for the api call
###Code
pagedata_endpoint = "https://ores.wikimedia.org/v3/scores/{context}/{revid}/{model}"
headers = {
'User-Agent': 'https://github.com/mayapatward',
'From': '[email protected]'
}
pagedata_params = {"context" : "enwiki",
"revid" : "1234",
"model" : "articlequality",
}
###Output
_____no_output_____
###Markdown
Next we will define the API call. Since we are calling the API for each article individually, this program runs very slowly. So, in order to avoid running this call more than once, I saved the results into a pickle file and then loaded them into a pandas DataFrame called df_pagedata_filtered
###Code
def api_call(endpoint,revid_num, revid_missing_list, page):
call = requests.get(endpoint.format(context = "enwiki", revid = revid_num, model = "articlequality"), headers=headers)
try:
response = call.json()
return response['enwiki']['scores'][str(revid_num)]['articlequality']['score']['prediction']
except:
revid_missing_list.append((revid_num, page))
return None
# revid_missing_list = []
# df_pagedata_filtered["prediction"] = df_pagedata_filtered.apply(lambda row: api_call(pagedata_endpoint,row['rev_id'], revid_missing_list, row['page']), axis =1)
# file='revid_missing.txt'
# with open(file, 'w') as filetowrite:
# for item in revid_missing_list:
# filetowrite.write(f"{str(item[0])}, {item[1]}\n")
# df_pagedata_filtered.to_pickle("./df_pagedata_filtered")
df_pagedata_filtered =pd.read_pickle('../df_pagedata_filtered')
###Output
_____no_output_____
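###Markdown
As a hedged aside (a sketch, not the approach actually used to build `df_pagedata_filtered` above): the ORES scores endpoint also accepts many rev_ids in a single request, joined with '|', which is how other notebooks in this document query it. Batching along those lines should cut the number of HTTP requests dramatically; the `batch_size` below is an assumption, not a documented limit.
###Code
# Sketch only: score rev_ids in batches instead of one call per article.
# Assumes the batch form of the v3 scores endpoint (revids joined with '|')
# and reuses the `headers` dict defined earlier in this notebook.
import requests
def api_call_batched(rev_ids, batch_size=50):
    batch_endpoint = "https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={revids}"
    predictions = {}
    for i in range(0, len(rev_ids), batch_size):
        chunk = rev_ids[i:i + batch_size]
        joined = "|".join(str(r) for r in chunk)
        response = requests.get(batch_endpoint.format(revids=joined), headers=headers).json()
        for rev_id, result in response["enwiki"]["scores"].items():
            # rev_ids without a score carry an 'error' entry instead of 'score'
            predictions[rev_id] = result["articlequality"].get("score", {}).get("prediction")
    return predictions
###Output
_____no_output_____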
###Markdown
Here we are combining the two datasets and removing any rows that do not have matching data. The unmatched rows are stored in "wp_wpds_countries-no_match.csv"
###Code
df_merged = pd.merge(df_pagedata_filtered, df_wpds_filtered, how='outer',
left_on='country', right_on='Name',
indicator=True)
df_merged_nomatch = df_merged.query('_merge != "both"')
df_merged_match = df_merged.query('_merge == "both"')
file='wp_wpds_countries-no_match.csv'
df_merged_nomatch.to_csv(file)
###Output
_____no_output_____
###Markdown
Next we are processing the merged dataframe to have the correct column names and saving this file to wp_wpds_politicians_by_country
###Code
df_merged_match = df_merged_match[['country', 'page', 'rev_id', 'prediction', 'Population']]
df_merged_match.rename(columns={"page": "article_name", "rev_id": "revision_id", "prediction":"article_quality_est",\
"Population":"population"}, inplace = True)
file='wp_wpds_politicians_by_country.csv'
df_merged_match.to_csv(file)
###Output
_____no_output_____
###Markdown
Finally, we will begin to aggregate the data to find the coverage estimates
###Code
df_coverage = df_merged_match.groupby('country').agg({'country':'size', 'population':'mean'})
df_coverage['percent_coverage'] = df_coverage['country']/df_coverage['population']
###Output
_____no_output_____
###Markdown
Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
df_coverage.sort_values('percent_coverage', ascending = False)[:10]
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
df_coverage.sort_values('percent_coverage', ascending = True)[:10]
###Output
_____no_output_____
###Markdown
Next, we will begin to aggregate the data to find the quality estimates
###Code
df_quality = df_merged_match.copy()
df_quality['article_quality_est_count'] = df_quality['article_quality_est'].apply(lambda x: 1 if (x=='GA' or x=='FA') \
else 0)
df_quality = df_quality.groupby('country').agg({'country':'size', 'article_quality_est_count':'sum'})
df_quality['percent_quality'] = df_quality['article_quality_est_count']/df_quality['country']
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
df_quality.sort_values('percent_quality', ascending = False)[:10]
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
df_quality.sort_values('percent_quality', ascending = True)[:10]
###Output
_____no_output_____
###Markdown
Here, we begin to process the data based on geographical region. In order to do that, we need to separate the continent-level data from the sub-region-level data. Next we will use their original index to slice the WPDS data and assign the continent/sub-region name to each row as a new column
###Code
continents = ['AFRICA', 'NORTH AMERICA', 'LATIN AMERICA AND THE CARIBBEAN', 'ASIA','EUROPE', 'OCEANIA']
curr_cont = 'AFRICA'
curr_region = ''
df_wpds["Continent"] = ""
df_wpds["Region"] = ""
df_wpds_new = pd.DataFrame(columns=df_wpds.columns)
for index, row in df_wpds.iterrows():
if index < 2:
df_wpds_new = df_wpds_new.append(row)
continue
elif row['Type'] == 'Sub-Region' and row['Name'] not in continents:
curr_region = row['Name']
elif row['Type'] == 'Sub-Region' and row['Name'] in continents:
curr_cont = row['Name']
curr_region = ''
row['Continent'] = curr_cont
row['Region'] = curr_region
df_wpds_new = df_wpds_new.append(row)
###Output
_____no_output_____
###Markdown
Here, we are processing the data for the coverage analysis
###Code
df_agg = pd.merge(df_pagedata_filtered, df_wpds_new, left_on = 'country', right_on = 'Name', how = 'inner')
cont_agg_5 = pd.merge(df_agg, df_wpds_cumulative[['Name', 'Population']], left_on = 'Continent', right_on ='Name', how = 'left', suffixes=('', '_continent'))
cont_agg_5 = cont_agg_5.groupby('Continent').agg({'country': 'size', 'Population_continent':'mean'})
reg_agg_5 = pd.merge(df_agg, df_wpds_cumulative[['Name', 'Population']], left_on = 'Region', right_on ='Name', how = 'left', suffixes=('', '_region'))
reg_agg_5 = reg_agg_5.groupby('Region').agg({'country': 'size', 'Population_region':'mean'})
cont_agg_5['percent_coverage'] = cont_agg_5['country']/cont_agg_5['Population_continent']
reg_agg_5['percent_coverage'] = reg_agg_5['country']/reg_agg_5['Population_region']
cont_agg_5 = cont_agg_5.reset_index()
reg_agg_5 = reg_agg_5.reset_index()
cont_agg_5 = cont_agg_5.rename(columns = {'Continent': 'Region', 'Population_continent': 'Population_region'})
df_coverage = pd.concat([reg_agg_5, cont_agg_5], ignore_index=True, sort=False)[1:]
df_coverage.sort_values('percent_coverage', ascending = False)[:10]
###Output
_____no_output_____
###Markdown
Finally we are processing the data for the quality analysis
###Code
df_agg_6 =df_agg.copy()
df_agg_6['article_quality_est_count'] = df_agg_6['prediction'].apply(lambda x: 1 if (x=='GA' or x=='FA') \
else 0)
df_agg_6_cont = df_agg_6.groupby('Continent').agg({'Continent':'size', 'article_quality_est_count':'sum'})
df_agg_6_reg = df_agg_6.groupby('Region').agg({'Region':'size', 'article_quality_est_count':'sum'})
df_agg_6_cont = df_agg_6_cont.rename(columns = {'Continent': 'count'}).reset_index()
df_agg_6_cont = df_agg_6_cont.rename(columns = {'Continent': 'Region'})
df_agg_6_reg = df_agg_6_reg.rename(columns = {'Region': 'count'}).reset_index()[1:]
df_quality = pd.concat([df_agg_6_reg, df_agg_6_cont], ignore_index=True, sort=False)
df_quality['percent_quality'] = df_quality['article_quality_est_count']/df_quality['count']
df_quality.sort_values('percent_quality', ascending = False)[:10]
###Output
_____no_output_____
###Markdown
Investigating Bias in Wikipedia Article Counts by Country
Here I explore bias in English Wikipedia's content by looking at the coverage of politicians by country. I investigate using two metrics regarding the proportion of articles about politicians by country:
* **Total Coverage:** proportion of articles compared to the country's population
* **High-Quality Coverage:** proportion of high-quality articles compared to the total number of articles for the country
I report on the extremes of both of these metrics, i.e. countries with the highest and lowest proportions of each metric.
Setup
We run a few lines of code to set up the system before we get going.
###Code
import json
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import requests
%matplotlib inline
###Output
_____no_output_____
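###Markdown
To make the two metrics concrete before touching the data, here is a small worked example with made-up numbers (the article count, population, and high-quality count below are hypothetical, purely for illustration; the real values come from the datasets ingested next).
###Code
# Hypothetical example values, only to illustrate the two metrics
article_count = 50       # politician articles for some country
population = 10000       # that country's population
high_quality_count = 5   # of those articles, how many are rated FA or GA
total_coverage_pct = 100 * article_count / population                 # 0.5% of the population
high_quality_coverage_pct = 100 * high_quality_count / article_count  # 10% of the articles
print(total_coverage_pct, high_quality_coverage_pct)
###Output
_____no_output_____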
###Markdown
Data Ingest
We need a few different datasets for this analysis. For both metrics we need data on English Wikipedia politician pages by country, along with a quality rating for each page. We also need country population data for the first metric. The following sections walk through the data retrieval process.
Population Data
Population data can be downloaded from the Population Reference Bureau (PRB) here: http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14
The data represents world populations for 210 countries as of Mid-2015. Unfortunately this data is copyrighted and therefore I could not include it within this repository. Therefore, if you would like to run this analysis yourself, you should download the data from the link above (click the Excel icon at top right) and save to `./data/raw/`. Alternatively, you can just run the code below, which will attempt to load the file from local storage, or will download the source data from the PRB website if it does not exist locally. Of course, the latter assumes that the resource is still available at the time you choose to do so, which may not be the case.
###Code
# Population data location (if stored locally)
filename = './data/raw/Population Mid-2015.csv'
# check if file exists locally; if not, attempt to download from source website
if not os.path.isfile(filename):
filename = 'http://www.prb.org/RawData.axd?ind=14&fmt=14&tf=76&loc=34235%2c249%2c250%2c251%2c252%2c253%2c254%2c34227%2c255%2c257%2c258%2c259%2c260%2c261%2c262%2c263%2c264%2c265%2c266%2c267%2c268%2c269%2c270%2c271%2c272%2c274%2c275%2c276%2c277%2c278%2c279%2c280%2c281%2c282%2c283%2c284%2c285%2c286%2c287%2c288%2c289%2c290%2c291%2c292%2c294%2c295%2c296%2c297%2c298%2c299%2c300%2c301%2c302%2c304%2c305%2c306%2c307%2c308%2c311%2c312%2c315%2c316%2c317%2c318%2c319%2c320%2c321%2c322%2c324%2c325%2c326%2c327%2c328%2c34234%2c329%2c330%2c331%2c332%2c333%2c334%2c336%2c337%2c338%2c339%2c340%2c342%2c343%2c344%2c345%2c346%2c347%2c348%2c349%2c350%2c351%2c352%2c353%2c354%2c358%2c359%2c360%2c361%2c362%2c363%2c364%2c365%2c366%2c367%2c368%2c369%2c370%2c371%2c372%2c373%2c374%2c375%2c377%2c378%2c379%2c380%2c381%2c382%2c383%2c384%2c385%2c386%2c387%2c388%2c389%2c390%2c392%2c393%2c394%2c395%2c396%2c397%2c398%2c399%2c400%2c401%2c402%2c404%2c405%2c406%2c407%2c408%2c409%2c410%2c411%2c415%2c416%2c417%2c418%2c419%2c420%2c421%2c422%2c423%2c424%2c425%2c427%2c428%2c429%2c430%2c431%2c432%2c433%2c434%2c435%2c437%2c438%2c439%2c440%2c441%2c442%2c443%2c444%2c445%2c446%2c448%2c449%2c450%2c451%2c452%2c453%2c454%2c455%2c456%2c457%2c458%2c459%2c460%2c461%2c462%2c464%2c465%2c466%2c467%2c468%2c469%2c470%2c471%2c472%2c473%2c474%2c475%2c476%2c477%2c478%2c479%2c480'
else:
pass
# load data from .CSV and view structure
population_data = pd.read_csv(filename, skiprows=2, thousands=',')
print('data loaded from ' + filename)
###Output
data loaded from http://www.prb.org/RawData.axd?ind=14&fmt=14&tf=76&loc=34235%2c249%2c250%2c251%2c252%2c253%2c254%2c34227%2c255%2c257%2c258%2c259%2c260%2c261%2c262%2c263%2c264%2c265%2c266%2c267%2c268%2c269%2c270%2c271%2c272%2c274%2c275%2c276%2c277%2c278%2c279%2c280%2c281%2c282%2c283%2c284%2c285%2c286%2c287%2c288%2c289%2c290%2c291%2c292%2c294%2c295%2c296%2c297%2c298%2c299%2c300%2c301%2c302%2c304%2c305%2c306%2c307%2c308%2c311%2c312%2c315%2c316%2c317%2c318%2c319%2c320%2c321%2c322%2c324%2c325%2c326%2c327%2c328%2c34234%2c329%2c330%2c331%2c332%2c333%2c334%2c336%2c337%2c338%2c339%2c340%2c342%2c343%2c344%2c345%2c346%2c347%2c348%2c349%2c350%2c351%2c352%2c353%2c354%2c358%2c359%2c360%2c361%2c362%2c363%2c364%2c365%2c366%2c367%2c368%2c369%2c370%2c371%2c372%2c373%2c374%2c375%2c377%2c378%2c379%2c380%2c381%2c382%2c383%2c384%2c385%2c386%2c387%2c388%2c389%2c390%2c392%2c393%2c394%2c395%2c396%2c397%2c398%2c399%2c400%2c401%2c402%2c404%2c405%2c406%2c407%2c408%2c409%2c410%2c411%2c415%2c416%2c417%2c418%2c419%2c420%2c421%2c422%2c423%2c424%2c425%2c427%2c428%2c429%2c430%2c431%2c432%2c433%2c434%2c435%2c437%2c438%2c439%2c440%2c441%2c442%2c443%2c444%2c445%2c446%2c448%2c449%2c450%2c451%2c452%2c453%2c454%2c455%2c456%2c457%2c458%2c459%2c460%2c461%2c462%2c464%2c465%2c466%2c467%2c468%2c469%2c470%2c471%2c472%2c473%2c474%2c475%2c476%2c477%2c478%2c479%2c480
###Markdown
Let's have a quick look at the data structure.
###Code
population_data.head()
###Output
_____no_output_____
###Markdown
As shown, this data includes six columns. We'll only use two for this analysis: **Location** and **Data**, which represent each country and their populations, respectively.
Page Data
Data on political pages by country can be found here: https://figshare.com/articles/Untitled_Item/5513449
Please see the page above for important information on the dataset. The data is provided in a file named `country.zip`. The zip file includes raw data as a .csv file, as well as a .RProj file and the source code that was used to retrieve the raw data. For simplicity I have unzipped the file manually and saved the raw data file to this repository, at `./data/raw/page_data.csv`. We load this data to memory below, and take a quick look at the data structure.
###Code
# Load page_data from local data
filename = './data/raw/page_data.csv'
page_data = pd.read_csv(filename)
page_data.head(4)
###Output
_____no_output_____
###Markdown
The data includes page names, the country associated with each page, and the ID of the latest revision to each page. We will use the latter two columns for this analysis.
Article Scores from ORES
For our second metric we wish to look at the proportion of high-quality articles compared to the total number of articles for each country. To do this we will need some way to rate the quality of each page in our `page_data` dataset. For this we turn to the Wikimedia ORES API. Documentation for the ORES API can be found here: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context
The API takes a handful of arguments, including project, model, and a string of revision IDs, separated by '|', and returns, among other things, a rating for each revision ID. The rating options, from best to worst, consist of the following:
* **FA:** Featured article
* **GA:** Good article
* **B:** B-class article
* **C:** C-class article
* **Start:** Start-class article
* **Stub:** Stub-class article
For the purposes of this project, we will consider "high-quality" articles to be those rated as either "FA" or "GA". A few setup tasks before we ping the API:
###Code
# set endpoint and headers
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
headers = {'User-Agent' : 'https://github.com/rexthompson', 'From' : '[email protected]'}
# pull out revision IDs from page_data
revids_all = list(page_data['rev_id'])
# set up empty dataframe to hold results for each article
rev_ratings = pd.DataFrame()
###Output
_____no_output_____
###Markdown
Now we'll feed the API 100 revision IDs at a time until we have made it through all IDs. Supposedly the API can handle up to ~150 IDs at a time, but we'll stick with 100 for cleanliness and to reduce the chances of crashing the system. We are interested in English Wikipedia only, and the `wp10` model. The code below will return a DataFrame with revision ID and the rating for each page. We'll print out the ID of any pages that fail to return a valid result (e.g. if the pages have been deleted since `page_data.csv` was created).
###Code
# loop 100 entries at a time and save rev_id and rating to rev_ratings DataFrame
idx_start = 0
idx_end = 100
while idx_start < len(revids_all):
# retrieve and concatenate subset of revids
revids = revids_all[idx_start:idx_end]
revids = '|'.join(str(x) for x in revids)
# pull article data from API
params = {'project' : 'enwiki',
'revids' : revids,
'model' : 'wp10'
}
# make the api call and output the results as JSON
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
# loop through the response and pull out the 100 rev_id's and rating for each page; fill invalid entries w/ NaN
for revid in response['enwiki']['scores']:
try:
# # The following two lines could be used if you wanted to verify the reported ratings
# temp_dict = response['enwiki']['scores'][revid]['wp10']['score']['probability']
# rating = max(temp_dict, key=temp_dict.get)
rating = response['enwiki']['scores'][revid]['wp10']['score']['prediction']
except:
print('unable to load score for ' + revid)
rating = np.nan
rev_ratings = rev_ratings.append({'revid':revid, 'rating':rating}, ignore_index=True)
# NOTE: results are not returned in the same order as they were passed to the API!
# NOTE: we will handle this by simply doing a merge with the original dataset, so order won't matter
# update indexes
idx_start += 100
idx_end = min(idx_start+100, len(revids_all))
###Output
unable to load score for 806811023
unable to load score for 807367030
unable to load score for 807367166
unable to load score for 807484325
###Markdown
We see that the following rev_ids do not return a valid result from ORES:
* 806811023
* 807367030
* 807367166
* 807484325
That's no problem, we'll address the handling of these articles later. Let's see how the data looks.
###Code
rev_ratings.head()
###Output
_____no_output_____
###Markdown
Data Merge Now we need to perform a few merges to get a good robust dataset for our analysis.First we wish to add the rating for each page (as shown above) to the `page_data` dataframe. This is a simple operation, but before we proceed we need to do a bit of cleanup. The `revid` variable in the `rev_ratings` DataFrame consists of strings (from the JSON output), but the `revid` column in the `page_data` DataFrame is integers. We convert `revid` from string to integer so we can more easily merge these two dataframes.
###Code
rev_ratings['revid'] = pd.to_numeric(rev_ratings['revid'], errors='coerce')
###Output
_____no_output_____
###Markdown
Now we can merge the two dataframes. We'll also drop the redundant column at the same time.
###Code
page_data_with_rating = page_data.merge(rev_ratings, left_on='rev_id', right_on='revid').drop('revid', 1)
###Output
_____no_output_____
###Markdown
Let's have a look at the new merged dataframe, which we call `page_data_with_rating`.
###Code
page_data_with_rating.head()
###Output
_____no_output_____
###Markdown
This looks great! We can move on to the second merge. We now wish to merge the table above with the population data which, as you may recall, has the following structure:
###Code
population_data.head()
###Output
_____no_output_____
###Markdown
Clearly, we want to merge this table with the new `page_data_with_rating` dataframe on the shared country columns. Let's go ahead and do that now, noting that we have to specify the column names since they are different between the two DataFrames (i.e. `Location` vs. `country`).
###Code
merged_df = population_data.merge(page_data_with_rating, left_on='Location', right_on='country')
###Output
_____no_output_____
###Markdown
Let's have a look at the data structure of our new merged_df DataFrame.
###Code
merged_df.head()
###Output
_____no_output_____
###Markdown
Looking good! However, we must call out something important at this step. We just merged data from two separate sources. In doing so, we trusted that we would find matches between the `Location` column in `population_data` and the `country` column in `page_data_with_rating`. Let's check our country count in our new `merged_df` DataFrame to see if we were able to match up all 210 countries.
###Code
len(merged_df.groupby('country').size())
###Output
_____no_output_____
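###Markdown
Before interpreting that count, here is a hedged side-check (a sketch, not part of the original analysis): an outer merge with `indicator=True`, similar to what another notebook in this document does, lists which names failed to match between the two sources.
###Code
# Sketch only: find names that appear in one source but not the other
check = population_data.merge(page_data_with_rating,
                              left_on='Location', right_on='country',
                              how='outer', indicator=True)
unmatched = check.loc[check['_merge'] != 'both', ['Location', 'country', '_merge']]
unmatched.drop_duplicates().head(10)
###Output
_____no_output_____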
###Markdown
Hmm, not quite 210, is it? While not perfect, this is pretty good, and for the sake of this assignment we were instructed to "remove the rows that do not have matching data". So we'll simply ignore the 23 countries that did not match up perfectly by name between the two data sources. Now, before we continue, let's do a little cleanup to get rid of some of the columns we don't need, and reorder the ones we do to be more intuitive.
###Code
# pull out columns of interest
merged_df = pd.DataFrame({'country':merged_df['Location'],
'population':merged_df['Data'],
'article_name':merged_df['page'],
'revision_id':merged_df['rev_id'],
'article_quality':merged_df['rating']})
# convert population to a numeric type (assign the result so the conversion is kept)
merged_df['population'] = pd.to_numeric(merged_df['population'])
# reorder columns
merged_df = merged_df[['country',
'population',
'article_name',
'revision_id',
'article_quality']]
###Output
_____no_output_____
###Markdown
Let's see how this new, cleaned dataframe looks.
###Code
merged_df.head()
###Output
_____no_output_____
###Markdown
This is looking good, so for the sake of reproducibility (and per the assignment instructions) let's save this data to CSV. The code chunk below checks if a file by the name of `population_and_article_quality_data.csv` already exists in the `./data/` folder. It saves `merged_df` to such a file if it does not exist, or, if the file already exists, it imports the file and saves it to `merged_df`. Thus, if you are duplicating or expanding upon this analysis and don't want to wait on the API call above, you can simply start at this point by loading in the `population_and_article_quality_data.csv` which is saved in the `./data/` folder on the GitHub repository for this project.
###Code
# set filename for combined data CSV
filename = './data/population_and_article_quality_data.csv'
# check if file already exists; load if so, create if not
if os.path.isfile(filename):
merged_df = pd.read_csv(filename)
print('loaded CSV data from ' + filename)
else:
merged_df.to_csv(filename, index=False)
print('saved CSV data to ' + filename)
###Output
saved CSV data to ./data/population_and_article_quality_data.csv
###Markdown
We should now be all set to perform some analyses on this data.
Analysis
So, now we have a good DataFrame with country and ratings for each article, and population for each country. Let's go about creating data for the two metrics we identified at the beginning of this notebook, i.e. **Total Coverage** and **High-Quality Coverage**.
Total Coverage
**Total Coverage** seeks to calculate the proportion of political articles for each country compared to each country's population. Thus, for this task we'll need article count and population for each country. To get article counts, we group our `merged_df` DataFrame by country and count the number of rows. This will return the number of articles per country.
###Code
# get number of articles per country
articles_per_country = merged_df.groupby(['country']).size().reset_index(name='article_count').set_index('country')
articles_per_country.head()
###Output
_____no_output_____
###Markdown
For our population data, we could use the original `population_data` DataFrame from above. However, in the spirit of reproducibility, and to enable "checkpointing" as described above, we will rebuild this dataframe from `population_and_article_quality_data.csv`.
###Code
# rebuild population data
population_data = pd.DataFrame(merged_df[['country','population']])
population_data.drop_duplicates(inplace=True)
population_data.set_index('country', inplace=True)
population_data.head()
###Output
_____no_output_____
###Markdown
We now have number of articles per country and population per country. Let's join these two datasets.
###Code
article_count_and_population = population_data.merge(articles_per_country, left_index=True, right_index=True, how='left')
article_count_and_population.head()
###Output
_____no_output_____
###Markdown
Good! Now let's use these two columns to calculate the proportion of articles per country.
###Code
article_count_and_population['articles_per_person_pct'] = 100*article_count_and_population['article_count']/article_count_and_population['population']
article_count_and_population.head()
###Output
_____no_output_____
###Markdown
Excellent! Now, let's have a look at the ten highest- and lowest-ranked countries in terms of number of politician articles as a proportion of country population.
Highest-Ranked
The following table shows the ten highest-ranked countries in terms of number of politician articles as a proportion of country population.
###Code
article_count_and_population.sort_values(by='articles_per_person_pct', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
As shown, Nauru blows everyone else out of the water, with 53 politician articles compared to a population of just over 10,000, for an article-per-person rate of 0.488%. Tuvalu is not far behind, with 55 articles and a population of ~12,000. The proportion then drops significantly for the next several countries.
Lowest-Ranked
The following table shows the ten lowest-ranked countries in terms of number of politician articles as a proportion of country population. The lowest-ranked countries are towards the top, with increasing rank as you descend in the table.
###Code
article_count_and_population.sort_values('articles_per_person_pct').head(10)
###Output
_____no_output_____
###Markdown
Perhaps not surprisingly, India and China round out the bottom (top of this table), with a relatively small number of pages compared to their populations, both of which are over 1.3 billion. Also included on the list is North Korea, which may have a low number of pages due to government censorship in the hostile state.
High-Quality Articles Per Population
Now we'll look at the number of high-quality articles for each country as a proportion of all articles about that country's politicians. This is a similar exercise to the previous, except that instead of summing all articles for each country, in this case we only want to count those that are in the "FA" or "GA" category. We do this by subsetting the original `merged_df` dataframe, then grouping in a similar manner to what we did above.
###Code
# get number of high-quality articles per country
hq_articles_per_country = merged_df[(merged_df['article_quality'] == 'GA') |
(merged_df['article_quality'] == 'FA' )]
hq_articles_per_country = hq_articles_per_country.groupby(['country']).size().reset_index(name='hq_article_count').set_index('country')
###Output
_____no_output_____
###Markdown
Let's see what this gives us.
###Code
hq_articles_per_country.head()
###Output
_____no_output_____
###Markdown
This looks good. But do you notice anything interesting about this dataframe? Notice any difference in the countries listed, compared to prior DataFrames? You might have noticed that Andorra is missing from the DataFrame above, since it apparently has no high-quality articles. We will need to take this -- and other such countries -- into account when we merge the DataFrame above with the total article count DataFrame. Let's do that now with a left join, and we'll substitute a `hq_article_count` of zero for any countries that are not included in the merge's right DataFrame (i.e. `hq_articles_per_country`).
###Code
hq_article_proportions = articles_per_country.merge(hq_articles_per_country, left_index=True, right_index=True, how='left').fillna(0).astype(int)
hq_article_proportions.head()
###Output
_____no_output_____
###Markdown
This looks good. And note that Andorra is back in the picture now, with 34 articles, none of which are high-quality. Now let's use these two columns to calculate the proportion of high-quality articles per country.
###Code
hq_article_proportions['hq_article_pct'] = 100*hq_article_proportions['hq_article_count']/hq_article_proportions['article_count']
hq_article_proportions.head()
###Output
_____no_output_____
###Markdown
Excellent! Now, let's have a look at the ten highest- and lowest-ranked countries in terms of number of high-quality articles as a proportion of all articles about politicians from each country.
Highest-Ranked
The following table shows the ten highest-ranked countries in terms of number of high-quality articles as a proportion of all articles about politicians from each country.
###Code
hq_article_proportions.sort_values(by='hq_article_pct', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Interestingly, we see that North Korea tops the list by a wide margin, with 9 of its 39 articles being ranked as "high-quality".
Lowest-Ranked
The following table shows the ten lowest-ranked countries in terms of number of high-quality articles as a proportion of all articles about politicians from each country. The lowest-ranked countries are towards the top, with increasing rank as you descend in the table.
###Code
hq_article_proportions.sort_values(by='hq_article_pct').head(10)
###Output
_____no_output_____
###Markdown
Hmm, that's actually not too helpful. It looks like there are quite a few countries (at least 10) that have a percentage of zero, as a direct result of having exactly zero high-quality articles. Let's look at how many countries have no high-quality articles.
###Code
len(hq_article_proportions[hq_article_proportions['hq_article_count']==0])
###Output
_____no_output_____
###Markdown
So we see that there are 39 countries that don't have a single high-quality article written about any of their politicians. These 39 countries are listed here.
###Code
print(list(hq_article_proportions[hq_article_proportions['hq_article_count']==0].index))
###Output
['Andorra', 'Antigua and Barbuda', 'Bahamas', 'Bahrain', 'Barbados', 'Belgium', 'Belize', 'Cape Verde', 'Comoros', 'Costa Rica', 'Djibouti', 'Eritrea', 'Federated States of Micronesia', 'Finland', 'French Guiana', 'Guadeloupe', 'Kazakhstan', 'Kiribati', 'Lesotho', 'Liechtenstein', 'Macedonia', 'Malta', 'Marshall Islands', 'Moldova', 'Monaco', 'Mozambique', 'Nauru', 'Nepal', 'San Marino', 'Sao Tome and Principe', 'Seychelles', 'Solomon Islands', 'Suriname', 'Swaziland', 'Switzerland', 'Tajikistan', 'Tonga', 'Turkmenistan', 'Zambia']
###Markdown
Data 512 - Assignment A2
The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. For this assignment, you will combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article. You are expected to perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries. Your analysis will consist of a series of tables that show:
* the countries with the greatest and least coverage of politicians on Wikipedia compared to their population.
* the countries with the highest and lowest proportion of high quality articles about politicians.
You are also expected to write a short reflection on the project that describes how this assignment helps you understand the causes and consequences of bias on Wikipedia.
Getting the article and population data
The first step is to download the file that contains the Wikipedia country information from Figshare and store it as a pandas dataframe.
###Code
import pandas as pd
import numpy as np
import math
import requests
import json
country = pd.read_csv('page_data.csv')
###Output
_____no_output_____
###Markdown
We then download the population data from Dropbox (link described in the readme file) and repeat the same process as above
###Code
population = pd.read_csv('WPDS_2018_data.csv')
###Output
_____no_output_____
###Markdown
Getting article quality predictions Using a Wikimedia API endpoint that connects to a machine learning algorithm called ORES, we get quality predictions for each of the articles listed in the country data above.
###Code
headers = {'User-Agent' : 'https://github.com/Gmoog', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
return response
###Output
_____no_output_____
###Markdown
Make a list of all the rev_ids from the country table and send this information over to the ORES function, to retrieve the article quality rating
###Code
# store the revision ids in a list
rev_ids = country['rev_id'].tolist()
###Output
_____no_output_____
###Markdown
Divide the revision ids into lists of size 100 and query the ORES function. This helps avoid hitting the API limits. Also, some of the revision ids return an error from ORES; these revision_ids have been excluded from the results. The code in the cell below takes about 3 min to execute.
###Code
#variables to iterate through the revision ids in sizes of 100
i=0
j=100
#lists to store the revision ids and quality predictions
re_id = []
prediction = []
# dictionary to store the results from ORES
res={}
#divide the ids into lists of size 100
for t in range(math.ceil(len(rev_ids)/100)):
ids = rev_ids[i:j]
res = get_ores_data(ids,headers)
# check for no error messages in the output, and only then append the data
for ids in res['enwiki']['scores']:
if not res['enwiki']['scores'][ids]['wp10'].get('score') is None:
re_id.append(ids)
prediction.append(res['enwiki']['scores'][ids]['wp10']['score']['prediction'])
i+=100
j+=100
#create a dataframe to hold the revision ids and quality data
art_quality = pd.DataFrame(np.column_stack([re_id,prediction]), columns=['revision_id','article_quality'])
art_quality.revision_id = art_quality.revision_id.astype(int)
###Output
_____no_output_____
###Markdown
Merging data from the 3 datasets created so far: country, population and article quality
###Code
#lowering the case for both country lists, so that the join is not impacted by the country case
population['Geography'] = population['Geography'].str.lower()
country['country'] = country['country'].str.lower()
# create a new dataframe by joining on the country column, using the inner join, so that unmatched rows are not included
country_population = country.merge(population, how='inner', left_on='country',right_on='Geography')
del country_population['Geography']
# rename the columns as specified in the instructions
country_population = country_population.rename(index=str, columns={"page": "article_name", "rev_id": "revision_id", "Population mid-2018 (millions)":"population"})
# finally, add the data from the article quality dataframe, by joining on the revision_id column
final_df = country_population.merge(art_quality, on='revision_id')
###Output
_____no_output_____
###Markdown
The last step in the data wrangling process involves saving the final dataframe created above as a csv file, using the appropriate naming conventions
###Code
# saving this dataframe to the final csv data file
final_df.to_csv('en-wikipedia_article_quality_bycountry.csv',index=False)
###Output
_____no_output_____
###Markdown
Data Analysis Using pandas and its aggregation methods, create a dataframe that lists unique countries and the percentage of articles produced as a function of its population.
###Code
# convert the population column to a float datatype, after replacing the commas by blanks
final_df['population'] = final_df['population'].str.replace(',', '').astype(float)
# dataframe to hold information about countries and proportion of the number of articles with respect to its population, expressed as a percentage
art_prop = pd.DataFrame(np.column_stack([np.sort(final_df['country'].unique()),final_df['country'].value_counts()/(final_df.groupby('country')['population'].mean()*10000.00)]),columns=['country','article_proportion (as % of population)'])
###Output
_____no_output_____
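###Markdown
The `column_stack` construction above leans on pandas index alignment to keep the countries in the same alphabetical order on both sides of the division. As a hedged sketch, the same articles-per-population percentage can be written more explicitly with a single groupby (assuming, as in the cell above, that `population` is expressed in millions, so dividing the article count by `population * 10000` yields a percentage of the actual population).
###Code
# Sketch only: the same percentage computed with one groupby
alt_prop = final_df.groupby('country').agg({'article_name': 'size', 'population': 'mean'}).reset_index()
alt_prop = alt_prop.rename(columns={'article_name': 'article_count'})
alt_prop['article_proportion (as % of population)'] = alt_prop['article_count'] / (alt_prop['population'] * 10000.0)
alt_prop.head()
###Output
_____no_output_____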
###Markdown
Create another dataframe, that aggregates countries and the ratio of good quality articles produced from them as a function of the overall article count.
###Code
# dataframe to store values of each country and the total number of articles
total_article = final_df.groupby('country').size().reset_index(name='total_article_count')
# list to identify the good articles
good = ['GA','FA']
# dataframe to store number of good articles per country
good_article = final_df[final_df['article_quality'].isin(good)].groupby('country').size().reset_index(name='good_article_count')
# merge the two dataframes using the left outer join, since there can be countries with zero good articles
good_v_total = total_article.merge(good_article, on='country',how='left')
good_v_total.fillna(0, inplace=True)
# calculate a new field to store the ratio of good articles to total articles per country
good_v_total['proportion (as a percentage)'] = (good_v_total['good_article_count'] * 100)/good_v_total['total_article_count']
###Output
_____no_output_____
###Markdown
Embed four tables as described
Table-1 Shows the 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
art_prop.sort_values('article_proportion (as % of population)',ascending=False)[:10]
###Output
_____no_output_____
###Markdown
Table-2 Shows the 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
art_prop.sort_values('article_proportion (as % of population)')[:10]
###Output
_____no_output_____
###Markdown
Table-3 Shows the 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
good_v_total.sort_values('proportion (as a percentage)',ascending=False)[:10]
###Output
_____no_output_____
###Markdown
Table-4 Shows 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
good_v_total.sort_values('proportion (as a percentage)')[:10]
###Output
_____no_output_____
###Markdown
Bias in Data -- A Study on Wikipedia Political Articles between Countries
Project Overview
This project aims to explore the concept of 'bias' through data on English-language Wikipedia articles within the category of "Politicians by nationality" and subcategories. The project consists of three steps: data acquisition, data processing and data analysis. The result will be presented as four graph visualizations that show the countries with the greatest and least coverage of politicians on Wikipedia compared to their population, and the countries with the highest and lowest proportion of high quality articles about politicians, respectively.
Step 1: Data Acquisition
In this step, three different data sets were collected.
1. Data on most English-language Wikipedia articles within the category "Category:Politicians by nationality" and subcategories ([Figshare](https://figshare.com/articles/Untitled_Item/5513449)).
2. Quality estimates for each article, obtained using a machine learning service called the Objective Revision Evaluation Service ([ORES](https://www.mediawiki.org/wiki/ORES), [documentation](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model)) by feeding each last edit ID in data set 1 to the ORES API. The combined result from datasets 1 and 2 was stored in 'article_quality.csv'.
3. A dataset of country populations from the [Population Research Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14).
Import libraries
###Code
import requests
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from pylab import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Read the 1st data set of English-language Wikipedia articles within the category "Category:Politicians by nationality" and subcategories.
###Code
df = pd.read_csv('page_data.csv')
df.head()
df.shape
###Output
_____no_output_____
###Markdown
2. Feed each last edit ID in data set 1 to the ORES API to acquire the article quality; the IDs were fed in bundles of 120 per API call. The combined result from datasets 1 and 2 was stored in 'article_quality.csv'.
###Code
def get_data(rev_id_list):
#endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/{revids}/{model}'
endpoint ='https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
headers={'User-Agent' : 'https://github.com/sliwhu', 'From' : '[email protected]'}
params = {'project' : 'enwiki',
'revids' : '|'.join(str(x) for x in rev_id_list),
'model' : 'wp10'
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
return response
rev_id_list_all = df['rev_id'].tolist()
index = []
for i in range(len(rev_id_list_all)):
if i%120 == 0:
index.append(i)
index.append(df.shape[0])
output = []
print('There are 47197 records in total')
for i in range(len(index)-1):
print('now acquiring article quality records ' + str(index[i]) + ' to ' + str(index[i+1]))
rev_id_list = rev_id_list_all[index[i]:index[i+1]]
response = get_data(rev_id_list)
for revid in response['enwiki']['scores']:
try:
quality = response['enwiki']['scores'][str(revid)]['wp10']['score']['prediction']
output.append(quality)
except KeyError:
output.append('N/A')
print('Data acquisition done!')
result = np.asarray(output)
df['article_quality'] = result
df.to_csv('article_quality.csv')
df.head()
df = pd.read_csv('article_quality.csv', encoding = "ISO-8859-1")
df = df.drop('Unnamed: 0', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
3. A dataset of country populations from the Population Research Bureau website was downloaded and read.
###Code
population = pd.read_csv('Population Mid-2015.csv', header=1)
population.head()
population = population[['Location', 'Data']]
population.head()
population.columns = ['country', 'population']
population.head()
###Output
_____no_output_____
###Markdown
Step 2: Data Processing
After retrieving and including the ORES data for each article (result stored in 'article_quality.csv'), the Wikipedia data and population data were merged together. Entries that could not be merged were removed. The final output was exported to 'hcds-a2-bias.csv'.
###Code
data = pd.merge(df, population, how='inner', on=['country'])
data.columns = ['article_name', 'country', 'revision_id', 'article_quality', 'population']
data.head()
data.to_csv('hcds-a2-bias.csv')
###Output
_____no_output_____
###Markdown
Step 3: Data Analysis
Two main calculations were performed in this step:
1. The proportion (as a percentage) of articles-per-population
2. The proportion (as a percentage) of high-quality articles for each country. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.
Four visualizations were produced in this step:
1. Top 10 countries of coverage of politician articles on Wikipedia compared to their population.
2. Bottom 10 countries of coverage of politician articles on Wikipedia compared to their population.
3. Top 10 countries in terms of proportion of high quality articles about politicians
4. Bottom 10 countries in terms of proportion of high quality articles about politicians
###Code
data = pd.read_csv('hcds-a2-bias.csv', encoding = "ISO-8859-1")
data = data.drop('Unnamed: 0', axis=1)
data.head()
###Output
_____no_output_____
###Markdown
Calculations
###Code
num_article = data.groupby(["country"])["article_name"].count().reset_index(name="num_article")
high_quality_article = data[(data['article_quality'] == 'FA') | (data['article_quality'] == 'GA')]
num_high_quality_article = high_quality_article.groupby(['country'])['article_name'].count().reset_index\
(name='num_high_quality_article')
data = pd.merge(pd.merge(data,num_article,how='outer', on='country'),num_high_quality_article,how='outer',on='country')
data.fillna(int(0), inplace=True)
data[['num_high_quality_article']] =data[['num_high_quality_article']].astype(int)
data.head()
data['population'] = data['population'].apply(lambda x: float(x.split()[0].replace(',', '')))
data[['population']] =data[['population']].astype(float)
data.head()
data['percentage_articles_per_population'] = data['num_article']/data['population']*100
data['percentage_of_high_quality'] = data['num_high_quality_article']/data['num_article']*100
data.head()
data.fillna(0, inplace=True)
results = data[['country', 'percentage_articles_per_population', 'percentage_of_high_quality']]
results = results.drop_duplicates(keep='first')
results.head()
###Output
_____no_output_____
###Markdown
Visualizations
###Code
a = results.sort_values(by='percentage_articles_per_population', ascending=False)
a.reset_index(drop=True, inplace=True)
y1_name = a['country'][:10]
y1 = np.arange(len(y1_name))
x1 = a['percentage_articles_per_population'][:10]
y2_name = a['country'][-10:]
y2 = np.arange(len(y2_name))
x2 = a['percentage_articles_per_population'][-10:]
###Output
_____no_output_____
###Markdown
1. Top 10 countries of coverage of politician articles on Wikipedia compared to their population.
###Code
a[['country', 'percentage_articles_per_population']][:10]
###Output
_____no_output_____
###Markdown
2. Bottom 10 countries of coverage of politician articles on Wikipedia compared to their population.
###Code
a[['country', 'percentage_articles_per_population']][-10:]
plt.rcParams.update({'font.size': 25})
fig = plt.figure(figsize=(20,30))
ax1=fig.add_subplot(2,1,1)
ax1.barh(y1, x1, align='center', color='b')
ax1.set_yticks(y1)
ax1.set_yticklabels(y1_name)
ax1.invert_yaxis()
# vals = ax1.get_xticks()
# ax1.set_xticklabels(['{:}%'.format(x) for x in vals])
ax1.tick_params(axis='both', which='major', labelsize=20)
ax1.set_xlabel('Politician articles as a proportion of country population (%)', fontsize=25)
ax1.set_title('Top 10 countries of coverage of politician articles on Wikipedia compared to their population.')
ax2=fig.add_subplot(2,1,2)
ax2.barh(y2, x2, align='center', color='r')
ax2.set_yticks(y2)
ax2.set_yticklabels(y2_name)
# vals = ax2.get_xticks()
# ax2.set_xticklabels(['{:}%'.format(x) for x in vals])
ax2.tick_params(axis='both', which='major', labelsize=20)
ax2.set_xlabel('Politician articles as a proportion of country population (%)', fontsize=25)
ax2.set_title('Bottom 10 countries of coverage of politician articles on Wikipedia compared to their population.')
plt.savefig('politian_article_coverage.png', bbox_inches="tight")
b = results.sort_values(by='percentage_of_high_quality', ascending=False)
b.reset_index(drop=True, inplace=True)
y1_name = b['country'][:10]
y1 = np.arange(len(y1_name))
x1 = b['percentage_of_high_quality'][:10]
y2_name = b['country'][-10:]
y2 = np.arange(len(y2_name))
x2 = b['percentage_of_high_quality'][-10:]
###Output
_____no_output_____
###Markdown
3. Top 10 countries in terms of proportion of high quality articles about politicians.
###Code
b[['country', 'percentage_of_high_quality']][:10]
###Output
_____no_output_____
###Markdown
4. Bottom 10 countries in terms of proportion of high quality articles about politicians. Note that the following countries all have 0 percent high quality articles
###Code
b.loc[b['percentage_of_high_quality'] == 0.0, ['country', 'percentage_of_high_quality']]
###Output
_____no_output_____
###Markdown
Ten randomly chosen countries from the above list were used for creating the visualization bar graph
###Code
b[['country', 'percentage_of_high_quality']][-10:]
###Output
_____no_output_____
###Markdown
If I exclude all the countries with 0 percent high quality articles, then the bottom 10 countries list would be:
###Code
b.loc[b['percentage_of_high_quality'] != 0.0, ['country', 'percentage_of_high_quality']][-10:]
plt.rcParams.update({'font.size': 25})
fig = plt.figure(figsize=(20,30))
ax1=fig.add_subplot(2,1,1)
ax1.barh(y1, x1, align='center', color='b')
ax1.set_yticks(y1)
ax1.set_yticklabels(y1_name)
ax1.invert_yaxis()
# vals = ax1.get_xticks()
# ax1.set_xticklabels(['{:}%'.format(x) for x in vals])
ax1.tick_params(axis='both', which='major', labelsize=20)
ax1.set_xlabel('Politician articles as a proportion of country population (%)', fontsize=25)
ax1.set_title('Top 10 countries in terms of proportion of high quality articles about politicians')
ax2=fig.add_subplot(2,1,2)
ax2.barh(y2, x2, align='center', color='r')
ax2.set_yticks(y2)
ax2.set_yticklabels(y2_name)
ax2.set_xticks([0, 0, 0, 0, 0, 0])
# vals = ax2.get_xticks()
# ax2.set_xticklabels(['{:}%'.format(x) for x in vals])
ax2.tick_params(axis='both', which='major', labelsize=20)
ax2.set_xlabel('Politician articles as a proportion of country population (%)', fontsize=25)
ax2.set_title('Bottom 10 countries in terms of proportion of high quality articles about politicians')
plt.savefig('high_quality_proportion.png', bbox_inches="tight")
###Output
_____no_output_____
###Markdown
A2 - Bias in Data
DATA 512
Laura Thriftwood
The purpose of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. I combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article. I then perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries.
###Code
import pandas as pd
import numpy as np
import json
import math
import requests
###Output
_____no_output_____
###Markdown
Step 1: Getting the Article and Population Data
The Wikipedia politicians by country dataset comes from Figshare. The .zip file was downloaded and unzipped, where the _page_data.csv_ file was located. The population data was drawn from the World Population data sheet published by the Population Reference Bureau and was downloaded as a .csv file named _WPDS_2020_data.csv_. These files can be located in the __data__ folder.
Step 2: Cleaning the Data
First let's read in the politicians by country dataset and take a look at the structure of the data.
###Code
df_politicians = pd.read_csv('data/country/data/page_data.csv')
df_politicians.head()
df_politicians.shape
###Output
_____no_output_____
###Markdown
We want to remove/ignore the rows of data that include "Template:" in the string of the page name as these entries are not Wikipedia articles and should not be included in the analysis.
###Code
df_politicians = df_politicians[~df_politicians.page.str.contains('Template:')].reset_index(drop=True)
df_politicians.head()
df_politicians.shape
###Output
_____no_output_____
###Markdown
We can see that this removed 496 rows. Now let's read in and take a look at our other dataset with population information, _WPDS_2020_data.csv_.
###Code
df_population = pd.read_csv('data/WPDS_2020_data.csv')
df_population.head(20)
###Output
_____no_output_____
###Markdown
We want to ignore rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS values in the __Name__ field. We can move these into a separate dataframe to reference later when reporting coverage and quality by region in the analysis section.
Note: Initially, I used the designation in the __Type__ field to determine exclusion criteria, but there exists an entry for Channel Islands that has a Sub-Region __Type__ but is not displayed in ALL CAPS in the __Name__ field.
As we want to preserve the data we are removing in this step, we first make a copy of the population data.
###Code
df_sub_region_population = df_population.copy()
df_sub_region_population = df_sub_region_population[df_sub_region_population['Name'].str.isupper().fillna(False)]
df_sub_region_population
df_sub_region_population.shape
###Output
_____no_output_____
###Markdown
Our challenge is to create a new field for each country that indicates the Sub-Region it belongs to. I started by getting a list of the indices in the Sub-Region DataFrame, then creating a list of Sub-Region names that repeats _n_ number of times, with _n_ being calculated based on the difference between consecutive Sub-Region indices. I freely admit this is an inelegant solution, but it worked in this case (a more compact alternative is sketched after the cell below).
###Code
# get a list of indices
sub_region_index = pd.Series(df_sub_region_population.index.values.tolist())
#create a list of repeating index values based on the range between list items
rep_items = sub_region_index.diff()
#reset indices
df_sub_region_population_copy = df_sub_region_population.copy().reset_index(drop=True)
#create a column for the number of reps needed for each Sub-Region
df_sub_region_population_copy['reps'] = rep_items
#shift the entries in the reps column up a row
df_sub_region_population_copy['reps'] = df_sub_region_population_copy['reps'].shift(periods = -1, fill_value = 18.0)
#drop the top entries
df_sub_region_population_copy = df_sub_region_population_copy.drop([0,1])
df_sub_region_population_copy
#create a dataframe for the Sub-Region name, and reps needed to assign to our countries data, less 1 so it aligns
sub_reg_reps = df_sub_region_population_copy[['Name', 'reps']]
sub_reg_reps.loc[:, 'reps'] = sub_reg_reps['reps'].apply(lambda x: x - 1).astype(int)
sub_reg_reps
repeating = sub_reg_reps.loc[sub_reg_reps.index.repeat(sub_reg_reps.reps)]
repeating.shape
repeating_series = repeating['Name'].squeeze().reset_index(drop=True)
repeating_series
#drop the Sub-Regions (ALL CAPS) from the original population data
df_population = df_population[~df_population['Name'].str.isupper().fillna(False)].reset_index(drop=True)
df_population
#add column to df_population that notes the sub-region
df_population['Sub_Region'] = repeating_series
df_population
###Output
_____no_output_____
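###Markdown
As referenced above, here is a hedged, more compact alternative sketch of the same Sub_Region assignment. It assumes that every country row in the WPDS file sits below the ALL-CAPS row that introduces its region, so forward-filling the ALL-CAPS names should reproduce the repeat-and-assign result.
###Code
# Sketch only: forward-fill the ALL-CAPS Name rows, then drop them
wpds_alt = pd.read_csv('data/WPDS_2020_data.csv')
is_cumulative = wpds_alt['Name'].str.isupper().fillna(False)
wpds_alt['Sub_Region'] = wpds_alt['Name'].where(is_cumulative).ffill()
wpds_alt = wpds_alt[~is_cumulative].reset_index(drop=True)
wpds_alt.head()
###Output
_____no_output_____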
###Markdown
Step 3: Getting Article Quality Predictions
We need to get the predicted quality scores for each article in the Wikipedia dataset using a machine learning system called ORES that provides estimates of Wikipedia article quality. The article quality estimates (from best to worst) are:
1. FA - Featured article
2. GA - Good article
3. B - B-class article
4. C - C-class article
5. Start - Start-class article
6. Stub - Stub-class article
These were learned based on articles in Wikipedia that were peer-reviewed using the Wikipedia content assessment procedures. These quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. ORES will assign one of these 6 categories to any rev_id we send it. In order to get article predictions for each article in the Wikipedia dataset, we use the value of each entry in the rev_id column to make an API query.
###Code
from ores import api
#API query headers
#headers = {
# 'User-Agent': 'https://github.com/laurathriftwood',
# 'From': '[email protected]'
#}
#API endpoint
#endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={rev_id}'
###Output
_____no_output_____
###Markdown
We start by extracting a list of `rev_id`'s from our `df_politicians` dataframe for which we want associated ORES scores. Since the full list is quite long, I've left some test batches commented out just in case. The smaller batch includes a `rev_id` for which there is no associated score so we can verify our error handling methodology.
###Code
#full batch of rev_ids
rev_list = df_politicians['rev_id']
#test smaller batch
#rev_list = [502721672, 516633096, 521986779]
#test larger batch
#rev_list = df_politicians['rev_id'][0:50]
#ORES session
ores_session = api.Session("https://ores.wikimedia.org", "DATA 512 Class project [email protected]")
#get the full set of results for all rev_id values included in our dataset
results = ores_session.score("enwiki", ["articlequality"], rev_list)
#create an empty list to load the predictions into
predictions = []
#getting the predictions from our results
for score in results:
    #attempt to retrieve the prediction for the rev_id and add it to the predictions list
    try:
        predictions.append(score["articlequality"]["score"]["prediction"])
    #if no prediction is available, the lookup raises a KeyError
    #we note this error with a "No_Score" string entry
    except KeyError:
        predictions.append("No_Score")
###Output
_____no_output_____
###Markdown
Now that we have a list of `rev_ids` and a list of associated predictions, we merge our results into a single dataframe and verify that all `rev_id`s were processed by comparing the resulting shape against the length of the original `rev_list`.
###Code
data = {'rev_id':rev_list, 'prediction':predictions}
predictions_df = pd.DataFrame(data)
print(predictions_df.shape)
###Output
(46701, 2)
###Markdown
Now we merge our predictions with the original politicians dataframe using the `rev_ids`.
###Code
merged_df = df_politicians.merge(predictions_df, how = 'inner', on = ['rev_id', 'rev_id'])
merged_df.head()
###Output
_____no_output_____
###Markdown
We check the shape of the merged dataframe to ensure we have not lost any rows of (missing/unmatched) data in the process.
###Code
print(merged_df.shape)
merged_df.head()
###Output
(46701, 4)
###Markdown
Let's see how many articles came back without associated scores/predictions.
###Code
merged_df.loc[merged_df.prediction == 'No_Score', 'prediction'].count()
###Output
_____no_output_____
###Markdown
We see that there are 276 `rev_id`s that do not have a prediction. We will extract those rows from our working dataframe and store them in a separate output file for our records.
###Code
df_politicians_no_score = merged_df[merged_df['prediction'] == 'No_Score']
print(df_politicians_no_score.shape)
df_politicians_no_score.to_csv(r'output/wp_wpds_politicians_no_score.csv', index = False, header = True)
#drop rows that have a No_Score prediction value and check the shape to ensure 276 rows were dropped
merged_df = merged_df[~merged_df.prediction.str.contains('No_Score')].reset_index(drop=True)
merged_df.shape
df_population.shape
merged_df.shape
###Output
_____no_output_____
###Markdown
Combining the DatasetsWe now need to merge our two datasets - the Wikipedia data in `merged_df` and the population data in `df_population` on their respective __country__ and __Name__ fields. Since we want to maintain a record of the subset that does not have matching data, we will use an outer join to retain those rows.
###Code
all_data_df = merged_df.merge(df_population, how = 'outer', left_on='country', right_on='Name')
print(all_data_df.shape)
all_data_df
###Output
(46452, 11)
###Markdown
We want to extract rows where the __country__ or __Name__ columns don't match (contain NaN values) and export them to a .csv file for our records. We are only interested in removing rows with NaN values in the country/Name columns, but as there are also NaN values in the FIPS column, we need to be specific in our column operations.
###Code
df_no_match = all_data_df[(all_data_df['country'].isnull()) | (all_data_df['Name'].isnull())] #1884 rows
df_no_match.to_csv(r'output/wp_wpds_countries-no_match.csv', index = False, header = True)
print(df_no_match.shape)
df_no_match
###Output
(1884, 11)
###Markdown
We will also drop these rows from our working dataframe and check the shape before and after to ensure it matched the number of rows we identified in the previous step.
###Code
print(all_data_df.shape)
all_data_df = all_data_df.dropna(subset = ['country', 'Name']).reset_index(drop=True)
print(all_data_df.shape)
###Output
(46452, 11)
(44568, 11)
###Markdown
Let's clean up our dataset to match the schema in the assignment instructions by dropping unnecessary columns, renaming the column headers, and reordering the columns. Then we can export our final dataset to a .csv file.
###Code
#convert rev_id and Population to integer
all_data_df = all_data_df.astype({"rev_id": int, "Population": int})
all_data_df
#drop unnecessary columns
all_data_df = all_data_df.drop(all_data_df.columns[[4, 5, 6, 7, 8]], axis=1)
all_data_df
#rename columns
all_data_df = all_data_df.rename(columns={'page': 'article_name',
'rev_id': 'revision_id',
'prediction': 'article_quality_est',
'Population': 'population',
'Sub_Region': 'subregion'})
#reorder columns
all_data_df = all_data_df[['country', 'subregion', 'article_name', 'revision_id', 'article_quality_est', 'population']]
all_data_df
#make a copy to export that drops the subregion
all_data_df_export = all_data_df.drop(all_data_df.columns[[1]], axis=1)
all_data_df_export
#output to file as a .csv
all_data_df_export.to_csv(r'output/wp_wpds_politicians_by_country.csv', index = False, header = True)
###Output
_____no_output_____
###Markdown
Let's take a look at our final dataset.
###Code
all_data_df
###Output
_____no_output_____
###Markdown
Step 4: AnalysisWe'd like to calculate the proportion (as a percentage) of articles-per-population and proportion of all articles of high-quality for each country AND for each geographic region. "High quality" here is defined as having ORES quality prediction scores as either "FA" - Featured Article or "GA" - Good Article. As such, we add a column that converts the __article_quality_est__ predictions into binary indicators, with $1$ representing FA or GA scores and $0$ representing all other scores.We then need to create a dataframe that groups our data by __country__ and by __high_quality__. To generate the six tables requested in Step 5, we will need a table with the following columns, with one row for each country in our dataset:- country- geographic region- population- total number of articles- total number of high quality articles- coverage (number of total articles / population)- relative quality (number of high quality articles / total number of articles)
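As a quick sanity check on these two definitions, here is a toy example with made-up numbers (not taken from the dataset):

```
# toy numbers only, to illustrate the two metrics defined above
population = 1_000_000
total_articles = 200
high_quality_articles = 10

coverage = total_articles / population * 100                      # articles per population, as a percentage
relative_quality = high_quality_articles / total_articles * 100   # share of articles that are FA/GA
print(coverage, relative_quality)                                  # 0.02 5.0
```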
###Code
#flag articles predicted as either GA or FA as high quality
all_data_df['high_quality'] = np.where(all_data_df.article_quality_est.isin(['GA', 'FA']), 1, 0)
all_data_df
#create new dataframe that gets a count of high_quality articles per country
all_data_df_quality = all_data_df[(all_data_df['high_quality'] == 1)]
all_data_df_quality
#country, region, and population count table
summary_df = all_data_df.groupby(['country', 'subregion', 'population']).sum('high_quality').reset_index()
summary_df
summary_df = summary_df.drop(columns = ['revision_id'])
summary_df
#total number of articles by country
article_counts = all_data_df[['country', 'article_name']].groupby("country").count().astype(int).reset_index()
article_counts
#merge the above data into a new table that includes population
combined = summary_df.merge(article_counts, how = 'left', on='country')
#rename article_name to total_articles
combined.rename(columns={'article_name': 'total_articles'}, inplace=True)
combined
###Output
_____no_output_____
###Markdown
Let's make a copy of this starter table for our subregion analysis and drop the extra columns. We'll also drop the subregion column from the combined table.
###Code
combined_sub = combined.copy()
combined_sub = combined_sub.drop(columns = ['country'])
combined_sub = combined_sub.groupby(['subregion']).sum('high_quality').reset_index()
combined = combined.drop(columns = ['subregion'])
###Output
_____no_output_____
###Markdown
Now that we have our columns with basic counts, let's add columns to calculate:- __%coverage__ which we define as the proportion of all of a country's articles per population- __%relative_quality__ which we define as the proportion of high_quality articles per total articlesWe want to display the actual percentage so we multiply by 100. I've chosen to round to 4 decimal points as the percentages for the __%relative_quality__ values are so small.We do this for both the combined (by country) table and the combined_sub (by subregion) table.
###Code
#coverage (number of total articles/population)
combined['%coverage'] = combined.apply(lambda x: round(((x['total_articles']/x['population'])*100), 4), axis=1)
#relative quality (number of high quality articles/total number of articles)
combined['%relative_quality'] = combined.apply(lambda x: round(((x['high_quality']/x['total_articles'])*100), 4), axis=1)
#coverage (number of total articles/population)
combined_sub['%coverage'] = combined_sub.apply(lambda x: round(((x['total_articles']/x['population'])*100), 4), axis=1)
#relative quality (number of high quality articles/total number of articles)
combined_sub['%relative_quality'] = combined_sub.apply(lambda x: round(((x['high_quality']/x['total_articles'])*100), 4), axis=1)
combined
combined_sub
###Output
_____no_output_____
###Markdown
Step 5: ResultsNow that we have the table we need to produce the six results tables requested in the assignment. Table 1 Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population.We start by sorting our combined country table on the __%coverage__ column in descending order and produce the top 10 rows.
###Code
combined.sort_values(['%coverage'], axis=0, ascending=False, inplace = True, kind='quicksort', ignore_index=True)
combined.head(10)
###Output
_____no_output_____
###Markdown
Table 2 Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country populationSince we already have our combined country table sorted, we instead output 10 entries from the tail of our sorted dataframe in the previous step.
###Code
combined.tail(10)
###Output
_____no_output_____
###Markdown
Table 3 Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality.We simply need to re-sort our combined country table using the __%relative_quality__ column for this table, and output the top 10 entries.
###Code
combined.sort_values(['%relative_quality'], axis=0, ascending=False, inplace = True, kind='quicksort', ignore_index=True)
combined.head(10)
###Output
_____no_output_____
###Markdown
Table 4 Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-qualityAgain, as we already have our combined country table sorted on this value from the previous table, we simply need to output 10 items from the tail.
###Code
combined.tail(10)
###Output
_____no_output_____
###Markdown
Table 5 Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional populationFor this table, we simply sort our combined subregion table on the __%coverage__ column and display in descending order.
###Code
combined_sub.sort_values(['%coverage'], axis=0, ascending=False, inplace = True, kind='quicksort', ignore_index=True)
combined_sub
###Output
_____no_output_____
###Markdown
Table 6 Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality We re-sort our combined sub-region table on the __%relative_quality__ column and display in descending order.
###Code
combined_sub.sort_values(['%relative_quality'], axis=0, ascending=False, inplace = True, kind='quicksort', ignore_index=True)
combined_sub
###Output
_____no_output_____
###Markdown
A2: BIAS IN DATAThis assignment aims to look at the possible biases in data, and present it in a format that is readable and reproducible.This notebook is divided into the following sections:- [Reading Data](Reading-data)- [Initial Data Cleaning](Initial-Data-Cleaning)- [ORES Analysis](Using-ORES-API-for-predicting-article-quality)- [Final Data Cleaning and consolidation](Final-Data-Cleaning-and-Consolidation)- [Analysis](Data-Analysis) - [Creating Percentages](In-the-first-half-of-this-analysis-we-will-calculate-the-percentages-and-store-them-in-a-data-frame) - [Looking at the tables](The-Second-half-of-this-analysis-will-be-looking-at-the-10-lowest-and-highest-ranking-countries-in:)- [Conclusion](Conclusion) Reading data The data we will be analyzing is from two sources:- Population data: this data is drawn from the world population datasheet at: https://www.prb.org/international/indicator/population/table/- Political articles data by country is drawn from https://figshare.com/articles/Untitled_Item/5513449 (the documentation for the same is available at the link)The raw data files are present in the data folder in the parent repo.In the following cells we will read the data from the local folders into dataframes:
###Code
politician_data = pd.read_csv('data/raw_data/page_data.csv')
population_data = pd.read_csv('data/raw_data/WPDS_2018_data.csv')
###Output
_____no_output_____
###Markdown
Initial Data CleaningAs mentioned in the [A2 assignment wiki](https://wiki.communitydata.science/Human_Centered_Data_Science_(Fall_2019)/AssignmentsCleaning_the_data), certain rows in both dataframes need to be filtered out, as they have nothing to do with our current analysis. 1) We will remove all the rows containing the word "Template" in the page column of the politician_data Step 1: Split the page column by the delimiter ':' into two columns
###Code
politician_data[['Template','Page_Only']] = politician_data.page.str.split(":",expand=True)
politician_data.head()
###Output
_____no_output_____
###Markdown
Step 2: Remove all rows with Template value "Template"
###Code
politician_data = politician_data[politician_data.Template != "Template"]
###Output
_____no_output_____
###Markdown
Step 3: Finally we will drop the newly generated columns
###Code
del politician_data["Template"]
del politician_data["Page_Only"]
politician_data.head()
###Output
_____no_output_____
###Markdown
Step 4: We will write this clean data into a csv file for future analysis
###Code
politician_data.to_csv('data/generated_data/clean_political_article_data.csv')
###Output
_____no_output_____
###Markdown
2) We will now remove all rows with capital country names in the population_data , and store it in an intermediate csv file for future analysis Step 1: Check for only uppercase values in the "Geography" column and store it in another column "is_upper"
###Code
population_data['is_upper'] = list(map(lambda x: x.isupper(), population_data['Geography']))
population_data.head()
###Output
_____no_output_____
###Markdown
Step 2: Now we will store all rows with is_upper values as "True" in an intermediate csv
###Code
population_data_cumulative = population_data[population_data.is_upper == True]
del population_data_cumulative["is_upper"]
population_data_cumulative.head()
population_data_cumulative.to_csv('data/generated_data/cumulative_population.csv')
###Output
_____no_output_____
###Markdown
Step 3: Finally we will delete all rows with is_upper value True from our population_data DataFrame
###Code
population_data = population_data[population_data.is_upper != True]
###Output
_____no_output_____
###Markdown
Step 4: Deleting superfluous rows
###Code
del population_data['is_upper']
population_data.head()
###Output
_____no_output_____
###Markdown
Step 5: Write clean population data into its own csv
###Code
population_data.to_csv('data/generated_data/clean_population_data.csv')
###Output
_____no_output_____
###Markdown
Using ORES API for predicting article qualityOur politician_data contains rev_ids which are used by the ORES API to categorize the article by "quality". The documentation for this API can be found [here](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model).In the following cells we will use the [template](https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb) provided in the [A2 assignment wiki](https://wiki.communitydata.science/Human_Centered_Data_Science_(Fall_2019)/AssignmentsGetting_article_quality_predictions) to get these predictions from the ORES API. As per the template code provided in the A2 assignment, we know that the json format returned is as follows:

```
{
  "enwiki": {
    "models": {
      "wp10": {
        "version": "0.8.1"
      }
    },
    "scores": {
      "757539710": {
        "wp10": {
          "score": {
            "prediction": "Start",
            "probability": {
              "B": 0.06907655349650586,
              "C": 0.1730497923608886,
              "FA": 0.003738253691275387,
              "GA": 0.007083489019420698,
              "Start": 0.7205318510650603,
              "Stub": 0.02652006036684928
            }
          }
        }
      }
    }
  }
}
```

And we want to isolate the enwiki -> scores -> {revid} -> wp10 -> score -> prediction value. Step 1: thus we will create a function that returns just that value
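Spelled out on its own, that path lookup is just a nested dictionary access. The helper below is only a sketch of the idea; the `get_ores_data` function in the next cell folds the same check into its loop.

```
# sketch: pull one prediction out of an already-parsed ORES response dictionary
def extract_prediction(response, rev_id, model='wp10'):
    item = response['enwiki']['scores'][str(rev_id)][model]
    # ORES returns an "error" object instead of a score for some revisions
    if 'error' in item:
        return None
    return item['score']['prediction']
```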
###Code
import requests
import json
headers = {'User-Agent' : 'https://github.com/apoorva-sh', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids) #'smushing' rev ids
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
predictions = []
# Isolating prediction value
for key, value in response["enwiki"]["scores"].items():
item_wp10 = value["wp10"]
if "error" not in item_wp10: #filtering out error values
prediction = [
int(key),
item_wp10["score"]["prediction"]
]
predictions.append(prediction)
return predictions
###Output
_____no_output_____
###Markdown
Step 2: Now we will create a function that sends 100 revision ids at a time to the get_ores_data function
###Code
def get_predictions(rev_ids):
    predictions = []
    rev_list = []
    for rev in rev_ids:
        rev_list.append(rev)
        #once a full batch of 100 revision ids is collected, send it to the API
        if len(rev_list) == 100:
            predictions.append(get_ores_data(rev_list, headers))
            rev_list = []
    #send any remaining revision ids that did not fill a complete batch
    if rev_list:
        predictions.append(get_ores_data(rev_list, headers))
    return predictions
###Output
_____no_output_____
###Markdown
Step 3: Using the above two functions to get a set of predictions in encapsulated list format (lists within lists)
###Code
preds = get_predictions(politician_data.rev_id)
###Output
_____no_output_____
###Markdown
Step 4: Now preds contains prediction in subsets of 100, so we will convert it into a temporary collection of dataframes
###Code
temporary = [pd.DataFrame(pred_sub) for pred_sub in preds]
###Output
_____no_output_____
###Markdown
Step 5: Now we will concatenate this into one prediction dataframe
###Code
quality_prediction = pd.concat(temporary)
quality_prediction.head()
#Renaming columns
quality_prediction = quality_prediction.rename(columns={0: "rev_id", 1: "prediction"})
quality_prediction.head()
###Output
_____no_output_____
###Markdown
Step 6: We will store this value for further analysis
###Code
#Storing it in csv file
quality_prediction.to_csv('data/generated_data/quality_prediction.csv')
###Output
_____no_output_____
###Markdown
Final Data Cleaning and Consolidation 1) Merging our clean politician data and population data Step 1: To do this we will first read from our clean data csvs
###Code
#Reading from clean population data
clean_popdata = pd.read_csv('data/generated_data/clean_population_data.csv')
clean_popdata.head(1)
###Output
_____no_output_____
###Markdown
Step 2: Deleting superfluous columns and renaming "Geography" to "country"
###Code
del clean_popdata['Unnamed: 0']
clean_popdata = clean_popdata.rename(columns={'Geography':'country'})
clean_popdata.head(1)
#Reading from clean political data
clean_poldata = pd.read_csv('data/generated_data/clean_political_article_data.csv')
clean_poldata.head(1)
del clean_poldata['Unnamed: 0']
clean_poldata.head(1)
###Output
_____no_output_____
###Markdown
Merging this data may cause us to lose some records (if a country has no population data or no political article data). Looking at the shapes before merging:
###Code
clean_popdata.shape
clean_poldata.shape
###Output
_____no_output_____
###Markdown
Step 3: Before merging our prediction data we will maintain a list of countries that are left out of this analysis, by seeing which countries contain no population data/political data:
###Code
pop_countries = clean_popdata.country.unique() #Get all unique countries for population data
pol_countries = clean_poldata.country.unique() #Get all unique countries for political data
missing_countries = []
for a in pop_countries: #Check which countries do not have political data
if a not in pol_countries:
missing_countries.append(a)
for a in pol_countries: #Check which countries do not have population data
if a not in pop_countries:
missing_countries.append(a)
len(missing_countries)
###Output
_____no_output_____
###Markdown
Now we will write these 60 countries into a csv
###Code
missing_countries_df = pd.DataFrame(missing_countries) # Writing missing countries into a dataframe
missing_countries_df = missing_countries_df.rename(columns={0:"countries"}) #Renaming columns
missing_countries_df.to_csv('data/generated_data/wp_wpds_countries-no_match.csv') #Writing to csv
###Output
_____no_output_____
###Markdown
Step 4: Merge our clean political article data and population data
###Code
merged_data = pd.merge(clean_poldata,clean_popdata,on="country",how="inner")
###Output
_____no_output_____
###Markdown
Shape after merging:
###Code
merged_data.shape
###Output
_____no_output_____
###Markdown
2) Merging this intermediate data with prediction data Step 1: Load predictions data
###Code
pred_data = pd.read_csv('data/generated_data/quality_prediction.csv')
del pred_data['Unnamed: 0'] #removing superfluous columns
pred_data.head(1)
merged_data = pd.merge(merged_data,pred_data,on="rev_id",how="inner")
merged_data.head(1)
###Output
_____no_output_____
###Markdown
Step 2: We will write this data into a csv file for further analysis
###Code
merged_data.to_csv('data/generated_data/wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Data AnalysisWe want to analyze this data based on 1) percentage of high quality articles out of total articles in a country 2) percentage of articles to population in a country 3) percentage of articles to region population Now the ORES API ([documentation](https://www.mediawiki.org/wiki/ORES)) categorizes each article as : 1) FA - Featured article 2) GA - Good article 3) B - B-class article 4) C - C-class article 5) Start - Start-class article 6) Stub - Stub-class article We will take the first two categories to mean a "high quality article" In the first half of this analysis we will calculate the percentages and store them in a data frame Step 1: Read data from csv
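Before the step-by-step version below, here is a compact sketch of the same high-quality flag: testing the prediction column against both labels at once with `isin`. It assumes the merged csv written above and is only an illustration, not the approach used in the following cells.

```
# sketch: flag FA/GA predictions in one step, using the merged csv written above
import pandas as pd

sketch = pd.read_csv('data/generated_data/wp_wpds_politicians_by_country.csv')
sketch['high_quality'] = sketch['prediction'].isin(['FA', 'GA']).astype(int)
# count of high-quality articles per country
hq_per_country = sketch[sketch['high_quality'] == 1].groupby('country').size()
```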
###Code
data = pd.read_csv('data/generated_data/wp_wpds_politicians_by_country.csv')
#Remove superfluous columns
del data['Unnamed: 0']
data.head(1)
###Output
_____no_output_____
###Markdown
Step 2: Store tuples with prediction "FA" or "GA" in a separate dataframe
###Code
highquality_FA = data[data.prediction == 'FA']
highquality_GA = data[data.prediction == 'GA']
highquality_df = pd.concat([highquality_FA,highquality_GA],axis=0) #concatenate data one after the other
###Output
_____no_output_____
###Markdown
Step 3: Count of high quality articles by country
###Code
highquality_df = highquality_df.groupby('country').count()[['rev_id']] #Grouping by country we get
#the count of articles for each country
###Output
_____no_output_____
###Markdown
Step 4. Renaming rev_id to article count and sorting by article count we get:
###Code
highquality_df = highquality_df.rename(columns={"rev_id":"Num of highquality articles"})
highquality_df = highquality_df.sort_values('Num of highquality articles',ascending = False)
highquality_df.head(10)
###Output
_____no_output_____
###Markdown
Thus we have the top 10 countries with high-quality articles as listed above (note: this is without factoring in population). If an ORES user were to analyze articles based on just this data, they would not take into account the overall population or the overall number of articles generated from a country. Now let us look at this in proportion to the total number of articles. Step 1: Get the total count of articles for each country
###Code
allquality_df = data.groupby('country').count()[['rev_id']]
###Output
_____no_output_____
###Markdown
Step 2: Renaming rev_id to article count and sorting by article count we get:
###Code
allquality_df = allquality_df.rename(columns={"rev_id":"Num of articles"})
allquality_df = allquality_df.sort_values('Num of articles',ascending = False)
allquality_df.head(10)
###Output
_____no_output_____
###Markdown
Step 3: Merging this data we get:
###Code
merged_data = pd.merge(highquality_df,allquality_df,on="country",how="outer")
merged_data.head(1)
merged_data = merged_data.fillna(0) #Some countries have no high quality articles, thus fill NaN values with 0
merged_data['percentage of highquality to total'] = merged_data['Num of highquality articles']*100/merged_data['Num of articles']
merged_data.head()
###Output
_____no_output_____
###Markdown
Adding population data to this merged_data Step 1. Getting population by country as a numeric value
###Code
data = data.rename(columns={"Population mid-2018 (millions)":"population"})
data["population"] = data["population"].apply(lambda s: s.replace(",", ""))
data['population'] = pd.to_numeric(data['population'],errors='coerce')*1000000 # Convert to numeric
population_data = data.groupby('country').mean()[['population']] #Getting population for each country
population_data.head(1)
###Output
_____no_output_____
###Markdown
Step 2. Merging this with existing article data
###Code
final_data = pd.merge(merged_data,population_data,on="country",how="outer")
final_data.head()
###Output
_____no_output_____
###Markdown
Step 3. Getting percentage of articles to population
###Code
final_data['percentage of articles to population'] = final_data['Num of articles']*100/final_data['population']
#The following code is to simplify the region specific calculations done in Step 4.
final_data.index.name = 'country'
final_data.reset_index(inplace=True)
final_data.head()
###Output
_____no_output_____
###Markdown
Step 4. Loading region specific data
###Code
region_data = pd.read_csv('data/generated_data/cumulative_population.csv')
del region_data['Unnamed: 0'] #removing superfluous columns
###Output
_____no_output_____
###Markdown
Now from the data structure we have the following countries falling under each region:- AFRICA: Algeria, Egypt, Libya, Morocco, Sudan, Tunisia, Western Sahara, Benin, Burkina Faso, Cape Verde, Cote d'Ivoire, Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Mali ,Mauritania ,Niger ,Nigeria ,Senegal ,Sierra Leone ,Togo ,Burundi ,Comoros ,Djibouti ,Eritrea ,Ethiopia ,Kenya ,Madagascar ,Malawi ,Mauritius ,Mozambique ,Rwanda ,Seychelles ,Somalia ,South Sudan ,Tanzania ,Uganda ,Zambia ,Zimbabwe ,Angola ,Cameroon ,Central African Republic ,Chad ,Congo ,Congo, Dem. Rep. ,Equatorial Guinea ,Gabon ,Sao Tome and Principe ,Botswana ,eSwatini ,Lesotho ,Namibia ,South Africa - NORTHERN AMERICA: Canada, United States- LATIN AMERICA AND THE CARIBBEAN: Belize ,Costa Rica ,El Salvador ,Guatemala ,Honduras ,Mexico ,Nicaragua ,Panama ,Antigua and Barbuda ,Bahamas ,Barbados ,Cuba ,Curacao ,Dominica ,Dominican Republic ,Grenada ,Haiti ,Jamaica ,Puerto Rico ,St. Kitts-Nevis ,Saint Lucia ,St. Vincent and the Grenadines ,Trinidad and Tobago ,Argentina ,Bolivia ,Brazil ,Chile ,Colombia ,Ecuador ,Guyana ,Paraguay ,Peru ,Suriname ,Uruguay ,Venezuela - ASIA: Armenia ,Azerbaijan ,Bahrain ,Cyprus ,Georgia ,Iraq ,Israel ,Jordan ,Kuwait ,Lebanon ,Oman ,Qatar ,Saudi Arabia ,Syria ,Turkey ,United Arab Emirates ,Yemen ,Kazakhstan ,Kyrgyzstan ,Tajikistan ,Turkmenistan ,Uzbekistan ,Afghanistan ,Bangladesh ,Bhutan ,India ,Iran ,Maldives ,Nepal ,Pakistan ,Sri Lanka ,Brunei ,Cambodia ,Indonesia ,Laos ,Malaysia ,Myanmar ,Philippines ,Singapore ,Thailand ,Timor-Leste ,Vietnam ,China ,Japan ,Korea, North ,Korea, South ,Mongolia ,Taiwan - EUROPE: Denmark ,Estonia ,Finland ,Iceland ,Ireland ,Latvia ,Lithuania ,Norway ,Sweden ,United Kingdom ,Austria ,Belgium ,France ,Germany ,Liechtenstein ,Luxembourg ,Monaco ,Netherlands ,Switzerland ,Belarus ,Bulgaria ,Czechia ,Hungary ,Moldova ,Poland ,Romania ,Russia ,Slovakia ,Ukraine ,Albania ,Andorra ,Bosnia-Herzegovina ,Croatia ,Greece ,Italy ,Kosovo ,Macedonia ,Malta ,Montenegro ,Portugal ,San Marino ,Serbia ,Slovenia ,Spain- OCEANIA: Australia ,Federated States of Micronesia ,Fiji ,French Polynesia ,Guam ,Kiribati ,Marshall Islands ,Nauru ,New Caledonia ,New Zealand ,Palau ,Papua New Guinea ,Samoa ,Solomon Islands ,Tonga ,Tuvalu ,Vanuatu Step 5. Adding up article counts for each region
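The next cell does this with explicit per-region counters and if/elif checks. As a sketch of a more compact alternative, the region lists could be inverted into a country-to-region dictionary and the summing left to `groupby`; the lists below are truncated for illustration only, and `final_data` is the dataframe built above.

```
# sketch only: region lists truncated for illustration (the full lists appear in the next cell)
region_lists = {
    'AFRICA': ['Algeria', 'Egypt', 'Libya'],
    'NORTHERN AMERICA': ['Canada', 'United States'],
    'OCEANIA': ['Australia', 'Fiji', 'New Zealand'],
    # ... remaining regions and countries as listed above
}
country_to_region = {c: region for region, countries in region_lists.items() for c in countries}

regional_sketch = (final_data
                   .assign(Geography=final_data['country'].map(country_to_region))
                   .groupby('Geography')[['Num of articles', 'Num of highquality articles']]
                   .sum())
```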
###Code
#Article count set to zero
AFRICA = 0
NA = 0
LA = 0
ASIA = 0
EUROPE = 0
OCEANIA = 0
#High quality counts set to 0
AFRICA_hq = 0
NA_hq = 0
LA_hq = 0
ASIA_hq = 0
EUROPE_hq = 0
OCEANIA_hq = 0
#List of all countries belonging to a region
Africa_l = ["Algeria", "Egypt", "Libya",
"Morocco", "Sudan", "Tunisia",
"Western Sahara", "Benin", "Burkina Faso",
"Cape Verde", "Cote d'Ivoire", "Gambia", "Ghana",
"Guinea", "Guinea-Bissau", "Liberia, Mali" ,"Mauritania" ,"Niger" ,"Nigeria" ,"Senegal"
,"Sierra Leone" ,"Togo" ,"Burundi" ,"Comoros" ,"Djibouti" ,"Eritrea" ,"Ethiopia" ,"Kenya" ,"Madagascar" ,
"Malawi" ,"Mauritius" ,"Mozambique" ,"Rwanda" ,"Seychelles" ,"Somalia" ,"South Sudan" ,"Tanzania" ,"Uganda" ,
"Zambia" ,
"Zimbabwe" ,"Angola" ,"Cameroon" ,"Central African Republic" ,"Chad" ,"Congo" ,"Congo, Dem. Rep." ,"Equatorial Guinea" ,
"Gabon" ,"Sao Tome and Principe" ,"Botswana" ,"eSwatini" ,"Lesotho" ,"Namibia" ,"South Africa"]
NA_l = ["Canada","United States"]
LA_l = ["Belize" ,"Costa Rica" ,"El Salvador" ,"Guatemala" ,"Honduras" ,"Mexico" ,"Nicaragua" ,"Panama" ,"Antigua and Barbuda" ,
"Bahamas" ,"Barbados" ,"Cuba" ,"Curacao" ,"Dominica" ,"Dominican Republic" ,"Grenada" ,"Haiti" ,"Jamaica" ,"Puerto Rico" ,
"St. Kitts-Nevis" ,"Saint Lucia" ,"St. Vincent and the Grenadines" ,"Trinidad and Tobago" ,"Argentina" ,"Bolivia" ,
"Brazil" ,"Chile" ,"Colombia" ,"Ecuador" ,"Guyana" ,"Paraguay" ,"Peru" ,"Suriname" ,"Uruguay" ,"Venezuela"]
Asia_l = ["Armenia" ,"Azerbaijan" ,"Bahrain" ,"Cyprus" ,"Georgia" ,"Iraq" ,"Israel" ,"Jordan" ,"Kuwait" ,"Lebanon" ,"Oman" ,"Qatar"
,"Saudi Arabia" ,"Syria" ,"Turkey" ,"United Arab Emirates" ,"Yemen" ,"Kazakhstan" ,"Kyrgyzstan" ,"Tajikistan"
,"Turkmenistan" ,"Uzbekistan" ,"Afghanistan" ,"Bangladesh" ,"Bhutan" ,"India" ,"Iran" ,"Maldives" ,"Nepal"
,"Pakistan" ,"Sri Lanka" ,"Brunei" ,"Cambodia" ,"Indonesia" ,"Laos" ,"Malaysia" ,"Myanmar" ,"Philippines"
,"Singapore" ,"Thailand" ,"Timor-Leste" ,"Vietnam" ,"China" ,"Japan" ,"Korea, North" ,"Korea, South" ,"Mongolia" ,"Taiwan"]
Euro_l = ["Denmark" ,"Estonia" ,"Finland" ,"Iceland" ,"Ireland" ,"Latvia" ,"Lithuania" ,"Norway" ,"Sweden" ,"United Kingdom" ,
"Austria" ,"Belgium" ,"France" ,"Germany" ,"Liechtenstein" ,"Luxembourg" ,"Monaco" ,"Netherlands" ,"Switzerland" ,
"Belarus" ,"Bulgaria" ,"Czechia" ,"Hungary" ,"Moldova" ,"Poland" ,"Romania" ,"Russia" ,"Slovakia" ,"Ukraine" ,"Albania" ,
"Andorra" ,"Bosnia-Herzegovina" ,"Croatia" ,"Greece" ,"Italy" ,"Kosovo" ,"Macedonia" ,"Malta" ,"Montenegro" ,"Portugal" ,
"San Marino" ,"Serbia" ,"Slovenia" ,"Spain"]
Oceania_l = ["Australia" ,"Federated States of Micronesia" ,"Fiji" ,"French Polynesia" ,"Guam" ,"Kiribati" ,"Marshall Islands"
,"Nauru" ,"New Caledonia" ,"New Zealand" ,"Palau" ,"Papua New Guinea" ,"Samoa" ,"Solomon Islands" ,"Tonga" ,"Tuvalu"
,"Vanuatu"]
#Traversing a dataframe to get the number of articles (high quality and total)
for index,r in final_data.iterrows():
if r["country"] in Africa_l:
AFRICA = AFRICA + r['Num of articles']
AFRICA_hq = AFRICA_hq + r['Num of highquality articles']
elif r["country"] in NA_l:
NA = NA + r['Num of articles']
NA_hq = NA_hq + r['Num of highquality articles']
elif r["country"] in LA_l:
LA = LA + r['Num of articles']
LA_hq = LA_hq + r['Num of highquality articles']
elif r["country"] in Asia_l:
ASIA = ASIA + r['Num of articles']
ASIA_hq = ASIA_hq + r['Num of highquality articles']
elif r["country"] in Euro_l:
EUROPE = EUROPE + r['Num of articles']
EUROPE_hq = EUROPE_hq + r['Num of highquality articles']
elif r["country"] in Oceania_l:
OCEANIA = OCEANIA + r['Num of articles']
OCEANIA_hq = OCEANIA_hq + r['Num of highquality articles']
###Output
_____no_output_____
###Markdown
Step 6. Merging article counts and population data for regions
###Code
#Creating data frame for counts
region_data_article= pd.DataFrame({'Geography': ["AFRICA", "NORTHERN AMERICA", "LATIN AMERICA AND THE CARIBBEAN","ASIA","EUROPE","OCEANIA"], 'Num of Articles': [AFRICA, NA,LA,ASIA,EUROPE,OCEANIA], 'Num of highquality Articles':
[AFRICA_hq,NA_hq,LA_hq,ASIA_hq,EUROPE_hq,OCEANIA_hq]})
region_data = pd.merge(region_data,region_data_article,on="Geography",how="inner")
region_data.head(1)
###Output
_____no_output_____
###Markdown
Step 7. Converting population to a numeric value, and calculating the percentage of articles
###Code
region_data = region_data.rename(columns={"Population mid-2018 (millions)":"population"}) #renaming column
region_data["population"] = region_data["population"].apply(lambda s: s.replace(",", ""))
region_data['population'] = pd.to_numeric(region_data['population'],errors='coerce')*1000000 # Convert to numeric
region_data['percentage of articles to population'] = region_data['Num of Articles']*100/region_data['population']
region_data['percentage of high quality articles'] = region_data['Num of highquality Articles']*100/region_data['Num of Articles']
###Output
_____no_output_____
###Markdown
The Second half of this analysis will be looking at the 10 lowest and highest ranking countries in: a) Percentage of high quality articles - TOP TEN
###Code
final_data.sort_values('percentage of highquality to total',ascending = False).head(10)
###Output
_____no_output_____
###Markdown
- LOWEST TEN
###Code
final_data.sort_values('percentage of highquality to total',ascending = True).head(10)
###Output
_____no_output_____
###Markdown
b) Percentage of articles to population - TOP TEN
###Code
final_data.sort_values('percentage of articles to population',ascending = False).head(10)
###Output
_____no_output_____
###Markdown
- LOWEST TEN
###Code
final_data.sort_values('percentage of articles to population',ascending = True).head(10)
###Output
_____no_output_____
###Markdown
c) Highest to lowest region-wise ranking of - PERCENTAGE OF ARTICLES TO POPULATION
###Code
region_data.sort_values('percentage of articles to population',ascending=False)
###Output
_____no_output_____
###Markdown
- PERCENTAGE OF HIGH QUALITY ARTICLES
###Code
region_data.sort_values('percentage of high quality articles',ascending=False)
###Output
_____no_output_____
###Markdown
Conclusion- From the above tables we see that looking at just the ORES data would give us a completely different analysis. Since we are looking at only "English" articles, the country with the most high-ranking articles is, as expected, a country whose first language is English; however, when you look at this as a percentage of the overall number of articles, you interestingly see countries that do not have English as their first language.- Region-wise this still holds true in terms of the percentage of high-quality articles, which may be because the larger number of English articles written by these countries overpowers the granular look at countries with fewer articles.
###Code
final_data.sort_values('population',ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Exploring Bias in Wikipedia's Political Articles by CountryHere, I retrieve English Wikipedia's political article names by country and merge them with world population data. Then, I analyze the coverage and quality of Wikipedia's politician articles by country. Our definition of coverage and quality will be:> __Coverage__: The percent of political articles per country population.__Quality__: The percent of high-quality political articles per country's total political articles. A high-quality article will be one considered either a _Featured Article (FA)_ or a _Good Article (GA)_ (the highest 2 of Wikipedia's 6 article quality options). SetupFirst, we will import the packages necessary to run the following code.
###Code
import csv
import requests
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Data Acquisition World Population DataThe world population data can be found [here](https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0). Here we:* Import the world population data into a dataframe.* Rename the columns for simplicity and convert the population from millions.
###Code
# import population data
countries_df = pd.read_csv('./data/WPDS_2018_data.csv')
# rename columns
countries_df.columns = ['country', 'population']
# convert population from millions
countries_df['population'] = [float(c.replace(",", "")) * 1000000 for c in countries_df['population']]
# display
countries_df.head()
###Output
_____no_output_____
###Markdown
Wikipedia's Political Article Data Article Name Data:The article data can be found [here](https://figshare.com/articles/Untitled_Item/5513449), along the filepath country/data/page_data.csv.Here we:* Import the article data into a dataframe.
###Code
# import article data
page_data_df = pd.read_csv('./data/page_data.csv')
# display
page_data_df.head()
###Output
_____no_output_____
###Markdown
Article Quality Data:Next, we use Wikimedia's API to call their _Objective Revision Evaluation Service_ ([ORES](https://www.mediawiki.org/wiki/ORES)) which will output the predicted quality for each article.To get started, here we:* Create parameters called headers and model to pass to the API calls.* Define the API endpoint URL.* Convert the article revision IDs from our article name dataset into a list we will loop over in the next step.
###Code
# Parameters for the API call. Customize "headers" with your own information
headers = {'User-Agent': 'https://github.com/mag3141592',
'From': '[email protected]'}
model = 'wp10'
# API Endpoint
url = 'https://ores.wikimedia.org/v3/scores/enwiki/'
# Make list of article revision ids
rev_ids = list(page_data_df['rev_id'])
###Output
_____no_output_____
###Markdown
Here we loop over our 47198 revision IDs. Documentation for ORES can be found [here](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model). Through some experimentation, iterating over 100 at a time prevented it from crashing. Each request returns a json, from which we can extract the quality prediction. I set up a try and except scenario, because for several revision IDs there is no available prediction. After we parse out the prediction value, we save it to our _predictions_ list. The below will continue iterating over 100 revision IDs at a time, until all 47198 requests have been processed.
###Code
# Empty list to store returned prediction values and revision IDs without predictions, respectively
predictions = []
missing = []
# Initiate index and define stop index
idx = 0
pages = len(rev_ids)
# Define number of revision IDs to send to the API at a time
threshold = 100
while idx < pages:
# Define end index to never be larger than stopping index
end_idx = min(idx + threshold, pages)
# Subsets the revision IDs
rev_param = '|'.join(str(x) for x in rev_ids[idx:end_idx])
params = {'model' : model,
'revids': rev_param}
# Calls API and stores the response JSON
call = requests.get(url, params, headers = headers)
response = call.json()
    # Tries to retrieve a quality prediction from the returned JSON. If it fails, it stores NaN as the prediction and the revision_id in our missing list.
for rev in response['enwiki']['scores']:
try:
predict = response['enwiki']['scores'][rev][model]['score']['prediction']
except:
missing.append(rev)
predict = np.nan
predictions.append(predict)
# Print statement to see iteration progress
if end_idx%5000 == 0:
print(end_idx, ' processed')
# Updates the starting index
idx += threshold
###Output
5000 processed
10000 processed
15000 processed
20000 processed
25000 processed
30000 processed
35000 processed
40000 processed
45000 processed
###Markdown
Looking at the results above, we see that 113 article revision IDs failed to return a prediction. Those revision IDs are listed below.
###Code
print(missing)
###Output
['235107991', '550682925', '671484594', '684023803', '684023859', '698572327', '703773782', '712872338', '712872421', '712872473', '712872531', '712873183', '712873308', '712873386', '712878000', '712878267', '712878343', '712878396', '712881543', '712881676', '712881741', '712881882', '712889562', '712889594', '712889683', '712889781', '712889809', '712891291', '712891354', '712891378', '712891476', '713368646', '715273866', '717927381', '719581803', '720054719', '720356159', '720688837', '721509220', '726600165', '730950147', '734957625', '738514517', '738984692', '745915558', '747688056', '749326717', '755180326', '756697478', '757313957', '757961591', '763558111', '765662083', '768013050', '768871687', '769271454', '771213598', '771642775', '774023957', '777163201', '779101752', '779135011', '779954797', '779957437', '782170063', '783382630', '787181453', '787398581', '788310383', '788722110', '789281061', '789285762', '789286413', '790028876', '790147995', '791866288', '792400552', '792857662', '792933603', '794311724', '794854344', '794866848', '795588781', '798738453', '798891126', '799880073', '800574730', '801306872', '801638735', '801835819', '802148439', '802475516', '802515929', '802765034', '802904214', '803019384', '803784385', '804118645', '804158476', '804697858', '804791599', '804946658', '805041930', '805211553', '805586089', '805936041', '806030859', '806811023', '807161510', '807367030', '807367166', '807479587', '807484325']
###Markdown
Combining DatasetsNow that we have all the data (population, article names, and quality prediction), below we add a quality predictions column onto our article name dataframe (page_data_df).
###Code
page_data_df['prediction'] = predictions
page_data_df.head(10)
###Output
_____no_output_____
###Markdown
Now we merge the population dataframe with the article (name and prediction) dataframe. I used an inner join below to remove countries that existed in one file but not the other. Then we output our new combined dataset to a csv.
###Code
# Inner joins population dataframe with the article dataframe
csv_df = countries_df.join(page_data_df.set_index('country'), how = 'inner', on = 'country')
# Rename and reorder columns
csv_df.columns = ['country', 'population', 'article_name', 'revision_id', 'article_quality']
csv_df = csv_df[['country', 'article_name', 'revision_id', 'article_quality', 'population']]
# Outputs CSV of merged data
csv_df.to_csv('./data/hcds-a2-bias-data.csv', index = False)
# Displays
csv_df.head(10)
###Output
_____no_output_____
###Markdown
Analysis Coverage AnalysisFirst, I will focus on calculating our _coverage_ metric. To do so, we need to find the total number of political articles by country. This is done below by grouping by country and counting article names.
###Code
# Group and count articles over group
articles_per_country = csv_df.groupby(['country']).count()[['article_name']]
# Format and display results
articles_per_country.columns = ['total_article_count']
articles_per_country.head(10)
###Output
_____no_output_____
###Markdown
Next, we inner join our total articles per country dataframe with our countries data. We divide the articles per country by the respective population and convert to percent in order to calculate _coverage_.
###Code
# Join article count and population datasets
apc_df = countries_df.join(articles_per_country, how = 'inner', on = 'country')
# Calculate coverage
apc_df['articles_per_population_%']= apc_df['total_article_count']/apc_df['population'] * 100
# Format and display results
apc_df = apc_df.set_index('country')
apc_df.head(10)
###Output
_____no_output_____
###Markdown
Now that we've finished our coverage metric, we will display the 10 highest and the 10 lowest ranking countries in terms of political article coverage.
###Code
# 1. Sort merged dataframe by descending coverage
apc_df.sort_values('articles_per_population_%', ascending = False).head(10)
# 2. Sort merged dataframe by ascending coverage
apc_df.sort_values('articles_per_population_%', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
Next, we focus on calculating our quality metric. To do so, we subset our joint population and article quality dataframe to just articles with high-quality ratings (FA and GA). Then we will, again, group by country and count the articles over the country.
###Code
# Subset our population and article quality dataframe into only articles of FA and GA quality
hq_df = csv_df[(csv_df['article_quality'] == 'FA')|(csv_df['article_quality'] == 'GA')]
# Group by country and count over articles
hq_df = hq_df.groupby(['country']).count()[['article_name']]
# Format and display
hq_df.columns = ['hq_article_count']
hq_df.head(10)
###Output
_____no_output_____
###Markdown
Then we join our total articles per country dataframe with our total high-quality articles per country. I will use a left join (the default) here in order to preserve the countries that had articles but no high-quality articles.
###Code
# Left join total articles per country with total high-quality articles per country
hq_df = articles_per_country.join(hq_df)
# Replace NaN with 0, these occured when the country had 0 high-quality articles as a result of the left join
hq_df = hq_df.fillna(0)
# Display
hq_df.head(10)
###Output
_____no_output_____
###Markdown
Finally, we will calculate our quality metric. Below we take the total high-quality articles per country and divide them by the total articles per country and convert to a percent.
###Code
# Calculate quality metric
hq_df['hq_article_%'] = hq_df['hq_article_count']/hq_df['total_article_count'] * 100
###Output
_____no_output_____
###Markdown
Now that we've finished our quality metric, we will display the 10 highest and the 10 lowest ranking countries in terms of political article quality.
###Code
# 3. Sort merged dataframe by descending quality
hq_df.sort_values('hq_article_%', ascending = False).head(10)
# 4. Sort merged dataframe by ascending quality
hq_df.sort_values('hq_article_%', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
Above, we see the 10 lowest ranked countries in terms of quality all have a quality value of 0. Below I will show the total 36 countries with a quality of 0.
###Code
zeros = hq_df[(hq_df['hq_article_count'] == 0)].reset_index()
zeros
###Output
_____no_output_____
###Markdown
Now we will exclude all countries with quality = 0 just to see the new 10 lowest ranking countries.
###Code
# Subset to excluding quality = 0 countries and sort by ascending quality
hq_df[(hq_df['hq_article_count'] > 0)].sort_values('hq_article_%', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
A2: Bias in data The goal of this assignment is to identify potential biases with the volume and quality of English Wikipedia articles about politicians, across many different countries. Two external data sources were used for this assignment and they are both stored in the `data` subdirectory of this repository. The first external data source is a CSV file storing minimal information about ~50,000 Wikipedia articles about politicians ("page_data.csv"), which can be downloaded at https://figshare.com/articles/Untitled_Item/5513449. The second external data source is another CSV file storing ~200 countries/continents and their populations ("WPDS_2018_data.csv"), which can be downloaded at https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0.

The schemas for the CSV files are as follows:

**page_data.csv**

|column |description |
|-------|-----------------------------------------------------|
|page |name of Wikipedia article |
|country|country of politician the Wikipedia article is about |
|rev_id |revision ID of the last edit to the Wikipedia article|

**WPDS_2018_data.csv**

|column |description |
|------------------------------|--------------------------------------------------------|
|Geography |country or continent |
|Population mid-2018 (millions)|population of country or continent in millions of people|
###Code
import csv
import pandas
import requests
###Output
_____no_output_____
###Markdown
1. Retrieving article quality predictions To predict an article's quality, we are using the Object Revision Evaluation Service ([ORES](https://www.mediawiki.org/wiki/ORES)) API provided by Wikimedia. Given a version of an article (a revision ID for an article), the ORES API provides class probabilities for six of the grades in the [WikiProject article quality grading scheme](https://en.wikipedia.org/wiki/Wikipedia:Content_assessmentGrades) (FA, GA, B, C, Start, Stub). The ORES API also provides a grade prediction, which we will be extracting for use in later stages of this assignment.The ORES API does allow for the [grades of multiple articles to be predicted at once](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context), but there are limitations. First of all, there's a limit to how long a URL can be (2083 characters). Since each revision ID passed to the ORES API is 9 characters long and multiple revision IDs are separated by the "|" character, this limits us to passing ~200 Wikipedia articles to the ORES API with each call. Secondly, through trial and error, it seems like the ORES API provides its _own_ limit to the number of Wikipedia articles it can query within the same API call. This limit was found to be 140 Wikipedia articles, with any number of articles above this causing the API call to come back with a 503 Service Unavailable response.Due to the limit on revision IDs that can be passed to the ORES API with each call, we first separate the revision IDs found in the Wikipedia article CSV file into chunks no longer than 140 IDs. We then make an API call for each of these chunks and save each article's predicted grade/quality. There are some cases where predictions can't be made either because the revision ID can't be found (RevisionNotFound error) or the Wikipedia article has been deleted (TextDeleted error). We ignore Wikipedia articles where quality predictions cannot be made in later stages of this assignment. Since ~300 revision ID chunks are created from the Wikipedia article CSV file and ~300 API calls are made, this step may take a couple minutes.
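The ~200-article figure above follows from a little arithmetic: each revision ID contributes roughly 10 characters to the URL (9 digits plus the "|" separator), so a 2083-character URL leaves room for about 200 IDs once the fixed part of the endpoint is subtracted. The sketch below only illustrates this; the 80-character figure for the fixed part is an assumption, and the 140-article cap found by trial and error is stricter anyway.

```
# back-of-the-envelope check of the URL-length limit, not an exact calculation
URL_LIMIT = 2083
FIXED_PART = 80        # assumed length of the endpoint plus the other query parameters
PER_REV_ID = 9 + 1     # 9 digits plus the "|" separator

print((URL_LIMIT - FIXED_PART) // PER_REV_ID)  # roughly 200 revision IDs per call at most
```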
###Code
# maximum revision IDs allowed per API call, found through trial and error
MAX_REV_IDS_PER_CALL = 140
def create_string_chunk(rev_ids, chunk_list):
"""
Transforms a list of revision IDs into a string separated by "|" characters to be passed to a single API call.
Adds the revision ID string chunk to a list of all revision ID string chunks, representing the number of API calls.
Clears the original list of revision IDs so we can start creating the next list of revision IDs.
"""
rev_ids_string = "|".join(rev_ids)
chunk_list.append(rev_ids_string)
rev_ids.clear()
rev_ids_string_chunks = []
with open("data/page_data.csv", "r") as csv_file:
csv_reader = csv.reader(csv_file)
is_header = True
tmp_rev_ids = []
for _, _, rev_id in csv_reader:
# ignore header row
if is_header:
is_header = False
continue
tmp_rev_ids.append(rev_id)
# if we've accumulated the maximum number of revision IDs allowed for an API call, transform the revision IDs
# into a string to pass as a parameter to an API call
if len(tmp_rev_ids) == MAX_REV_IDS_PER_CALL:
create_string_chunk(tmp_rev_ids, rev_ids_string_chunks)
# transform all leftover revision IDs into a string
if len(tmp_rev_ids) > 0:
create_string_chunk(tmp_rev_ids, rev_ids_string_chunks)
print("{} revision ID chunks created".format(len(rev_ids_string_chunks)))
# documentation found here: https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context
ORES_ENDPOINT = "https://ores.wikimedia.org/v3/scores/{context}?models={model}&revids={revids}"
predictions = {}
for rev_ids_string in rev_ids_string_chunks:
ores_params = {
"context": "enwiki",
"model": "wp10",
"revids": rev_ids_string
}
response = requests.get(ORES_ENDPOINT.format(**ores_params))
# raise an AssertionError if the API call does not come back successfully
assert response.status_code == 200, "API call came back with status code: {}".format(response.status_code)
print("*", end="")
json = response.json()
scores = json["enwiki"]["scores"]
for rev_id in scores.keys():
model = scores[rev_id]["wp10"]
# some queries by the API don't return predictions because the revision ID wasn't found (RevisionNotFound error)
# or the Wikipedia article has since been deleted (TextDeleted error), we will not save these revision IDs in
# the predictions map
if "score" in model:
prediction = model["score"]["prediction"]
predictions[rev_id] = prediction
print("\nArticle quality predictions retrieved")
###Output
**************************************************************************************************************************************************************************************************************************************************************************************************************************************************
Article quality predictions retrieved
###Markdown
2. Merging the datasets Now that we effectively have three data sources (Wikipedia articles CSV file, countries CSV file, article quality predictions) that can be linked with each other, we will merge the data sources into one CSV outfile ("article_qualities.csv").

The schema for the CSV outfile is as follows:

|column |description |
|---------------|----------------------------------------------------|
|country |country of politician the Wikipedia article is about|
|article_name |name of Wikipedia article |
|revision_id |revision ID of last edit to Wikipedia article |
|article_quality|predicted grade/quality of Wikipedia article |
|population |population of country in millions of people |

Before merging the three data sources, we have to take into account mismatching country names between the articles CSV file and the countries CSV file. I've gone through the articles CSV file, saved each country that didn't have a match in the countries CSV file and tallied up how many times that mismatching country name appeared in the articles CSV file. For mismatching country names that appeared more than 30 times (arbitrary number to make sure the number of manual country name mappings wasn't too small or large) in the articles CSV file and had a slightly different country name in the countries CSV file, I've created a manual mapping between the similar country names (e.g. "Czech Republic" in the articles CSV file maps to "Czechia" in the countries CSV file) so we are able to merge Wikipedia article data from these countries. The final country name that is used in the outfile is the country name seen in the countries CSV file. All Wikipedia articles with country names that still have no match in the countries CSV file will be ignored during later stages of this assignment. All Wikipedia articles for which we weren't able to make quality predictions will also be ignored.
###Code
# load the country populations CSV file into memory since its small, makes it easier to merge the three data sources
# together
populations = {}
with open("data/WPDS_2018_data.csv", "r") as csv_file:
csv_reader = csv.reader(csv_file)
is_header = True
for country, population in csv_reader:
# ignore header row
if is_header:
is_header = False
continue
# remove commas from populations (e.g. 1,284 -> 1284) to make it easier to convert populations to floats at a
# later stage
populations[country] = population.replace(",", "")
# mappings between country names (country name in articles CSV file -> country name in countries CSV file) that
# slightly differ in each CSV file, map only contains common mismatched country names
COMMON_MISMATCHING_COUNTRIES = {
"Czech Republic": "Czechia",
"Hondura": "Honduras",
"Congo, Dem. Rep. of": "Congo, Dem. Rep.",
"Salvadoran": "El Salvador",
"South Korean": "Korea, South",
"Ivorian": "Cote d'Ivoire",
"Samoan": "Samoa",
"Saint Lucian": "Saint Lucia",
"East Timorese": "Timor-Leste",
"Saint Kitts and Nevis": "St. Kitts-Nevis",
"Swaziland": "eSwatini",
}
with open("data/page_data.csv", "r") as csv_file_read:
with open("article_qualities.csv", "w") as csv_file_write:
csv_reader = csv.reader(csv_file_read)
csv_writer = csv.writer(csv_file_write)
# write header row
csv_writer.writerow(["country", "article_name", "revision_id", "article_quality", "population"])
is_header = True
for page, country, rev_id in csv_reader:
# ignore header row when reading
if is_header:
is_header = False
continue
# if possible, map the country name provided by the articles CSV file
country = COMMON_MISMATCHING_COUNTRIES.get(country, country)
# write to the CSV outfile only if we were able to predict the article's quality and the country name
# matches with a country name in the countries CSV file
if rev_id in predictions.keys() and country in populations.keys():
# join the article's quality prediction
quality_prediction = predictions[rev_id]
# join the country's population in millions of people
population = populations[country]
csv_writer.writerow([country, page, rev_id, quality_prediction, population])
###Output
_____no_output_____
###Markdown
3. Computing the analysis To identify potential biases with English Wikipedia articles about politicians across countries, we will use two different metrics.**articles-per-population**This metric will be displayed in the "articles/million people" column in the tables below. For each country, it represents the number of articles about politicians per one million people of country population.**proportion of high-quality articles**This metric will be displayed in the "perc. of high quality articles" column in the tables below. For each country, it represents the proportion of articles about politicians that were predicted to be either a featured article (FA) or a good article (GA). The proportion is represented as a percentage.
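As an illustrative example (with made-up numbers): a country with a population of 2 million people and 50 politician articles has 50/2 = 25 articles per million people; if 5 of those 50 articles are predicted to be FA or GA, its proportion of high-quality articles is 5/50 = 10%.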
###Code
# the outputted CSV file as a pandas.DataFrame
articles_df = pandas.read_csv("article_qualities.csv")
# add a boolean column determining whether the articles was predicted to be of high quality (FA or GA)
articles_df["high_quality_article"] = (articles_df["article_quality"] == "FA") | (articles_df["article_quality"] == "GA")
# aggregate the original DataFrame to calculate the number of articles for each country
article_count_df = pandas.DataFrame({"article count": articles_df.groupby(["country", "population"]).size()}).reset_index()
# aggregate the original DataFrame to calculate the number of high quality articles for each country
hq_article_count_df = pandas.DataFrame({"high quality article count": articles_df.groupby(["country", "population"])["high_quality_article"].sum()}).reset_index()
# merge the two aggregated DataFrames together
countries_df = article_count_df.merge(hq_article_count_df)
# convert a count column from float to integer
countries_df["high quality article count"] = countries_df["high quality article count"].astype(int)
# add a column representing the number of articles per one million people
countries_df["articles/million people"] = countries_df["article count"]/countries_df["population"]
# add a column representing the proportion of high quality articles as a percentage
countries_df["perc. of high quality articles"] = (countries_df["high quality article count"]/countries_df["article count"])*100
###Output
_____no_output_____
###Markdown
10 highest ranked countries in terms of articles-per-population
###Code
countries_df.sort_values(by="articles/million people", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
This table is quite unexciting since it's entirely made up of countries with less than half a million people. Below is another table displaying the 10 highest ranked countries in terms of articles-per-population, but filtering out countries with fewer than 100 articles about politicians.
###Code
countries_df[countries_df["article count"] >= 100].sort_values(by="articles/million people", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
10 lowest ranked countries in terms of articles-per-population
###Code
countries_df.sort_values(by="articles/million people").head(10)
###Output
_____no_output_____
###Markdown
10 highest ranked countries in terms of proportion of high-quality articles
###Code
countries_df.sort_values(by="perc. of high quality articles", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
10 lowest ranked countries in terms of proportion of high-quality articles Since there are more than 10 countries with zero high quality articles about politicians, I've also sorted this table by the volume of articles about politicians. So, countries that have no high quality articles about politicians but have a large number of _articles_ (regardless of quality) about politicians will appear higher in the table.
###Code
countries_df.sort_values(by=["perc. of high quality articles", "article count"], ascending=[True, False]).head(10)
###Output
_____no_output_____
###Markdown
Step 1: Getting the Article and Population Data
###Code
page_data = pd.read_csv('page_data.csv')
WPDS_2020_data = pd.read_csv('WPDS_2020_data.csv')
page_data
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data
###Code
new_page = page_data[page_data['page'].str[0:9] != 'Template:']
new_page.reset_index(drop = True,inplace = True)
new_page
WPDS_2020_data
###Output
_____no_output_____
###Markdown
Manipulating data for later
###Code
keepLater = WPDS_2020_data[WPDS_2020_data['Name'].str.isupper()]
keepLater.reset_index(inplace = True)
keepLater[2:][:]
#WPDS_2020_data.drop(WPDS_2020_data[WPDS_2020_data['Name'].str[1].str.isupper()].index, inplace=True)
WPDS_2020_data.reset_index(inplace = True)
WPDS_2020_data
###Output
_____no_output_____
###Markdown
Creating this list at the end since instructions were poorly written
###Code
regionList = [0] * 234
i = 0
for j in keepLater['index'][2:]:
for i in WPDS_2020_data['index']:
if i > j:
regionList[i] = j
#regionList
WPDS_2020_data['regionInd'] = regionList
WPDS_2020_data
###Output
_____no_output_____
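###Markdown
 As an aside, a more compact way to derive the same kind of region mapping (a sketch, assuming `WPDS_2020_data` still contains the ALL-CAPS region rows and the `index` column created by `reset_index` above) is a forward fill, shown below. Note that unlike the loop above it also assigns countries under the first two ALL-CAPS rows (e.g. WORLD) to those rows instead of leaving them at 0, so adjust if that distinction matters.
###Code
# Sketch: forward-fill the index of the most recent ALL-CAPS (region) row onto
# the country rows beneath it.
is_region = WPDS_2020_data['Name'].str.isupper()
region_idx = WPDS_2020_data['index'].where(is_region)
WPDS_2020_data['regionInd_alt'] = region_idx.ffill().fillna(0).astype(int)
###Output
_____no_output_____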
###Markdown
Setting up endpoint and function to read API
###Code
url_endpoint = "https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={rev_id}"
headers = {
'User-Agent': 'https://github.com/Samperebikovsky',
'From': '[email protected]'
}
def api_call(endpoint, rev_id):
#print(rev_id)
call = requests.get(endpoint.format(**rev_id), headers=headers)
info = call.json()
return info
###Output
_____no_output_____
###Markdown
Creating pipe-separated batches of 50 rev_ids so we query 50 at a time instead of all at once
###Code
article_results = []
rev_ids = new_page['rev_id'].to_list()
# build pipe-separated batches of 50 rev_ids each so no id is ever split mid-number
chunk_size = 50
finRevs = ["|".join(str(r) for r in rev_ids[i:i + chunk_size])
           for i in range(0, len(rev_ids), chunk_size)]
#finRevs
###Output
_____no_output_____
###Markdown
Function to accomplish the reading in batches
###Code
def rePreds(revList):
parms = {"project": "enwiki", "model":"articlequality", "rev_id": revList}
#print(parms)
resp = api_call(url_endpoint,parms)
preds = []
rid = []
for i in revList.split('|'):
        preds.append(resp.get('enwiki', {}).get('scores', {})
                     .get(i, {}).get("articlequality", {}).get("score", {}).get("prediction", np.nan))
rid.append(i)
return preds,rid
###Output
_____no_output_____
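###Markdown
 For reference, the chained `.get()` calls above assume a nested response of roughly the following shape (the revision id and score values here are illustrative, not real output):
###Code
# Illustrative shape of an ORES articlequality response for one revision id
example_response = {
    "enwiki": {
        "scores": {
            "235107991": {
                "articlequality": {
                    "score": {
                        "prediction": "Stub",
                        "probability": {"Stub": 0.74, "Start": 0.20, "C": 0.04,
                                        "B": 0.01, "GA": 0.005, "FA": 0.005}
                    }
                }
            }
        }
    }
}
###Output
_____no_output_____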
###Markdown
Running functions
###Code
fullPredictions = []
fullIds = []
for ids in finRevs:
out,nid = rePreds(ids)
fullPredictions.append(out)
fullIds.append(nid)
#fullPredictions
#fullIds
###Output
_____no_output_____
###Markdown
Flatten list of lists into one list to add to dataframe
###Code
flatPreds = [item for sublist in fullPredictions for item in sublist]
flatIds = [int(item) for sublist in fullIds for item in sublist]
predDf = pd.DataFrame()
predDf['rev_id'] = flatIds
predDf['ArtPred'] = flatPreds
#predDf
#new_page
###Output
_____no_output_____
###Markdown
Step 3: Combining the Datasets
###Code
WPDS_2020_data = WPDS_2020_data.rename(columns={'Name': 'country'})
WPDS_2020_data
fullData = pd.merge(new_page, predDf, on="rev_id", how = 'left')
fullData = pd.merge(fullData, WPDS_2020_data, on=['country'], how = 'left')
fullData
###Output
_____no_output_____
###Markdown
Limiting columns
###Code
fullData = fullData[['country', 'page', 'rev_id', 'ArtPred', 'Population','regionInd']]
fullData
###Output
_____no_output_____
###Markdown
Removing null here
###Code
nullData = fullData[fullData.isna().any(axis=1)]
nullData = nullData.rename(columns={'page': 'article_name', 'rev_id':'revision_id', 'ArtPred':'article_quality_est.'
, 'Population': 'population'})
nullData
predData = fullData.dropna()
predData = predData.rename(columns={'page': 'article_name', 'rev_id':'revision_id', 'ArtPred':'article_quality_est.'
, 'Population': 'population'})
predData
###Output
_____no_output_____
###Markdown
Output csv files here
###Code
predData[['country', 'article_name', 'revision_id', 'article_quality_est.', 'population']].to_csv('wp_wpds_politicians_by_country.csv')
nullData[['country', 'article_name', 'revision_id', 'article_quality_est.', 'population']].to_csv('wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Step 4: Analysis
###Code
predData[['country', 'article_name', 'article_quality_est.', 'population']].to_csv('final_article_data.csv')
predData
###Output
_____no_output_____
###Markdown
Queries to find number of articles by country
###Code
queryProp = """SELECT country, CAST(CAST(population AS FLOAT) AS bigint) AS population, COUNT(*) , (COUNT(*)/population *100) as [ArticleProportion (%)], regionInd as [index] FROM predData
GROUP BY country, population, regionInd;"""
queryQualProp = """SELECT country, CAST(CAST(population AS FLOAT) AS bigint) AS population, COUNT(*) , CAST((COUNT(*)/population *100) AS DECIMAL (8,8)) as [QualityArticleProportion (%)], regionInd as [index] FROM predData WHERE
[article_quality_est.] = 'GA' OR [article_quality_est.] = 'FA'
GROUP BY country, population, regionInd;"""
# CAST(CAST('7.54001e+006' AS FLOAT) AS bigint)
prop = sqldf.run(queryProp)
qProp = sqldf.run(queryQualProp)
prop = prop.sort_values('ArticleProportion (%)', ascending = False)
prop
#all articles
qProp = (qProp.sort_values('QualityArticleProportion (%)', ascending = False))
qProp.reset_index(inplace = True, drop = True)
#pd.options.display.float_format = '{:.8f}'.format
qProp
# high quality articles
########################################################
# Region Calculations
###Output
_____no_output_____
###Markdown
Modifying data for Regional results
###Code
regionData = pd.merge(prop, keepLater, on="index", how = 'left')
regionData = regionData[['country', 'Population', 'COUNT(*)', 'Name' ]]
regionData.rename(columns = {'COUNT(*)':'numArticles'}, inplace = True)
regionData
regionDataQ = pd.merge(qProp, keepLater, on="index", how = 'left')
regionDataQ = regionDataQ[['country', 'Population', 'COUNT(*)', 'Name' ]]
regionDataQ.rename(columns = {'COUNT(*)':'numArticlesQ'}, inplace = True)
regionDataQ
###Output
_____no_output_____
###Markdown
Queries for our regional data and counting number of articles
###Code
regQ1 = """SELECT Name, Population, SUM(numArticles) as numArticle, (SUM(numArticles)*100)/CAST(Population AS FLOAT) as articleProportion
FROM regionData GROUP BY Name, Population; """
regQ2 = """SELECT Name, Population, SUM(numArticlesQ) as numArticleQ, (SUM(numArticlesQ)*100)/CAST(Population AS FLOAT) as articleProportion
FROM regionDataQ GROUP BY Name, Population; """
regionProp = sqldf.run(regQ1)
regionPropQ = sqldf.run(regQ2)
pd.options.display.float_format = '{:.8f}'.format
regionProp = regionProp.sort_values('articleProportion', ascending = False)
regionProp
regionPropQ = regionPropQ.sort_values('articleProportion', ascending = False)
regionPropQ
###Output
_____no_output_____
###Markdown
Step 5: Results 1 Top 10 Countries by proportion of articles/population
###Code
#1
prop.head(10)
###Output
_____no_output_____
###Markdown
2 Bottom 10 countries by proportion of articles/population
###Code
#2
prop.tail(10)
###Output
_____no_output_____
###Markdown
3 Top 10 Countries by proportion of quality articles/population
###Code
#3
qProp.head(10)
###Output
_____no_output_____
###Markdown
4 Bottom 10 countries by proportion of quality articles/population
###Code
#4
qProp.tail(10)
###Output
_____no_output_____
###Markdown
5 Proportion of articles/population by region
###Code
#5
regionProp
###Output
_____no_output_____
###Markdown
6 Proportion of quality articles/population by region
###Code
#6
regionPropQ
###Output
_____no_output_____
###Markdown
Bias on WikipediaThe goal of this assignment is to explore the concept of 'bias' through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. We will perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries, list the countries with the greatest and least coverage of politicians on Wikipedia compared to their population, and list the countries with the highest and lowest proportion of high quality articles about politicians. ORES requestORES (Objective Revision Evaluation Service) is an artificial intelligence system used to identify vandalism on Wikipedia and distinguish it from good faith edits. Referenceshttps://wiki.communitydata.cc/HCDS_(Fall_2017)/AssignmentsA2:_Bias_in_datahttps://en.wikipedia.org/wiki/Aaron_Halfakerhttps://www.mediawiki.org/wiki/ORES Data Sourceshttp://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14https://figshare.com/articles/Untitled_Item/5513449 Step 1,2: Data Acquisition and ProcessingIn this step we are going to download data from the following sources and save it as .csv for further processing:- Wikipedia articles data is downloaded from figshare. This project contains data on most English-language Wikipedia articles within the category "Category:Politicians by nationality" and subcategories, along with the code used to generate that data.- Population data is downloaded from the Population Reference Bureau (PRB). This data is from year 2015 for 210 countries.In the next steps, we will get the article quality prediction by calling the ORES API and merge article_quality with the wikipedia and population data in a single dataframe. We will then write the dataframe to a CSV file and save it to disk. Getting the Data and Appending ORES Prediction Values In this step we will be reading the csv files in and appending the ORES prediction values to their corresponding dataframe rows.
###Code
import csv
import requests
from multiprocessing.dummy import Pool as ThreadPool
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
Page Data: Read from csv and append ORES info
###Code
print('Reading data from page_data.csv')
data = []
with open('page_data.csv', encoding='utf-8') as page_file:
reader = csv.reader(page_file)
next(reader)
data = [row for row in reader]
url = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=wp10&revids='
# Using thread pooling decreases the execution time by roughly a factor of 11:
# without pooling this would take close to 11 hours, while with pooling the results arrive in under 1 hour.
def for_pool(row):
# create url for API request
tmp_url = url + row[2]
try:
# get request
result = requests.get(url=tmp_url).json()['enwiki']['scores']
# get prediction name
prediction = result[row[2]]['wp10']['score']['prediction']
return row + [prediction]
except:
return row + [None]
print('Collecting data using API (please wait about 1 hour...)')
pool = ThreadPool(28)
page_data_with_prediction = pool.map(for_pool, data)
pool.close()
pool.join()
###Output
_____no_output_____
###Markdown
Population Data: Read from csv and processOnce data is loaded, the population data needs some processing before it's ready to use. The first two rows and the 'Footnotes' column need to be trimmed. The population values need to be converted to numbers so that they can be used for percentage calculations in later steps. The section below applies these steps.
###Code
#First, create a dictionary of key: value pairs: key - country name, value - population.
population_data = {}
with open('Population Mid-2015.csv', encoding='utf-8') as population_file:
reader = csv.reader(population_file)
next(reader)
next(reader)
next(reader)
for row in reader:
try:
population_data[row[0]] = int(row[4].replace(',',''))
except:
pass
###Output
_____no_output_____
###Markdown
Create final datasetFor each row in page_data_with_prediction, if a score exists and population_data has the country's population, add the population to the new dataset.
###Code
final_dataset = []
for row in page_data_with_prediction:
if row[3] != None:
try:
population = population_data[row[1]]
final_dataset.append(row + [population])
except:
pass
###Output
_____no_output_____
###Markdown
writing the final data set to the disk:
###Code
fieldname = ['article_name', 'country', 'revision_id', 'article_quality', 'population']
with open('final_dataset.csv', 'w', encoding='utf-8', newline='') as file:
writer = csv.writer(file)
writer.writerow(fieldname)
writer.writerows(final_dataset)
###Output
_____no_output_____
###Markdown
Step 2: AnalysisIn this step we are going to calculate the percentage of articles-per-population for each country and the percentage of high-quality articles (where prediction is either 'FA' or 'GA') for each country.Based on the results, we will produce four tables that show:1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that countryThe code below performs the percentage calculations for articles-per-population for each country:
###Code
import pandas as pd
# Load data in pandas dataframe
final_dataset = pd.read_csv('final_dataset.csv')
# find all unique countries in dataframe
countries = final_dataset['country'].unique()
articles_per_population = []
# for each country find articles_per_population
for country in countries:
tmp_dataset = final_dataset[final_dataset['country'] == country]
articles = len(tmp_dataset)
population = tmp_dataset['population'].iloc[0]
articles_per_population.append([country, articles/population*100])
articles_per_population = list(zip(*articles_per_population))
articles_per_population = pd.DataFrame({'country': articles_per_population[0],
'articles_per_population': articles_per_population[1]})
###Output
_____no_output_____
###Markdown
Table 1: 10 highest-ranked countries in terms of number of politician articles as a proportion of country populationHere is a peek at the data (the below section will run the code to produce this data):124 0.488029 Nauru114 0.466102 Tuvalu98 0.248485 San Marino134 0.105020 Monaco142 0.077189 Liechtenstein148 0.067273 Marshall Islands53 0.062268 Iceland138 0.060987 Tonga177 0.043590 Andorra180 0.036893 Federated States of Micronesia
###Code
articles_per_population.sort_values('articles_per_population', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Table 2: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country populationHere is a peek at the data (the below section will run the code to produce this data):articles_per_population country44 0.000075 India80 0.000083 China30 0.000084 Indonesia167 0.000093 Uzbekistan113 0.000107 Ethiopia119 0.000156 Korea, North0 0.000168 Zambia157 0.000172 Thailand110 0.000194 Congo, Dem. Rep. of43 0.000202 Bangladesh
###Code
articles_per_population.sort_values('articles_per_population').head(10)
###Output
_____no_output_____
###Markdown
The code below performs the percentage calculations of high-quality articles (where prediction is either 'FA' or 'GA') for each country:
###Code
high_quality_articles = []
# for each country find high_quality_articles
for country in countries:
tmp_dataset = final_dataset[final_dataset['country'] == country]
row_index = ((tmp_dataset.article_quality == 'GA') | (tmp_dataset.article_quality == 'FA'))
tmp_high_quality = tmp_dataset[row_index]
high_quality_articles.append([country, len(tmp_high_quality)/len(tmp_dataset)*100])
high_quality_articles = list(zip(*high_quality_articles))
high_quality_articles = pd.DataFrame({'country': high_quality_articles[0],
'high_quality_articles': high_quality_articles[1]})
###Output
_____no_output_____
###Markdown
Table 3: 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that countrycountry high_quality_articles119 Korea, North 23.076923128 Saudi Arabia 13.445378167 Uzbekistan 10.344828172 Central African Republic 10.29411855 Romania 9.482759181 Dominica 8.33333391 Vietnam 7.853403162 Mauritania 7.692308129 Benin 7.446809166 Gambia 7.317073
###Code
high_quality_articles.sort_values('high_quality_articles', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Table 4: 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that countrycountry high_quality_articles0 Zambia 0.063 Switzerland 0.065 Belgium 0.0185 Belize 0.098 San Marino 0.0100 Turkmenistan 0.0102 French Guiana 0.0103 Djibouti 0.0107 Malta 0.0115 Antigua and Barbuda 0.0
###Code
high_quality_articles.sort_values('high_quality_articles').head(10)
###Output
_____no_output_____
###Markdown
A2 - Bias in Data AssignmentExploring the concept of bias through data on Wikipedia articles--specifically, articles on political figures from a variety of countries. Step 1: Getting the Article and Population DataBoth source data files, below, are formatted as CSVs in the `data` folder and can be read directly in from the folder. 1. Wikipedia politicians by country dataset, and2. World population data
###Code
import pandas as pd
import numpy as np
# Read in Wikipedia politicians by country dataset
page_data = pd.read_csv('data/page_data.csv')
# Read in world population data
world_population = pd.read_csv('data/WPDS_2020_data.csv')
page_data.head()
world_population.head()
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the DataEven through visual inspection, it can be seen that both datasets contain some rows that will need to be filtered out and/or ignored before combining the datasets in the next step.In the case of `page_data.csv`, the dataset contains some page names that start with the string "Template:". These pages are not Wikipedia articles, and should be excluded from the analysis.
###Code
# Filter out page names that start with the string "Template:"
page_data = page_data[page_data.page.str.contains('^(?!Template:).+')]
page_data.head()
###Output
_____no_output_____
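###Markdown
 An equivalent, regex-free way to express the same filter (a sketch over the same `page_data` frame) is to negate `str.startswith`:
###Code
# Sketch: keep only pages whose title does not start with "Template:"
page_data = page_data[~page_data.page.str.startswith('Template:')]
###Output
_____no_output_____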
###Markdown
Similarly, `WPDS_2020_data.csv` contains some rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by ALL CAPS values in the `Name` field (e.g. AFRICA, OCEANIA). They can be retained in a dataset separate from the country-level counts. Furthermore, retain the ordering of the subregions as new columns in the country-level dataset.
###Code
# Separate cum. regional population rows from world population
regional_population = world_population[world_population.Name.str.isupper()]
# Identify continents by inspection
large_subregions = ['AFRICA', 'LATIN AMERICA AND THE CARIBBEAN', 'ASIA', 'EUROPE']
# Create addition columns to retain continents and more specific subregions
world_population['subregion_0'] = 'WORLD'
world_population['subregion_1'] = np.where(world_population.Name.isin(large_subregions), world_population.Name, np.nan)
world_population['subregion_2'] = np.where(world_population.Name.str.isupper(), world_population.Name, np.nan)
world_population = world_population.fillna(method='ffill')
# Filter out cum. regional population
country_population = world_population[~world_population.Name.isin(regional_population.Name)]
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality PredictionsORES is a machine learning system that can provide estimates of Wikipedia article quality. It assigns one of these 6 categories to any `rev_id` sent to it.The article quality estimates are, from best to worst:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class articlePlease review the ORES REST [documentation](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model). In addition to the revision ID, it also expects a model and the name of the context used to find the model, which are `articlequality` and `enwiki`, respectively.
###Code
ores_endpoint = "https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}"
###Output
_____no_output_____
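###Markdown
 As a quick illustration of what a formatted request looks like (the revision ids below are placeholders, not values taken from the dataset), multiple ids are joined with `|` before being substituted into the endpoint:
###Code
# Illustrative only: two placeholder revision ids joined with "|"
example_url = ores_endpoint.format(rev_ids="12345678|23456789")
# -> "https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids=12345678|23456789"
###Output
_____no_output_____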
###Markdown
Since a quality prediction is needed for each article, it's best to do the API calls in batches. The `api_call_batch` function will call the API endpoint in batches of 50, such that `batch = 0` returns the predicted quality scores for the first 50 articles/revision ids, and so on.
###Code
import json
import requests
def api_call_batch(endpoint, df, batch):
batch_start = batch*50
    # Python slice ends are exclusive, so use (batch + 1)*50 to keep all 50 ids in each batch
    batch_end = (batch + 1)*50
batch_ids = df.rev_id[batch_start:batch_end]
rev_id = "|".join(str(x) for x in batch_ids)
call = requests.get(endpoint.format(rev_ids=rev_id))
response = call.json()
return response
# Calculate total number of batches (ceil so the final partial batch is included)
batch_total = int(np.ceil(len(page_data)/50))
# batch_total = 1
# Create empty data frame for predictions
ores_score = pd.DataFrame()
# Call api until batches covers all pages
for batch in range(batch_total):
response = api_call_batch(ores_endpoint, page_data, batch)
# Transform json object into dataframe with just predition and rev_id
scores = pd.json_normalize(response['enwiki']['scores'])
pred = scores.filter(regex='prediction$', axis=1).transpose()
pred['rev_id'] = pred.index.str.split('.', 1).str[0].str.strip()
pred = pred.rename({0: 'prediction'}, axis=1)
pred = pred.reset_index(drop=True)
# Append json object to article quality predictions df
ores_score = ores_score.append(pred)
###Output
_____no_output_____
###Markdown
It is only normal that some articles do not return a score. Notice how all predictions fall within the article quality estimates listed above, and there are fewer predictions than pages. The specific articles with missing ORES scores will be identified when combining datasets in the next step.
###Code
ores_score.describe()
ores_score.prediction.unique()
print('There are {} articles missing ORES scores.'.format(len(page_data) - len(ores_score)))
###Output
There are 1206 articles missing ORES scores.
###Markdown
Step 4: Combining the DatasetsThe analysis needs complete data. Thus, when combining the ORES data for each article, the wikipedia data and population data, remove rows that do not have complete data. Since incomplete data can be part of the bias, maintain logs of:- articles for which an ORES score could not be retrieved- countries that are not in the population or articles datasets
###Code
# Combine ORES, article and population data
page_data = page_data.astype('str')
page_ores_country = page_data.merge(ores_score, how='inner', on='rev_id').merge(country_population, left_on='country', right_on='Name')
# Format schema for analysis
page_ores_country = page_ores_country.filter(items=['country', 'page', 'rev_id', 'prediction', 'Population'])
page_ores_country = page_ores_country.rename(columns={'page': 'article_name', 'rev_id': 'revision_id', 'prediction': 'article_quality_est', 'Population': 'population'})
# Output CSV of those with complete matches
page_ores_country.to_csv('data/wp_wpds_politicians_by_country.csv', index=False, encoding='utf-8')
###Output
_____no_output_____
###Markdown
Combine just the ORES and politician article data to identify all articles for which an ORES score could not be retrieved.
###Code
# Combine ORES and article data
page_ores_data = page_data.merge(ores_score, how='outer', on='rev_id')
# Create log articles with missing ORES scores
missing_ores = page_ores_data[page_ores_data.prediction.isna()]
# Save missing ORES scores log as CSV
missing_ores.to_csv('data/articles_missing_ores.csv', index=False, encoding='utf-8')
###Output
_____no_output_____
###Markdown
Similarly, combine just the country population and politician article data and keep any rows that do not have matching data. Since volume of missing information per country influences bias, there is no need to remove duplicates.
###Code
# Combine country population and article data
page_population_data = page_data.merge(country_population, how='outer', left_on='country', right_on="Name")
# Create df of non-matching countries
country_no_match = page_population_data[page_population_data.country.isna()| page_population_data.Name.isna()]
# Save non-matching countries as CSV
country_no_match.to_csv('data/wp_wpds_countries-no_match.csv', index=False, encoding='utf-8')
###Output
_____no_output_____
###Markdown
Step 5: AnalysisThe analysis consists of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country and for each geographic region. For this analysis, "high quality" articles include those in the "FA" (featured article) or "GA" (good article) classes.
###Code
high_quality = ['GA', 'FA']
# Summarize population and articles by country
countries = page_ores_country.groupby('country').agg({
'population':'first',
'article_name':'count',
'article_quality_est': lambda x: sum(x.isin(high_quality))
})
countries = countries.rename(columns={
'article_name':'article_count',
'article_quality_est':'high_quality_count'
})
# Calculate proportions of articles per population
countries['coverage'] = countries.article_count/countries.population
# Calculate percentage of high-quality articles
countries['relative_quality'] = countries.high_quality_count/countries.article_count
countries.describe()
###Output
_____no_output_____
###Markdown
Recall that cumulative regional population data were retained in a separate dataset and that there is sub-region data within the `country_population` dataset. Using this, create a mapping of country to sub-region and compute the proportion of articles-per-population and high-quality articles for each sub-region.
###Code
# Create key for country and geographical region
country_geo_map = country_population.melt(id_vars=['FIPS', 'Name', 'Type', 'TimeFrame', 'Data (M)', 'Population']).rename(columns = {'Name': 'country', 'value': 'subregion'}).filter(items=['country', 'subregion'])
# Append key to combined dataset
geographic_region = page_ores_country.merge(country_geo_map, on='country')
# Summarize population and articles by subregion
geographic_region = geographic_region.groupby('subregion').agg({
'population':'sum',
'article_name':'count',
'article_quality_est': lambda x: sum(x.isin(high_quality))
})
geographic_region = geographic_region.rename(columns={
'article_name':'article_count',
'article_quality_est':'high_quality_count'
})
# Calculate proportions of articles per population
geographic_region['coverage'] = geographic_region.article_count/geographic_region.population
# Calculate percentage of high-quality articles
geographic_region['relative_quality'] = geographic_region.high_quality_count/geographic_region.article_count
geographic_region.describe()
###Output
_____no_output_____
###Markdown
Since countries that did not reconcile with both the politician article and population datasets were removed, the population totals need to be recalculated. Below are the World and Subregion population statistics summarized.
###Code
regional_population.describe()
###Output
_____no_output_____
###Markdown
Step 6: ResultsThe results from this analysis will be published in the form of data tables. There are six in total, each looking at either:- coverage: number of politician articles as a proportion of country/geographic region population- relative quality: relative proportion of politician articles that are of GA and FA-qualityIn the following tables, the proportion of interest will be highlighted next to the country.
###Code
# Order columns for proportion type
cols = countries.columns.tolist()
coverage_cols = cols[-2:] + cols[:-2]
relative_quality_cols = cols[-1:] + cols[-2:-1] + cols[:-2]
###Output
_____no_output_____
###Markdown
Top 10 countries by coverage10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
countries.sort_values(by='coverage', ascending=False).head(10)[coverage_cols]
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage
###Code
countries.sort_values(by='coverage', ascending=True).head(10)[coverage_cols]
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
countries.sort_values(by='relative_quality', ascending=False).head(10)[relative_quality_cols]
###Output
_____no_output_____
###Markdown
Bottom 10* countries by relative qualityThere are \*37 countries tied for the lowest relative proportion of politician articles that are of GA and FA-quality. This table is also sorted by descending article count.
###Code
countries.sort_values(by=['relative_quality','article_count'], ascending=(True,False)).head(countries.relative_quality.value_counts()[0])[relative_quality_cols]
###Output
_____no_output_____
###Markdown
Geographic regions by coverageRanking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
geographic_region.sort_values(by='coverage', ascending=False)[coverage_cols]
###Output
_____no_output_____
###Markdown
Geographic regions by relative qualityRanking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
geographic_region.sort_values(by='relative_quality', ascending=False)[relative_quality_cols]
###Output
_____no_output_____
###Markdown
A2 Assignment - Bias in DataDarshan Mehta 1. [Data Preparation](Data-Preparation)2. [Data Analysis](Data-Analysis)3. [Results](Results)4. [Reflections](Reflections)
###Code
import oresapi as op
import pandas as pd
from IPython.display import display
###Output
_____no_output_____
###Markdown
Data Preparation Read the ```page_data.csv``` and the ```WPDS_2018_data.csv``` files. Display a sample of the input from both the files.
###Code
page_data = pd.read_csv('./page_data.csv')
wpds_data = pd.read_csv('./WPDS_2018_data.csv')
print('Wikipedia Politicians by Country Dataset')
display(page_data.head())
print()
print()
print('Population Data')
display(wpds_data.head())
###Output
Wikipedia Politicians by Country Dataset
###Markdown
In the ```page_data.csv```, some of the page names start with the string "Template:". These pages do not represent Wikipedia articles, and hence should be removed since they are not relevant to this analysis.
###Code
filtered_page_data = page_data.loc[~page_data.page.str.startswith("Template:")]
###Output
_____no_output_____
###Markdown
Now, we notice that in ```WPDS_2018_data.csv``` there are some rows which have all caps values for the 'Geography' field. These rows provide cumulative regional population counts, instead of country-level counts. All the countries listed below a region name (when iterating row-wise sequentially) belong to that region. So we create a country-region mapper dataframe and isolate the country-only rows in the dataframe for our country-level analysis.
###Code
# Get the indices of rows where the 'Geography' is in all caps
regional_rows = wpds_data["Geography"].str.isupper()
country_level_counts = wpds_data[~regional_rows]
# Make the region-country mapper by iterating through the rows sequentially.
region = ""
region_country_mapper = []
for idx, row in wpds_data.iterrows():
if row["Geography"].isupper():
region = row["Geography"]
else:
region_country_mapper.append({'region': region,
'country': row['Geography']})
region_country_mapper = pd.DataFrame(region_country_mapper)
###Output
_____no_output_____
###Markdown
Next, for each article in ```filtered_page_data```, we are going to rate the article using the ```oresapi``` library. This service rates the article into one of the following 6 categories:| Rank | Symbol | Description ||------|--------|-------------|| 1 | FA | Featured Article || 2 | GA | Good Article || 3 | B | B-class Article || 4 | C | C-class Article || 5 | Start | Start-class Article || 6 | Stub | Stub-class Article |For each article, the response contains probability scores for each of the ranks along with a prediction which contains the symbol of the class which has the highest probability score. A demo on how to call this API can be found [here](https://github.com/halfak/oresapi).
###Code
# Create a session for making the API calls
# Please specify the user-agent string to help the ORES team track requests
ores_session = op.Session("https://ores.wikimedia.org", user_agent="Class Project <[email protected]>")
# Now obtain a list of revids from the filtered_page_data dataframe
rev_ids = filtered_page_data.rev_id.values
# Make the API call to retrieve 'articlequality' for each rev_id
results = ores_session.score("enwiki", ["articlequality"], rev_ids)
# We read the results and parse it into a dataframe keeping only the rev_id and the prediction
# NOTE: There could be some rev_ids which could not be found by the API. We collect this
# in a list and write to a file.
df_results = []
error_revids = []
for rev_id, result in zip(rev_ids, results):
try:
result_dict = {'rev_id': rev_id,
'article_quality': result['articlequality']['score']['prediction']}
df_results.append(result_dict)
except:
error_revids.append(rev_id)
df_results = pd.DataFrame(df_results)
# Write the revision_ids which we couldn't make the prediction for to a file.
with open('invalid_rev_ids.txt', 'w') as err_file:
err_file.write('\n'.join(list(map(str, error_revids))))
# Display a few samples of the curated results dataframe
df_results.head()
###Output
_____no_output_____
###Markdown
Next, we merge this dataframe with the `filtered_page_data` based on the `rev_id` column.
###Code
scored_page_data = pd.merge(left=filtered_page_data, right=df_results,
left_on='rev_id', right_on='rev_id')
# Display few sample rows
scored_page_data.head()
###Output
_____no_output_____
###Markdown
Next, we analyze to see if we have population data on all the countries in `scored_page_data`.
###Code
# Get the list of countries
countries = set(country_level_counts.Geography.values)
# Get the indices of rows in `scored_page_data` which have a country not present in `countries`
valid_country_indices = scored_page_data.country.isin(countries)
# Split the dataframe based on the above indices
scored_page_data_valid = scored_page_data[valid_country_indices]
scored_page_data_invalid = scored_page_data[~valid_country_indices]
print('Number of records with no country match:', len(scored_page_data_invalid))
###Output
Number of records with no country match: 2082
###Markdown
We will save these records with no country match into the file named `wp_wpds_countries-no_match.csv` and merge the rest with the population data we have. We will also rename the column `rev_id` to `revision_id`, `Population mid-2018 (millions)` to `population`, `page` to `article_name` and drop `Geography` and keep `country` instead just for clarity purposes. We also multiply the `population` column by $10^6$ to denote the actual population. We will save the final dataset to `wp_wpds_politicians_by_country.csv`.
###Code
# Save invalid dataset to file
scored_page_data_invalid.to_csv('wp_wpds_countries-no_match.csv', index=False)
# Merge the rest with the population dataset
final_dataset = pd.merge(left=country_level_counts, right=scored_page_data_valid,
left_on='Geography', right_on='country')
# Drop the `Geography` column
final_dataset = final_dataset.drop(columns=['Geography'])
# Rename the columns
final_dataset = final_dataset.rename(columns={'Population mid-2018 (millions)': 'population',
'page': 'article_name',
'rev_id': 'revision_id'})
# Multiply the `population` column by 10^6
final_dataset['population'] = final_dataset['population'].apply(lambda x: float(x.replace(',', '')))
final_dataset['population'] = final_dataset.loc[:, 'population'] * 1e6
# Save the final dataset to `wp_wpds_politicians_by_country.csv`
final_dataset.to_csv('wp_wpds_politicians_by_country.csv', index=False)
# Display a sample of the final dataset
final_dataset.head()
###Output
_____no_output_____
###Markdown
Data Analysis We now create two temporary columns to make our analysis code simpler. The column `article_count` will be just filled with ones and the column `is_good_article` will be 1 if `article_quality` is FA or GA.
###Code
final_dataset['article_count'] = 1
final_dataset['is_good_article'] = final_dataset.article_quality.isin(['FA', 'GA']).astype(int)
###Output
_____no_output_____
###Markdown
Next we perform a transformation which would convert the value of `article_count` to actual count of articles by country.
###Code
final_dataset_counts_country = final_dataset.copy()
final_dataset_counts_country['article_count'] = \
final_dataset_counts_country.groupby('country')['article_count'].transform('sum')
###Output
_____no_output_____
###Markdown
Similarly, we create the column `good_article_count`.
###Code
final_dataset_counts_country['good_article_count'] = \
final_dataset_counts_country.groupby('country')['is_good_article'].transform('sum')
###Output
_____no_output_____
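###Markdown
 The same per-country totals can also be obtained in a single grouped aggregation (a sketch over `final_dataset`, offered only as an alternative to the repeated `transform` + `drop_duplicates` pattern used in this notebook):
###Code
# Sketch: one aggregation producing population, article_count and good_article_count per country
counts_per_country = (
    final_dataset
    .groupby('country')
    .agg(population=('population', 'first'),
         article_count=('article_count', 'sum'),
         good_article_count=('is_good_article', 'sum'))
    .reset_index()
)
###Output
_____no_output_____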
###Markdown
Next, we create the following columns:$$coverage = \frac{article\_count \times 100.0}{population}$$$$relative\_quality = \frac{good\_article\_count \times 100.0}{article\_count}$$
###Code
final_dataset_counts_country['coverage'] = \
(final_dataset_counts_country['article_count'] * 100.0 /
final_dataset_counts_country['population'])
final_dataset_counts_country['relative_quality'] = \
(final_dataset_counts_country['good_article_count'] * 100.0 /
final_dataset_counts_country['article_count'])
# Only keep one row corresponding to each country
final_dataset_counts_country = \
final_dataset_counts_country \
.drop_duplicates(subset='country') \
.reset_index(drop=True)
# Keep a copy of the dataframe for our region analysis
final_dataset_country_bkup = final_dataset_counts_country.copy()
final_dataset_counts_country = final_dataset_counts_country[['country', 'coverage',
'relative_quality',
'population', 'article_count',
'good_article_count']]
###Output
_____no_output_____
###Markdown
Now for our region-wise analysis, we will repeat the above steps but first begin with merging the region information into our `final_dataset_country_bkup` using the region-country mapper we prepared in the beginning. We also sum up the population of each country under the region to obtain the population of the entire region.
###Code
# Merge the dataframes to pull in the region information
final_data_region = pd.merge(left=region_country_mapper, right=final_dataset_country_bkup,
left_on='country', right_on='country')
# Repeat all the above steps for preparing the columns `coverage` and `relative_quality`
# by grouping on `region` this time.
final_dataset_counts_region = final_data_region.copy()
# Get the total population for each region
final_dataset_counts_region['population'] = \
final_dataset_counts_region.groupby('region')['population'].transform('sum')
final_dataset_counts_region['article_count'] = \
final_dataset_counts_region.groupby('region')['article_count'].transform('sum')
final_dataset_counts_region['good_article_count'] = \
final_dataset_counts_region.groupby('region')['good_article_count'].transform('sum')
final_dataset_counts_region['coverage'] = \
(final_dataset_counts_region['article_count'] * 100.0 /
final_dataset_counts_region['population'])
final_dataset_counts_region['relative_quality'] = \
(final_dataset_counts_region['good_article_count'] * 100.0 /
final_dataset_counts_region['article_count'])
# Only keep one row corresponding to each region
final_dataset_counts_region = \
final_dataset_counts_region \
.drop_duplicates(subset='region') \
.reset_index(drop=True)[['region', 'coverage',
'relative_quality',
'population', 'article_count',
'good_article_count']]
###Output
_____no_output_____
###Markdown
Results Top 10 countries by coverage
###Code
final_dataset_counts_country.sort_values('coverage', ascending=False).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage
###Code
final_dataset_counts_country.sort_values('coverage').reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality
###Code
final_dataset_counts_country.sort_values('relative_quality', ascending=False).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality
###Code
final_dataset_counts_country.sort_values('relative_quality').reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Geographic regions by coverage
###Code
final_dataset_counts_region.sort_values('coverage', ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Geographic regions by relative_quality
###Code
final_dataset_counts_region.sort_values('relative_quality', ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
HCDS_(Fall_2017) A2 Assignments A2: Bias in data by Abhishek Anand The goal of this project is to explore the concept of 'bias' through data on Wikipedia articles. Step 1 : Getting the article and population data Source : * Wikipedia Data Set : Politicians by Country from the English-language Wikipedia [https://figshare.com/articles/Untitled_Item/5513449] : File Name - page_data.csv 1. "country", containing the sanitised country name, extracted from the category name; 2. "page", containing the unsanitised page title. 3. "last_edit", containing the edit ID of the last edit to the page. * Population Data : http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14 File Name - Population Mid-2015.csv * Article quality predictions : https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model
###Code
## getting the data from the CSV files
import csv
# Data Source 1 :
# reading data and saving in dictionary of dictionaries
'''
{
"revision_id": {
"country": "",
"article_name": "",
"revision_id":"",
},
"revision_id": {
},
...
}
'''
page_data = dict()
skip_lines = 1
with open('page_data.csv', encoding="utf8") as csvfile:
reader = csv.reader(csvfile)
for row in reader:
#page_data.append([row[0],row[1],row[2]])
#['Template:ZambiaProvincialMinisters', 'Zambia', '235107991']
if(skip_lines!=1):
revision_id = row[2]
page_data[revision_id]=dict()
page_data[revision_id]["country"] = row[1]
page_data[revision_id]["article_name"] = row[0]
page_data[revision_id]["revision_id"] = row[2]
skip_lines = skip_lines+1
# print(page_data)
# Data Source 2 :
'''
{
"country_name_1":"population",
"country_name_1":"population"
}
'''
population_data = {}
# skip the first three header lines from the input csv file
skip_lines = 1
with open('Population Mid-2015.csv', encoding="utf8") as csvfile:
reader = csv.reader(csvfile)
for row in reader:
# last line in the raw data file is an empty line
        # skip_lines > 3 skips the first three lines
# row checks if the read list is empty
#print(row)
if(skip_lines>3 and row):
# ['Afghanistan', 'Country', 'Mid-2015', 'Number', '32,247,000', '']
population_data[row[0]] = row[4]
skip_lines = skip_lines + 1
print(population_data)
# Merging data set from wikipedia data(page_data) and population data(population_data)
# reading data from page_data and including poulation data
'''
{
"revision_id": {
"country": "",
"article_name": "",
"revision_id":"",
"population":""
},
"revision_id": {
},
...
}
'''
count=0
for key,value in page_data.items():
revision_id = key
#print(key)
country_page_data = value["country"]
#print(country_page_data)
if population_data.get(str(country_page_data)) is not None:
page_data[revision_id]["population"] = population_data[country_page_data]
else:
        # for now setting population as 0 for countries which are not present in population_data
page_data[revision_id]["population"] = 0
count = count+1
#print("no of entries which have popluation 0")
#print(count)
###Output
_____no_output_____
###Markdown
Getting article quality predictions
###Code
# Data Source 3 :
import requests
import json
headers = {'User-Agent' : 'https://github.com/abhishekanand', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
json.dumps(response, indent=4, sort_keys=True)
#print(response)
return response
# So if we grab some example revision IDs and turn them into a list and then call get_ores_data...
#example_ids = [783381498, 807355596, 757539710]
#get_ores_data(example_ids, headers)
#get_ores_data(example_ids, headers)
no_revision_ids = len(page_data)
#print(no_revision_ids)
# getting all the revision ids from page_data in list format
revision_ids = list(page_data.keys())
counter = 0
call_count = 0
# this contains revsion id as key and article quality as value
data_revision_quality = {}
# prediction is being added in page_data
'''
{
"revision_id": {
"country": "",
"article_name": "",
"revision_id":"",
"population":"",
"article_quality":""
},
"revision_id": {
},
...
}
'''
while(counter<no_revision_ids):
temp = get_ores_data(revision_ids[counter:counter+100], headers)
hundred_revisions = temp["enwiki"]["scores"]
for key, value in hundred_revisions.items():
revision = key
if value["wp10"].get("score") is not None:
prediction = value["wp10"]["score"]["prediction"]
page_data[revision]["article_quality"] = prediction
#print(page_data[revision])
else:
page_data[revision]["article_quality"] = 'NA'
counter = counter + 100
call_count = call_count + 1
if(counter>no_revision_ids):
temp = get_ores_data(revision_ids[counter-100:(counter-100)+no_revision_ids%100], headers)
left_revisions = temp["enwiki"]["scores"]
counter = counter-100
call_count = call_count + 1
for key, value in left_revisions.items():
revision = key
counter = counter + 1
if value["wp10"].get("score") is not None:
prediction = value["wp10"]["score"]["prediction"]
page_data[revision]["article_quality"] = prediction
#print(page_data[revision])
#print(counter) # Numbe of entries received from the API Call
#print(call_count)
#print(no_revision_ids%100)
# Cleaning Data Set to contain following values only
#country
#article_name
#revision_id
#article_quality
#population
combined_data = [] # Empty List
for key, value in page_data.items():
combined_data.append(value)
'''
printing first 100 revision ids because of
"OPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_data_rate_limit`."
'''
for i in range(100):
print(combined_data[i])
# Write the combined data set to a CSV file named bias_in_data.csv
import csv
CSVOut = "bias_in_data.csv"
with open(CSVOut, "w",encoding="utf-8") as ofile:
ofile.write("country" + "," + "article_name" + "," + "revision_id"+"," + "article_quality"+"," + "population"+ "\n")
for item in combined_data:
#print(item["article_quality"])
#print(str(item["population"]).replace(',',''))
ofile.write(str(item["country"]).replace(',','') + "," + str(item["article_name"]).replace(',','') + "," + str(item["revision_id"])+"," + str(item["article_quality"])+"," + str(item["population"]).replace(',','') + "\n")
print("Done")
###Output
Done
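###Markdown
 As an aside, the batching loop above can be written more compactly (a sketch assuming the same `get_ores_data`, `headers`, `revision_ids` and `page_data` objects defined above): Python list slicing already handles the final partial batch, so no separate remainder call is needed.
###Code
# Sketch: request scores in batches of 100 revision ids
for start in range(0, len(revision_ids), 100):
    batch = revision_ids[start:start + 100]
    scores = get_ores_data(batch, headers)["enwiki"]["scores"]
    for rev, value in scores.items():
        score = value["wp10"].get("score")
        page_data[rev]["article_quality"] = score["prediction"] if score else "NA"
###Output
_____no_output_____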
###Markdown
AnalysisFA - Featured articleGA - Good articleB - B-class articleC - C-class articleStart - Start-class articleStub - Stub-class article if a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be .1%.
###Code
# Creating dataset Country and Article Count
'''
{
"country_name_1":"Count of Articles",
"country_name_1":"Count of Articles"
}
'''
country_articleCount = {}
for item in combined_data:
country_name = str(item["country"]).replace(',','')
if (country_name in country_articleCount.keys()):
if(item["article_name"]) is not None:
country_articleCount[country_name]=country_articleCount[country_name]+1
else:
country_articleCount[country_name]=0
if(item["article_name"]) is not None:
country_articleCount[country_name]=1
print(country_articleCount)
# Population Data
# print(population_data)
# Creating dataste containining Country and Population
'''
{
"country_name_1":"Population",
"country_name_1":"Population"
}
'''
country_population= {}
for key, value in population_data.items():
countryName = str(key).replace(',','')
country_population[countryName] = str(population_data[key]).replace(',','')
print(country_population)
# Creating Country : Proportion (Number of Article for politicians from a country /Country's Population )
# country_articleCount
# country_population
'''
{
"country_name_1":"Proportion",
"country_name_2":"Proportion"
}
'''
article_proportion = {}
for key, value in country_articleCount.items():
if (key in country_population.keys()):
article_proportion[key] = (int(country_articleCount[key])/int(country_population[key])*100)
print (article_proportion)
###Output
{'Zambia': 0.00016802486768041668, 'Chad': 0.0007295542423579193, 'Zimbabwe': 0.0009623141638815259, 'Uganda': 0.0004683490695299071, 'Namibia': 0.006647596793038153, 'Nigeria': 0.00037615610258282857, 'Colombia': 0.000597287320087934, 'Chile': 0.0019528432732316228, 'Fiji': 0.02295271049596309, 'Solomon Islands': 0.015267175572519085, 'Palestinian Territory': 0.004083732129487782, 'Somalia': 0.003047738919356289, 'Cambodia': 0.0014075280046182486, 'Slovakia': 0.002193932173572852, 'Slovenia': 0.0028585271317829457, 'Afghanistan': 0.0010140478184017118, 'Iraq': 0.0008149827288428326, 'Nepal': 0.0012946253432718714, 'Sri Lanka': 0.0022282067009123667, 'Laos': 0.001579012404518641, 'Albania': 0.015905947441217153, 'Costa Rica': 0.003104304635761589, 'Czech Republic': 0.00240730296106794, 'Canada': 0.0023776965367119695, 'Tunisia': 0.001269726101940867, 'Guatemala': 0.0005190390955076425, 'Burkina Faso': 0.0005257338594285219, 'Angola': 0.00044, 'Panama': 0.0027386934673366836, 'Japan': 0.0003476086182344604, 'Indonesia': 8.406910976635032e-05, 'Madagascar': 0.0010413322110086171, 'Malaysia': 0.0012699406668130401, 'Gabon': 0.0058823529411764705, 'Germany': 0.000866489178129468, 'Liberia': 0.003508771929824561, 'Ghana': 0.001427394408950305, 'Peru': 0.0011363766591701119, 'Argentina': 0.0011690944232310375, 'Spain': 0.0019000172532781228, 'South Africa': 0.0006940280881524681, 'Egypt': 0.0002683162314480095, 'Nicaragua': 0.0018524433088470138, 'Bangladesh': 0.00020198116089295621, 'India': 7.533686903819783e-05, 'Iran': 0.0010600961634635666, 'Philippines': 0.0005001685033695819, 'Turkey': 0.0004513200792686825, 'Austria': 0.003946167314012202, 'Azerbaijan': 0.0018858149414568437, 'Haiti': 0.0015195898938117906, 'Greece': 0.002699468829597983, 'Hungary': 0.006242990616195375, 'Iceland': 0.06226800633561851, 'Moldova': 0.010367486006327574, 'Romania': 0.0017541505571293066, 'Poland': 0.0021025000753027686, 'Luxembourg': 0.0316232198762478, 'Denmark': 0.0051268273131284655, 'Kenya': 0.0008554146165304924, 'Ecuador': 0.0011487192087966092, 'Finland': 0.01044552158305897, 'Portugal': 0.0031210744999516865, 'Switzerland': 0.004907841706067069, 'Sweden': 0.00387565590376624, 'Belgium': 0.004665034469520467, 'Italy': 0.0013255045321689384, 'Mexico': 0.0008510671799837816, 'Bolivia': 0.0017851176554818384, 'Bulgaria': 0.003147193984124774, 'Serbia': 0.003099818378823168, 'Russia': 0.000611218139734723, 'Tanzania': 0.0007802489912221989, 'Sierra Leone': 0.002552683700960793, 'Pakistan': 0.0005250008415085258, 'Croatia': 0.00398576512455516, 'Ukraine': 0.0007098110361606695, 'South Sudan': 0.0010944700460829493, 'United States': 0.0003418067240990787, 'Yemen': 0.00045629651793394917, 'China': 8.294944311621669e-05, 'New Zealand': 0.017202884865071533, 'Venezuela': 0.00044088830829523186, 'Australia': 0.006555592766242465, 'Estonia': 0.011674897596649839, 'Lebanon': 0.003039611964430073, 'Armenia': 0.006595724512164968, 'Taiwan': 0.002143344128174536, 'Cuba': 0.001580034114372924, 'Lithuania': 0.008518815074043274, 'Malawi': 0.0007103761499941772, 'Vietnam': 0.00020825591882947525, 'France': 0.0026248424162101814, 'Norway': 0.01266746123862744, 'Ireland': 0.008228394309838568, 'Israel': 0.005945996028361207, 'Jamaica': 0.0031169783645031168, 'Kyrgyzstan': 0.0012098806923206186, 'San Marino': 0.24848484848484848, 'Bosnia-Herzegovina': 0.0048765600197692455, 'Turkmenistan': 0.0006141820212171971, 'Algeria': 0.0002978872534294583, 'French Guiana': 0.011155378486055776, 'Djibouti': 0.004333333333333333, 
'Vanuatu': 0.022342342342342343, 'Netherlands': 0.004143457353937373, 'Libya': 0.0017571632103846762, 'Malta': 0.023870994655678282, 'Paraguay': 0.0021225071225071225, 'Papua New Guinea': 0.00210469230173282, 'Congo Dem. Rep. of': 0.00019361823392900483, 'Togo': 0.0008989074816761167, 'United Arab Emirates': 0.000626500991959904, 'Ethiopia': 0.00010698129355666953, 'Tuvalu': 0.46610169491525427, 'Antigua and Barbuda': 0.027777777777777776, 'Uruguay': 0.008141493542953397, 'Senegal': 0.0002927081631541687, 'Brazil': 0.00027185685340223815, 'Korea North': 0.00015610615218348477, 'United Kingdom': 0.001331960916856142, 'Botswana': 0.003177718584980607, 'Qatar': 0.0021298596297218155, 'Rwanda': 0.0009266368377856026, 'Nauru': 0.4880294659300184, 'Sudan': 0.0002397031594343984, 'Korea South': 0.0005560609290551636, 'Mozambique': 0.00023313646254274168, 'Saudi Arabia': 0.0003769985397484292, 'Benin': 0.0008882140981499257, 'Syria': 0.0007735130383826547, 'Tajikistan': 0.0004732521997649593, 'Mauritius': 0.005385456100612991, 'Jordan': 0.0005912786400591279, 'Monaco': 0.10501995379122034, 'Morocco': 0.0006095952639137189, 'Grenada': 0.032432432432432434, 'Mali': 0.0006925786614126216, 'Tonga': 0.06098741529525654, 'Myanmar': 0.00045448443822271653, 'Congo': 0.0031335436382754996, 'Guyana': 0.0026917900403768506, 'Liechtenstein': 0.07718924673941975, 'Sao Tome and Principe': 0.011249169095464539, 'Guinea-Bissau': 0.0011744966442953021, 'Belarus': 0.000755965274735105, 'Singapore': 0.00124523539550932, 'Guinea': 0.0008101514710166035, 'Marshall Islands': 0.06727272727272728, 'Dominican Republic': 0.0006090597639893414, 'Maldives': 0.02421126054198636, 'Kazakhstan': 0.0004502893650657759, 'Macedonia': 0.003139944930196609, 'Kiribati': 0.02821869488536155, 'Mongolia': 0.0030369701601176496, 'Montenegro': 0.011889059013111703, 'Bhutan': 0.004359313077939233, 'Thailand': 0.0001719868706451427, 'Latvia': 0.002830492900011827, 'Suriname': 0.006944444444444444, 'Niger': 0.0004236286953793018, 'Martinique': 0.008970976253298154, 'Mauritania': 0.0014280661128699514, 'Cameroon': 0.0004465225999410253, 'Lesotho': 0.0015589428496747785, 'Cyprus': 0.008846487424111016, 'Gambia': 0.004055605316403984, 'Uzbekistan': 9.267902495657588e-05, 'Bahrain': 0.002973874512408491, 'Eritrea': 0.0003076923076923077, 'Kuwait': 0.000964119133856216, 'Burundi': 0.0007075032582386893, 'Central African Republic': 0.0012248059222968713, 'Equatorial Guinea': 0.003975155279503106, 'Guadeloupe': 0.01203931203931204, 'Kosovo': 0.002663706992230855, 'Cape Verde': 0.0071984435797665365, 'Andorra': 0.04358974358974359, 'Comoros': 0.006675392670157068, 'Trinidad and Tobago': 0.002072538860103627, 'Federated States of Micronesia': 0.036893203883495145, 'Dominica': 0.01764705882352941, 'Bahamas': 0.005305039787798409, 'Swaziland': 0.002488335925349922, 'Barbados': 0.005035971223021583, 'Belize': 0.004347826086956522, 'Seychelles': 0.023698469294324214}
###Markdown
if a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.
###Code
# Creating Country : Count of High Quality Articles
# Combined Data
# country_hqarticle
# Creating dataset containing Country and Count of High Quality (FA or GA) Articles
'''
{
"country_name_1":"Count High Quality Article",
"country_name_2":"Count High Quality Article"
}
'''
country_hqarticle = {}
for item in combined_data:
    country_name = str(item["country"]).replace(',','')
    #print(country_name)
    # Initialise the counter the first time a country is seen
    if (country_name not in country_hqarticle.keys()):
        country_hqarticle[country_name] = 0
    # Use == (not 'is') for string comparison; count FA and GA class articles
    if (str(item["article_quality"]) == 'FA' or str(item["article_quality"]) == 'GA'):
        country_hqarticle[country_name] = country_hqarticle[country_name] + 1
print(country_hqarticle)
# Creating Country and High Quality Article percentage (Number of High Quality Articles / Country's Total Article Count * 100)
# country_articleCount
# country_hqarticle
'''
{
"country_name_1":"HqProportion",
"country_name_2":"hqProportion"
}
'''
hqarticle_proportion = {}
for key, value in country_articleCount.items():
if (key in country_hqarticle.keys() and str(country_hqarticle[key]) != '0' ):
#print(country_hqarticle[key])
#print(country_articleCount[key])
hqarticle_proportion[key] = (int(country_hqarticle[key])/int(country_articleCount[key])*100)
#hqarticle_proportion[key]
print (hqarticle_proportion)
###Output
{'Chad': 1.0, 'Zimbabwe': 0.5988023952095809, 'Uganda': 0.5319148936170213, 'Namibia': 0.6060606060606061, 'Nigeria': 0.5847953216374269, 'Colombia': 1.0416666666666665, 'Chile': 0.8522727272727272, 'Palestinian Territory': 5.46448087431694, 'Somalia': 2.359882005899705, 'Cambodia': 1.8433179723502304, 'Slovakia': 1.680672268907563, 'Slovenia': 1.694915254237288, 'Afghanistan': 3.669724770642202, 'Iraq': 2.3178807947019866, 'Sri Lanka': 1.7204301075268817, 'Laos': 0.9174311926605505, 'Albania': 1.0869565217391304, 'Czech Republic': 0.39370078740157477, 'Canada': 2.112676056338028, 'Tunisia': 0.7142857142857143, 'Guatemala': 5.952380952380952, 'Burkina Faso': 2.0618556701030926, 'Angola': 0.9090909090909091, 'Panama': 3.669724770642202, 'Japan': 1.8140589569160999, 'Indonesia': 3.7209302325581395, 'Madagascar': 0.8333333333333334, 'Malaysia': 1.5345268542199488, 'Gabon': 1.9417475728155338, 'Germany': 1.1379800853485065, 'Liberia': 1.2658227848101267, 'Ghana': 1.0126582278481013, 'Peru': 0.2824858757062147, 'Argentina': 2.217741935483871, 'Spain': 1.0215664018161181, 'South Africa': 2.8795811518324608, 'Egypt': 2.928870292887029, 'Bangladesh': 0.9259259259259258, 'India': 1.2121212121212122, 'Iran': 1.8028846153846152, 'Philippines': 3.300970873786408, 'Turkey': 1.13314447592068, 'Austria': 0.8823529411764706, 'Azerbaijan': 1.098901098901099, 'Haiti': 3.0120481927710845, 'Greece': 0.6430868167202572, 'Hungary': 0.4885993485342019, 'Iceland': 0.9708737864077669, 'Romania': 2.8735632183908044, 'Poland': 0.9888751545117428, 'Luxembourg': 0.5555555555555556, 'Denmark': 1.0309278350515463, 'Kenya': 1.3192612137203166, 'Ecuador': 1.06951871657754, 'Salvadoran': 0.8403361344537815, 'Portugal': 0.9287925696594427, 'Sweden': 0.7894736842105263, 'Italy': 0.4830917874396135, 'Mexico': 0.46253469010175763, 'Bulgaria': 1.3274336283185841, 'Serbia': 0.45454545454545453, 'Russia': 2.8344671201814062, 'Tanzania': 0.24509803921568626, 'Pakistan': 0.8612440191387559, 'Croatia': 1.1904761904761905, 'Ukraine': 3.618421052631579, 'South Sudan': 0.7518796992481203, 'United States': 5.191256830601093, 'Yemen': 0.819672131147541, 'China': 2.2847100175746924, 'New Zealand': 1.1378002528445006, 'Venezuela': 2.2222222222222223, 'Australia': 2.0434227330779056, 'Estonia': 0.6535947712418301, 'Lebanon': 3.1914893617021276, 'Armenia': 2.0100502512562812, 'Taiwan': 1.1928429423459244, 'Cuba': 1.1363636363636365, 'Lithuania': 0.4032258064516129, 'Malawi': 2.459016393442623, 'Vietnam': 5.2356020942408374, 'France': 1.0065127294256957, 'Norway': 0.4559270516717325, 'Ireland': 5.2493438320209975, 'Israel': 2.610441767068273, 'Palauan': 4.3478260869565215, 'Jamaica': 4.705882352941177, 'Kyrgyzstan': 1.3888888888888888, 'Bosnia-Herzegovina': 3.3707865168539324, 'Algeria': 1.680672268907563, 'Vanuatu': 4.838709677419355, 'Ivorian': 1.2658227848101267, 'Netherlands': 0.9971509971509971, 'Libya': 1.8018018018018018, 'Paraguay': 0.6711409395973155, 'Rhodesian': 3.9473684210526314, 'Papua New Guinea': 5.521472392638037, 'Congo Dem. Rep. 
of': 4.929577464788732, 'Togo': 1.5384615384615385, 'United Arab Emirates': 3.3333333333333335, 'Ethiopia': 2.857142857142857, 'Tuvalu': 5.454545454545454, 'Uruguay': 0.6896551724137931, 'Senegal': 2.3255813953488373, 'Brazil': 0.7194244604316548, 'Korea North': 17.94871794871795, 'United Kingdom': 3.9215686274509802, 'Botswana': 1.4705882352941175, 'Qatar': 3.9215686274509802, 'Sudan': 2.0408163265306123, 'Korea South': 1.4184397163120568, 'Saudi Arabia': 10.084033613445378, 'Cape Colony': 2.4390243902439024, 'Benin': 4.25531914893617, 'Syria': 3.0303030303030303, 'Mauritius': 1.4705882352941175, 'Jordan': 2.083333333333333, 'Morocco': 0.4807692307692308, 'Grenada': 5.555555555555555, 'Samoan': 2.5974025974025974, 'Mali': 1.7241379310344827, 'Myanmar': 2.9535864978902953, 'Congo': 0.6711409395973155, 'Guyana': 5.0, 'Guinea-Bissau': 9.523809523809524, 'Singapore': 5.797101449275362, 'Guinea': 2.247191011235955, 'Maldives': 1.1904761904761905, 'Mongolia': 3.260869565217391, 'Abkhazia': 6.25, 'Montenegro': 2.7027027027027026, 'Bhutan': 9.090909090909092, 'Thailand': 2.6785714285714284, 'Latvia': 1.7857142857142856, 'Niger': 3.75, 'Martinique': 2.941176470588235, 'Mauritania': 5.769230769230769, 'Cameroon': 0.9433962264150944, 'Saint Lucian': 2.083333333333333, 'South African Republic': 6.666666666666667, 'Cyprus': 0.9803921568627451, 'Gambia': 7.317073170731707, 'Uzbekistan': 10.344827586206897, 'Chechen': 5.263157894736842, 'Jersey': 1.639344262295082, 'Kuwait': 2.7027027027027026, 'Burundi': 1.3157894736842104, 'Guernsey': 4.0, 'Central African Republic': 7.352941176470589, 'Equatorial Guinea': 3.125, 'Kosovo': 2.083333333333333, 'South Ossetian': 5.555555555555555, 'Trinidad and Tobago': 3.571428571428571, 'Dominica': 8.333333333333332}
###Markdown
VisualizationThe visualization should be pretty straightforward. Produce four visualizations that show:
- 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
- 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
- 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
- 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
import pandas as pd
from collections import OrderedDict
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
%matplotlib inline
# sorts article_proportion dictionary in descending order
sorted_descending_article_proportion = [(k, article_proportion[k]) for k in sorted(article_proportion, key=article_proportion.get, reverse=True)]
#for k, v in sorted_top_10_article_proportion:
# print(k, v)
sorted_top_10_article_proportion={}
count =0
for k, v in sorted_descending_article_proportion:
count =count+1
sorted_top_10_article_proportion[k] =v
if count==10:
break
print(sorted_top_10_article_proportion)
# https://stackoverflow.com/questions/18837262/convert-python-dict-into-a-dataframe
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
ad= pd.Series(sorted_top_10_article_proportion)
adata = {'proportion ':ad}
adf = pd.DataFrame(adata)
adf
adf.plot(kind='bar',figsize=(12,10) )
plt.xlabel('Country')
plt.ylabel('Proportion [Total wikipedia Article/ Population]')
plt.title('10 highest-ranked countries in terms of number of politician articles as a proportion of country population ')
plt.show()
print("10 highest-ranked countries in terms of number of politician articles as a proportion of country population")
print("")
print("COUNTRY PROPORTION")
print("-------------------------------------------")
print (ad.to_string(index=True))
# 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
# sorts article_proportion dictionary in ascending order
sorted_ascending_article_proportion = [(k, article_proportion[k]) for k in sorted(article_proportion, key=article_proportion.get, reverse=False)]
#for k, v in sorted_bottom_10_article_proportion:
#print(k, v)
sorted_bottom_10_article_proportion={}
count =0
for k, v in sorted_ascending_article_proportion:
count =count+1
sorted_bottom_10_article_proportion[k] =v
if count==10:
break
print(sorted_bottom_10_article_proportion)
bd = pd.Series(sorted_bottom_10_article_proportion)
bdata = {'proportion ':bd}
bdf = pd.DataFrame(bdata)
bdf
bdf.plot(kind='bar',figsize=(12,10), title='10 lowest-ranked countries in terms of number of politician articles as a proportion of country population' )
plt.xlabel('Country')
plt.ylabel('Proportion [Total wikipedia Article/ Population]')
plt.show()
print("10 lowest-ranked countries in terms of number of politician articles as a proportion of country population")
print("")
print("COUNTRY PROPORTION")
print("--------------------------------")
print (bd.to_string(index=True))
# 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles
# about politicians from that country
sorted_descending_articleQ_proportion = [(k, hqarticle_proportion[k]) for k in sorted(hqarticle_proportion, key=hqarticle_proportion.get, reverse=True)]
#for k, v in sorted_top_10_articleQ_proportion:
# print(k, v)
sorted_top_10_articleQ_proportion={}
count =0
for k, v in sorted_descending_articleQ_proportion:
count =count+1
sorted_top_10_articleQ_proportion[k] =v
if count==10:
break
print(sorted_top_10_articleQ_proportion)
cd = pd.Series(sorted_top_10_articleQ_proportion)
cdata = {'Percentage ':cd}
cdf = pd.DataFrame(cdata)
#cdf = cdf.sort('propotion')
cdf.plot(kind='bar',figsize=(12,10), title='10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country' )
plt.xlabel('Country')
plt.ylabel('Percentage [High Quality (FA or GA)wikipedia Article/ Total wikipedia Article *100]')
plt.show()
print("10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country")
print("")
print("COUNTRY Percentage")
print("----------------------------------------")
print (cd.to_string(index=True))
# 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles
# about politicians from that country
sorted_ascending_articleQ_proportion = [(k, hqarticle_proportion[k]) for k in sorted(hqarticle_proportion, key=hqarticle_proportion.get, reverse=False)]
#for k, v in sorted_bottom_10_articleQ_proportion:
#print(k, v)
sorted_bottom_10_articleQ_proportion={}
count =0
for k, v in sorted_ascending_articleQ_proportion:
count =count+1
sorted_bottom_10_articleQ_proportion[k] =v
if count==10:
break
print(sorted_bottom_10_articleQ_proportion)
dd = pd.Series(sorted_bottom_10_articleQ_proportion)
ddata = {'Percentage ':dd}
ddf = pd.DataFrame(ddata)
ddf
ddf.plot(kind='bar' ,figsize=(12,10), title='10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country' )
plt.xlabel('Country')
plt.ylabel('Percentage [High Quality (FA or GA)wikipedia Article/ Total wikipedia Article *100]')
plt.show()
print("10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country")
print("")
print("COUNTRY Percentage")
print("---------------------------")
print (dd.to_string(index=True))
###Output
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
COUNTRY Percentage
---------------------------
Czech Republic 0.393701
Hungary 0.488599
Italy 0.483092
Lithuania 0.403226
Mexico 0.462535
Morocco 0.480769
Norway 0.455927
Peru 0.282486
Serbia 0.454545
Tanzania 0.245098
###Markdown
END Below this block is rough work - not for grading purposes.
###Code
Final_data = {'Top10':ad,
'Bottom 10':bd,
'Top10HQ':cd,
'botton10HQ':dd}
Final_data_f = pd.DataFrame(Final_data)
Final_data_f
Final_data_f.plot(figsize=(12, 10))
Final_data_f.plot(kind='barh', figsize=(12, 30))
fig, axes = plt.subplots(nrows=4, ncols=1)
for i, c in enumerate(Final_data_f.columns):
Final_data_f[c].plot(kind='barh',ax=axes[i], figsize=(12,50), title=c)
#https://datasciencelab.wordpress.com/2013/12/21/beautiful-plots-with-pandas-and-matplotlib/
fig, axes = plt.subplots(nrows=4, ncols=1)
for i, c in enumerate(Final_data_f.columns):
Final_data_f[c].plot(kind='bar', ax=axes[i], figsize=(12, 30), title=c)
#plt.savefig('All .png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
descending = df_prop_dict.sort_values(by=['Proportion'],ascending=False)
descending.head(10)
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
ascending = df_prop_dict.sort_values(by=['Proportion'],ascending=True)
ascending.head(10)
articlecounts=totalgrouped.count()
articlecounts['Country']=articlecounts.index
articlecounts.head()
hqcounts =grouped.count()
hqcounts['Country']=hqcounts.index
df_articles = pd.merge(articlecounts,hqcounts,on='Country')
df_articles.head()
article_dict={}
# Note: after groupby().count(), every column holds the per-country row count,
# so 'Population_x' here is the total article count and 'revid_y' the FA/GA article count
for index, row in df_articles.iterrows():
    article_dict[row['Country']] = row['revid_y']/row['Population_x']
df_article_dict = pd.DataFrame.from_dict(article_dict,orient='Index')
df_article_dict['Proportion']=df_article_dict[0]
df_article_dict.head()
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
ascendingarticles = df_article_dict.sort_values(by=['Proportion'],ascending=True)
ascendingarticles.head(10)
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
descendingarticle = df_article_dict.sort_values(by=['Proportion'],ascending=False)
descendingarticle.head(10)
###Output
_____no_output_____
###Markdown
Bias on WikipediaThe objective of this project is to determine whether bias exists in Wikipedia articles on political figures from countries around the world. We will analyze article quality and politician coverage across various countries.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Getting the article and population dataIn step 1, we read in two datasets from csv: the Wikipedia page dataset and the population dataset. The Wikipedia dataset is downloaded from [Figshare](https://figshare.com/articles/Untitled_Item/5513449); the population data is found on the [Population Reference Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14).
###Code
wikidf = pd.read_csv('raw_data/page_data.csv')
populationdf = pd.read_csv('raw_data/Population Mid-2015.csv',skiprows=1)
del populationdf['Footnotes']
###Output
_____no_output_____
###Markdown
Getting article quality predictionsIn step 2, we estimate article quality using a machine learning API, ORES ("Objective Revision Evaluation Service"). ORES classifies each article into one of the following 6 categories:
* FA - Featured article
* GA - Good article
* B - B-class article
* C - C-class article
* Start - Start-class article
* Stub - Stub-class article
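As a rough sketch (with a made-up prediction, not real API output), the response for a single scored revision has the nested shape that the batch-processing code below unpacks:
###Code
# Hypothetical response fragment (invented score) with the same nesting used below
sample_response = {
    'enwiki': {
        'scores': {
            '797882120': {
                'wp10': {
                    'score': {'prediction': 'Stub'}
                }
            }
        }
    }
}
# Walk the nesting exactly as the batch-processing code below does
for rev_id, payload in sample_response['enwiki']['scores'].items():
    print(rev_id, payload['wp10']['score']['prediction'])
###Output
_____no_output_____
###Markdown
The real responses are requested in batches below, but each scored revision follows this structure.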
###Code
import requests
import json
headers = {'User-Agent' : 'https://github.com/dianachenyu', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
def compute_chunk(rev_ids_chunk,result, headers):
ores = get_ores_data(rev_ids_chunk, headers)['enwiki']['scores']
for key in ores:
try:
result.append([key,ores[key]['wp10']['score']['prediction']])
except KeyError:
continue
return None
result = []
all_rev_id = wikidf["rev_id"]
recur_time = len(all_rev_id)/100
for i in range(int(recur_time)):
rev_ids_chunk = list(all_rev_id[100*i:100*(i+1)])
compute_chunk(rev_ids_chunk,result, headers)
i=int(recur_time)
rev_ids_chunk = list(all_rev_id[100*i:len(all_rev_id)])
compute_chunk(rev_ids_chunk,result, headers)
ores_df = pd.DataFrame(result,columns=['revision_id', 'article_quality'])
ores_df.head()
###Output
_____no_output_____
###Markdown
Combining the datasetsIn step 3, we combine the Wikipedia article quality data calculated in the last step with the population dataset. Note that both datasets have a field containing the country name, and the values of the two fields do not overlap exactly: some countries exist in one dataset but not the other. In this project, we only keep countries found in both datasets by removing non-matching rows with an inner join.
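A minimal sketch on hypothetical toy tables (not the project data) of how the inner join drops the non-matching countries:
###Code
import pandas as pd
# Toy tables: 'Atlantis' has no population row and 'Norway' has no article row
articles = pd.DataFrame({'country': ['Kenya', 'Chile', 'Atlantis'], 'revision_id': [1, 2, 3]})
population = pd.DataFrame({'Location': ['Kenya', 'Chile', 'Norway'], 'Data': ['46,000,000', '18,000,000', '5,000,000']})
# Inner join keeps only the countries present in both tables
combined = articles.merge(population, how='inner', left_on='country', right_on='Location')
combined
###Output
_____no_output_____
###Markdown
Only Kenya and Chile survive the join; the same pattern is applied to the real article-quality and population tables below.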
###Code
ores_df['revision_id']=ores_df['revision_id'].astype(str).astype(int)
wiki_quality_df = ores_df.merge(wikidf, how='inner',left_on='revision_id',right_on='rev_id',copy=True)
del wiki_quality_df['rev_id']
all_data_df = wiki_quality_df.merge(populationdf, how='inner',left_on='country',right_on='Location',copy=True)
all_data_df = all_data_df.drop(['Location', 'Location Type','TimeFrame','Data Type'], axis=1)
all_data_df.rename(columns={'page': 'article_name', 'data': 'population'}, inplace=True)
all_data_df.to_csv("wikipedia_ores_population.csv")
###Output
_____no_output_____
###Markdown
AnalysisIn Step 4, we calculate the percentage of articles-per-population and the percentage of high-quality articles among all articles per country.
* percentage of articles-per-population: the number of articles of that country / country population * 100
* percentage of high-quality articles: the number of high-quality articles of that country / the number of articles of that country * 100
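For example, with hypothetical numbers (not taken from the data):
###Code
# Hypothetical country: 500 articles, 12 of them rated FA or GA, population 5,000,000
num_articles = 500
num_high_quality = 12
population = 5_000_000
articles_per_population_pct = num_articles / population * 100   # 0.01
high_quality_pct = num_high_quality / num_articles * 100        # 2.4
print(articles_per_population_pct, high_quality_pct)
###Output
_____no_output_____
###Markdown
The cells below compute both percentages for every country in the combined table.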
###Code
article_num = all_data_df.groupby(['country']).count()['revision_id']
high_qual= all_data_df.loc[all_data_df['article_quality'].isin(['FA','GA'])]
high_qual_article_num = high_qual.groupby(['country']).count()['revision_id']
article_df = pd.DataFrame(article_num).merge(pd.DataFrame(high_qual_article_num), how='outer',left_index = True, right_index = True,copy=True)
article_df.fillna(value=0, inplace = True)
article_df.rename(columns={'revision_id_x': 'num_article', 'revision_id_y': 'num_high_quality_article'}, inplace=True)
article_df['country']=article_df.index
article_df = article_df.merge(populationdf, how='left',left_on = 'country', right_on='Location',copy=True)
article_df = article_df.drop(['Location', 'Location Type','TimeFrame','Data Type'], axis=1)
article_df.rename(columns={'Data': 'population'}, inplace=True)
article_df['population']=article_df['population'].str.replace(',','').astype(int)
article_df['articles_per_population(as a percentage)'] = article_df['num_article']/article_df['population'] *100
article_df['high_qual_articles_proportion(as a percentage)'] = article_df['num_high_quality_article']/article_df['num_article'] *100
article_df.head()
###Output
_____no_output_____
###Markdown
TablesIn step 5, we sort and output the top 10 and lowest 10 countries by articles-per-population, and the top 10 and lowest 10 countries by the proportion of high-quality articles among all articles per country.
###Code
table_highest_articles_per_population = article_df.sort_values('articles_per_population(as a percentage)', ascending=False).head(10)
table_highest_articles_per_population
table_lowest_articles_per_population = article_df.sort_values('articles_per_population(as a percentage)', ascending=True).head(10)
table_lowest_articles_per_population
table_highest_qual_articles_proportion = article_df.sort_values('high_qual_articles_proportion(as a percentage)', ascending=False).head(10)
table_highest_qual_articles_proportion
table_lowest_qual_articles_proportion = article_df.sort_values('high_qual_articles_proportion(as a percentage)', ascending=True).head(10)
table_lowest_qual_articles_proportion
###Output
_____no_output_____
###Markdown
Human Centered Data Science Assignment A2 Alyssa Goodrich November 2, 2017 Objective: The objective of this notebook is to explore the concept of bias by demonstrating that data can vary in quality and quantity. To do this we will examine how politicians from various countries are covered on Wikipedia, including both quantity and quality of coverage. We will create four charts for this analysis:
- 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
- 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
- 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
- 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
Data Sources UsedTo create these charts, we will draw from three data sources, outlined below:
**Source 1: Politicians by country from the English Language Wikipedia site.** This data includes:
1. "country", containing the sanitized country name, extracted from the category name;
2. "page", containing the unsanitized page title;
3. "last_edit", containing the edit ID of the last edit to the page.
It is available at https://doi.org/10.6084/m9.figshare.5513449.v5 and is saved in the file 'page_data.csv'.
**Source 2: Population by country** This data includes population by country as of mid-2015. It was downloaded from http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14 and is saved in the file 'Population Mid-2015.csv'.
**Source 3: Predicted quality scores for articles for politicians from each country** This data includes quality scores generated by ORES for each article listed in Source 1 above. We will access this data in the code below. More information about ORES can be found at https://www.mediawiki.org/wiki/ORES and the ORES endpoint can be found at https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model. Licensing is not listed; we assume a license of CC-BY-SA.
Key steps to complete analysesIn order to create this analysis we need to complete a series of steps, summarized below and detailed throughout the document adjacent to the code that completes each step:
1. Download available data sets (Sources 1 and 2 above)
2. Combine the above datasets into a single data set that includes only articles and population data for countries present in both data sets
3. Create an API call to extract quality score data from Source 3 above for each article present in the data set created in step 2
4. Output a CSV of raw data into a file called 'ArticleQuality.csv'
5. Create summary data by calculating the fields necessary to conduct the analyses, including:
 - Number of articles per country
 - Population of country
 - Articles per million people
 - Number of FA or GA articles per country
 - Proportion of total articles that are FA or GA
6. Output a CSV of summary data into a file called 'CountrySummary.csv'
7. Filter the data set to create lists of top ten and bottom ten quality and quantity
8. Create visualizations from the top ten and bottom ten data
Step 1: Extract data from available data sets (Sources 1 and 2 above) The data extracted in this step was downloaded from Sources 1 and 2 above. In this step we read it into our notebook so that we may access and analyze it.
After importing it, we convert it to a numpy array to facilitate analysis of the data.
###Code
import requests
import json
import csv
import numpy as np
#Read in politicians by country data that had been downloaded above. Save as numpy array
PageData = []
with open('page_data.csv', encoding = "utf8") as csvfile:
reader = csv.reader(csvfile)
for row in reader:
PageData.append([row[0],row[1],row[2],None])
PageData = np.array(PageData)
PageData[0,3] = 'Quality'
#Read in population data by country data that had been downloaded above. Save as numpy array for use in later analyses
PopulationData = [['country','Population']]
PopulationDict ={}
with open('Population Mid-2015.csv', encoding ='utf8') as csvfile:
reader = csv.reader(csvfile)
i = 0
for row in reader:
i += 1
if (i >3):
PopulationData.append([row[0],int(row[4])])
PopulationDict[row[0]]=int(row[4])
PopulationData = np.array(PopulationData)
###Output
_____no_output_____
###Markdown
Step 2: Combine above datasets into a single data set that includes only articles and population data for countries that were present in both data sets aboveWe are only interested in countries for which we have both population and Wikipedia article data. For this reason we first build a list of countries that are present in both data sets, and then create a data set that combines article data and population for the countries appearing in both Source 1 and Source 2.
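A minimal sketch of the intersection step on hypothetical country lists:
###Code
import numpy as np
# Hypothetical country columns from the two sources
population_countries = np.array(['Kenya', 'Chile', 'Norway'])
article_countries = np.array(['Kenya', 'Atlantis', 'Chile'])
# np.intersect1d returns the sorted, unique countries that appear in both arrays
print(np.intersect1d(population_countries, article_countries))  # ['Chile' 'Kenya']
###Output
_____no_output_____
###Markdown
The code below applies the same call to the country columns of the two real datasets.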
###Code
#Create an array of countries that are in both data sets
CountriesInBothDataSets = np.intersect1d(PopulationData[1:,0],PageData[1:,1])
# Create a single array with all the data fields that we need. Article quality will be empty until we pull it in the next step
#below.
CombinedData = [['country', 'revision_id', 'article_quality', 'population']]
for i in range(len(PageData)):
Country = PageData[i,1]
if Country in CountriesInBothDataSets:
CombinedData.append([PageData[i,1],PageData[i,2],None,PopulationDict[Country]])
###Output
_____no_output_____
###Markdown
Step 3: Create an API call to extract quality score data from Source 3 above for each article present in the data set created in step 2 In this step we extract the quality score for each article from the ORES API endpoint. We have structured the API call to request one article at a time and use the 'revid' variable to look up each article. Once we have extracted the score, we insert it into the data structure created in step 2. The potential scores and their meanings are as follows:
- FA - Featured article
- GA - Good article
- B - B-class article
- C - C-class article
- Start - Start-class article
- Stub - Stub-class article
###Code
# This code was adapted from code supplied by Oliver Keyes and available at this url: http://hcds.yuvi.in/user/alyssacolony/notebooks/data-512-a2/NEW_hcds-a2-bias_demo.ipynb
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/{revid}/{model}'
headers = {'User-Agent' : 'https://github.com/alyssacolony', 'From' : '[email protected]'}
params = {'project' : 'enwiki',
'model' : 'wp10',
'revid' : '797882120'
}
# In this step we first extract the data from the API endpoint, and then store it in the empty spot in the data table that we created in step 2
# Loop over the data rows of CombinedData (row 0 is the header), look up each
# revision id in ORES, and store the predicted quality class back into the same row
for i in range(1, len(CombinedData)):
    revid = CombinedData[i][1]
    params['revid'] = revid
    api_call = requests.get(endpoint.format(**params))
    response = api_call.json()
    quality = response['enwiki']['scores'][revid]['wp10']['score']['prediction']
    CombinedData[i][2] = quality
    if i % 100 == 0:
        print(i)
###Output
_____no_output_____
###Markdown
Step 4: Output CSV of raw data into file called 'ArticleQuality.csv'Here we store the interim data that we just extracted. It includes the following information for each article: - country: Country of politician that article is about- revision_id: The ID number of the article, used to extract score - article_quality: Score extracted from ORES data base - population: Population of country
###Code
file = open('ArticleQuality.csv', 'w', newline='',encoding="UTF-8")
writer = csv.writer(file)
# Extract necessary data and write to CSV
for line in CombinedData:
writer.writerow(line)
file.close()
###Output
_____no_output_____
###Markdown
Step 5: Create summary data by calculating fields necessary to conduct analysesHere we calculate the fields necessary to conduct our analysis. They provide a summary for each country. Fields include:
- Number of articles per country
- Population of country
- Articles per million people
- Number of high quality articles per country, calculated as the number of articles with a score of FA or GA
- Ratio of high quality articles per thousand total articles
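As a quick worked example with hypothetical counts (not from the data):
###Code
# Hypothetical country: 120 articles, 3 of them FA or GA, population 8,000,000
num_articles = 120
num_quality = 3
population = 8_000_000
articles_per_million = num_articles / population * 1_000_000   # 15.0
quality_per_thousand = num_quality / num_articles * 1_000      # 25.0
print(articles_per_million, quality_per_thousand)
###Output
_____no_output_____
###Markdown
The loop below computes these same fields for every country in the summary dictionary.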
###Code
from collections import Counter
from collections import defaultdict
import collections
# Calculate total articles per country using the Counter function
ArticlesPerCountry = Counter(np.array(CombinedData)[1:,0])
#Convert CombinedData into an numpy array for easier analysis
CombinedData = np.array(CombinedData)
SummaryDict = defaultdict(dict)
for country in ArticlesPerCountry:
#Capture Country and calculate and record number of articles per country
NumArticles = ArticlesPerCountry[country]
SummaryDict[country]['NumberOfArticles'] = NumArticles
#Capture and record population of country
Population = PopulationDict[country]
SummaryDict[country]['Population'] = Population
#Calculate articles per million people
ArtPerMillionPop = (NumArticles/Population)*1000000
SummaryDict[country]['ArticlesPerMillionPeople'] = ArtPerMillionPop
#Filter table to determine number of FA or GA articles per country, and capture result
CountryArray = CombinedData[CombinedData[:,0]==country]
QualityArray =CountryArray[(CountryArray[:,2]==('GA')) | (CountryArray[:,2]==('FA'))]
NumQual = len(QualityArray)
SummaryDict[country]['NumberOfQualityArticles'] = NumQual
#Calculate the number of FA or GA articles per thousand total articles
NumQualPerThousArt = NumQual/NumArticles*1000
SummaryDict[country]['QualityArticlesPerThousandArticles'] = NumQualPerThousArt
###Output
_____no_output_____
###Markdown
Step 6: Output CSV of summary data into file called 'CountrySummary.csv'Here we extract a summary of the data and save it to a CSV to enable future analysis. The fields we collect include the following information for each country:
- NumberOfArticles
- Population
- ArticlesPerMillionPeople
- NumberOfQualityArticles
- QualityArticlesPerThousandArticles
This will be used in later analysis.
###Code
# define headers
headers = ['Country',
'NumberOfArticles',
'Population',
'ArticlesPerMillionPeople',
'NumberOfQualityArticles',
'QualityArticlesPerThousandArticles' ]
# Open file and initalize writer
file = open('CountrySummary.csv', 'w', newline='')
writer = csv.writer(file)
writer.writerow(headers)
# Sort the summary dictionary by country name so the output table is ordered alphabetically
SortedCountries = collections.OrderedDict(sorted(SummaryDict.items()))
Countries = []
for item in SortedCountries:
Countries.append(item)
# Extract necessary data and write to CSV
for Country in Countries:
line = [Country]
for header in headers[1:]:
line.append(SummaryDict[Country][header])
writer.writerow(line)
file.close()
###Output
_____no_output_____
###Markdown
Step 7: Filter data to find top ten and bottom ten quality and quantity countriesWe use the Pandas package to filter data to help us find the top ten and bottom ten countries on quality articles per thousand articles and total articles per million population.
###Code
# Read data in from CSV and create a Pandas dataframe
import pandas as pd
SummaryData = pd.read_csv('CountrySummary.csv', index_col = 0)
#Sort data to find countries with lowest and highest quality and quantity of articles
QualitySort = SummaryData.sort_values(by=['QualityArticlesPerThousandArticles','NumberOfArticles'], ascending = [True,False])
QuantitySort = SummaryData.sort_values(by=['ArticlesPerMillionPeople','Population'], ascending = [True,False])
# Data Frame with countries with lowest quantity of articles
LowQuant = QuantitySort[:10]
# Data Frame with countries with highest quantity of articles
HighQuant = QuantitySort[-10:]
#Data Frame with countries with lowest quality of articles
LowQual = QualitySort[:10]
#Data Frame with countries with highest quality of articles
HighQual = QualitySort[-10:]
###Output
_____no_output_____
###Markdown
Step 8: Create visualizations from top ten and bottom ten lists created aboveHere we create four bar charts to illustrate the variability in article coverage and quality across countries. We chart the top ten countries with the most articles per million people and the bottom ten countries with the fewest articles per million people. We also illustrate the variability in quality with charts for the top ten countries with the most high quality articles per thousand total articles and for the bottom ten countries with the fewest high quality articles per thousand total articles.
###Code
import matplotlib.pyplot as plt
import pylab
%matplotlib inline
plt.figure();
plt.gcf().subplots_adjust(left=0.3)
axLowQuant = LowQuant['ArticlesPerMillionPeople'].plot.barh( title ="Countries with fewest Articles per Million Population");
axLowQuant.set_xlabel("Number of Articles per Million Population")
pylab.savefig('LowQuant.png')
plt.figure();
plt.gcf().subplots_adjust(left=0.3)
axLowQuant = HighQuant['ArticlesPerMillionPeople'].plot.barh( title ="Countries with most Articles per Million Population");
axLowQuant.set_xlabel("Number of Articles per Million Population")
pylab.savefig('HighQuant.png')
plt.figure();
plt.gcf().subplots_adjust(left=0.3)
axHighQual = HighQual['QualityArticlesPerThousandArticles'].plot.barh( title ="Countries with most high quality articles per thousand total articles");
axHighQual.set_xlabel("Number of articles with 'FA' or 'GA' score per thousand articles")
pylab.savefig('HighQual.png')
plt.figure();
plt.gcf().subplots_adjust(left=0.3)
axLowQuant = LowQual['QualityArticlesPerThousandArticles'].plot.barh( title ="Countries with fewest high quality articles per thousand total articles");
axLowQuant.set_xlabel("Number of articles with 'FA' or 'GA' score per thousand articles")
pylab.savefig('LowQual.png')
###Output
_____no_output_____
###Markdown
Bias in Data - Quality of Wikipedia Political PagesThis notebook explores the concept of bias through data on English Wikipedia articles - specifically, articles on political figures from a variety of countries. Using the ORES machine learning API, we will rate each article based on its quality and examine which countries/regions possess larger percentages of high-quality articles on political figures.
###Code
import pandas as pd
import requests
import json
# Load page data from the 'data' folder
page_data = pd.read_csv('data/page_data.csv')
# Remove pages whose names begin with the string 'Template:', these are not wikipedia articles and
# will not be included in the analysis
page_data.drop(page_data[page_data['page'].str.startswith('Template:')].index, axis=0, inplace=True)
# Load population data from 'data' folder
population_data = pd.read_csv('data/WPDS_2020_data - WPDS_2020_data.csv')
# Separate into regional data and data by country, retain both dataframes for analysis
regional_pop_data = population_data[population_data['Name'].str.isupper()]
population_data.drop(regional_pop_data.index, axis=0, inplace=True)
###Output
_____no_output_____
###Markdown
Rating Estimation with ORES APINow that the datasets have been loaded and cleaned, we will use the ORES API (documentation: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model) to predict an article's rating out of six classes:
1. FA - Featured article
2. GA - Good article
3. B - B-class article
4. C - C-class article
5. Start - Start-class article
6. Stub - Stub-class article
With FA being the best rating and the others following in descending order.
###Code
# Set endpoint specifying english wikipedia as our context, article quality as our model
# of choice, and a variable list of revision ids
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={revids}'
# Set header with personal github account, email
headers = {
'User-Agent': 'https://github.com/TrevorNims',
'From': '[email protected]'
}
# Define API call to communicate with ORES scoring interface
def api_call(endpoint, parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
# Create dictionary to hold revision id coupled with their respective
# estimated scores. Create list to hold revision ids that cannot be scored.
score_dict = {}
unscorables = []
# Iterate over all revision ids in 'page_data' in batches of 50 - note that
# larger batches may cause a slowdown with the API calls.
for i in range(0, page_data.shape[0], 50):
# stop when we've reached the final revision id
end_idx = min(i+50, page_data.shape[0])
rev_ids = page_data['rev_id'].iloc[i:end_idx]
# concatenate revision ids as specified in the API documentation
revid_params = {'revids' : '|'.join(str(x) for x in rev_ids)}
data = api_call(endpoint, revid_params)
# for each revision id, save estimated score if it exists, otherwise save
# revision id in 'unscorables'
for score in data['enwiki']['scores']:
try:
score_dict[score] = data['enwiki']['scores'][score]['articlequality']['score']['prediction']
except KeyError as K:
unscorables.append(score)
# create dataframe of revision ids and their respective estimated scores
score_estimates = pd.DataFrame(score_dict.items(), columns=['rev_id', 'article_quality_est.'])
# save 'unscorables' as a .csv file in the data folder
pd.Series(unscorables).to_csv('data/unscorables.csv')
# Retype rev_id as an int for comparsion with 'page_data' dataframe
score_estimates['rev_id'] = score_estimates['rev_id'].astype(int)
# merge tables on rev_id, creating a single dataframe with page information and
# predicted score
page_data_with_scores = pd.merge(page_data, score_estimates, on='rev_id')
# Inspect 'page_data_with_scores'
page_data_with_scores
###Output
_____no_output_____
###Markdown
AnalysisNow that the pages have had their rankings estimated, we move on to the production and measurement of two different metrics:
1. Number of total articles per population by country
2. High quality articles as a proportion of total articles by country
In this analysis, we define a high quality article as one that has received either an 'FA' or a 'GA' rating from the ORES API.
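A minimal sketch of the high-quality share on a hypothetical set of ratings (not the project DataFrame):
###Code
import pandas as pd
# Hypothetical predicted ratings for one country's articles
ratings = pd.Series(['FA', 'Stub', 'GA', 'C', 'Start'])
# Share of articles rated FA or GA, expressed as a percentage
hq_percentage = ratings.isin(['FA', 'GA']).mean() * 100
print(hq_percentage)  # 40.0
###Output
_____no_output_____
###Markdown
The analysis below applies the same `isin(['GA', 'FA'])` filter to the merged table, grouped by country.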
###Code
# rename column to match 'page_data_with_scores' format and facilitate table merging
population_data.rename({'Name' : 'country'}, axis=1, inplace=True)
# merge 'population_data' with 'page_data_with_scores', drop unneeded columns and
# rename some columns to make them more ergonomic
pd_merged = pd.merge(page_data_with_scores, population_data, how='outer', on='country')
pd_merged.drop(columns=['FIPS', 'Type', 'TimeFrame', 'Data (M)'], axis=1, inplace=True)
pd_merged.rename({'page' : 'article_name', 'rev_id' : 'revision_id'}, axis=1, inplace=True)
# Identify and save rows in 'pd_merged' that contain a value that is NaN
# (meaning either the country does not have any scored articles in our dataset,
# or that population data for the country is not available in 'population_data')
no_match = pd_merged[pd_merged.isna().any(axis=1)]
no_match.to_csv('data/wp_wpds_countries-no_match.csv')
# Remove rows with NaN values, save remaining data to csv file in 'data' folder
pd_merged.drop(no_match.index, inplace=True)
pd_merged.to_csv('data/wp_wpds_politicians_by_country.csv')
# Obtain the total number of articles for each country
articles_by_country = pd_merged[['country', 'revision_id']].groupby(['country']).count()
# Obtain the number of High Quality articles for each country
quality_articles_by_country = pd_merged.loc[pd_merged['article_quality_est.'].isin(['GA', 'FA'])]\
[['country', 'revision_id']].groupby(['country']).count()
# Calculate the percentage of high quality articles per country
percentage_high_quality_articles = quality_articles_by_country/articles_by_country*100
percentage_high_quality_articles.rename({'revision_id' : 'percentage'}, axis=1, inplace=True)
# Calculate the percentage of articles-per-population
population_by_country = pd_merged.groupby('country').mean()['Population'].to_frame()
population_by_country.rename({'Population' : 'percentage'}, axis=1, inplace=True)
articles_by_country.rename({'revision_id' : 'percentage'}, axis=1, inplace=True)
percentage_articles_per_population = articles_by_country/population_by_country*100
###Output
_____no_output_____
###Markdown
Analysis 1Below we can see countries with the highest percentages of articles per population. These countries tend to have lower populations.
###Code
percentage_articles_per_population.sort_values('percentage', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Analysis 2Below we can see countries with the lowest percentages of articles per population. These countries tend to have higher populations.
###Code
percentage_articles_per_population.sort_values('percentage').head(10)
###Output
_____no_output_____
###Markdown
Analysis 3Below we can see countries with the highest percentages of High Quality Articles as a proportion of the country's total article count.
###Code
percentage_high_quality_articles.sort_values('percentage', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Analysis 4Below we can see countries with the lowest percentages of High Quality Articles as a proportion of the country's total article count.
###Code
percentage_high_quality_articles.sort_values('percentage').head(10)
###Output
_____no_output_____
###Markdown
Country Mapping to Sub-RegionsNow we will analyze the same metrics by sub-region; first, however, we need to map each country to its respective sub-region(s).
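The loop below assigns each country to the closest all-uppercase region row above it in the WPDS file; a shorter (hypothetical, not the approach used here) way to express the same idea on a toy frame is a forward-fill:
###Code
import pandas as pd
# Toy frame mimicking the WPDS layout, where region rows are upper-case
toy = pd.DataFrame({'Name': ['AFRICA', 'Algeria', 'Chad', 'EUROPE', 'France']})
# Keep the name only on region rows, then propagate it down to the countries beneath
toy['Sub-Region'] = toy['Name'].where(toy['Name'].str.isupper()).ffill()
toy[~toy['Name'].str.isupper()]
###Output
_____no_output_____
###Markdown
The index-range loop below achieves the same country-to-region mapping, with extra handling afterwards for the "special" aggregate regions.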
###Code
# Map countries to sub-regions by examining their respective indices from the original DataFrame
# 'population_data'
# construct list of sub-region indices
regional_idx_list = regional_pop_data.index.tolist()
regional_list_idx = 0
country_to_region_dict = {}
while regional_list_idx+1 < len(regional_idx_list):
for p_idx in population_data.index:
# If the country's index is within the range of two sub-region indices, pick the lower index as the
# sub-region
if p_idx in range(regional_idx_list[regional_list_idx], regional_idx_list[regional_list_idx+1]):
country_to_region_dict[population_data.loc[p_idx]['country']] = \
regional_pop_data['Name'].loc[regional_idx_list[regional_list_idx]]
# Update sub-region after iterating through all countries
regional_list_idx += 1
# Original while loop misses final sub-region as it only examines index ranges between
# sub-regions, final sub-region needs to be added manually
for p_idx in population_data.index:
if p_idx > regional_idx_list[regional_list_idx]:
country_to_region_dict[population_data.loc[p_idx]['country']] = \
regional_pop_data['Name'].loc[regional_idx_list[regional_list_idx]]
# Construct DataFrame of each country with their associated sub-region
country_to_region = pd.DataFrame(country_to_region_dict.items(), columns=['country', 'Sub-Region'])
# Construct DataFrame for each "special" sub-region, these sub-regions consist of a collection
# of other sub-regions
africa_subset = country_to_region[country_to_region['Sub-Region'].str.contains('AFRICA')]
asia_subset = country_to_region[country_to_region['Sub-Region'].str.contains('ASIA')]
europe_subset = country_to_region[country_to_region['Sub-Region'].str.contains('EUROPE')]
latin_subset = country_to_region[country_to_region['Sub-Region'].isin(\
['CENTRAL AMERICA', 'CARIBBEAN', 'SOUTH AMERICA'])]
# Construct DataFrames of total article counts by sub-region/"special" sub-regions
articles_by_region = pd.merge(pd_merged, country_to_region, how='left', on='country')\
[['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
articles_africa = pd.merge(pd_merged, africa_subset, how='left', on='country').dropna()\
[['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
articles_africa = pd.DataFrame({'AFRICA' : articles_africa.sum()}).transpose()
articles_asia = pd.merge(pd_merged, asia_subset, how='left', on='country').dropna()\
[['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
articles_asia = pd.DataFrame({'ASIA' : articles_asia.sum()}).transpose()
articles_europe = pd.merge(pd_merged, europe_subset, how='left', on='country').dropna()\
[['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
articles_europe = pd.DataFrame({'EUROPE' : articles_europe.sum()}).transpose()
articles_latin = pd.merge(pd_merged, latin_subset, how='left', on='country').dropna()\
[['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
articles_latin = pd.DataFrame({'LATIN AMERICA AND THE CARIBBEAN' : articles_latin.sum()}).transpose()
# construct list of DataFrames for iteration/merging purposes
region_article_list = [articles_by_region, articles_africa, articles_asia, articles_europe,
articles_latin]
# Construct DataFrames of quality article counts by sub-region/"special" sub-regions
quality_articles_by_region = pd.merge(pd_merged, country_to_region, how='left', on='country')\
.loc[pd.merge(pd_merged, country_to_region, how='left', on='country')['article_quality_est.']
.isin(['GA', 'FA'])][['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
quality_articles_africa = pd.merge(pd_merged, africa_subset, how='left', on='country')\
.loc[pd.merge(pd_merged, africa_subset, how='left', on='country')['article_quality_est.']
.isin(['GA', 'FA'])][['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
quality_articles_africa = pd.DataFrame({'AFRICA' : quality_articles_africa.sum()}).transpose()
quality_articles_asia = pd.merge(pd_merged, asia_subset, how='left', on='country')\
.loc[pd.merge(pd_merged, asia_subset, how='left', on='country')['article_quality_est.']
.isin(['GA', 'FA'])][['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
quality_articles_asia = pd.DataFrame({'ASIA' : quality_articles_asia.sum()}).transpose()
quality_articles_europe = pd.merge(pd_merged, europe_subset, how='left', on='country')\
.loc[pd.merge(pd_merged, europe_subset, how='left', on='country')['article_quality_est.']
.isin(['GA', 'FA'])][['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
quality_articles_europe = pd.DataFrame({'EUROPE' : quality_articles_europe.sum()}).transpose()
quality_articles_latin = pd.merge(pd_merged, latin_subset, how='left', on='country')\
.loc[pd.merge(pd_merged, latin_subset, how='left', on='country')['article_quality_est.']
.isin(['GA', 'FA'])][['Sub-Region', 'revision_id']].groupby(['Sub-Region']).count()
quality_articles_latin = pd.DataFrame({'LATIN AMERICA AND THE CARIBBEAN' : quality_articles_latin.sum()}).transpose()
# construct list of DataFrames for iteration/merging purposes
region_quality_article_list = [quality_articles_by_region, quality_articles_africa, quality_articles_asia,
quality_articles_europe, quality_articles_latin]
# Construct DataFrames of population totals by sub-region/"special" sub-regions
population_by_region = pd.merge(pd_merged, country_to_region, how='left', on='country')\
.groupby('Sub-Region').mean()['Population'].to_frame()
population_by_region
population_africa = pd.merge(pd_merged, africa_subset, how='left', on='country').dropna()\
.groupby(['Sub-Region']).mean()['Population'].to_frame()
population_africa = pd.DataFrame({'AFRICA' : population_africa.sum()}).transpose()
population_asia = pd.merge(pd_merged, asia_subset, how='left', on='country').dropna()\
.groupby(['Sub-Region']).mean()['Population'].to_frame()
population_asia = pd.DataFrame({'ASIA' : population_asia.sum()}).transpose()
population_europe = pd.merge(pd_merged, europe_subset, how='left', on='country').dropna()\
.groupby(['Sub-Region']).mean()['Population'].to_frame()
population_europe = pd.DataFrame({'EUROPE' : population_europe.sum()}).transpose()
population_latin = pd.merge(pd_merged, latin_subset, how='left', on='country').dropna()\
.groupby(['Sub-Region']).mean()['Population'].to_frame()
population_latin = pd.DataFrame({'LATIN AMERICA AND THE CARIBBEAN' : population_latin.sum()}).transpose()
# construct list of DataFrames for iteration/merging purposes
region_population_list = [population_by_region, population_africa, population_asia, population_europe,
population_latin]
# Iterate through each of the corresponding DataFrames in all three lists, caluculating
# metrics upon each iteration
regional_percentage_quality_articles = []
regional_percentage_articles_per_population = []
for article_count, quality_article_count, pop in zip(region_article_list, region_quality_article_list,
region_population_list):
regional_percentage_quality_articles.append(quality_article_count/article_count*100)
regional_percentage_quality_articles[-1].rename({'revision_id' : 'percentage'}, axis=1, inplace=True)
pop.rename({'Population' : 'percentage'}, axis=1, inplace=True)
article_count.rename({'revision_id' : 'percentage'}, axis=1, inplace=True)
regional_percentage_articles_per_population.append(article_count/pop*100)
# Merge DataFrames for each metric into a single DataFrame for display
regional_percentage_articles_per_population_merged = pd.concat(regional_percentage_articles_per_population)
regional_percentage_quality_articles_merged = pd.concat(regional_percentage_quality_articles)
###Output
_____no_output_____
###Markdown
Analysis 5Below we can see the percentages of articles per population by sub-region, sorted in descending order.
###Code
regional_percentage_articles_per_population_merged.sort_values('percentage', ascending=False)
###Output
_____no_output_____
###Markdown
Analysis 6Below we can see all sub-regions' percentage of High Quality Articles as a proportion of the sub-region's total article count, sorted in descending order.
###Code
regional_percentage_quality_articles_merged.sort_values('percentage', ascending=False)
###Output
_____no_output_____
###Markdown
Assignment A2: Bias in data Richard Todd Step 1: Data acquisitionThis assignment combines data from three sources:* Wikipedia politicians by country, made available on [figshare](https://figshare.com/articles/Untitled_Item/5513449) under the CC-BY-SA 4.0 license. This was downloaded from source and unzipped.* Population data from the United Nations [International Indicators](https://www.prb.org/international/indicator/population/table/), made available under a CC BY 3.0 license. This data was provided in csv format as part of the class assignment.* Output from the ORES ("Objective Revision Evaluation Service") machine learning package. In this data, each page is assigned one of six quality categories used in English Wikipedia [content assessment](https://en.wikipedia.org/wiki/Wikipedia:WikiProject_assessmentGrades). First we import python libraries used to access, process and analyze the data:
###Code
import pandas as pd
import numpy as np
import json
import requests
###Output
_____no_output_____
###Markdown
Load the two csv files accessed as described above.
###Code
page_df = pd.read_csv('page_data.csv')
wpds_df = pd.read_csv('WPDS_2018_data.csv')
###Output
_____no_output_____
###Markdown
Step 2: Data processing Cleaning page data
###Code
page_df.shape
###Output
_____no_output_____
###Markdown
Examining the data shows that some pages have a 'template' prefix, which should be removed for this analysis:
###Code
page_df.head()
page_df = page_df[~page_df["page"].str.startswith("Template")]
page_df.shape
###Output
_____no_output_____
###Markdown
Cleaning population data
###Code
wpds_df.shape
wpds_df.head()
###Output
_____no_output_____
###Markdown
The WPDS_2018_data combines country and regional population counts. Regional counts are distinguished by upper-case names (both types appear in the 'Geography' field of the dataframe). I split these two groups into two dataframes ahead of analysis:
###Code
countries_df = wpds_df[~wpds_df['Geography'].str.isupper()]
regions_df = wpds_df[wpds_df['Geography'].str.isupper()]
###Output
_____no_output_____
###Markdown
Acquire and attach article quality predictions The methodology and code in this section is based upon material provided to the class in the [class wiki](https://wiki.communitydata.science/Human_Centered_Data_Science_(Fall_2019)/AssignmentsA2:_Bias_in_data) and related materials. I use REST API calls to return quality estimates of each page generated with the ORES ("Objective Revision Evaluation Service") machine learning package. In this data, each page is assigned one of six quality categories used in English Wikipedia [content assessment](https://en.wikipedia.org/wiki/Wikipedia:WikiProject_assessmentGrades). Create a function to access ORES data:
###Code
default_headers = {'User-Agent': 'https://github.com/rcctodd', 'From': '[email protected]'}
def get_ores_data(revision_ids, headers=default_headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {
'project': 'enwiki',
'model': 'wp10',
'revids': '|'.join(str(x) for x in revision_ids)
}
json_response = requests.get(endpoint.format(**params)).json()
return json_response
###Output
_____no_output_____
###Markdown
The JSON output comes in a nested dictionary format that requires careful extraction; an example of the format is available from Wikipedia [here](https://www.mediawiki.org/wiki/ORESEdit_quality). Here I create a function which extracts only the quality class prediction from the JSON output:
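As a rough illustration (the values are invented, not real ORES output), a response fragment contains both scored revisions and error entries, and only the former carry a prediction:
###Code
# Hypothetical fragment: one scored revision and one that could not be scored
sample = {
    "enwiki": {
        "scores": {
            "1234": {"wp10": {"score": {"prediction": "GA"}}},
            "5678": {"wp10": {"error": {"type": "RevisionNotFound"}}},
        }
    }
}
for rev_id, value in sample["enwiki"]["scores"].items():
    if "error" not in value["wp10"]:
        print(rev_id, value["wp10"]["score"]["prediction"])  # only 1234 prints
###Output
_____no_output_____
###Markdown
The extraction function below walks exactly this structure and skips the error entries.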
###Code
def extract_quality(json_input):
quality_list = []
for key, value in json_input["enwiki"]["scores"].items():
#wp10 is the name of the current ORES model and is the container label for its scores
temp_dict = value["wp10"]
#need to account for error values
if "error" not in temp_dict:
quality_pred = {
'rev_id': int(key),
'quality_cat': temp_dict["score"]["prediction"]
}
quality_list.append(quality_pred)
return quality_list
###Output
_____no_output_____
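###Markdown
To make the parsing above concrete, here is a minimal, made-up response in the nested shape the function expects (the revision id and predicted class are illustrative only):
###Code
# Minimal illustration of the nested ORES response shape assumed by extract_quality().
# The revision id and predicted class below are made up for illustration only.
sample_response = {
    "enwiki": {
        "scores": {
            "123456789": {
                "wp10": {
                    "score": {"prediction": "Start"}
                }
            }
        }
    }
}
sample_extracted = extract_quality(sample_response)
# -> [{'rev_id': 123456789, 'quality_cat': 'Start'}]
###Output
_____no_output_____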
###Markdown
In order not to overwhelm the API (following advice received in the assignment instructions!), I create a simple function to chunk the page list and query each chunk in turn (code here adapted from a [geeksforgeeks](https://www.geeksforgeeks.org/break-list-chunks-size-n-python/) posting).
###Code
def chunk_query(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
chunked_pages = list(chunk_query(page_df['rev_id'], 100))
###Output
_____no_output_____
###Markdown
Using the functions created above, I incrementally retrieve ORES data, extract the quality field and convert the resulting information to a dataframe:
###Code
quality_json = [get_ores_data(subset) for subset in chunked_pages]
ores_predictions = [extract_quality(subset) for subset in quality_json]
ores_prediction_dfs = [pd.DataFrame.from_records(json_subset) for json_subset in ores_predictions]
quality_prediction_df = pd.concat(ores_prediction_dfs)
quality_prediction_df.to_csv("ores_quality_preds.csv", index=False)
quality_prediction_df.head()
###Output
_____no_output_____
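###Markdown
Before combining data sources, a quick count comparison shows how many revision ids did not come back with a prediction (these records are dropped later when rows without a quality prediction are removed):
###Code
# Sanity check: number of revision ids that did not receive an ORES prediction.
num_missing_predictions = len(page_df) - len(quality_prediction_df)
###Output
_____no_output_____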
###Markdown
Combine data sources In order to combine page and country data, rename the "Geography" field to "country":
###Code
countries_df.rename(columns={'Geography':'country'}, inplace=True)
###Output
C:\Users\Richard\Anaconda3\lib\site-packages\pandas\core\frame.py:3781: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
return super(DataFrame, self).rename(**kwargs)
###Markdown
I create a dataframe by merging the page data and country data:
###Code
wp_wpds_politicians_by_country = pd.merge(page_df, countries_df, on='country', how='outer')
###Output
_____no_output_____
###Markdown
To this, I merge in the ORES quality prediction.
###Code
wp_wpds_politicians_by_country = pd.merge(wp_wpds_politicians_by_country, quality_prediction_df, on='rev_id', how='outer')
###Output
_____no_output_____
###Markdown
Records without a quality prediction are dropped:
###Code
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country[wp_wpds_politicians_by_country['quality_cat'].notnull()]
###Output
_____no_output_____
###Markdown
Records with and without an associated country match are separated and saved as CSVs.
###Code
wp_wpds_countries_no_match_df = wp_wpds_politicians_by_country[wp_wpds_politicians_by_country['country'].isna()]
wp_wpds_countries_no_match_df.to_csv("wp_wpds_countries-no_match_df.csv", index=False)
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country[wp_wpds_politicians_by_country['country'].notnull()]
wp_wpds_politicians_by_country.to_csv("wp_wpds_politicians_by_country.csv", index=False)
###Output
_____no_output_____
###Markdown
Step 3: Analysis In this stage I explore the relationship between population, numbers of articles about politicians and the quality of those articles. "High quality" articles are defined as having an ORES-predicted class of "FA" ("featured article") or "GA" ("good article"). Country-level analysis In order to calculate the ten highest-ranked countries by number of politician articles as a proportion of country population, I convert population data into numeric data, then group population data by country and append a count of articles per country.
###Code
wp_wpds_politicians_by_country['Population mid-2018 (millions)'] = pd.to_numeric(wp_wpds_politicians_by_country['Population mid-2018 (millions)'].str.replace(',', ''))
country_df = pd.DataFrame(wp_wpds_politicians_by_country.groupby(['country'])['Population mid-2018 (millions)'].max())
country_df['pagecount']= wp_wpds_politicians_by_country.groupby(['country'])['page'].count()
###Output
_____no_output_____
###Markdown
Add a calculation of articles per million people of population:
###Code
country_df['articles_per_million_pop'] = country_df['pagecount'] / country_df['Population mid-2018 (millions)']
###Output
_____no_output_____
###Markdown
Add to the dataframe a count of articles by quality prediction - replacing NAs with 0 - then calculate the proportion of articles that are high quality:
###Code
country_df = country_df.join(wp_wpds_politicians_by_country.groupby(['country'])['quality_cat'].value_counts().unstack().fillna(0))
country_df['prop_high_quality']=(country_df['GA']+country_df['FA'])/country_df['pagecount']
###Output
_____no_output_____
###Markdown
Sort and truncate dataframe, displaying only variables of interest: Top 10 countries by coverage: 10 highest-ranked countries by politician articles as a proportion of country population
###Code
country_df[['articles_per_million_pop']].sort_values('articles_per_million_pop', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries by politician articles as a proportion of country population
###Code
country_df[['articles_per_million_pop']].sort_values('articles_per_million_pop', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries by relative proportion of politician articles that are of GA and FA-quality
###Code
country_df[['prop_high_quality']].sort_values('prop_high_quality', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality: 10 lowest-ranked countries by relative proportion of politician articles that are of GA and FA-quality
###Code
country_df[['prop_high_quality']].sort_values('prop_high_quality', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Analysis below shows that 62 countries (~28%) have no articles predicted to be high quality, so the bottom ten selected above are somewhat arbitrary.
###Code
country_df.shape[0] - np.count_nonzero(country_df[['prop_high_quality']])
###Output
_____no_output_____
###Markdown
Region-level analysis The original population data file contained regions as well as countries, with an upper-case region preceding the countries in that region. We can use this structure to loop through the dataframe and allocate countries to regions, then merge this into the country dataframe.
###Code
region_list = []
for geog in wpds_df['Geography'].tolist():
if geog.isupper():
current_region = geog
region_list.append('regionname')
else:
region_list.append(current_region)
wpds_df['region_cat'] = region_list
country_df = country_df.merge(wpds_df[['Geography','region_cat']],left_index=True,right_on='Geography')
###Output
_____no_output_____
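###Markdown
To illustrate the allocation logic above on a toy list (the entries are made up for illustration): an upper-case entry starts a new region, the region row itself gets a placeholder tag, and each following country row inherits the most recently seen region.
###Code
# Toy illustration of the region-allocation loop above; the list is illustrative only.
toy_geographies = ['AFRICA', 'Algeria', 'Angola', 'ASIA', 'Afghanistan']
toy_regions = []
for geog in toy_geographies:
    if geog.isupper():
        current = geog
        toy_regions.append('regionname')  # placeholder for the region row itself
    else:
        toy_regions.append(current)
# -> ['regionname', 'AFRICA', 'AFRICA', 'regionname', 'ASIA']
###Output
_____no_output_____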
###Markdown
Setting the index back to the country name so the dataframe is consistent with the country-level analysis above.
###Code
country_df = country_df.set_index('Geography')
###Output
_____no_output_____
###Markdown
Grouping data by region, then calculating articles per million population as above.
###Code
region_df = pd.DataFrame(country_df.groupby(['region_cat'])[['Population mid-2018 (millions)','pagecount','GA','FA']].sum())
region_df.columns
region_df['articles_per_million_pop'] = region_df['pagecount'] / region_df['Population mid-2018 (millions)']
###Output
_____no_output_____
###Markdown
Sorting values for purposes of output: Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
region_df.sort_values('articles_per_million_pop', ascending=False)
###Output
_____no_output_____
###Markdown
As above, using the GA and FA counts already aggregated by region, calculate the proportion of articles that are high quality:
###Code
region_df['prop_high_quality']=(region_df['GA']+region_df['FA'])/region_df['pagecount']
###Output
_____no_output_____
###Markdown
Geographic regions by quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
region_df.sort_values('prop_high_quality', ascending=False)
###Output
_____no_output_____
###Markdown
Load the required packages used in this project.
###Code
library(httr)
library(lubridate)
library(jsonlite)
library(dplyr)
library(ggplot2)
library(tidyr)
library(data.table)
library(gridExtra)
options(stringsAsFactors = F)
###Output
Attaching package: ‘lubridate’
The following object is masked from ‘package:base’:
date
Attaching package: ‘dplyr’
The following objects are masked from ‘package:lubridate’:
intersect, setdiff, union
The following objects are masked from ‘package:stats’:
filter, lag
The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
Attaching package: ‘data.table’
The following objects are masked from ‘package:dplyr’:
between, first, last
The following objects are masked from ‘package:lubridate’:
hour, isoweek, mday, minute, month, quarter, second, wday, week,
yday, year
Attaching package: ‘gridExtra’
The following object is masked from ‘package:dplyr’:
combine
###Markdown
First we need to load that beautiful, sexy data.
###Code
page_data <- read.csv('page_data.csv')
# need fread to automatically delete the first two rows
population <- fread('Population Mid-2015.csv')
###Output
_____no_output_____
###Markdown
In order to obtain article quality predictions from ORES, we first need to split the article ids into chunks of 75 so that article quality can be predicted in batches rather than one article at a time.
###Code
# create a list of 75 rev_ids at a time
sub_list <- list()
t=1
i=1
z=75
end <- round(length(page_data$rev_id)/z)+1
for(j in 1:end){
sub_list[[t]] <- page_data$rev_id[i:z]
i=i+75
z=z+75
t=t+1
}
###Output
_____no_output_____
###Markdown
Now that we have chunks of 75 article ids, we can call the API to get predictions for 75 articles at a time.
###Code
# get ORES data from API
get.ores.data <- function(rev_ids){
# this function takes article ids and returns the predicted article quality
# by running it through the ORES machine learning algorithm
ores_list <- list()
endpoint_raw <- 'https://ores.wikimedia.org/v3/scores/enwiki/?models=wp10&revids='
for(begin in 1:end){
revids <- paste(na.omit(sub_list[[begin]]), collapse='|')
endpoint <- paste0(endpoint_raw, revids)
raw.data <- GET(endpoint)
raw.content <- rawToChar(raw.data$content)
parsed.data <- fromJSON(raw.content)
ores_list[[begin]] <- parsed.data
}
return(ores_list)
}
# ping the API by running this function on the subset list created above
ores_list <- get.ores.data(sub_list)
###Output
_____no_output_____
###Markdown
Now that we have the raw data from the API, we need to extract the relevant information, namely the article ids and their predicted quality.
###Code
a=1
ids <- c()
preds <- c()
for(i in 1:length(ores_list)){
for(j in 1:length(ores_list[[i]]$enwiki$scores)){
ids[a] <- names(ores_list[[i]]$enwiki$scores[j])
# one of the predictions was NULL so include this conditional to allow the loop to run
if(is.null(ores_list[[i]]$enwiki$scores[j][[1]][[1]][[1]]$prediction[1])){
preds[a] <- NA
}
else{
preds[a] <- ores_list[[i]]$enwiki$scores[j][[1]][[1]][[1]]$prediction[1]
}
a=a+1
}
}
###Output
_____no_output_____
###Markdown
The next step is to merge the data loaded in the first step with the data from the API. The data is merged on country in order to create a dataframe with the country, the population, and the predicted article quality for each article. Following the merge, we tidy the columns into a final data frame.
###Code
# merge the data
population_formerge <- population[,-c(2:4,6)]
sub_merge <- merge(page_data, population_formerge, by.x = 'country', by.y = 'Location', all=F)
names(sub_merge)[4] <- 'population'
temp_df <- data.frame('ids'=ids, 'predictions'=preds)
final_merge <- merge(sub_merge, temp_df, by.x = 'rev_id', by.y = 'ids', all = F)
# create final dataframe
names(final_merge) <- c('revision_id', 'country', 'article_name', 'population', 'article_quality')
final_merge <- na.omit(final_merge[,c(2,3,1,5,4)])
write.csv(final_merge, file='final_data.csv', row.names=F)
final_merge <- transform(final_merge, country = factor(country),
article_quality = factor(article_quality))
###Output
_____no_output_____
###Markdown
In order to complete the required analysis, the final_merge dataframe must be transformed using the group by function within the dplyr package. We group by the country and compute a number of metrics for subsequent visualization.
###Code
total_articles <- final_merge %>%
group_by(country) %>%
summarise('total_articles'= as.numeric(length(article_name)), # count rows per country; the merged data has no 'page' column
'population'= (unique(population)),
'FA'= sum(article_quality == 'FA'),
'GA' = sum(article_quality == 'GA'),
'total_high' = (FA + GA))
total_articles <- as.data.frame(total_articles)
total_articles$population <- as.numeric(gsub(total_articles$population,
pattern = ',', replacement = ''))
total_articles$perpop <- with(total_articles, total_articles/population)*100
total_articles$percenthigh <- with(total_articles, total_high/population * 100)
###Output
_____no_output_____
###Markdown
Once the summary table has been created we make four plots summarizing the data for various factors that were chosen by people much smarter than I who are more versed, accomplished, and generally all around better human beings.
###Code
high_articles_plot <- ggplot(head(total_articles[order(total_articles$perpop,
decreasing = T),],10),
aes(country, perpop)) +
geom_bar(stat = 'identity', color = 'black', fill='blue') +
ylab('Percentage of Articles Per Population (%)') +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
ylim(c(0,0.01)) +
ggtitle('Greatest Number of Articles Written per Population') +
xlab('Country')
png(file = 'high_article_percent.png')
plot(high_articles_plot)
dev.off()
high_articles_plot
low_articles_plot <- ggplot(tail(total_articles[order(total_articles$perpop,
decreasing = T),],10),
aes(country, (perpop*10000))) +
geom_bar(stat = 'identity', color = 'black', fill='red') +
ylab('Percentage of Articles Per Population (%)') +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
ylim(c(0,0.01)) +
ggtitle('Least Number of Articles Written per Population (÷10,000)') +
xlab('Country')
png(file = 'low_article_percent.png')
plot(low_articles_plot)
dev.off()
low_articles_plot
high_qual_articles <- ggplot(head(total_articles[order(total_articles$percenthigh,
decreasing = T),],10),
aes(country, percenthigh)) +
geom_bar(stat = 'identity', color = 'black', fill='blue') +
ylab('Percentage of Articles Per Population (%)') +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
ylim(c(0,0.01)) +
ggtitle('Greatest Number of High Quality Articles Written per Population') +
xlab('Country')
png(file = 'top10_high_quality_articles.png')
plot(high_qual_articles)
dev.off()
high_qual_articles
low_qual_articles <- ggplot(tail(total_articles[order(total_articles$percenthigh,
decreasing = T),],10),
aes(country, sort(percenthigh))) +
geom_bar(stat = 'identity', color = 'black', fill='red') +
ylab('Percentage of Articles Per Population (%)') +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
ylim(c(0,0.01)) +
ggtitle('Least Number of High Quality Articles Written per Population') +
xlab('Country')
png(file = 'bottom10_high_quality_articles.png')
plot(low_qual_articles)
dev.off()
low_qual_articles
###Output
_____no_output_____
###Markdown
Bias on Wikipedia This assignment measures bias on Wikipedia by computing two metrics: a) # of articles by country population b) ratio of high quality articles to article count. Data inputs for this analysis: a) Page data provided by Oliver Kyes, Human Centered Design (HCD), University of Washington, available at: https://ndownloader.figshare.com/files/9614893 b) Population data from: http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14 c) Page quality data using the ORES REST API. References: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context and https://www.mediawiki.org/wiki/ORES. Analysis involves observing the top and bottom 10 countries for the above two metrics. Countries that are low on "# of articles by country population" are under-represented on Wikipedia, and vice versa. Countries with a low "ratio of high quality articles to article count" have low representation of good quality articles on Wikipedia, and vice versa. Step 0 - Data Acquisition - Download page and population data 1) Download page data from: https://ndownloader.figshare.com/files/9614893. This data is provided by Oliver Kyes, Human Centered Design (HCD), University of Washington. 2) Download population data from: http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14 . Click on the Microsoft Excel icon in the top right corner. Save both csv files locally on your computer. Step 1 - Data Acquisition - Load the previously downloaded csv files Load the population data and page data csv files into data frames. Note: Update the localPath variable with the right path on your machine where the csv files are stored.
###Code
## getting the data from the CSV files. Please update localPath variable to location where you down load the csv file
import csv
import pandas as pd
localPath = 'C:/Users/amnag/OneDrive/DataScience/HumanCenteredDS/week4/country/country/data/'
# Create an empty list to store page data
page_data = []
with open(localPath + 'page_data.csv',encoding='utf-8') as csvfile:
reader = csv.reader(csvfile)
header = True
# Using two hints to get to the data row:
# 1 - Skip rows that have less than 3 columns
# 2 - Skip the header row
for row in reader:
if(len(row) >= 3):
if(header==True):
header=False
else:
page_data.append([row[0],row[1],row[2]])
# Convert the page_data list to a dataframe and assign column names
page_data_df = pd.DataFrame(page_data,columns=['article_name','country','revision_id'])
# Add a column to store article quality from ORES. Initialize the column with NA
page_data_df = page_data_df.assign(article_quality = lambda x: 'NA')
# Store the data frame to an intermediate csv.
# This will be re-loaded later to update the article_quality using ORES REST API
page_data_df.to_csv(localPath+'page_data_with_ORES_score.csv')
# Create an empty list to store population data
population_data = []
with open(localPath + 'Population Mid-2015.csv',encoding='utf-8') as csvfile:
reader = csv.reader(csvfile)
header = True
for row in reader:
# Skip the header row and then read data
if(len(row) >= 6):
if(header==True):
header=False
else:
population_data.append([row[0],row[1],row[2],row[3],row[4],row[5]])
# Convert the population_data list to a dataframe and assign column names
population_data_df = pd.DataFrame(population_data)
population_data_df.columns = ['country','Location Type','TimeFrame','Data Type','population','Footnotes']
# Drop columns that are not required for analysis to improve processing time and data readability
population_data_df.drop(['Location Type','TimeFrame','Data Type','Footnotes'], axis=1, inplace=True)
###Output
_____no_output_____
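###Markdown
A quick look at the first few rows of each data frame confirms the expected columns before calling the ORES API:
###Code
# Quick sanity check on the loaded data before querying ORES.
page_data_df.head()
population_data_df.head()
###Output
_____no_output_____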
###Markdown
Step 2 - Data Acquisition - Call ORES API to find the article quality for each page Page data with ORES article quality is saved to an intermediate file: page_data_with_ORES_score.csv. This intermediate file contains page data with article quality. It is updated as soon as data is downloaded for 100 rev ids. This is helpful in avoiding having to restart the ORES API query from the beginning in case of network connectivity loss.
###Code
import requests
import json
import json
headers = {'User-Agent' : 'https://github.com/amitabhnag', 'From' : '[email protected]'}
# Function that calls the ORES REST API for a set of revision ids and returns the response object
# This function is developed based off code sample provided by Prof. Jonathan Morgan, Human Centered Design (HCD),
# University of Washington
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return(response)
# This function returns the index of the page from which the ORES REST API needs to be called.
# This function is useful as in many cases network connectivity issues can prevent you from receiving the response
# from ORES REST API. You may be left with partially filled article quality.
# This function loads the previously saved page_data_with_ORES_score.csv
# and returns the index from where the article quality needs to be calculated
def findIndexToGetORESData():
currentIndex = 0
page_data= []
header = True
with open(localPath + 'page_data_with_ORES_score.csv') as csvfile:
reader = csv.reader(csvfile)
# Skip the header
for row in reader:
if (header == True):
header = False
else:
page_data.append([row[1],row[2],row[3],row[4]])
page_data_df = pd.DataFrame(page_data,columns=['article_name','country','revision_id','article_quality'])
# Count rows whose article_quality has already been filled in; this count is the resume index (assumes scored rows come first)
for i in range(0,len(page_data_df)):
if(page_data_df['article_quality'][i]!='NA'):
currentIndex = currentIndex + 1
return currentIndex
# Call findIndexToGetORESData() to find the index from where ORES REST API needs to be called
currentIndex = findIndexToGetORESData()
totalPages = len(page_data)
# This loop calls get_ores_data() to get article quality in batches of 100 pages.
while(currentIndex < totalPages):
print('currently processing index:=' + str(currentIndex))
# Get a list of 100 rev ids to be passed to get_ores_data()
# If 100 rev ids are not there, call get_ores_data() with the remaining rev ids
if(totalPages - currentIndex >= 100 ):
revids = page_data_df['revision_id'][currentIndex : (currentIndex + 100)]
currentIndex = currentIndex + 100
else:
revids = page_data_df['revision_id'][currentIndex : totalPages]
currentIndex = totalPages
# Get the ORES data
response = get_ores_data(revids, headers)
for revid in revids:
if(('error' in response['enwiki']['scores'][revid]['wp10']) == False):
prediction = response['enwiki']['scores'][revid]['wp10']['score']['prediction']
page_data_df.set_value(page_data_df[page_data_df['revision_id'] == revid].index[0],'article_quality',prediction )
else:
print('error in rev_id:' + str(revid) )
# Save the page data with ORES to a file so that in case of network connection issue data can still be retrived
page_data_df.to_csv(localPath+'page_data_with_ORES_score.csv')
###Output
currently processing index:=0
currently processing index:=100
currently processing index:=200
currently processing index:=300
currently processing index:=400
currently processing index:=500
currently processing index:=600
currently processing index:=700
currently processing index:=800
currently processing index:=900
currently processing index:=1000
currently processing index:=1100
currently processing index:=1200
currently processing index:=1300
currently processing index:=1400
currently processing index:=1500
currently processing index:=1600
currently processing index:=1700
currently processing index:=1800
currently processing index:=1900
currently processing index:=2000
currently processing index:=2100
currently processing index:=2200
currently processing index:=2300
currently processing index:=2400
currently processing index:=2500
currently processing index:=2600
currently processing index:=2700
currently processing index:=2800
currently processing index:=2900
currently processing index:=3000
currently processing index:=3100
currently processing index:=3200
currently processing index:=3300
currently processing index:=3400
currently processing index:=3500
currently processing index:=3600
currently processing index:=3700
currently processing index:=3800
currently processing index:=3900
currently processing index:=4000
currently processing index:=4100
currently processing index:=4200
currently processing index:=4300
currently processing index:=4400
currently processing index:=4500
currently processing index:=4600
currently processing index:=4700
currently processing index:=4800
currently processing index:=4900
currently processing index:=5000
currently processing index:=5100
currently processing index:=5200
currently processing index:=5300
currently processing index:=5400
currently processing index:=5500
currently processing index:=5600
currently processing index:=5700
currently processing index:=5800
currently processing index:=5900
currently processing index:=6000
currently processing index:=6100
currently processing index:=6200
currently processing index:=6300
currently processing index:=6400
currently processing index:=6500
currently processing index:=6600
currently processing index:=6700
currently processing index:=6800
currently processing index:=6900
currently processing index:=7000
currently processing index:=7100
currently processing index:=7200
currently processing index:=7300
currently processing index:=7400
currently processing index:=7500
currently processing index:=7600
currently processing index:=7700
currently processing index:=7800
currently processing index:=7900
currently processing index:=8000
currently processing index:=8100
currently processing index:=8200
currently processing index:=8300
currently processing index:=8400
currently processing index:=8500
currently processing index:=8600
currently processing index:=8700
currently processing index:=8800
currently processing index:=8900
currently processing index:=9000
currently processing index:=9100
currently processing index:=9200
currently processing index:=9300
currently processing index:=9400
currently processing index:=9500
currently processing index:=9600
currently processing index:=9700
currently processing index:=9800
currently processing index:=9900
currently processing index:=10000
currently processing index:=10100
currently processing index:=10200
currently processing index:=10300
currently processing index:=10400
currently processing index:=10500
currently processing index:=10600
currently processing index:=10700
currently processing index:=10800
currently processing index:=10900
currently processing index:=11000
currently processing index:=11100
currently processing index:=11200
currently processing index:=11300
currently processing index:=11400
currently processing index:=11500
currently processing index:=11600
currently processing index:=11700
currently processing index:=11800
currently processing index:=11900
currently processing index:=12000
currently processing index:=12100
currently processing index:=12200
currently processing index:=12300
currently processing index:=12400
currently processing index:=12500
currently processing index:=12600
currently processing index:=12700
currently processing index:=12800
currently processing index:=12900
currently processing index:=13000
currently processing index:=13100
currently processing index:=13200
currently processing index:=13300
currently processing index:=13400
currently processing index:=13500
currently processing index:=13600
currently processing index:=13700
currently processing index:=13800
currently processing index:=13900
currently processing index:=14000
currently processing index:=14100
currently processing index:=14200
currently processing index:=14300
currently processing index:=14400
currently processing index:=14500
currently processing index:=14600
currently processing index:=14700
currently processing index:=14800
currently processing index:=14900
currently processing index:=15000
currently processing index:=15100
currently processing index:=15200
currently processing index:=15300
currently processing index:=15400
currently processing index:=15500
currently processing index:=15600
currently processing index:=15700
currently processing index:=15800
currently processing index:=15900
currently processing index:=16000
currently processing index:=16100
currently processing index:=16200
currently processing index:=16300
currently processing index:=16400
currently processing index:=16500
currently processing index:=16600
currently processing index:=16700
currently processing index:=16800
currently processing index:=16900
currently processing index:=17000
currently processing index:=17100
currently processing index:=17200
currently processing index:=17300
currently processing index:=17400
currently processing index:=17500
currently processing index:=17600
currently processing index:=17700
currently processing index:=17800
currently processing index:=17900
currently processing index:=18000
currently processing index:=18100
currently processing index:=18200
currently processing index:=18300
currently processing index:=18400
currently processing index:=18500
currently processing index:=18600
currently processing index:=18700
currently processing index:=18800
currently processing index:=18900
currently processing index:=19000
currently processing index:=19100
currently processing index:=19200
currently processing index:=19300
currently processing index:=19400
currently processing index:=19500
currently processing index:=19600
currently processing index:=19700
currently processing index:=19800
currently processing index:=19900
currently processing index:=20000
currently processing index:=20100
currently processing index:=20200
currently processing index:=20300
currently processing index:=20400
currently processing index:=20500
currently processing index:=20600
currently processing index:=20700
currently processing index:=20800
currently processing index:=20900
currently processing index:=21000
currently processing index:=21100
currently processing index:=21200
currently processing index:=21300
currently processing index:=21400
currently processing index:=21500
currently processing index:=21600
currently processing index:=21700
currently processing index:=21800
currently processing index:=21900
currently processing index:=22000
currently processing index:=22100
currently processing index:=22200
currently processing index:=22300
currently processing index:=22400
currently processing index:=22500
currently processing index:=22600
currently processing index:=22700
currently processing index:=22800
currently processing index:=22900
currently processing index:=23000
currently processing index:=23100
currently processing index:=23200
currently processing index:=23300
currently processing index:=23400
currently processing index:=23500
currently processing index:=23600
currently processing index:=23700
currently processing index:=23800
currently processing index:=23900
currently processing index:=24000
currently processing index:=24100
currently processing index:=24200
currently processing index:=24300
currently processing index:=24400
currently processing index:=24500
currently processing index:=24600
currently processing index:=24700
currently processing index:=24800
currently processing index:=24900
currently processing index:=25000
currently processing index:=25100
currently processing index:=25200
currently processing index:=25300
currently processing index:=25400
currently processing index:=25500
currently processing index:=25600
currently processing index:=25700
currently processing index:=25800
currently processing index:=25900
currently processing index:=26000
currently processing index:=26100
currently processing index:=26200
currently processing index:=26300
currently processing index:=26400
currently processing index:=26500
currently processing index:=26600
currently processing index:=26700
currently processing index:=26800
currently processing index:=26900
currently processing index:=27000
currently processing index:=27100
currently processing index:=27200
currently processing index:=27300
currently processing index:=27400
currently processing index:=27500
currently processing index:=27600
currently processing index:=27700
currently processing index:=27800
currently processing index:=27900
currently processing index:=28000
currently processing index:=28100
currently processing index:=28200
currently processing index:=28300
currently processing index:=28400
currently processing index:=28500
currently processing index:=28600
currently processing index:=28700
currently processing index:=28800
currently processing index:=28900
currently processing index:=29000
currently processing index:=29100
currently processing index:=29200
currently processing index:=29300
currently processing index:=29400
currently processing index:=29500
currently processing index:=29600
currently processing index:=29700
currently processing index:=29800
currently processing index:=29900
currently processing index:=30000
currently processing index:=30100
currently processing index:=30200
currently processing index:=30300
currently processing index:=30400
currently processing index:=30500
currently processing index:=30600
currently processing index:=30700
currently processing index:=30800
currently processing index:=30900
currently processing index:=31000
currently processing index:=31100
currently processing index:=31200
currently processing index:=31300
currently processing index:=31400
currently processing index:=31500
currently processing index:=31600
currently processing index:=31700
currently processing index:=31800
currently processing index:=31900
currently processing index:=32000
currently processing index:=32100
currently processing index:=32200
currently processing index:=32300
currently processing index:=32400
currently processing index:=32500
currently processing index:=32600
currently processing index:=32700
currently processing index:=32800
currently processing index:=32900
currently processing index:=33000
currently processing index:=33100
currently processing index:=33200
currently processing index:=33300
currently processing index:=33400
currently processing index:=33500
currently processing index:=33600
currently processing index:=33700
currently processing index:=33800
currently processing index:=33900
currently processing index:=34000
currently processing index:=34100
currently processing index:=34200
currently processing index:=34300
currently processing index:=34400
currently processing index:=34500
currently processing index:=34600
currently processing index:=34700
currently processing index:=34800
currently processing index:=34900
currently processing index:=35000
currently processing index:=35100
currently processing index:=35200
currently processing index:=35300
currently processing index:=35400
currently processing index:=35500
currently processing index:=35600
currently processing index:=35700
currently processing index:=35800
currently processing index:=35900
currently processing index:=36000
currently processing index:=36100
currently processing index:=36200
currently processing index:=36300
currently processing index:=36400
currently processing index:=36500
currently processing index:=36600
currently processing index:=36700
currently processing index:=36800
currently processing index:=36900
currently processing index:=37000
currently processing index:=37100
currently processing index:=37200
currently processing index:=37300
currently processing index:=37400
currently processing index:=37500
currently processing index:=37600
currently processing index:=37700
currently processing index:=37800
currently processing index:=37900
currently processing index:=38000
currently processing index:=38100
currently processing index:=38200
currently processing index:=38300
currently processing index:=38400
currently processing index:=38500
currently processing index:=38600
currently processing index:=38700
currently processing index:=38800
currently processing index:=38900
currently processing index:=39000
currently processing index:=39100
currently processing index:=39200
currently processing index:=39300
currently processing index:=39400
currently processing index:=39500
currently processing index:=39600
currently processing index:=39700
currently processing index:=39800
currently processing index:=39900
currently processing index:=40000
currently processing index:=40100
currently processing index:=40200
currently processing index:=40300
currently processing index:=40400
currently processing index:=40500
currently processing index:=40600
currently processing index:=40700
currently processing index:=40800
currently processing index:=40900
currently processing index:=41000
currently processing index:=41100
currently processing index:=41200
currently processing index:=41300
currently processing index:=41400
currently processing index:=41500
currently processing index:=41600
currently processing index:=41700
currently processing index:=41800
currently processing index:=41900
currently processing index:=42000
currently processing index:=42100
currently processing index:=42200
currently processing index:=42300
currently processing index:=42400
currently processing index:=42500
currently processing index:=42600
currently processing index:=42700
currently processing index:=42800
currently processing index:=42900
currently processing index:=43000
currently processing index:=43100
currently processing index:=43200
currently processing index:=43300
currently processing index:=43400
currently processing index:=43500
currently processing index:=43600
currently processing index:=43700
currently processing index:=43800
currently processing index:=43900
currently processing index:=44000
currently processing index:=44100
currently processing index:=44200
currently processing index:=44300
currently processing index:=44400
currently processing index:=44500
currently processing index:=44600
currently processing index:=44700
currently processing index:=44800
currently processing index:=44900
currently processing index:=45000
currently processing index:=45100
currently processing index:=45200
currently processing index:=45300
currently processing index:=45400
currently processing index:=45500
currently processing index:=45600
currently processing index:=45700
currently processing index:=45800
currently processing index:=45900
currently processing index:=46000
currently processing index:=46100
currently processing index:=46200
currently processing index:=46300
currently processing index:=46400
currently processing index:=46500
currently processing index:=46600
currently processing index:=46700
currently processing index:=46800
error in rev_id:807367030
error in rev_id:807367166
currently processing index:=46900
currently processing index:=47000
currently processing index:=47100
###Markdown
Step 3 - Data processing - Load the page data with article quality from the intermediate file Load the page data from page_data_with_ORES_score.csv into a data frame. This file was updated in Step 2 with article quality predictions.
###Code
page_data_ores = []
# Load the data from page_data_with_ORES_score.csv into a list page_data_ores
with open(localPath + 'page_data_with_ORES_score.csv') as csvfile:
reader = csv.reader(csvfile)
header = True
for row in reader:
# Skip the header row and then read data
if(len(row) >= 3):
if(header==True):
header=False
else:
page_data_ores.append([row[1],row[2],row[3],row[4]])
# Create data frame from the list and assign column names
page_data_ores_df = pd.DataFrame(page_data_ores)
page_data_ores_df.columns = ['article_name','country','revision_id','article_quality']
page_data_ores_df
###Output
_____no_output_____
###Markdown
Step 4 - Data Processing - Merge the page data and population data
###Code
# Merge the page data and population data on country
merged_df = population_data_df.merge(page_data_ores_df,on=['country'],how='inner')
# Remove thousand "," symbol from population data. This symbol makes it harder to convert population to integer
merged_df['population'] = merged_df['population'].apply(lambda x: x.replace(',',''))
# Convert the population to integer
merged_df['population'] = merged_df['population'].astype(int)
# Save data to a csv file
merged_df.to_csv(localPath+'ConsolidatedData.csv')
merged_df
###Output
_____no_output_____
###Markdown
Step 5 - Analysis - Compute the two metrics and find top and bottom 10 values: a) # of articles by country population b) ratio of high quality articles to article count
###Code
from IPython.display import display
import warnings
warnings.filterwarnings("ignore")
# Create a dataframe that holds these two metric
# a) Number of articles per country
# b) Ratio of high quality articles to article count
# Create a data frame analysis_df that stores the metric values for analysis
analysis_df = pd.DataFrame()
# Compute article count per country
analysis_df['article count'] = merged_df.groupby('country')['revision_id'].count()
# Since population is repeated in each row for a country, take the population from the first row for that country
analysis_df['population'] = merged_df.groupby('country')['population'].first()
# Calculate high quality articles for each country
analysis_df['high quality articles'] = merged_df[(merged_df.article_quality == 'GA') | (merged_df.article_quality == 'FA')].groupby('country')['article_quality'].count()
# Calculate the metric # of articles by country population
analysis_df['# of articles by country population'] = analysis_df['article count']/analysis_df['population']*100
# Calculate the metric ratio of high quality articles to article count
analysis_df['ratio of high quality articles to article count'] = analysis_df['high quality articles']/analysis_df['article count']*100
# Display the results for top and bottom 10 values for # of articles per country
print('10 highest-ranked countries in terms of number of politician articles as a proportion of country population')
display(analysis_df.sort(['# of articles by country population'],ascending=0)[0:10])
print('10 lowest-ranked countries in terms of number of politician articles as a proportion of country population')
display(analysis_df.sort(['# of articles by country population'],ascending=0)[len(analysis_df)-10:len(analysis_df)])
# Display the results for top and bottom 10 values for ratio of high quality articles to article count
print('10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country')
display(analysis_df.sort(['ratio of high quality articles to article count'],ascending=0)[0:10])
print('10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country')
display(analysis_df.sort(['ratio of high quality articles to article count'],ascending=0)[len(analysis_df)-10:len(analysis_df)])
###Output
10 highest-ranked countries in terms of number of politician articles as a proportion of country population
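###Markdown
As a quick sanity check on the two metric definitions used above, a hypothetical country with 50 articles, 2 of them GA or FA, and a population of 5,000,000 would score as follows (all numbers are made up for illustration):
###Code
# Hypothetical numbers to illustrate the two metrics only.
example_article_count = 50
example_high_quality = 2
example_population = 5000000
example_articles_by_population = example_article_count / example_population * 100   # 0.001
example_quality_ratio = example_high_quality / example_article_count * 100          # 4.0
###Output
_____no_output_____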
###Markdown
Assignment 2: Bias
###Code
import numpy as np
import pandas as pd
import json
import requests
###Output
_____no_output_____
###Markdown
Filter out the rows in the page_data data frame that contain "Template:" in the "page" column.
###Code
page_data = pd.read_csv('country/data/page_data.csv')
page_data = page_data[page_data['page'].str.contains('Template:', na = False) == 0]
page_data
###Output
_____no_output_____
###Markdown
Filter the data frame down to rows whose names are not all upper-case (countries), and store the all-upper-case rows (regions) in a separate variable for later analysis.
###Code
wpds = pd.read_csv('WPDS_2020_data.csv')
wpds_caps = wpds[wpds['Name'].str.isupper()]
wpds = wpds[wpds['Name'].str.isupper() == 0]
###Output
_____no_output_____
###Markdown
Write a grouping function that will batch the API calls into 50 revision ids at a time.
###Code
def grouping(count, lst):
for i in range(0,len(lst),count):
yield lst[i:i+count]
###Output
_____no_output_____
###Markdown
Write the API call function that uses the ORES endpoint to retrieve the score predictions and collect them.
###Code
def api_call(rev_id):
headers = {
'User-Agent': 'https://github.com/anantr98',
'From': '[email protected]'
}
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={rev_id}'
call = requests.get(endpoint.format(rev_id = rev_id), headers=headers)
response = call.json()
qual_preds = []
for rev_id, val in response['enwiki']['scores'].items():
val_dict = val['articlequality']
if "error" not in val_dict:
prediction = {
'rev_id': int(rev_id),
'prediction': val_dict['score']['prediction']
}
qual_preds.append(prediction)
return qual_preds
###Output
_____no_output_____
###Markdown
Get the predictions from the call.
###Code
id_group = list(grouping(50,page_data['rev_id']))
predictions=[]
for id_val in id_group:
predictions.append(api_call("|".join(str(x) for x in id_val)))
###Output
_____no_output_____
###Markdown
Create a data frame with solely the rev_ids and the prediction scores for that particular ID.
###Code
rev_id = []
prediction = []
for val in predictions:
for innerVal in val:
rev_id.append(innerVal['rev_id'])
prediction.append(innerVal['prediction'])
wiki_data = pd.DataFrame({'rev_id' : rev_id,'prediction':prediction})
###Output
_____no_output_____
###Markdown
Merge the wiki data and the population data together.
###Code
merge1 = pd.merge(wiki_data,page_data,on='rev_id',how='left')
merge1 = merge1.rename(columns={'country':'Name'})
merge2 = pd.merge(merge1, wpds, on = 'Name', how = 'left')
###Output
_____no_output_____
###Markdown
Split the data frame into two separate data frames: rows with matches and rows without matches in the population data.
###Code
wp_wpds_politicians_by_country = merge2.dropna()
wp_wpds_countries_no_match = merge2[merge2.isna().any(axis=1)]
###Output
_____no_output_____
###Markdown
Filter out the data frame to include only the 5 columns of concern.
###Code
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country[['Name', 'page', 'rev_id', 'prediction', 'Population']]
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country.rename(columns={'Name':'country',
'page':'article_name',
'rev_id':'revision_id',
'prediction': 'article_quality_est.',
'Population': 'population'})
#wp_wpds_politicians_by_country.head()
###Output
_____no_output_____
###Markdown
Write the two new data frames to CSV files.
###Code
wp_wpds_countries_no_match.to_csv('wp_wpds_countries_no_match.csv')
wp_wpds_politicians_by_country.to_csv('wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
countries = {}
for country in wp_wpds_politicians_by_country['country'].unique():
countries[country] = wp_wpds_politicians_by_country['country'].value_counts()[country]/wp_wpds_politicians_by_country['population'][wp_wpds_politicians_by_country['country']==country].unique()[0]
top_ten_countries_by_proportion = pd.DataFrame(countries, index=[0]).T.sort_values(by=[0], ascending=False)[0:10]
top_ten_countries_by_proportion
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
bottom_ten_countries_by_proportion = pd.DataFrame(countries, index=[0]).T.sort_values(by=[0], ascending=True)[0:10]
bottom_ten_countries_by_proportion
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
good_quality_by_country = wp_wpds_politicians_by_country[(wp_wpds_politicians_by_country['article_quality_est.']=='GA') | (wp_wpds_politicians_by_country['article_quality_est.']=='FA')]
countries = {}
for country in good_quality_by_country['country'].unique():
good_count = len(good_quality_by_country[good_quality_by_country['country']==country])
total = len(wp_wpds_politicians_by_country[wp_wpds_politicians_by_country['country']==country])
countries[country] = good_count/total
top_ten_countries_by_relative_quality = pd.DataFrame(countries, index=[0]).T.sort_values(by=[0], ascending=False)[0:10]
top_ten_countries_by_relative_quality
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
bottom_ten_countries_by_relative_quality = pd.DataFrame(countries, index=[0]).T.sort_values(by=[0], ascending=True)[0:10]
bottom_ten_countries_by_relative_quality
###Output
_____no_output_____
###Markdown
Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country.reset_index(drop=False)
## Define the regions using hard-coded row ranges from WPDS_2020_data.csv (each region header row is followed by the rows for its member countries)
wpds_original = pd.read_csv('WPDS_2020_data.csv')
northern_africa = wpds_original[3:10]
western_africa = wpds_original[11:27]
eastern_africa = wpds_original[28:48]
middle_africa = wpds_original[49:58]
southern_africa = wpds_original[59:64]
northern_america = wpds_original[65:67]
central_america = wpds_original[69:77]
caribbean = wpds_original[78:95]
south_america = wpds_original[96:110]
western_asia = wpds_original[111:129]
central_asia = wpds_original[130:135]
south_asia = wpds_original[136:145]
southeast_asia = wpds_original[146:157]
east_asia = wpds_original[158:166]
northern_europe = wpds_original[168:179]
western_europe = wpds_original[180:189]
eastern_europe = wpds_original[190:200]
southern_europe = wpds_original[201:216]
oceania = wpds_original[217:233]
sub_regions = ['NORTHERN AFRICA', 'WESTERN AFRICA',
'EASTERN AFRICA', 'MIDDLE AFRICA', 'SOUTHERN AFRICA',
'NORTHERN AMERICA','CENTRAL AMERICA', 'CARIBBEAN', 'SOUTH AMERICA',
'WESTERN ASIA', 'CENTRAL ASIA', 'SOUTH ASIA', 'SOUTHEAST ASIA',
'EAST ASIA', 'NORTHERN EUROPE', 'WESTERN EUROPE',
'EASTERN EUROPE', 'SOUTHERN EUROPE', 'OCEANIA']
subsets = [northern_africa, western_africa, eastern_africa, middle_africa,
southern_africa, northern_america,central_america, caribbean,
south_america, western_asia, central_asia, south_asia,
southeast_asia, east_asia, northern_europe, western_europe,
eastern_europe, southern_europe, oceania]
region = []
for i in range(0,len(subsets)):
for j in range(0,len(subsets[i])):
region.append(sub_regions[i])
wpds['region'] = region
wpds = wpds.rename(columns={'Name':'country'})
wpds_merged = pd.merge(wp_wpds_politicians_by_country, wpds[['country', 'region']],on='country',how='left')
sub_region_counts = {}
for subreg in wpds_merged['region'].unique():
sub_region_counts[subreg] = wpds_merged['region'].value_counts()[subreg]/int(wpds_caps['Population'][wpds_caps['Name']==subreg])
top_ten_subregions_by_proportion = pd.DataFrame(sub_region_counts, index=[0]).T.sort_values(by=[0], ascending=False)[0:10]
top_ten_subregions_by_proportion
###Output
_____no_output_____
###Markdown
Geographic regions by quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
good_quality_by_subregion = wpds_merged[(wpds_merged['article_quality_est.']=='GA') | (wpds_merged['article_quality_est.']=='FA')]
good_quality_subregion = {}
for country in good_quality_by_subregion['region'].unique():
good_quality_subregion[country] = good_quality_by_subregion['region'].value_counts()[country]/wpds_merged['region'].value_counts()[country]
top_ten_subregions_by_quality = pd.DataFrame(good_quality_subregion, index=[0]).T.sort_values(by=[0], ascending=False)[0:10]
top_ten_subregions_by_quality
###Output
_____no_output_____
###Markdown
Assignment 2: Bias in Data Emily Linebarger 1. Data extraction The two datasets I'll be using for this analysis are the Politicians by Country dataset from FigShare (https://figshare.com/articles/dataset/Untitled_Item/5513449) and the World Population Data Sheet (https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/editgid=283125346), from the Population Reference Bureau (https://www.prb.org/international/indicator/population/table/).All data was downloaded on October 9, 2021 and was placed in the "raw" folder without edits. 2. Data cleaning
###Code
import pandas as pd
import numpy as np
# First, clean the data on politicians by country
politicians = pd.read_csv("raw/country/data/page_data.csv")
politicians.head()
politicians.shape
# All of the 'page' rows that start with "Template" are not Wikipedia articles, and should be dropped.
mask = politicians.page.str.contains("^Template")
politicians[mask]
politicians = politicians[~mask]
politicians.shape # This drops 496 rows.
politicians.to_csv('clean/politicians.csv')
# Next, clean the population data.
# There are some regional aggregates, which are distinguished by all-caps in the 'geography' field.
# These won't match the country strings in the politicians dataset, but they're important to keep around
# to get regional aggregates.
population = pd.read_csv('raw/WPDS_2020_data - WPDS_2020_data.csv.csv')
population = population.rename(columns={'Name':'country'}) # Rename to match politicians schema
population.head()
population.to_csv('clean/population.csv')
###Output
_____no_output_____
###Markdown
3. Getting article quality predictions To get article quality scores, I will use the ORES API, which uses a machine-learning model to attach a quality score to a given revision ID. Documentation is here: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_modelFor each group of revision IDs, I'll need to build up a URL string of the format: https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids=355319463%7C498683267This queries the "enwiki" database (the content parameter), with the "articlequality" model (model parameter). From the API documentation, the database errors when more than 200 revision IDs are queried, so I'll query them in batches and write out temporary files.
###Code
import requests
import json
from datetime import datetime
import os
def query_api_batch(start_idx, end_idx, data, date):
# Get the revision IDs from the start to the end index
rev_ids = data.rev_id[start_idx:end_idx].astype('int')
rev_ids = rev_ids.astype('str')
rev_ids = '|'.join(rev_ids.to_list())
# Create a datetime string for data saving
date = datetime.today().strftime("%Y_%m_%d_%H_%M_%S")
os.makedirs(f"api_queries_raw/{date}", exist_ok = True)
os.makedirs(f"cleaned_queries/{date}", exist_ok = True)
# Query the API
r = requests.get(f"https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}")
# Manipulate the data to get the 'prediction' column for each ID
data = json.loads(r.text)
# Save this query output
with open(f'api_queries_raw/{date}/{start_idx}_{end_idx}.txt', 'w') as outfile:
json.dump(data, outfile)
# Extract just the columns you need from the queries - prediction and revision ID
cleaned_data = dict()
for rev_id in data['enwiki']['scores'].keys():
if 'error' in data['enwiki']['scores'][rev_id]['articlequality'].keys():
score = np.nan
else:
score = data['enwiki']['scores'][rev_id]['articlequality']['score']['prediction']
cleaned_data[rev_id] = score
cleaned_data = pd.DataFrame({'rev_id': cleaned_data.keys(), 'score': cleaned_data.values()})
cleaned_data.to_csv(f'cleaned_queries/{date}/{start_idx}_{end_idx}.csv')
# First, read in past results. The API starts to reject requests after a certain number of queries, so I had
# to query in batches and save results to disk.
# ** Note - for the first two runs on 10/9/2021 and 10/11/2021, I did not save the time. So I've given these
# folders a time of midnight (00_00_00).
from pathlib import Path
all_dates = [x for x in Path('cleaned_queries').iterdir()]
previous_results = list()
for date in all_dates:
previous_results.extend([x for x in date.iterdir() if x.is_file()])
print(f"Previous results found: {len(previous_results)}")
# Glob all of these results together
wiki_codes = []
for filename in previous_results:
df = pd.read_csv(filename)
wiki_codes.append(df)
wiki_codes = pd.concat(wiki_codes, axis=0, ignore_index=True)
wiki_codes.head()
wiki_codes.shape
# Merge these results onto data, so you only query lines that are missing
data = pd.read_csv('clean/politicians.csv')
data['rev_id'] = np.round(data['rev_id'])
data.head()
wiki_codes = wiki_codes[['rev_id', 'score']]
scored_data = data.merge(wiki_codes, on = 'rev_id', how = 'outer')
scored_data.head()
# Save this data out
has_scores = scored_data.loc[~scored_data.score.isnull()]
has_scores.to_csv('clean/pages_with_scores.csv')
# Pull out the missing lines, and query the database for their scores.
missing_scores = scored_data.loc[scored_data.score.isnull()]
missing_scores = missing_scores.drop_duplicates()
missing_scores.head()
missing_scores.shape
# Save the results you were unable to score to disk
missing_scores.to_csv('clean/unable_to_score_pages.csv')
# Iterate through the entire dataset, and save all query results
# There are 277 pages that couldn't be scored. Iterate through this loop again if you find more than this.
if missing_scores.shape[0] > 277:
# Create a datetime string for data saving
date = datetime.today().strftime("%Y_%m_%d_%H_%M_%S")
os.makedirs(f"api_queries_raw/{date}", exist_ok = True)
os.makedirs(f"cleaned_queries/{date}", exist_ok = True)
# Iterate through missing data
step_size = 50
for i in range(0, missing_scores.shape[0], step_size):
        start_idx = i # First start index will be 0, then 50, 100, etc.
        end_idx = i + step_size # Exclusive end index: the first batch covers rows 0-49, then 50-99, etc.
if (end_idx > missing_scores.shape[0]):
print("Reached the end!")
end_idx = missing_scores.shape[0] # If you've reached the end, only query the remaining IDs available
query_api_batch(start_idx, end_idx, missing_scores, date)
print(f"Start at idx {start_idx}, end at idx {end_idx}")
else:
print("All pages have been scored!")
###Output
All pages have been scored!
###Markdown
4. Combining the datasets Now, I'll merge the scored pages with the population data.
###Code
scored_politicians = pd.read_csv('clean/pages_with_scores.csv')
population = pd.read_csv('clean/population.csv')
# Do an outer merge on the 'country' column, so nonmatching observations are kept.
results = scored_politicians.merge(population, on='country', how='outer')
results = results[['page', 'country', 'rev_id', 'score', 'FIPS', 'Type', 'TimeFrame', 'Data (M)', 'Population']]
results.head()
# Write to disk any rows that did not exist in both datasets
no_match = results.loc[(results.Population.isnull()) | (results.score.isnull())]
no_match.to_csv("clean/wp_wpds_countries-no_match.csv")
# Save the results that did match.
match = results.loc[~results.rev_id.isin(no_match.rev_id)]
match = match[['country', 'page', 'rev_id', 'score', 'Population']]
match.columns = ['country', 'article_name', 'revision_id', 'article_quality_est', 'population']
match.to_csv("clean/wp_wpds_politicians_by_country.csv")
###Output
_____no_output_____
###Markdown
5. Analysis For the analysis, I will calculate the proportion of articles per population and the proportion of high-quality articles for each country and geographic region. I define "high quality" as having either an "FA" or "GA" score.
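As a small sketch of both metrics on a toy frame (hypothetical numbers; the cells below apply the same logic to the real data):

```python
import pandas as pd

toy = pd.DataFrame({
    'country': ['A', 'A', 'B'],
    'population': [1000, 1000, 500],  # hypothetical populations
    'article_quality_est': ['GA', 'Stub', 'FA']
})
summary = toy.groupby(['country', 'population']).agg(
    num_articles=('article_quality_est', 'size'),
    num_high_quality=('article_quality_est', lambda s: s.isin(['GA', 'FA']).sum())
).reset_index()
summary['articles_per_population'] = summary['num_articles'] / summary['population'] * 100
summary['pct_quality_articles'] = summary['num_high_quality'] / summary['num_articles'] * 100
```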
###Code
match.head()
match.article_quality_est.unique()
# First, calculate the total number of articles for each country and divide by its population.
articles_per_population = match.copy()
articles_per_population['num_articles'] = 1 # Create a count variable to collapse by
articles_per_population = articles_per_population.groupby(['country', 'population'])['num_articles'].sum().reset_index()
# Then, generate the proportion of "number of articles" / "population" as a percentage
articles_per_population['articles_per_population'] = (articles_per_population['num_articles'] / articles_per_population['population'])*100
articles_per_population.head()
# Now, calculate the percentage of high quality articles.
# Out of the total number of articles in a country, how many are high-quality?
article_quality = match.copy()
article_quality['num_articles'] = 1
article_quality['quality_article'] = article_quality.article_quality_est.isin(['GA', 'FA']).astype('int')
# Now, sum these two columns and create the proportion column
article_quality = article_quality.groupby('country').agg({'num_articles':'sum', 'quality_article':'sum'}).reset_index()
article_quality['pct_quality_articles'] = (article_quality['quality_article'] / article_quality['num_articles'])*100
article_quality.head()
# Finally, prepare regional aggregates. First, make a map of regions to countries.
regions_to_countries = pd.read_csv('clean/population.csv')
regions_to_countries.head()
# Each sub-region is a header over the countries it contains. So, extend these down.
regions_to_countries['region'] = regions_to_countries['country']
regions_to_countries.loc[regions_to_countries.Type!='Sub-Region', 'region'] = np.nan
regions_to_countries['region'] = regions_to_countries['region'].fillna(method = 'ffill')
regions_to_countries = regions_to_countries[['country', 'region']]
# Finally, grab the regional population from the original dataset
regional_pops = pd.read_csv('clean/population.csv')
regional_pops = regional_pops.loc[regional_pops.Type == 'Sub-Region', ['country', 'Population']]
regional_pops.columns = ['region', 'regional_population']
regional_pops.head()
regions_to_countries = regions_to_countries.merge(regional_pops, on = 'region', how = 'left')
# For some reason the Channel Islands got coded as a region? Drop this.
regions_to_countries = regions_to_countries.loc[regions_to_countries.region != "Channel Islands"]
regions_to_countries.region.unique()
regions_to_countries.to_csv('clean/regions_to_countries_map.csv')
# Merge this regional data onto both the articles-per-population and article-quality datasets.
# Only keep the rows from the original dataset, so aggregate region names will be dropped.
articles_per_population = articles_per_population.merge(regions_to_countries, on = 'country', how = 'left')
articles_per_population.head()
article_quality = article_quality.merge(regions_to_countries, on = 'country', how = 'left')
article_quality.head()
###Output
_____no_output_____
###Markdown
6. Results Table 1: 10 highest-ranked countries in terms of articles-per-population
###Code
articles_per_population = articles_per_population.sort_values(by='articles_per_population', ascending = False)
articles_per_population.head(10)
###Output
_____no_output_____
###Markdown
Table 2: 10 lowest-ranked countries in terms of articles-per-population
###Code
articles_per_population = articles_per_population.sort_values(by='articles_per_population', ascending = True)
articles_per_population.head(10)
###Output
_____no_output_____
###Markdown
Table 3: 10 highest-ranked countries in terms of quality article percentage
###Code
article_quality = article_quality.sort_values(by='pct_quality_articles', ascending = False)
article_quality.head(10)
###Output
_____no_output_____
###Markdown
Table 4: 10 lowest-ranked countries in terms of quality article percentage
###Code
article_quality = article_quality.sort_values(by='pct_quality_articles', ascending = True)
article_quality.head(10)
###Output
_____no_output_____
###Markdown
Table 5: Ranking of geographic regions (in descending order) in terms of total count of politician articles over regional population
###Code
# Collapse the articles per population dataset by regions and recalculate the indicator.
# Important: Use the regional population as the denominator this time.
articles_per_population = articles_per_population.groupby(['region', 'regional_population'])['num_articles'].sum().reset_index()
articles_per_population['articles_per_population'] = (articles_per_population['num_articles'] / articles_per_population['regional_population'])*100
articles_per_population = articles_per_population.sort_values(by = 'articles_per_population', ascending = False)
articles_per_population.head(10)
###Output
_____no_output_____
###Markdown
Table 6: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles that are of high quality (ranked FA or GA)
###Code
# Collapse the article_quality dataset by regions and recalculate the indicator.
article_quality = article_quality.groupby('region').agg({'num_articles':'sum', 'quality_article':'sum', 'regional_population': 'mean'}).reset_index()
article_quality['pct_quality_articles'] = (article_quality['quality_article'] / article_quality['num_articles'])*100
article_quality = article_quality.sort_values(by = 'pct_quality_articles', ascending = False)
article_quality.head(10)
###Output
_____no_output_____
###Markdown
Import all required packages.
###Code
"""
This code file creates homework assignment #2
Gary Gregg
DATA 512A
University of Washington
Autumn 2017
"""
import numpy as np
import csv
import matplotlib.pyplot as plt
plt.rcdefaults()
import os.path
import requests
###Output
_____no_output_____
###Markdown
Although they refer to the same country, some of the country names in the article data file do not match those in the population data file. Create a map from one to the other.
###Code
# Country Map
COUNTRY_MAP = {
"East Timorese" : "Timor-Leste",
"Hondura" : "Honduras",
"Rhodesian" : "Zimbabwe",
"Salvadoran" : "El Salvador",
"Samoan" : "Samoa",
"São Tomé and Príncipe" : "Sao Tome and Principe",
"Somaliland" : "Somalia",
"South African Republic" : "South Africa",
"South Korean" : "Korea, South"
}
###Output
_____no_output_____
###Markdown
Below, declare field offsets for each kind of record list we use. The offsets for the 'augmented' data file are the same as those for the article (or page) data file, except that article quality and country population are appended. The count record contains synthesized data that needs to be graphed, including the two required percentage values. The article (or page) data file is one of our input files, as is the population data file.
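As a quick illustration of how these offsets are used (the constants are defined in the next cell; the row values here are hypothetical):

```python
# A hypothetical augmented page-data row, indexed with the offsets defined below
row = ['Kenya', 'Example Politician', '12345', 'GA', '46000000']
country = row[AUG_COUNTRY]    # 'Kenya'
quality = row[AUG_QUALITY]    # 'GA'
```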
###Code
# Augmented Page Data Fields
AUG_COUNTRY = 0
AUG_PAGE = 1
AUG_REVISION_ID = 2
AUG_QUALITY = 3
AUG_POPULATION = 4
# Count Fields
CNT_COUNTRY = 0
CNT_POPULATION = 1
CNT_ARTICLES = 2
CNT_ARTICLE_PCT = 3
CNT_HQ_ARTICLES = 4
CNT_HQ_ARTICLE_PCT = 5
# Page Data Fields
PDT_COUNTRY = 0
PDT_PAGE = 1
PDT_REVISION_ID = 2
# Population Fields
POP_COUNTRY = 0
POP_CNT_TYPE = 1
POP_TIMEFRAME = 2
POP_DATATYPE = 3
POP_DATA = 4
POP_FOOTNOTES = 5
###Output
_____no_output_____
###Markdown
Declare and initialize global constants.
###Code
# Miscellaneous Constants
AUGMENTED_PAGE_DATA_PATH = 'data-512-a2.csv'
DEFAULT_ROW_COUNT = 2
MODEL = 'wp10'
PAGE_DATA_PATH = 'page_data.csv'
POPULATION_DATA_PATH = 'Population Mid-2015.csv'
PER_CALL = 140
PROJECT = 'enwiki'
###Output
_____no_output_____
###Markdown
The augment_page_data function augments rows from the article data file with a quality rating and a country population. The quality rating is supplied by a quality dictionary indexed by revision ID, and the population dictionary is indexed by country name.
###Code
def augment_page_data(page_data, quality_dictionary, population_dictionary):
"""
Augments page data with article quality and the population of the country
in which the subject resides.
@param page_data: The original page data
@type page_data: list
@param quality_dictionary: An article quality dictionary, indexed by
revision ID
@type quality_dictionary: dict
@param population_dictionary: A population dictionary, indexed by country
@type population_dictionary: dict
@return: Page data augmented with article quality and the population of the
country in which the subject resides
@rtype: list
"""
# Declare and initialize a dictionary of missing countries, and a list to
# received the augmented page data.
missing = {}
new_page_data = [['country',
'article_name',
'revision_id',
'article_quality',
'population']]
# Cycle for each row in the page data.
for index in range(1, len(page_data)):
# Get the indexed row. Get the article revision and country name
# for the first/next row.
row = page_data[index]
article_revision = row[PDT_REVISION_ID]
country_name = get_country(row[PDT_COUNTRY])
# Write a message if the article revision is not in the quality
# dictionary. This really should not happen.
if article_revision not in quality_dictionary:
print('Missing quality entry for revision ID \'%s\'.' %
article_revision)
# The article revision is in the quality dictionary.
else:
# Initialize, or increment the count of articles for the
# given country name if the country name is not in the
# population dictionary.
if country_name not in population_dictionary:
if country_name not in missing:
missing[country_name] = 1
else:
missing[country_name] += 1
# The country is in the population dictionary. Create
# an augmented page data row.
else:
new_page_data.append([country_name,
row[PDT_PAGE],
article_revision,
quality_dictionary[article_revision],
population_dictionary[country_name]])
# Describe the counts of articles for 'countries' that were missing
# a population in the population dictionary. Return the augmented page
# data.
print('The following is the counts of articles about persons in countries '
'that are missing a registered population: %s' % missing)
return new_page_data
###Output
_____no_output_____
###Markdown
The build_country_to_population function builds a dictionary of country names to the corresponding country's population.
###Code
def build_country_to_population(country_data):
"""
Builds a dictionary of countries to their populations.
@param country_data: A list of countries with name as the first field,
and population as the fifth field
@type country_data: list
@return: A dictionary of countries to their population
@rtype: dict
"""
# Declare and initialize the population dictionary, and cycle for each
# country in the list.
population_dictionary = {}
for index in range(3, len(country_data) - 1):
# Add a new dictionary for the first/next country.
population_dictionary[country_data[index][POP_COUNTRY]] =\
int(country_data[index][POP_DATA].replace(',', ''))
# Return the population dictionary.
return population_dictionary
###Output
_____no_output_____
###Markdown
The calculate_percentages function calculates, for each country, the percentage of articles relative to population, and the percentage of good articles relative to the total article count.
###Code
def calculate_percentages(counts):
"""
Calculates the percentage of articles per population, and the percentage of
high-quality articles for a country dictionary, list or tuple.
@param counts: A country dictionary, list or tuple
@type counts: dict, list or tuple
@return: None
@rtype: None
"""
# Declare and initialize a percent multiplier. Cycle for each country.
percent = 100.
for country in counts:
# Get the value list for the first/next country. Get the article count
# and population from the list.
value = counts[country]
article_count = value[CNT_ARTICLES]
population = value[CNT_POPULATION]
# Calculate the percentage of articles per population if the population
# is greater than zero.
if population > 0:
value[CNT_ARTICLE_PCT] = article_count / population * percent
# Calculate the percentage of high-quality articles if there are one or
# more articles.
if article_count > 0:
value[CNT_HQ_ARTICLE_PCT] = value[CNT_HQ_ARTICLES] / article_count * percent
# Done, so return.
return
###Output
_____no_output_____
###Markdown
The construct_display_values function constructs two lists: one with the top (or bottom) ten values to be displayed along the horizontal axis (country names), and one with the top (or bottom) ten values to be displayed along the vertical axis (percentages). The function receives the indices of the display fields as its last two arguments.
###Code
def construct_display_values(value_list, horizontal, vertical):
"""
Constructs two lists of display values, one for the horizontal axis, and
one for the vertical axis.
@param value_list: A list containing the display values
@type value_list: list or tuple
@param horizontal: The index of the horizontal display values
@type horizontal: int
@param vertical: The index of the vertical display values
@type vertical: int
@return: Two lists
@rtype: list
"""
# Declare and initialize the lists to be returned. Cycle for the number of
# items to be displayed.
horizontal_list = []
vertical_list = []
for i in range(0, 10):
# Add the values for the first/next item in the list.
horizontal_list.append(value_list[i][horizontal])
vertical_list.append(value_list[i][vertical])
return horizontal_list, vertical_list
###Output
_____no_output_____
###Markdown
The function create_assignment_2 is the starting point for this show. It creates the augmented article (or page) data file, calculates the percentages to be displayed, then creates the four bar graphs required by the assignment.
###Code
def create_assignment_2():
"""
Creates homework assignment #2. No input parameters or return value.
Everything this function does is a side-effect.
@return: None
@rtype: None
"""
# Create the country list.
country_list = list(create_country_dictionary().values())
# Sort the country list by descending article/population percentage, and
# graph it.
sort_and_display(country_list, get_article_percentage, CNT_ARTICLE_PCT,
True,
'Highest-Ranked Countries in Terms of Number of Politician '
'Articles as a Proportion of Country Population')
# Sort the country list by ascending article/population percentage, and
# graph it.
sort_and_display(country_list, get_article_percentage, CNT_ARTICLE_PCT,
False,
'Lowest-Ranked Countries in Terms of Number of Politician '
'Articles as a Proportion of Country Population')
# Sort the country list by descending high-quality/all-article percentage,
# and graph it.
sort_and_display(country_list, get_hq_article_percentage, CNT_HQ_ARTICLE_PCT,
True,
'Highest-Ranked Countries in Terms of Number of GA and '
'FA-Quality Articles as a Proportion of all Articles '
'About Politicians from that Country')
# Sort the country list by ascending high-quality/all-article percentage,
# and graph it.
sort_and_display(country_list, get_hq_article_percentage, CNT_HQ_ARTICLE_PCT,
False,
'Lowest-Ranked Countries in Terms of Number of GA and '
'FA-Quality Articles as a Proportion of all Articles '
'About Politicians from that Country')
###Output
_____no_output_____
###Markdown
The function create_augmented_page_data creates the augmented page data file from the article data file and the population data file.
###Code
def create_augmented_page_data():
"""
Creates the augmented page data file.
@return: None
@rtype: None
"""
# Read the page data from CSV. Create the page quality map, and the
# country-to-population map. Using all of these, create the augmented
# page data and write it to CSV.
page_data = read_from_csv(PAGE_DATA_PATH)
write_to_csv(AUGMENTED_PAGE_DATA_PATH,
augment_page_data(page_data,
get_quality_all(page_data, 101),
build_country_to_population(
read_from_csv(POPULATION_DATA_PATH))))
###Output
_____no_output_____
###Markdown
The function create_country_dictionary creates a dictionary of statistics about a country, indexed by the country name.
###Code
def create_country_dictionary():
"""
Creates a dictionary of countries, and statistics about them.
Precondition: The augmented page data file exists, and is formatted
correctly.
@return: A dictionary of countries, and statistics about them
@rtype: dict
"""
# Here is the current list of fields for values in the dictionary:
#
# CNT_COUNTRY
# CNT_POPULATION
# CNT_ARTICLES
# CNT_ARTICLE_PCT
# CNT_HQ_ARTICLES
# CNT_HQ_ARTICLE_PCT
# Initialize an empty country dictionary. Read rows from the augmented
# page data file.
country_dictionary = {}
augmented_page_data = read_augmented_csv()
# Delete the header row from the augmented page data. Cycle for each
# remaining row in the file.
del augmented_page_data[0]
for data_row in augmented_page_data:
# Extract the country name from the row. Is there an existing entry
# in the country dictionary. Get it if so.
country = data_row[AUG_COUNTRY]
if country in country_dictionary:
country_row = country_dictionary[country]
# There is no existing entry in the country dictionary. Create one
# with initial values.
else:
country_row = [country, int(data_row[AUG_POPULATION]),
0, 0., 0, 0.]
# Increment the count of articles for the given country.
country_row[CNT_ARTICLES] += 1
# Get the quality from the data row. Increment the count of high-
# quality articles if the article has a high-quality rating.
quality = data_row[AUG_QUALITY]
if quality == 'FA' or quality == 'GA':
country_row[CNT_HQ_ARTICLES] += 1
# Return, or add the country value to the country dictionary indexed
# by country.
country_dictionary[country] = country_row
# Calculate the percentage of articles per population, and the percentage
# of high-quality articles.
calculate_percentages(country_dictionary)
return country_dictionary
###Output
_____no_output_____
###Markdown
The function display_barchart displays a barchart given the horizontal axis values, the vertical axis values, and a graph title.
###Code
def display_barchart(horizontal_values, vertical_values, title):
"""
Displays a barchart of country to percentage.
@param horizontal_values: Country names to be displayed along the
horizontal axis
@type horizontal_values: str
@param vertical_values: Percentages to be displayed along the vertical axis
@type vertical_values: float
@param title: Title for the graph
@type title: str
@return: None
@rtype: None
"""
# Set the figure size. Declare and initialize an array of evenly spaced
# values. Construct the plot.
plt.figure(figsize=(20, 10))
y_position = np.arange(len(horizontal_values))
plt.bar(y_position, vertical_values, align='center', alpha=1.0,
color=['#66cdaa']) # Color is Medium Aquamarine
# Set the x-ticks, the x-label and the y-label.
plt.xticks(y_position, horizontal_values)
plt.xlabel('Country Name')
plt.ylabel('Percentage')
# Set the title, and show the graph.
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
The function get_article_percentage gets the article percentage field from a row in a country statistics list. This method is used by 'sorted' to sort the country statistics list by the article percentage field.
###Code
def get_article_percentage(country):
"""
Gets the percentage of articles to population from a list.
@param country: A country attributes entry
@type country: list
@return: Percentage of articles to population
@rtype: float
"""
return country[CNT_ARTICLE_PCT]
###Output
_____no_output_____
###Markdown
The function get_article_quality uses the Wikimedia ORES API to retrieve article quality for a series of articles given by revision ID.
###Code
def get_article_quality(article_quality, revision_ids):
"""
Gets predicted article quality for a series of revision IDs. Returns a dictionary
indexed by revision ID. Possible values for each revision ID are:
FA - Featured article
GA - Good article
B - B-class article
C - C-class article
Start - Start-class article
Stub - Stub-class article
@param article_quality: An existing dictionary of revision IDs to
article quality
@type article_quality: dictionary
@param revision_ids: A series of revision IDs
@type revision_ids: list or tuple
@return: article_quality
@rtype: dict
"""
# Hardcoded endpoint for the ORES API
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# The parameters to be passed to the ORES API
params = {'project': PROJECT,
'model': MODEL,
'revids': '|'.join(str(x) for x in revision_ids)
}
# Call the API, and return the response as JSON.
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
# Build and return a dictionary of article quality predictions
# indexed by revision ID. Return the article quality dictionary.
for key, value in response[PROJECT]['scores'].items():
article_quality[key] = value[MODEL]['score']['prediction']
return article_quality
###Output
_____no_output_____
###Markdown
The function get_country uses COUNTRY_MAP to translate country names from the article data file into the names used in the population data file. If a mapping exists, the mapped name is returned; otherwise the name from the article data file is returned unchanged.
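For reference, the lookup below boils down to a dictionary access with a default; a one-line sketch using the COUNTRY_MAP defined earlier:

```python
# Fall back to the original name when no mapping exists
mapped = COUNTRY_MAP.get('Hondura', 'Hondura')   # -> 'Honduras'
unmapped = COUNTRY_MAP.get('Canada', 'Canada')   # -> 'Canada' (no entry, returned unchanged)
```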
###Code
def get_country(country):
"""
Determines if a given country is mapped to another name.
@param country: A given country
@type country: str
@return: A mapped country name if a name exists in the country map,
the unmapped parameter otherwise
@rtype: str
"""
# Reset the country name if a name exists in the country map, and
# return the country.
if country in COUNTRY_MAP:
country = COUNTRY_MAP[country]
return country
###Output
_____no_output_____
###Markdown
The function get_hq_article_percentage gets the high-quality article percentage field from a row in a country statistics list. This method is used by 'sorted' to sort the country statistics list by the high-quality article percentage field.
###Code
def get_hq_article_percentage(country):
"""
Gets the percentage of high-quality articles from a list.
@param country: A country attributes entry
@type country: list
@return: Percentage of high-quality articles from a list
@rtype: float
"""
return country[CNT_HQ_ARTICLE_PCT]
###Output
_____no_output_____
###Markdown
I discovered that the Wikimedia ORES API will not successfully return quality statistics for more than about 140 articles at a time. The get_quality_all function will get quality statistics for all rows in the article data file by calling the ORES API as many times as it needs to, asking for no more than 140 articles at a time.
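The batching arithmetic is just integer division plus a remainder check; a short sketch with a hypothetical row count:

```python
PER_CALL = 140
last_index = 47197                    # hypothetical number of article rows
full_calls = last_index // PER_CALL   # 337 full batches of 140 revision IDs
leftover = last_index % PER_CALL      # 17 rows handled by one final, smaller call
print(full_calls, leftover)
```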
###Code
def get_quality_all(page_data, last_index=DEFAULT_ROW_COUNT):
"""
Gets article quality for all revision IDs in a page data list, up
to a given maximum.
@param page_data: A page data list, formatted with revision ID as the
third element in each row
@type page_data: list or tuple
@param last_index: The last index to consider
@type last_index: int
@return: article_quality
@rtype: dict
"""
    # Use the full length of the page data if the last index is less than
# a minimum number of rows.
if last_index <= DEFAULT_ROW_COUNT:
last_index = len(page_data)
# Declare and initialize the quality dictionary, and determine the number
# of iterative calls.
quality_dictionary = {}
calls = last_index // PER_CALL
# Declare and initialize the base index, and cycle for the given number of
# full calls required to retrieve the indicated number of rows.
base = 1
for _ in range(0, calls):
# Calculate the last index, and print a message.
count = base + PER_CALL
print('Retrieving quality rating for articles %d to %d...'
% (base, count - 1))
# Update the quality dictionary.
quality_dictionary = make_quality_call(quality_dictionary,
page_data,
base,
count)
# Update the base index.
base = count
# Is the base index less than the last index? If so, there is
# a remaining number of rows...
if base < last_index:
# Print a message.
print('Retrieving quality rating for articles %d to %d...' %
(base, last_index - 1))
# Update the quality dictionary with the remaining number of rows.
quality_dictionary = make_quality_call(quality_dictionary,
page_data,
base,
last_index)
    # Describe how long the dictionary is, and return it.
print('Length of quality dictionary is %d' % len(quality_dictionary))
return quality_dictionary
###Output
_____no_output_____
###Markdown
The function make_quality_call assists in batch calling of the ORES API by creating a list of only the desired 140 revision IDs. It then calls ORES with these revision IDs, and adds the results to an existing article quality dictionary, which is built up until the quality ratings have been retrieved for all the articles.
###Code
def make_quality_call(existing_dictionary, page_data, start, stop):
"""
Makes a call to get article quality for a given set of indices into a page
data list.
@param existing_dictionary: An existing dictionary of quality entries
indexed by revision ID
@type existing_dictionary: dictionary
@param page_data: A page data list, formatted with revision ID as the
third element in each row
@type page_data: list or tuple
@param start: The first index to use, inclusive
@type start: int
@param stop: The last index, exclusive
@type stop: int
@return: article_quality
@rtype: dict
"""
# Declare and initialize an empty list of revision IDs. Cycle for each row
# in the given range. Append the first/next ID to the list.
ids = []
for row in range(start, stop):
ids.append(page_data[row][PDT_REVISION_ID])
# Get article quality for the selected revision IDs.
return get_article_quality(existing_dictionary, ids)
###Output
_____no_output_____
###Markdown
The read_augmented_csv function reads the augmented data file, which is the article data file with quality ratings and country population counts appended to each row. Note that the function will use an existing augmented data file if it exists, or create a new one if it does not.
###Code
def read_augmented_csv():
"""
Reads fields from the augmented page data file.
@return: The rows read from the file
@rtype: list
"""
# Create the augmented page data file if it does not already exist.
if not os.path.isfile(AUGMENTED_PAGE_DATA_PATH):
create_augmented_page_data()
# Read the file, and return the rows.
return read_from_csv(AUGMENTED_PAGE_DATA_PATH)
###Output
_____no_output_____
###Markdown
The function read_from_csv reads rows from a CSV file.
###Code
def read_from_csv(file_name):
"""
Reads fields from a CSV file.
@param file_name: A file path.
@type file_name: str
@return: The rows read from the file
@rtype: list
"""
    # Declare and initialize an empty row list. Open a CSV reader using the
# given file name.
row_list = []
with (open(file_name)) as csvfile:
reader = csv.reader(csvfile)
# Append the row for each row read by the reader.
for row in reader:
row_list.append(row)
# Return the row list.
return row_list
###Output
_____no_output_____
###Markdown
The function sort_and_display sorts a country statistics list using a supplied sort function, then displays the top (or bottom) rows of the list with the percentage indicated by the percentage_index argument. The sort can be either ascending or descending, and the resulting display has the indicated title.
###Code
def sort_and_display(value_list, sorter, percentage_index, descending, title):
"""
Sorts a values list, and displays a barchart.
@param value_list: A list of values
@type value_list: list
@param sorter: The key function to use for sort
@type sorter: function
@param percentage_index: The index of the desired percentage in the value
list
@type percentage_index: int
@param descending: True to sort largest to smallest, false otherwise
@type descending: bool
@param title: The title of the resulting graph
@type title: str
@return: None
@rtype: None
"""
# Sort the value list. Extract values for the horizontal and vertical
# axes.
value_list = sorted(value_list, key=sorter, reverse=descending)
horizontal, vertical = construct_display_values(value_list,
CNT_COUNTRY,
percentage_index)
# Display a barchart with the extracted values and the given title.
display_barchart(horizontal, vertical, title)
###Output
_____no_output_____
###Markdown
The function write_to_csv writes a row list in CSV format to a file with the indicated name. In particular, this function is used to create the augmented data file.
###Code
def write_to_csv(file_name, row_list):
"""
Writes fields to a CSV file.
@param file_name: A file path.
@type file_name: str
@param row_list: The rows to write to the file
@type row_list: list
"""
# Open a CSV writer using the given file name. Write the given rows.
with(open(file_name, 'w')) as csvfile:
writer = csv.writer(csvfile)
writer.writerows(row_list)
###Output
_____no_output_____
###Markdown
Here is where the whole thing starts: Call create_assignment_2!
###Code
# Create the products for homework assignment #2.
create_assignment_2()
###Output
_____no_output_____
###Markdown
Get the data
###Code
import pandas as pd
# Downloaded from https://figshare.com/articles/dataset/Untitled_Item/5513449
page_df = pd.read_csv('country/data/page_data.csv')
page_df.columns
page_df
pop_df = pd.read_csv('WPDS_2020_data.csv')
pop_df#.columns
pop_df
###Output
_____no_output_____
###Markdown
Clean the data- Ignore articles with names beginning with 'Template:'- Set aside regional populations (names in all CAPS)
###Code
page_df = page_df[~page_df['page'].str.startswith('Template:')]
page_df
region_pop_df = pop_df[(pop_df['FIPS'].str.len() > 2) & (pop_df['FIPS'].str.isupper())]
region_pop_df
country_pop_df = pop_df[~(pop_df['Name'].str.isupper())]
country_pop_df
region_to_country_dict = {'NORTHERN AFRICA':['Algeria', 'Egypt', 'Libya', 'Morocco', 'Sudan', 'Tunisia', 'Western Sahara'],
'WESTERN AFRICA':['Benin', 'Burkina Faso', 'Cape Verde', "Cote d'Ivoire", 'Gambia',
'Ghana', 'Guinea', 'Guinea-Bissau', 'Liberia', 'Mali', 'Mauritania',
'Niger', 'Nigeria', 'Senegal', 'Sierra Leone', 'Togo'],
'EASTERN AFRICA':['Burundi', 'Comoros', 'Djibouti', 'Eritrea', 'Ethiopia', 'Kenya',
'Madagascar', 'Malawi', 'Mauritius', 'Mayotte', 'Mozambique', 'Reunion',
'Rwanda', 'Seychelles', 'Somalia', 'South Sudan', 'Tanzania', 'Uganda',
'Zambia', 'Zimbabwe'],
'MIDDLE AFRICA':['Angola', 'Cameroon', 'Central African Republic', 'Chad', 'Congo',
'Congo, Dem. Rep.', 'Equatorial Guinea', 'Gabon', 'Sao Tome and Principe'],
'SOUTHERN AFRICA':['Botswana', 'eSwatini', 'Lesotho', 'Namibia', 'South Africa'],
'NORTHERN AMERICA':['Canada', 'United States'],
'CENTRAL AMERICA':['Belize', 'Costa Rica', 'El Salvador', 'Guatemala', 'Honduras', 'Mexico',
'Nicaragua', 'Panama'],
'CARIBBEAN':['Antigua and Barbuda', 'Bahamas', 'Barbados', 'Cuba', 'Curacao', 'Dominica',
'Dominican Republic', 'Grenada', 'Guadeloupe', 'Haiti', 'Jamaica', 'Martinique',
'Puerto Rico', 'St. Kitts-Nevis', 'Saint Lucia', 'St. Vincent and the Grenadines',
'Trinidad and Tobago'],
'SOUTH AMERICA':['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'French Guiana', 'Guyana', 'Paraguay', 'Peru', 'Suriname', 'Uruguay',
'Venezuela'],
'WESTERN ASIA':['Armenia', 'Azerbaijan', 'Bahrain', 'Cyprus', 'Georgia', 'Iraq', 'Israel',
'Jordan', 'Kuwait', 'Lebanon', 'Oman', 'Palestinian Territory', 'Qatar',
'Saudi Arabia', 'Syria', 'Turkey', 'United Arab Emirates', 'Yemen'],
'CENTRAL ASIA':['Kazakhstan', 'Kyrgyzstan', 'Tajikistan', 'Turkmenistan', 'Uzbekistan'],
'SOUTH ASIA':['Afghanistan', 'Bangladesh', 'Bhutan', 'India', 'Iran', 'Maldives', 'Nepal',
'Pakistan', 'Sri Lanka'],
'SOUTHEAST ASIA':['Brunei', 'Cambodia', 'Indonesia', 'Laos', 'Malaysia', 'Myanmar',
'Philippines', 'Singapore', 'Thailand', 'Timor-Leste', 'Vietnam'],
'EAST ASIA':['China', 'China, Hong Kong SAR', 'China, Macao SAR', 'Japan', 'Korea, North',
'Korea, South', 'Mongolia', 'Taiwan'],
'NORTHERN EUROPE':['Channel Islands', 'Denmark', 'Estonia', 'Finland', 'Iceland', 'Ireland',
'Latvia', 'Lithuania', 'Norway', 'Sweden', 'United Kingdom'],
'WESTERN EUROPE':['Austria', 'Belgium', 'France', 'Germany', 'Liechtenstein', 'Luxembourg',
'Monaco', 'Netherlands', 'Switzerland'],
'EASTERN EUROPE':['Belarus', 'Bulgaria', 'Czechia', 'Hungary', 'Moldova', 'Poland',
'Romania', 'Russia', 'Slovakia', 'Ukraine'],
'SOUTHERN EUROPE':['Albania', 'Andorra', 'Bosnia-Herzegovina', 'Croatia', 'Greece', 'Italy',
'Kosovo', 'Malta', 'Montenegro', 'North Macedonia', 'Portugal',
'San Marino', 'Serbia', 'Slovenia', 'Spain'],
'OCEANIA':['Australia', 'Federated States of Micronesia', 'Fiji', 'French Polynesia',
'Guam', 'Kiribati', 'Marshall Islands', 'Nauru', 'New Caledonia', 'New Zealand',
'Palau', 'Papua New Guinea', 'Samoa', 'Solomon Islands', 'Tonga', 'Tuvalu',
'Vanuatu']
}
region_to_region_dict = {'AFRICA': region_to_country_dict['NORTHERN AFRICA'] +
region_to_country_dict['WESTERN AFRICA'] +
region_to_country_dict['EASTERN AFRICA'] +
region_to_country_dict['MIDDLE AFRICA'] +
region_to_country_dict['SOUTHERN AFRICA'],
'LATIN AMERICA AND THE CARIBBEAN': region_to_country_dict['CENTRAL AMERICA'] +
region_to_country_dict['CARIBBEAN'] +
region_to_country_dict['SOUTH AMERICA'],
'ASIA': region_to_country_dict['WESTERN ASIA'] +
region_to_country_dict['CENTRAL ASIA'] +
region_to_country_dict['SOUTH ASIA'] +
region_to_country_dict['SOUTHEAST ASIA'] +
region_to_country_dict['EAST ASIA'],
'EUROPE': region_to_country_dict['NORTHERN EUROPE'] +
region_to_country_dict['WESTERN EUROPE'] +
region_to_country_dict['EASTERN EUROPE'] +
region_to_country_dict['SOUTHERN EUROPE']
}
###Output
_____no_output_____
###Markdown
Get page quality predictions from ORES
###Code
import json
import requests
def api_call(endpoint, parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
endpoint_ores = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={rev_id}'
model = 'articlequality'
headers = {
'User-Agent': 'https://github.com/castlea',
'From': '[email protected]'
}
preds_df = pd.DataFrame(columns=['revid', 'prediction'])
could_not_score = pd.DataFrame(columns=['revid', 'message'])
print("Progress counter:")
for x in range(0, page_df.shape[0], 50):
if x%5000 == 0:
print(x)
revids_to_score = page_df.iloc[x:x+50, 2]
batch_of_50 = api_call(endpoint_ores, {"rev_id": "|".join(list(str(x) for x in revids_to_score))})
for revid in revids_to_score:
if 'error' not in batch_of_50['enwiki']['scores'][str(revid)]['articlequality'].keys():
preds_df = preds_df.append({'revid': revid, 'prediction': batch_of_50['enwiki']['scores'][str(revid)]['articlequality']['score']['prediction']}, ignore_index = True)
else:
could_not_score = could_not_score.append({'revid':str(revid), 'message':batch_of_50['enwiki']['scores'][str(revid)]['articlequality']['error']}, ignore_index = True)
could_not_score
# write dataframe of pages that weren't scored
could_not_score.to_csv('wp_wpds_pages_no_score.csv')
###Output
_____no_output_____
###Markdown
Combine datasets
###Code
scored_pages = preds_df.merge(page_df, left_on='revid', right_on='rev_id', how='left').drop(columns=['rev_id'])
scored_pages = scored_pages[['country', 'page', 'revid', 'prediction']]
final_df = scored_pages.merge(pop_df[['Name', 'Population']], left_on='country', right_on='Name', how='inner').drop(columns=['Name'])
final_df.columns = ['country', 'article_name', 'revision_id', 'article_quality_est', 'population']
final_df
final_df.to_csv('wp_wpds_politicians_by_country.csv')
could_not_join = scored_pages.merge(pop_df[['Name', 'Population']], left_on='country', right_on='Name', how='outer')
could_not_join = could_not_join[(could_not_join['country'].isnull()) | (could_not_join['Name'].isnull())]
could_not_join
could_not_join.to_csv('wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Analysis
###Code
df = pd.read_csv('wp_wpds_politicians_by_country.csv').drop('Unnamed: 0', axis = 1)
df['sub-region'] = ''
df['region'] = ''
df
for key in region_to_country_dict.keys():
df.loc[df['country'].isin(region_to_country_dict[key]), 'sub-region'] = key
df
for key in region_to_region_dict.keys():
df.loc[df['country'].isin(region_to_region_dict[key]), 'region'] = key
df
per_country = df[['country', 'population', 'revision_id']]
per_country = per_country.groupby(['country']).agg(article_count=('revision_id','count'), population=('population','sum')).reset_index()
per_country['proportion_to_pop'] = per_country['article_count']/per_country['population']
quality = df[['country', 'article_quality_est']][df['article_quality_est'].isin(['FA', 'GA'])].groupby('country').agg(high_quality_count=('article_quality_est', 'count')).reset_index()
per_country = per_country.merge(quality, on='country')
per_country['proportion_high_quality'] = per_country['high_quality_count']/per_country['article_count']
per_country
per_subreg = df[['sub-region', 'population', 'revision_id']].groupby(['sub-region']).agg(article_count=('revision_id','count'), population=('population','sum')).reset_index()
per_subreg['proportion_to_pop'] = per_subreg['article_count']/per_subreg['population']
quality = df[['sub-region', 'article_quality_est']][df['article_quality_est'].isin(['FA', 'GA'])].groupby('sub-region').agg(high_quality_count=('article_quality_est', 'count')).reset_index()
per_subreg = per_subreg.merge(quality, on='sub-region')
per_subreg['proportion_high_quality'] = per_subreg['high_quality_count']/per_subreg['article_count']
per_subreg.columns = ['region','article_count','population','proportion_to_pop','high_quality_count','proportion_high_quality']
#per_subreg
per_reg = df[['region', 'population', 'revision_id']].groupby(['region']).agg(article_count=('revision_id','count'), population=('population','sum')).reset_index()
per_reg['proportion_to_pop'] = per_reg['article_count']/per_reg['population']
quality = df[['region', 'article_quality_est']][df['article_quality_est'].isin(['FA', 'GA'])].groupby('region').agg(high_quality_count=('article_quality_est', 'count')).reset_index()
per_reg = per_reg.merge(quality, on='region')
per_reg['proportion_high_quality'] = per_reg['high_quality_count']/per_reg['article_count']
per_reg = per_reg[per_reg['region']!='']
#per_reg
all_regions = pd.concat([per_reg, per_subreg], axis=0)
all_regions
###Output
_____no_output_____
###Markdown
Results / Conclusion
###Code
import matplotlib.pyplot as plt
import seaborn as sns
visualize = per_country[['country', 'article_count', 'high_quality_count', 'population']].merge(df[['country', 'region']], on='country').reset_index()
fig = plt.subplots(figsize=(15,10))
sns.scatterplot(data=visualize, x='article_count', y='high_quality_count', hue='region', size='population')
plt.show()
visualize = per_country[['country', 'article_count', 'population']].merge(df[['country', 'region']], on='country').reset_index()
fig = plt.subplots(figsize=(15,10))
sns.scatterplot(data=visualize, x="article_count", y="population", hue='region')
plt.show()
###Output
_____no_output_____
###Markdown
Table 1: Top 10 countries by coverage
###Code
per_country[['country', 'proportion_to_pop', 'article_count', 'population']].sort_values(by='proportion_to_pop', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Table 2: Bottom 10 countries by coverage
###Code
per_country[['country', 'proportion_to_pop', 'article_count', 'population']].sort_values(by='proportion_to_pop', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Table 3: Top 10 countries by relative quality
###Code
per_country[['country', 'proportion_high_quality', 'high_quality_count', 'article_count']].sort_values(by='proportion_high_quality', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Table 4: Bottom 10 countries by relative quality
###Code
per_country[['country', 'proportion_high_quality', 'high_quality_count', 'article_count']].sort_values(by='proportion_high_quality', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Table 5: Geographic regions ranked by coverage
###Code
all_regions[['region', 'proportion_to_pop', 'article_count', 'population']].sort_values(by='proportion_to_pop', ascending=False)
###Output
_____no_output_____
###Markdown
Table 6: Geographic regions ranked by relative quality
###Code
all_regions[['region', 'proportion_high_quality', 'high_quality_count', 'article_count']].sort_values(by='proportion_high_quality', ascending=False)
###Output
_____no_output_____
###Markdown
DATA 512: Assignment 1: Data Curation - Created by: Libby Montague - Date: 10/29/2017 - The intent of this script is to collect data on Wikipedia articles about politicians and understand biases in the data. The country of origin of the politician and the population of the country are used to help understand potential biases. - The script accesses the Wikipedia ORES API (please see the README for more information about the license for the data). Additionally, the script requires data from figshare and the Population Research Bureau (see README for more information). - The first output of the script is the wikipedia_ores_data_formatted.csv file with the complete merged data. Format of the file: - country - the country of the politician - article_name - the name of the wikipedia page (article) - revision_id - the wikipedia revision id of the article - article_quality - quality of the article using the ORES classification (FA, GA, B, C, Start, Stub) - population - population of the country - The second output of the script is a figure with four graphs showing the countries with the most and fewest political articles per capita and good-quality political articles per capita. Good-quality articles are defined as those scored 'FA' or 'GA' by ORES. - Copyright 2017 Elizabeth Montague [email protected], under MIT License
###Code
# import libraries
import requests
import json
import pandas as pd
import numpy as nm
import matplotlib.pyplot as plt
import copy
# store the working directory
working_folder='{your working directory}'
###Output
_____no_output_____
###Markdown
1. Data - The data are pulled from 3 sources: - Figshare: the wikipedia data on the politicians by country. The data should be downloaded from: https://figshare.com/articles/Untitled_Item/5513449. The 'page_data.csv' file should be stored in the working folder. The column headers should be: page, country, rev_id. - Population Research Bureau: the population data by country. The data should be downloaded from: http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14. The 'Population Mid-2015.csv' file should be stored in the working directory. The column headers should be: Location, Location Type, TimeFrame, Data Type, Data, Footnotes. The only Location Type should be 'Country' and the only Data Type should be 'Number.' - ORES API: the 'rev_id' from the wikipedia data (Figshare) is used to query the ORES API for the article quality. The predicted score is the only value kept from the call. These results are then stored to a file in the working directory. The API call is slow, so this prevents the user from needing to call the API each time the script is run. - The data are then processed and stored in wikipedia_ores_data_formatted.csv. - Read in the data from the .csv files.
###Code
# read in the .csv data files
# from figshare: https://figshare.com/articles/Untitled_Item/5513449
wikipedia_data=pd.read_csv(working_folder+'page_data.csv')
# from Population Research Bureau: http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14
prb_data=pd.read_csv(working_folder+'Population Mid-2015.csv',skiprows=1,thousands=",")
# rename the columns to make merging easier
prb_data.columns=("country","Location Type","TimeFrame","Data Type","Data","Footnotes")
###Output
_____no_output_____
###Markdown
- Use the revision IDs from the wikipedia data (Figshare) to call the API. - First, the revision IDs need to be broken up into sets of 50 for the batch query. There are > 47,000 revision IDs, so the API needs to be called 959 times. - Since the API call takes more than a couple of minutes, a progress status is printed. Additionally, the data are stored to a .csv file so that the user doesn't need to run the API call multiple times.
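An equivalent, more compact way to build the 50-ID batches is plain list slicing; a sketch using the wikipedia_data frame loaded above that produces the same kind of '|'-joined strings as the loop below:

```python
batch_size = 50
all_rev_ids = list(wikipedia_data['rev_id'])
revids_all = ['|'.join(str(r) for r in all_rev_ids[i:i + batch_size])
              for i in range(0, len(all_rev_ids), batch_size)]
```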
###Code
# break up the revision IDs to be in the limit of 50 for the batch query
# limit according to instructors of UW class 512
i=1
revids_all=list()
revids=""
for revid in wikipedia_data['rev_id']:
if i == 1:
revids=str(revid)
else:
if i % 50 == 0:
revids_all.append(revids)
revids=str(revid)
else:
revids=revids+"|"+str(revid)
i=i+1
## subset for testing
revids_all_subset=list()
revids_all_subset.append(revids_all[1])
revids_all_subset.append(revids_all[2])
revids_all_subset.append(revids_all[3])
# adapted from template for assignment 2 for UW class 512
# note: this takes a while to run therefore I recommend running once and saving to a .csv
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=wp10&revids={revid}'
headers = {'User-Agent' : '{insert your github webpage}', 'From' : '{insert your email}'}
i=0
rows=[]
for revids in revids_all:
    params = {
        'revid' : revids
    }
    # Pass the User-Agent/From headers so the API call is attributable
    api_call = requests.get(endpoint.format(**params), headers=headers)
    #print(endpoint.format(**params))
    response = api_call.json()
    revids_list=revids.split("|")
    for revid in revids_list:
        for k in response["enwiki"]["scores"][str(revid)]["wp10"]:
            if k == "score":
                # Keep only the predicted quality class for each revision ID
                rows.append([revid,response["enwiki"]["scores"][str(revid)]["wp10"]["score"]["prediction"]])
            else:
                print("error with "+str(revid)+", key = "+k)
    if i % 100 == 0:
        print('progress: '+str(i))
    i=i+1
# Stack the collected rows into a single array
result=nm.vstack(rows)
# store the api call - so that the code above doesn't need to be read often
result=pd.DataFrame(result)
result.columns=("rev_id","score")
result.to_csv(working_folder+'wikipedia_ores_data_response.csv',header=True,index=False)
###Output
progress: 0
progress: 100
progress: 200
progress: 300
progress: 400
progress: 500
progress: 600
progress: 700
progress: 800
progress: 900
error with 807367030, key = error
error with 807367166, key = error
###Markdown
- Merge the data from the .csv files and the API call. - Write the final data file to a .csv file in the working directory.
###Code
# merge the data
result['rev_id']=pd.to_numeric(result['rev_id'])
merged_result=pd.merge(result,wikipedia_data,on=['rev_id'],how="inner")
merged_result=pd.merge(merged_result,prb_data,on=['country'],how="inner")
# reorder the columns
merged_result=merged_result[['country','page','rev_id','score','Data']]
# rename the columns
merged_result.columns=('country','article_name','revision_id','article_quality','population')
# write the result to a .csv
merged_result.to_csv(working_folder+'wikipedia_ores_data_formatted.csv',header=True,index=False)
###Output
_____no_output_____
###Markdown
2. Analysis - Calculate the number of articles per capita (per 100,000 people).
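Concretely, the figure computed in the next cell is articles per 100,000 people; a one-line sketch of the formula with hypothetical numbers:

```python
num_articles = 120          # hypothetical article count for one country
population = 5000000        # hypothetical population
articles_by_population = num_articles / population * 100000   # -> 2.4 articles per 100,000 people
print(articles_by_population)
```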
###Code
country_result=pd.DataFrame(nm.vstack((merged_result['country'].value_counts().index,
merged_result['country'].value_counts().values))).T
country_result.columns=('country','num_articles')
country_result=pd.merge(country_result,prb_data,on=['country'],how='inner')
country_result=country_result[['country','num_articles','Data']]
country_result.columns=('country','num_articles','population')
country_result['articles_by_population']=(country_result['num_articles']/pd.to_numeric(country_result['population']))*100000
country_result.index=country_result['country']
###Output
_____no_output_____
###Markdown
- Calculate the percentage of quality articles ('FA' or 'GA') out of the total articles for each country.
###Code
total_articles_country=pd.DataFrame(nm.vstack((merged_result['country'].value_counts().index,
merged_result['country'].value_counts().values))).T
total_articles_country.columns=('country','num_articles')
merged_result_quality=copy.deepcopy(merged_result)
merged_result_quality['article_quality']=merged_result_quality['article_quality'].map({'GA':'good','FA':'good'})
merged_result_quality=merged_result_quality.loc[merged_result_quality.article_quality=='good']
quality_articles_country=pd.DataFrame(nm.vstack((merged_result_quality['country'].value_counts().index,
merged_result_quality['country'].value_counts().values))).T
quality_articles_country.columns=('country','num_quality_articles')
quality_country_result=pd.merge(quality_articles_country,total_articles_country,on=['country'],how='inner')
quality_country_result['per_quality_articles']=(quality_country_result['num_quality_articles']/
quality_country_result['num_articles'])*100
quality_country_result.index=quality_country_result['country']
###Output
_____no_output_____
###Markdown
3. Visualization - Visualize the data based on the top and bottom countries according to the articles per capita and the quality articles per total articles.
###Code
## plot within the jupyter notebook
%matplotlib inline
figure,axis=plt.subplots(nrows=2,ncols=2)
data1=country_result.sort_values(by=['articles_by_population'])[['articles_by_population']][:10]
fig1=data1.plot(kind='barh',ax=axis[1,0],legend=False)
fig1.set_ylabel('')
fig1.set_xlabel('Articles per 100k People')
data2=country_result.sort_values(by=['articles_by_population'],ascending=False)[['articles_by_population']][:10]
data2=data2.sort_values(by=['articles_by_population'],ascending=True)[['articles_by_population']]
fig2=data2.plot(kind='barh',ax=axis[0,0],legend=False)
fig2.set_ylabel('')
fig2.set_title('% Articles per Capita')
data3=quality_country_result.sort_values(by=['per_quality_articles'],ascending=False)[['per_quality_articles']][:10]
data3=data3.sort_values(by=['per_quality_articles'],ascending=True)[['per_quality_articles']]
fig3=data3.plot(kind='barh',ax=axis[0,1],legend=False)
fig3.set_title('% Quality Articles')
fig3.yaxis.tick_right()
data4=quality_country_result.sort_values(by=['per_quality_articles'],ascending=True)[['per_quality_articles']][:10]
fig4=data4.plot(kind='barh',ax=axis[1,1],legend=False)
fig4.set_xlabel('Quality Articles per Article')
fig4.yaxis.tick_right()
figure.savefig(working_folder+'wikipedia_quality_by_country.png',bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Download the article data from https://figshare.com/articles/Untitled_Item/5513449 and extract it to the main directory of the project.
###Code
# edit file_path to reflect path to csv file in the extracted data folder.
file_path = 'country/data/page_data.csv'
# Reading the article data
page_data = pd.read_csv(file_path)
# looking at the contents
page_data.head()
# as mentioned on the MediaWiki ORES page: https://www.mediawiki.org/wiki/ORES
# Note: We are using the wp10 model and care about only English Wikipedia articles
api_endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=wp10&revids={rev_ids}'
headers = {'User-Agent' : 'https://github.com/CoderHam', 'From' : '[email protected]'}
###Output
_____no_output_____
###Markdown
We chunk the revids into chunks of $50$ and make requests to the API ($50$ revids per request)
###Code
all_revids_list = list(page_data['rev_id'])
revids_chunks = [all_revids_list[i:i+50] for i in range(0, len(all_revids_list), 50)]
def get_ores_data_parallel(revids_chunk):
params = {'rev_ids':'|'.join(str(rid) for rid in revids_chunk)}
    resp = requests.get(api_endpoint.format(**params), headers=headers)
return resp.json()['enwiki']['scores']
# ['enwiki']['scores']
###Output
_____no_output_____
###Markdown
Serially making $98$ requests will be very slow as shown below:
###Code
# %time get_ores_data(list(page_data['rev_id']))
# CPU times: user 33.8 s, sys: 1.02 s, total: 34.9 s
# Wall time: 4min 32s
print("Note: We are making a total of",len(revids_chunks),"requests")
pool = multiprocessing.Pool(processes=4)
%time json_output = pool.map(get_ores_data_parallel,revids_chunks)
pool.close()
pool.join()
###Output
CPU times: user 297 ms, sys: 105 ms, total: 403 ms
Wall time: 1min 20s
###Markdown
After parallelizing by a factor of $4$ we get a speedup of approximately $4x$.
###Code
def process_json(json_output):
revid_rating = pd.DataFrame()
for jo in json_output:
for rid in jo:
try:
rating = jo[rid]['wp10']['score']['prediction']
revid_rating = revid_rating.append({'rev_id':rid, 'article_quality':rating}, ignore_index=True)
except:
rating = np.nan
# print(rid,"not found or unable to read score for this rev_id")
# revid_rating = revid_rating.append({'rev_id':rid, 'article_quality':rating}, ignore_index=True)
return revid_rating
revid_rating = process_json(json_output)
#But wait we messed up! We need to get the rev_id back to an integer.
revid_rating['rev_id'] = revid_rating['rev_id'].apply(lambda rid: int(rid))
#Lets have a look at the revid_rating table
revid_rating.head()
revid_rating.to_csv('revid_rating.csv',index_label=False,index=False)
###Output
_____no_output_____
###Markdown
We now want to join/merge the **revid_rating** data with the article data in **page_data**.
###Code
master_pagedata_rating = page_data.merge(revid_rating, on='rev_id')
master_pagedata_rating.head()
###Output
_____no_output_____
###Markdown
We now want to join/merge the **wpds** (population) data with the merged rating+article data in **master_pagedata_rating**.
###Code
wpds.rename(columns={'Geography':'country'}, inplace=True)
master = master_pagedata_rating.merge(wpds, on='country')
master.rename(columns={'Population mid-2018 (millions)':'population','page':'article_name','rev_id':'revision_id'}, inplace=True)
master.head()
master.to_csv('master_data.csv',index_label=False,index=False)
###Output
_____no_output_____
###Markdown
Analysis and Results
###Code
from collections import Counter
country_count = Counter(list(master['country']))
country_list = list(wpds['country'])
pop_list = wpds['Population mid-2018 (millions)'].apply(lambda x: float(x.replace(',','')))
count_list = [country_count[c] if c in country_count else 0 for c in country_list]
ratio_list = [count_list[i]/pop_list[i]*(10**-6) if count_list[i]!=0 else 0 for i in range(0,len(country_list))]
wpds['count_articles'] = count_list
wpds['per_person_articles'] = ratio_list
###Output
_____no_output_____
###Markdown
1. Top 10 countries in terms of number of articles about politicians as a proportion of the country population
###Code
# wpds = wpds.dropna()
# Top 10 countries in terms of articles per person
wpds.sort_values('per_person_articles',ascending=False).head(10)
###Output
_____no_output_____
###Markdown
2. Lowest 10 countries in terms of number of articles about politicians as a proportion of the country population
###Code
# Lowest 10 countries in terms of articles per person
wpds_tmp = wpds[wpds['count_articles']!=0]
wpds_tmp.sort_values('per_person_articles').head(10)
# Since quality matters, we rather just run a group by to find the high quality articles i.e. GA and FA
# As given https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html
hq_master = master_pagedata_rating[(master_pagedata_rating['article_quality']=='GA') | (master_pagedata_rating['article_quality']=='FA')]
hq_master = hq_master.groupby('country').size().reset_index(name='hq_count_articles')
hq_master['count_articles'] = [country_count[c] if c in country_count else 0 for c in hq_master['country']]
hq_master = hq_master.dropna()
hq_master['hq_prop'] = hq_master['hq_count_articles']/hq_master['count_articles']
###Output
_____no_output_____
###Markdown
3. Top 10 countries in terms of number of high quality articles about politicians as a proportion of the country population
###Code
# Top 10 countries in terms of proportion high quality articles
hq_master_tmp = hq_master[hq_master['count_articles']!=0]
hq_master_tmp.sort_values('hq_prop',ascending=False).head(10)
###Output
_____no_output_____
###Markdown
4. Lowest 10 countries in terms of number of high quality articles about politicians as a proportion of the country population
###Code
# Lowest 10 countries in terms of proportion high quality articles
hq_master.sort_values('hq_prop').head(10)
###Output
_____no_output_____
###Markdown
A2 : Bias on Wikipedia
Objective : Analyze what the nature of political articles on Wikipedia tells us about bias in Wikipedia's content.
Author : Niharika Sharma
References :
1. https://figshare.com/articles/Untitled_Item/5513449
2. https://wiki.communitydata.cc/HCDS_(Fall_2017)/Assignments#A2:_Bias_in_data
3. http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14
4. https://www.mediawiki.org/wiki/ORES
Import all the libraries
###Code
# import all the libraries - Set up
import requests
import json
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Making ORES requests
The function makes a request with multiple revision IDs.
###Code
headers = {'User-Agent' : 'https://github.com/niharikasharma', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
return json.dumps(response, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
Getting article quality predictions
Importing the other data from page_data.csv and calling the get_ores_data function. We append 100 revision_ids at a time and then call the get_ores_data function in batch form.
The ORES API returns the predicted quality class of an article along with a probability for each class. There are 6 article quality classes:
1. FA - Featured article
2. GA - Good article
3. B - B-class article
4. C - C-class article
5. Start - Start-class article
6. Stub - Stub-class article
###Code
i = 0
ids = []
revision_id = []
article_quality = []
with open('data/page_data.csv') as csvfile:
# Skip first line (if any)
next(csvfile, None)
reader = csv.reader(csvfile)
for row in reader:
ids.append(row[2])
i = i + 1
if i == 100:
# batch of 100 revision_ids
i = 0
# call the function
result = get_ores_data(ids, headers)
# load the result as JSON
data = json.loads(result)
for d in data['enwiki']['scores']:
try:
article_quality.append(data['enwiki']['scores'][d]['wp10']['score']['prediction'])
revision_id.append(d)
except KeyError:
print(data['enwiki']['scores'][d]['wp10']['error']['message'])
ids = []
if len(ids) != 0:
result = get_ores_data(ids, headers)
# load the result as JSON
data = json.loads(result)
for d in data['enwiki']['scores']:
try:
article_quality.append(data['enwiki']['scores'][d]['wp10']['score']['prediction'])
revision_id.append(d)
except KeyError:
print(data['enwiki']['scores'][d]['wp10']['error']['message'])
# Store the ORES data into dataframe
df_ores = pd.DataFrame(
{'revision_id': revision_id,
'article_quality': article_quality
})
print(df_ores.shape)
###Output
RevisionNotFound: Could not find revision ({revision}:806811023)
RevisionNotFound: Could not find revision ({revision}:807367166)
RevisionNotFound: Could not find revision ({revision}:807367030)
RevisionNotFound: Could not find revision ({revision}:807484325)
(47193, 2)
###Markdown
Combining the datasets
Merging the ORES API data, page_data.csv and Population Mid-2015.csv, and excluding revision ids with no prediction.
###Code
# page data
df_page = pd.read_csv('data/page_data.csv', names=['page', 'country', 'rev_id'])
# Join the tables using left join as we want to exclude the revision ids with no prediction
df_wikipedia = pd.merge(df_ores, df_page, how = 'left', left_on = 'revision_id', right_on = 'rev_id')
# merge wikipedia and population data
df_pop = pd.read_csv('data/Population Mid-2015.csv', skiprows=[0, 1, 2], names=['Location','Location Type','TimeFrame','Data Type','Data','Footnotes'])
# inner join so that we can remove the rows that do not have matching data
df_final = pd.merge(df_wikipedia, df_pop, how = 'inner', left_on = 'country', right_on = 'Location')
# delete unwanted columns
columns = ['Location Type','TimeFrame','Data Type','Footnotes', 'rev_id', 'Location']
df_final = df_final.drop(columns, axis=1)
# rename columns
df_final = df_final.rename(columns={'page': 'article_name', 'Data': 'population'})
print(df_final.columns)
print(df_final.shape)
###Output
Index(['article_quality', 'revision_id', 'article_name', 'country',
'population'],
dtype='object')
(45795, 5)
###Markdown
Consolidate the remaining data into a single CSV file
Finally, saving the dataframe in final-data.csv. The columns of the file are as follows:
- country
- article_name
- revision_id
- article_quality
- population
###Code
# convert population to int
df_final["population"] = df_final["population"].apply(lambda x: int(x.replace(',', '')))
print(df_final["population"].head())
# save the final dataframe into csv
df_final.to_csv('data/final-data.csv', sep=',', encoding='utf-8', index=False)
###Output
0 11211064
1 11211064
2 11211064
3 11211064
4 11211064
Name: population, dtype: int64
###Markdown
Analysis
Calculating the articles-per-population proportion (as a percentage) for each country.
###Code
# articles-per-population for each country data
df = pd.read_csv('data/final-data.csv')
grouped = df.groupby(['country'])
df_articles_per_population = grouped.agg({'revision_id' : 'count', 'population' : np.mean}).reset_index()
df_articles_per_population['percent'] = 100.00*df_articles_per_population['revision_id']/df_articles_per_population['population']
print(df_articles_per_population.head())
###Output
country revision_id population percent
0 Afghanistan 327 32247000 0.001014
1 Albania 460 2892000 0.015906
2 Algeria 119 39948000 0.000298
3 Andorra 34 78000 0.043590
4 Angola 110 25000000 0.000440
###Markdown
Analysis
Calculating the proportion (as a percentage) of high-quality articles for each country.
###Code
# High-quality articles for each country
df_high_quality = df.groupby(['country', 'article_quality']).agg({'revision_id' : 'count'})
# quality percentage per category
df_high_quality['percent'] = df_high_quality.groupby(level=0).apply(lambda x: 100.0 * x / x.sum())
df_high_quality = df_high_quality.reset_index()
# remove data other than FA and GA
df_high_quality = df_high_quality[(df_high_quality["article_quality"] == 'FA') | (df_high_quality["article_quality"] == 'GA')]
df_high_quality = df_high_quality.groupby(['country']).agg({'percent' : 'sum'}).reset_index()
print(df_high_quality.head())
###Output
country percent
0 Afghanistan 4.587156
1 Albania 1.086957
2 Algeria 1.680672
3 Angola 0.909091
4 Argentina 3.427419
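###Markdown
The same percentages can also be computed more directly as a verification step (a sketch, assuming the merged dataframe **df** loaded above): the mean of a boolean FA/GA indicator per country, multiplied by 100.
###Code
# Sketch only: per-country share of FA/GA articles as a percentage.
hq_check = (df['article_quality'].isin(['FA', 'GA'])
            .groupby(df['country'])
            .mean() * 100).reset_index(name='percent')
###Output
_____no_output_____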
###Markdown
Tables and Visualizations
10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
viz1 = df_articles_per_population.sort_values(by=['percent'], ascending=[0]).head(10).reset_index()
print(viz1)
fig = plt.figure(figsize=(17,8))
plt.xticks(fontsize=9)
plt.yticks(fontsize=9)
ax = fig.add_subplot(111)
ax.bar(viz1.index, viz1.percent, width=0.8)
ax.set_xticks(viz1.index)
ax.set_xticklabels(viz1.country)
ax.set_xlabel("Countries", fontsize=10)
ax.set_ylabel("Percentage", fontsize=10)
ax.set_title("Highest-ranked countries in terms of number of politician articles as a proportion of country population", fontsize=10)
plt.show()
# Generate a .png formatted image of the final graph
fig.savefig('highest_ranked_countries_number_of_politians_per_article.png')
###Output
index country revision_id population percent
0 120 Nauru 53 10860 0.488029
1 173 Tuvalu 55 11800 0.466102
2 141 San Marino 82 33000 0.248485
3 113 Monaco 40 38088 0.105020
4 97 Liechtenstein 29 37570 0.077189
5 107 Marshall Islands 37 55000 0.067273
6 72 Iceland 206 330828 0.062268
7 168 Tonga 63 103300 0.060987
8 3 Andorra 34 78000 0.043590
9 54 Federated States of Micronesia 38 103000 0.036893
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
viz2 = df_articles_per_population.sort_values(by=['percent'], ascending=[1]).head(10).reset_index()
print(viz2)
fig = plt.figure(figsize=(17,8))
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
ax = fig.add_subplot(111)
ax.bar(viz2.index, viz2.percent, width=0.8, color = 'r')
ax.set_xticks(viz2.index)
ax.set_xticklabels(viz2.country)
ax.set_xlabel("Countries", fontsize=10)
ax.set_ylabel("Percentage", fontsize=10)
ax.set_title("Lowest-ranked countries in terms of number of politician articles as a proportion of country population", fontsize=10)
plt.show()
# Generate a .png formatted image of the final graph
fig.savefig('lowest_ranked_countries_number_of_politians_per_article.png')
###Output
index country revision_id population percent
0 73 India 989 1314097616 0.000075
1 34 China 1138 1371920000 0.000083
2 74 Indonesia 215 255741973 0.000084
3 180 Uzbekistan 29 31290791 0.000093
4 53 Ethiopia 105 98148000 0.000107
5 86 Korea, North 39 24983000 0.000156
6 185 Zambia 26 15473900 0.000168
7 166 Thailand 112 65121250 0.000172
8 38 Congo, Dem. Rep. of 142 73340200 0.000194
9 13 Bangladesh 324 160411000 0.000202
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
viz3 = df_high_quality.sort_values(by=['percent'], ascending=[0]).head(10).reset_index()
print(viz3)
fig = plt.figure(figsize=(17,8))
plt.xticks(fontsize=9)
plt.yticks(fontsize=9)
ax = fig.add_subplot(111)
ax.bar(viz3.index, viz3.percent, width=0.8)
ax.set_xticks(viz3.index)
ax.set_xticklabels(viz3.country)
ax.set_xlabel("Countries", fontsize=10)
ax.set_ylabel("Percentage", fontsize=10)
ax.set_title("Highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country", fontsize=10)
plt.show()
# Generate a .png formatted image of the final graph
fig.savefig('highest_ranked_countries_aritcle_qualitywise.png')
###Output
index country percent
0 68 Korea, North 23.076923
1 113 Saudi Arabia 11.764706
2 142 Uzbekistan 10.344828
3 23 Central African Republic 10.294118
4 110 Romania 9.770115
5 52 Guinea-Bissau 9.523810
6 12 Bhutan 9.090909
7 145 Vietnam 8.376963
8 35 Dominica 8.333333
9 86 Mauritania 7.692308
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
viz4 = df_high_quality.sort_values(by=['percent'], ascending=[1]).head(10).reset_index()
print(viz4)
fig = plt.figure(figsize=(17,8))
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
ax = fig.add_subplot(111)
ax.bar(viz4.index, viz4.percent, width=0.8, color = 'r')
ax.set_xticks(viz4.index)
ax.set_xticklabels(viz4.country)
ax.set_xlabel("Countries", fontsize=10)
ax.set_ylabel("Percentage", fontsize=10)
ax.set_title("Lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country", fontsize=10)
plt.show()
fig.savefig('lowestest_ranked_countries_aritcle_qualitywise.png')
###Output
index country percent
0 129 Tanzania 0.245098
1 33 Czech Republic 0.393701
2 78 Lithuania 0.403226
3 91 Morocco 0.480769
4 42 Fiji 0.502513
5 136 Uganda 0.531915
6 13 Bolivia 0.534759
7 79 Luxembourg 0.555556
8 105 Peru 0.564972
9 116 Sierra Leone 0.602410
###Markdown
Getting the data from csvs and putting it into a DataFrame
###Code
import csv
import pandas as pd
import math
# Reading the csv files
page_data = pd.read_csv('page_data.csv')
population_data = pd.read_csv( 'Population_Mid_2015.csv' )
# Renaming the columns and removing redundant columns
population_data['country'] = population_data['Location']
population_data['Data'] = population_data['Data'].str.replace(',', '')
population_data['population'] = pd.to_numeric(population_data['Data'], errors='ignore')
population_data = population_data[ ['country', 'population'] ]
#Merging the page_data and population_data to get overall_data
overall_data = page_data.merge( population_data, on = 'country')
overall_data.columns = ['article_name', 'country', 'revision_id', 'population']
###Output
_____no_output_____
###Markdown
Using ORES APIs to extract the quality of articles
###Code
import requests
import json
def get_ores_data(rev_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in rev_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
articlesQuality = []
# Combining all the article quality scores in a list and returning
for a in response['enwiki']['scores']:
if 'error' in response['enwiki']['scores'][a]['wp10']:
articlesQuality.append('No Revision')
else:
articlesQuality.append( response['enwiki']['scores'][a]['wp10']['score']['prediction'] )
return(articlesQuality)
###Output
_____no_output_____
###Markdown
Iterating over the articles and obtaining corresponding article quality using ORES API
###Code
headers = {'User-Agent' : 'https://github.com/r1rajiv92', 'From' : '[email protected]'}
numRows = len(overall_data)
articlesQuality = []
j = 0
# Iterating over articles 50 at a time to make sure the URL length stays manageable and the API call works properly
for i in range( math.ceil(numRows/50) ):
rev_ids = overall_data.iloc[j:j+50]['revision_id']
articlesQuality += get_ores_data(rev_ids, headers)
j+= 50
# Appending article quality class to all the articles in data
overall_data['article_quality'] = articlesQuality
###Output
_____no_output_____
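###Markdown
Note that revisions the API could not score were labelled 'No Revision' above. An optional extra step (a sketch, not part of the original pipeline) is to set those rows aside so they do not inflate the per-country article counts:
###Code
# Sketch only: exclude articles whose revision could not be scored by ORES.
overall_data_scored = overall_data[overall_data['article_quality'] != 'No Revision']
###Output
_____no_output_____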
###Markdown
Calculating the number of articles per country and joining the population table for the articles-per-population calculation
###Code
numArticlesPerCountry = overall_data.groupby(['country']).size().reset_index(name='numArticles')
numArticlesPopulationPerCountry = numArticlesPerCountry.merge(population_data, on = 'country')
numArticlesPopulationPerCountry['pct articles per population'] = numArticlesPopulationPerCountry['numArticles'] * 100 / \
numArticlesPopulationPerCountry['population']
numArticlesPopulationPerCountry = numArticlesPopulationPerCountry.sort_values( ['pct articles per population'], ascending=[0] )
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
topTEN_articles_per_population = numArticlesPopulationPerCountry.iloc[0:10]
topTEN_articles_per_population
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
lowestTEN_articles_per_population = numArticlesPopulationPerCountry.iloc[-10:]
lowestTEN_articles_per_population
###Output
_____no_output_____
###Markdown
Calculating percentage of GA/FA articles
###Code
## Function to check if the article is GA or FA
def Is_GA_FA(row):
if row['article_quality'] == 'FA' or row['article_quality'] == 'GA':
val = 1
else:
val = 0
return val
overall_data['Is_GA_FA'] = overall_data.apply(Is_GA_FA, axis=1)
num_GA_FA_acticles_per_country = overall_data.groupby(['country'])['Is_GA_FA'].sum().reset_index(name ='num_GA_FA')
num_GA_FA_and_num_articles_per_country = numArticlesPerCountry.merge(num_GA_FA_acticles_per_country, on = 'country')
num_GA_FA_and_num_articles_per_country['pct_GA_FA_articles'] = num_GA_FA_and_num_articles_per_country['num_GA_FA'] * 100 / \
num_GA_FA_and_num_articles_per_country['numArticles']
num_GA_FA_and_num_articles_per_country = num_GA_FA_and_num_articles_per_country.sort_values( ['pct_GA_FA_articles'], ascending=[0] )
###Output
_____no_output_____
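###Markdown
A vectorized equivalent of the row-wise Is_GA_FA check above (a sketch for comparison, not part of the original pipeline) uses pandas' isin:
###Code
# Sketch only: a boolean isin avoids the per-row apply and gives the same 0/1 flags.
is_ga_fa_check = overall_data['article_quality'].isin(['GA', 'FA']).astype(int)
###Output
_____no_output_____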
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
num_GA_FA_and_num_articles_per_country.iloc[0:10]
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
num_GA_FA_and_num_articles_per_country.iloc[-10:]
###Output
_____no_output_____
###Markdown
Investigating Bias in Wikipedia Articles About Political Figures
The goal of this project is analysing articles on political figures, and using their existence and quality to examine the different kinds of bias in Wikipedia's data. First, read in all the packages that we require for this analysis.
###Code
import pandas as pd
import csv
import requests
import json
import numpy as np
from IPython.display import display, HTML
from copy import deepcopy
###Output
_____no_output_____
###Markdown
Step 1: Reading in the Data
We use two data sources in this analysis. The first data source contains articles about political figures by country. It is contained in a zip folder titled country.zip and can be found at the following link: https://figshare.com/articles/Untitled_Item/5513449. The data file is stored in the data directory of the country directory and is titled page_data.csv. The first step is reading this data into a Pandas dataframe. The data is licensed under the CC-BY-SA 4.0 license: you can distribute the data, but you **must** attribute.
###Code
page_data = pd.read_csv('data/raw/page_data.csv')
page_data.head()
###Output
_____no_output_____
###Markdown
The page_data data contains three columns, page, country and rev_id. The rev_id is what we use to call the ORES API. The other data that we use is the population data. This data contains world populations for 207 countries as of 2018. The file can be found at the following link: https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0. In order to download it, hit the download button at the top right corner, and select direct download. Let's load and have a look at this data. No license is explicitly stated for the data, so according to convention, it likely means that all rights are reserved for the data. Due to this not being stated, I am not including the data with this repository. You will need to download the data from the link to use it.
###Code
population_data = pd.read_csv('data/raw/WPDS_2018_data.csv')
population_data.head()
###Output
_____no_output_____
###Markdown
As observed above, this data contains two columns, one listing the country and the other the population of that country as of mid-2018 (in millions).
Step 2: Get Article Scores from ORES
As a part of this investigation, we need to determine the countries with the highest and lowest proportion of high quality articles about politicians. In order to do this, we require article scores, which we can obtain using the ORES API. You can find the documentation for the ORES API at this link: https://www.mediawiki.org/wiki/ORES. The first step that we take is setting up the endpoints and headers.
###Code
endpoint_def = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
headers = {'User-Agent' : 'https://github.com/tejasmhos', 'From' : '[email protected]'}
###Output
_____no_output_____
###Markdown
After that's done, we can proceed to write a function that goes through the list of all rev_id's and returns the score that's associated with the article.
###Code
rev_ids = list(page_data['rev_id'])
def get_ores_data(revision_ids, headers):
"""
This code was taken from the sample notebook that was provided to us. All code belongs to
its original author, who in this case is Os.
"""
# Define the endpoint
endpoint = endpoint_def
#Define the parameters
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
return response
def predictions_split(rev_ids):
"""
This function takes the rev_ids list, iterates 100 rows at a time
and then combines the results together to form the final list of
revids and ratings. This is done 100 at a time due to the API having
a limit on the number of values that can be passed at once. The return
object is a dataframe that contains the revids and the rating for
those revids
"""
start = 0
flag = 0
end = 100
dataframe_final = pd.DataFrame(columns=['rev_ids','ratings'])
while(1):
response_ores = get_ores_data(rev_ids[start:end], headers)
for revid in response_ores['enwiki']['scores']:
try:
rating = response_ores['enwiki']['scores'][revid]['wp10']['score']['prediction']
except:
rating = np.nan
dataframe_final = dataframe_final.append({'rev_ids':revid, 'ratings':rating}, ignore_index=True)
if flag == 1:
break
start +=100
if end+100 > len(rev_ids):
end = len(rev_ids)
flag = 1
else:
end += 100
return dataframe_final
###Output
_____no_output_____
###Markdown
The next step is running our code on the list of rev_ids, and getting the ratings associated with them. Some articles may not be in the database, so we assign a NaN value to those articles. Those then disappear when we join this data back to the original page_data dataframe.
###Code
#run the code, get the results for ratings
result = predictions_split(rev_ids)
###Output
_____no_output_____
###Markdown
Step 3: Joining Datasets Together to Get Final Dataset
The final step in constructing our complete dataset is performing a number of joins. We first join the result dataframe with the page_data dataframe on the rev_id. Before we do this, note that rev_id in the results dataframe is of type string. We need to coerce this column to int to ensure that the join works correctly. That's what we do below.
###Code
#explicit type conversion
result['rev_ids'] = result['rev_ids'].astype(int)
###Output
_____no_output_____
###Markdown
Now we perform the merge operation on page_data and result dataframes on the column rev_id and rev_ids respectively.
###Code
intermed_2 = page_data.merge(result, left_on='rev_id', right_on='rev_ids', how='inner')
###Output
_____no_output_____
###Markdown
The next step is joining this intermediate table to the country table. We do this join on the country fields in both tables. In order to avoid any mismatch due to letter case, I lowercase the country in both tables so that the join matches as many rows as possible.
###Code
intermed_2['country'] = intermed_2['country'].apply(lambda x:x.lower())
population_data['Geography'] = population_data['Geography'].apply(lambda x:x.lower())
#perform the final join
final_data = intermed_2.merge(population_data, left_on ='country', right_on = 'Geography', how='inner')
#remove the nans from the table
final_data.dropna(inplace=True)
#Select the columns we need, remove duplicate columns
final_data = final_data[['country', 'rev_id', 'page','Population mid-2018 (millions)','ratings']]
#reset the index
final_data.reset_index(inplace = True)
#rename column names according to convention
final_data.rename(index=str, columns={"page": "article_name", "rev_id": "revision_id","ratings":"article_quality", "Population mid-2018 (millions)":"population"}, inplace=True)
#Again, duplication columns and index is removed
final_data = final_data[['country', 'article_name', 'revision_id', 'article_quality','population']]
final_data['population'] = final_data['population'].apply(lambda x:x.replace(',',''))
final_data['population'] = final_data['population'].astype('float')
#Converting to millions
final_data['population'] = final_data['population'].apply(lambda x:x*1000000)
#Again, back to int
final_data['population'] = final_data['population'].astype(int)
#Having a look at how our final data looks
final_data.head()
###Output
_____no_output_____
###Markdown
We performed a number of steps here. Firstly, we dropped the NaNs from the ratings. Next, we selected the columns that we require for this analysis, and got rid of columns that were duplicated as a result of the join. Finally, we reset the index so that the index was sequential. We also renamed the columns in accordance with the scheme that was given. Since our data is processed and ready, we save it to a CSV and proceed to the next stage, which is the analysis step.
###Code
final_data.to_csv('data/final_data.csv')
###Output
_____no_output_____
###Markdown
Step 4: Performing Analysis
There are two analyses that we perform. The first is finding the proportion of articles per population of a country.
Analysis 1 : Countries with the highest and lowest proportion of articles as compared to their population
This analysis is simple: we run a groupby on country and population, and count the number of articles associated with each country.
###Code
prop_articles_per_country = final_data.groupby(['country','population'])['revision_id'].count().to_frame()
prop_articles_per_country = prop_articles_per_country.reset_index()
#Calculating the proportions
prop_articles_per_country['proportions'] = (prop_articles_per_country['revision_id']/prop_articles_per_country['population']) * 100
#This code styles the tables to make them look good
%%HTML
<style type="text/css">
table.dataframe td, table.dataframe th {
border-style: solid;
}
</style>
prop_articles_per_country = prop_articles_per_country.rename(columns = {'revision_id':'number_of_articles'})
#10 highest ranked countries with respect to number of articles as a proportion of population
prop_articles_per_country.sort_values(by='proportions', ascending = False).head(10)[['country', 'population','number_of_articles', 'proportions']]
###Output
_____no_output_____
###Markdown
The table above shows the countries that have the highest rank with respect to the number of articles as a proportion of their population. The results aren't that surprising: countries with a small population have a higher proportion of articles relative to the size of their population, the highest being Tuvalu and Nauru, both extremely small Pacific island nations.
###Code
#10 lowest ranked countries with respect to number of articles as a proportion of population
prop_articles_per_country.sort_values(by='proportions', ascending=True).head(10)[['country','population','number_of_articles','proportions']]
###Output
_____no_output_____
###Markdown
The table above shows the data for the reverse case, the countries that have the lowest rank with respect to the number of articles as a proportion of their population. The list is full of developing countries, and the two most populous countries appear in this list (China and India). There's nothing that surprises me here. Even though the most populous countries have far larger populations, it's likely that only a few politicians are prominent enough to have their own page.
Analysis 2 : Countries with the highest and lowest proportion of high quality articles as compared to the total number of articles they have
In this stage of the analysis, we compute the proportion of high quality articles to total articles in the politicians category of each country. The first step is determining the number of high quality articles. As per the definition given to us, a high quality article is one that has a quality of FA or GA. We use a groupby to get this count, after filtering our data down to FA and GA.
###Code
#deepcopying our data
high_quality_count = deepcopy(final_data)
high_quality_count =high_quality_count[(high_quality_count['article_quality'] == 'FA') | (high_quality_count['article_quality'] == 'GA')]
#groupby to get the count of high quality articles by country
high_quality_count = high_quality_count.groupby(['country','population'])['revision_id'].count().to_frame()
#make sure that country and population are actual columns
high_quality_count.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
In order to get our final counts, we need to join back with the table that contains the total number of articles (the table prop_articles_per_country). In order to avoid duplication of columns, I first deepcopy this table into a new variable, rename some duplicated columns and then perform the join on country.
###Code
total_articles = deepcopy(prop_articles_per_country)
total_articles = total_articles.rename(columns={'revision_id':'total_articles'})
#selecting the columns we need
total_articles = total_articles[['country','total_articles']]
#renaming columns and selecting the ones we need for this analysis
high_quality_count = high_quality_count.rename(columns = {'revision_id':'high_quality_articles'})
high_quality_count = high_quality_count[['country','high_quality_articles']]
high_quality_count.shape
###Output
_____no_output_____
###Markdown
You can notice that the number of countries with high quality articles is significantly smaller than the total number of countries we have in the data. So, I perform a left join on the total_articles table and fill the NaN values with 0. This ensures that we don't lose any data. The final step to get our table is calculating the proportions and adding them to a separate column of the dataframe. We do this below. The final tables are also shown below.
###Code
#performing the join, getting the table with total and high quality articles
high_quality_prop = total_articles.merge(high_quality_count, left_on = 'country', right_on = 'country', how = 'left')
high_quality_prop = high_quality_prop.fillna(0)
high_quality_prop['high_quality_articles'] = high_quality_prop['high_quality_articles'].astype(int)
#Calculating the proportion
high_quality_prop['high_quality_articles_proportion'] = (high_quality_prop['high_quality_articles']/high_quality_prop['total_articles'])*100
###Output
_____no_output_____
###Markdown
The final step is presenting our results. We first show the countries that have the highest proportion of high quality articles as compared to the total number of articles that they have.
###Code
#Finding countries with highest proportion of high quality articles
high_quality_prop.sort_values(by='high_quality_articles_proportion', ascending = False).head(10)[['country', 'total_articles', 'high_quality_articles','high_quality_articles_proportion']]
###Output
_____no_output_____
###Markdown
The results are very interesting. The country with the highest proportion is North Korea. This is followed by Saudi Arabia. Personally, I think the large amount of interest in the way the governments in these two countries function has led to a lot of good quality articles being written about their politicians. One observation is that Romania and the United States are the only two Western nations in the entire top ten. Now we do the same thing, but for the countries with the lowest proportion of high-ranked articles.
###Code
#Finding the countries with the lowest proportion of high ranked articles
high_quality_prop.sort_values(by='high_quality_articles_proportion', ascending = True).head(10)[['country', 'total_articles', 'high_quality_articles','high_quality_articles_proportion']]
###Output
_____no_output_____
###Markdown
Get article and population data
###Code
# Read the page data
page_data_df = pd.read_csv('country/data/page_data.csv')
page_data_df.head()
# Read the population data
pop_df = pd.read_csv('WPDS_2018_data.csv')
pop_df.head()
###Output
_____no_output_____
###Markdown
Combine the datasets
###Code
# Convert countries to lower case to merge the datasets using it
page_data_df['country'] = page_data_df['country'].apply(lambda x: x.lower())
pop_df['Geography'] = pop_df['Geography'].apply(lambda x: x.lower())
# Merge the datasets
final_df = pd.merge(page_data_df, pop_df, how='inner', left_on='country', right_on='Geography')
# Drop duplication column for country
final_df.drop(['Geography'], axis=1, inplace=True)
# Rename the column names as per requirement
final_df.columns = ['article_name', 'country', 'revision_id', 'population (millions)']
# Add new column for article quality
final_df['article_quality'] = np.NaN
# Re-order all columns as per requirement
final_df = final_df[['country', 'article_name', 'revision_id', 'article_quality', 'population (millions)']]
# Review the final dataset
final_df.head()
# Print no. of documents that couldn't be matched
len(page_data_df) - len(final_df)
###Output
_____no_output_____
###Markdown
Fetch article quality predictions
###Code
# Setup values
url = "https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revid}"
params = {'project':'enwiki', 'model':'wp10'}
# Function to take ORES API response as input and update final dataset with article quality score
def update_scores(resp):
global final_df
# Function to extract score for a single revision id from the json response of API
def extract_score(rev_id):
try:
score = resp[params['project']]['scores'][rev_id][params['model']]['score']['prediction']
except:
score = np.NaN
return score
rev_ids = list(resp[params['project']]['scores'].keys())
scores = list(map(extract_score, rev_ids))
for rev_id, score in zip(rev_ids, scores):
final_df.loc[final_df['revision_id'] == int(rev_id), 'article_quality'] = score
# Fetch the article quality score using ORES API for each page in the final dataset
chunk_size = 100
ptr = 0
done = False
while not done:
chunk = final_df.loc[ptr:ptr+chunk_size, ['revision_id']].values.ravel().astype(str)
if chunk.size == 0:
done = True
print("All Done!")
break
else:
params['revid'] = "|".join(chunk)
response = requests.get(url.format(**params)).json()
update_scores(response)
ptr += chunk_size
if ptr % 10000 == 0:
print(ptr, ' revision ids processed')
# Drop the rows for which the quality score could not be found
final_df.dropna(subset=['article_quality'], inplace=True)
# Review the final dataset
final_df.head()
###Output
_____no_output_____
###Markdown
Save the combined dataset with article quality scores
###Code
# Save the final dataset in csv format
final_df.to_csv('combined_data.csv', index=False)
print('File "/combined_data.csv" saved!')
###Output
File "/combined_data.csv" saved!
###Markdown
Analysis Table 1
###Code
# Get count of articles and population values for each country in alphabetical order of countries
countries_sorted = final_df['country'].sort_values().unique()
article_counts = final_df['country'].value_counts().sort_index().values
populations = final_df[['country', \
'population (millions)']].sort_values(by='country').drop_duplicates()['population (millions)'].values
populations = list(map(lambda x: float(x.replace(",", "")), populations))
# Create a dataframe with countries, article counts and proportions per population
columns = ['country', 'article_counts', 'population (millions)']
total_proportion_df = pd.DataFrame(np.column_stack((countries_sorted, article_counts, populations)), columns=columns)
total_proportion_df['proportion %'] = total_proportion_df['article_counts'] * 100 \
/ (total_proportion_df['population (millions)'] * 1000000)
# Create Table 1
table1 = total_proportion_df.sort_values(by='proportion %', ascending=False).iloc[:10]
table1.reset_index(drop=True, inplace=True)
display(Markdown('# 10 highest-ranked countries in terms of no. of politician articles as a proportion of country population'))
table1.index += 1
table1
###Output
_____no_output_____
###Markdown
Observation:
- As expected, the countries with lower populations were going to be on top, as having just a few articles significantly boosts the proportion, which is computed as a percentage. Though I didn't expect to see countries like Tuvalu and Nauru, which I had not heard of until now. Proportion based on overall population is probably not the best way to analyze this data.
- Iceland appears to pop out of this list above with the highest no. of articles.
Table 2
###Code
# Create Table 2
table2 = total_proportion_df.sort_values(by='proportion %', ascending=True).iloc[:10]
table2.reset_index(drop=True, inplace=True)
display(Markdown('# 10 lowest-ranked countries in terms of no. of politician articles as a proportion of country population'))
table2.index += 1
table2
###Output
_____no_output_____
###Markdown
Observation:
- Once again, since we are ranking based on proportion computed on overall population, the countries with the highest populations and with English not as their primary language are expected to feature here.
- Another expectation is to see some under-developed countries with relatively limited access to the internet, for example the African countries in the list above.
Table 3
###Code
final_high_qual_df = final_df[(final_df['article_quality'] == 'FA') | (final_df['article_quality'] == 'GA')]
high_quality_article_counts = final_high_qual_df['country'].value_counts().sort_index().values
high_quality_countries_sorted = final_high_qual_df['country'].sort_values().unique()
columns2 = ['country', 'high_quality_article_counts']
high_quality_proportion_df = pd.DataFrame(np.column_stack((high_quality_countries_sorted, high_quality_article_counts)), \
columns=columns2)
high_quality_proportion_df = pd.merge(total_proportion_df, \
high_quality_proportion_df[['country', 'high_quality_article_counts']], \
how='left', on='country')
high_quality_proportion_df.fillna(0, inplace=True)
high_quality_proportion_df['proportion %'] = high_quality_proportion_df['high_quality_article_counts'] * 100 \
/ (high_quality_proportion_df['article_counts'])
# Create Table 3
table3 = high_quality_proportion_df.sort_values(by='proportion %', \
ascending=False)[['country', \
'article_counts', \
'high_quality_article_counts', \
'proportion %']].iloc[:10]
table3.reset_index(drop=True, inplace=True)
display(Markdown('# 10 highest-ranked countries in terms of no. of GA and FA-quality articles' + \
' as a proportion of all articles about politicians from that country'))
table3.index += 1
table3
###Output
_____no_output_____
###Markdown
Observation:
- It is very interesting to find North Korea and Saudi Arabia on top of this list considering the tight control on freedom of speech in both countries.
- It is also interesting to see that the USA has only 82 high quality articles out of a total of 1092. Though 82 is the highest count in the list, it is ranked 9 because the ranking is based on proportion.
- To dig deeper, we'll need to understand the ORES API scoring algorithm and find out what attributes contribute towards a higher score.
Table 4
###Code
# Create Table 4
table4 = high_quality_proportion_df[high_quality_proportion_df['high_quality_article_counts'] == 0]
table4 = table4.sort_values(by='article_counts', ascending=False)[['country', \
'article_counts', \
'high_quality_article_counts', \
'proportion %']]
table4.reset_index(drop=True, inplace=True)
display(Markdown('# All lowest-ranked countries in terms of no. of GA and FA-quality articles' + \
' as a proportion of all articles about politicians from that country'))
table4.index += 1
table4
###Output
_____no_output_____
###Markdown
Observation:
- All the above countries have zero high quality articles.
- The countries have been ranked based on the total no. of articles published. The country with the highest no. of articles and not a single high quality article is ranked 1.
- It is interesting to see developed countries like Finland, Belgium and Switzerland have a good count of articles, but zero high quality ones. It is hard to speculate without understanding the scoring mechanism. If I had to hazard a guess, I would attribute the lack of high quality to the fact that English is not the primary language of these countries.
Analysis of Countries with most articles
###Code
most_articles_df = total_proportion_df.sort_values(by='article_counts', ascending=False).head(10)
most_articles_df['population'] = most_articles_df['population (millions)'].apply(round)
plt.figure(figsize=(10,5))
colors=['red', 'blue', 'green', 'orange', 'yellow', 'magenta', 'cyan', 'pink', 'brown', 'gray']
cols = dict(zip(most_articles_df['country'].values,colors))
plt.scatter(x='population', y='article_counts', data=most_articles_df, s='population', c=colors)
legend_items = [Line2D([0], [0], marker='o', color='w', label=key,
markerfacecolor=value, markersize=15) for key, value in cols.items()]
plt.legend(handles=legend_items, loc='right', bbox_to_anchor=(1.3, 0.5))
plt.xlabel('Population (millions)')
plt.ylabel('Total Articles')
plt.title('Countries with highest number of articles')
###Output
_____no_output_____
###Markdown
DATA 512 A2
Riley Waters
In this notebook, I will compare Wikipedia articles on political figures in different countries. I am looking to see how article coverage and article quality differ across countries and regions. This could uncover some underlying bias within Wikipedia articles. The quality of the articles is found using the ORES system. Documentation here: https://www.mediawiki.org/wiki/ORES
Getting the article and population data
Two data sources are used. The first is the "Wikipedia politicians by country" dataset, which can be found here as 'page_data.csv': https://figshare.com/articles/Untitled_Item/5513449
###Code
import pandas as pd
#Get the wiki articles data
article_df = pd.read_csv('./data/source/page_data.csv')
article_df.head()
###Output
_____no_output_____
###Markdown
The second source is world population data, drawn from the Population Reference Bureau here: https://www.prb.org/international/indicator/population/table/
###Code
# Get the country population data
population_df = pd.read_csv('./data/source/WPDS_2018_data.csv')
population_df.head()
###Output
_____no_output_____
###Markdown
Cleaning the data
The articles data set has some page names starting with 'Template:'. These are not Wikipedia articles, so they are filtered out. I also rename some fields for clarity.
###Code
# Get rid of the rows in article that start with Template:
article_clean_df = article_df[~article_df['page'].str.startswith('Template:')]
# Rename fields
article_clean_df = article_clean_df.rename(columns={'page': 'article_name', 'rev_id': 'revision_id'})
article_clean_df.head()
###Output
_____no_output_____
###Markdown
I'll also rename fields in the population dataset and convert population to its full numerical value.
###Code
# Rename values
population_clean_df = population_df.rename(columns={'Geography': 'country', 'Population mid-2018 (millions)':'population'})
# Turn population into numerical actual population
population_clean_df['population'] = population_clean_df['population'].apply(lambda x: float(x.replace(',',''))*1e6)
population_clean_df.head()
###Output
_____no_output_____
###Markdown
Some of the countries are actually regions. I'll collect those and put them in a csv, then leave only the actual countries in my cleaned dataset
###Code
# Split the regions from the countries
cumulative_region_df = population_clean_df[population_clean_df['country'].str.isupper()]
cumulative_region_df.to_csv('./data/region_cumulatives.csv', sep=',',index=False)
cumulative_region_df.head()
# Keep only the actual countries
population_clean_df = population_clean_df[~population_clean_df['country'].str.isupper()]
population_clean_df.head()
###Output
_____no_output_____
###Markdown
Getting article quality predictions
As mentioned, the article quality scores come from a machine learning system called ORES. Using the oresapi package, we retrieve article quality scores for each revision id in the article dataset. Some more information on oresapi can be found here: https://pypi.org/project/oresapi/. It assigns each article a quality rating from the following options:
- FA - Featured article
- GA - Good article
- B - B-class article
- C - C-class article
- Start - Start-class article
- Stub - Stub-class article
###Code
import oresapi
# Get the ores results for each revid in article df
rev_id_list = article_clean_df['revision_id'].tolist()
ores_session = oresapi.Session("https://ores.wikimedia.org", "Class project [email protected]")
results = ores_session.score("enwiki", ["articlequality"], rev_id_list)
# Keep the rev ids and their corresponding results attached
id_res_zip = zip (rev_id_list, results)
###Output
_____no_output_____
###Markdown
Some of the rev ids cannot be found and result in an error. I collect these error ids and store them in a csv. Then, I merge the non-error results into my article dataframe.
###Code
error_id_list = []
temp_list = []
for res in id_res_zip:
if 'error' not in res[1]['articlequality']:
# If there is no error, grab the quality
article_quality = res[1]['articlequality']['score']['prediction']
temp_dict = {
'revision_id':res[0],
'article_quality':article_quality
}
temp_list.append(temp_dict)
else:
# If there is an error, grab the error rev id
error_id_list.append(res[0])
temp_df = pd.DataFrame(temp_list)
# Merge the non-error quality ratings into the dataframe
article_score_df = pd.merge(article_clean_df, temp_df, on='revision_id')
article_score_df.head()
# Store the error ids into a csv
error_df = pd.DataFrame(data={"error_id": error_id_list})
error_df.to_csv('./data/error_rev_ids.csv', sep=',',index=False)
###Output
_____no_output_____
###Markdown
Combining the datasets
The population and article dataframes are merged on their country name using an outer join. Any rows that are missing an article name or a population have a country that is in one dataset but not the other. These countries are separated and their rows are stored. The rows with matching countries are used for the final analysis.
###Code
# Outer join the two datasets on country
combined_df = pd.merge(article_score_df, population_clean_df, on='country', how='outer')
combined_df.head()
# Get all rows where population or page is null
no_match_df = combined_df.loc[combined_df['population'].isnull() | combined_df['article_name'].isnull()]
# Save these rows to a csv
no_match_df.to_csv('./data/wp_wpds_countries-no_match.csv', index=False)
no_match_df.head()
# Get all rows where population and page are not null
final_df = combined_df.loc[combined_df['population'].notnull() & combined_df['article_name'].notnull()]
# Save these to a csv and use it for the final analysis
final_df.to_csv('./data/wp_wpds_politicians_by_country.csv', index=False)
final_df.head()
###Output
_____no_output_____
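###Markdown
An equivalent way to split matched and unmatched rows (a sketch, assuming the same **article_score_df** and **population_clean_df** as above) is pandas' merge indicator, which tags each row with whether it came from the left table, the right table, or both:
###Code
# Sketch only: use indicator=True to separate matched and unmatched countries in one pass.
tagged_df = pd.merge(article_score_df, population_clean_df, on='country', how='outer', indicator=True)
no_match_check = tagged_df[tagged_df['_merge'] != 'both']
matched_check = tagged_df[tagged_df['_merge'] == 'both'].drop(columns='_merge')
###Output
_____no_output_____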
###Markdown
For analysis purposes, I will also need the region of each country and that region's total population. Recall that these are in the original population dataframe. I attach the region and the region's total population to each row using its country. This works because the dataset lists each region total followed by the countries in that region, so the order is important. An equivalent forward-fill formulation is sketched after the next cell.
###Code
region = ''
regional_pop = 0
temp_list = []
for idx, row in population_df.iterrows():
if row['Geography'].isupper():
# Uppercase indicates a region. Save this and the population
region = row['Geography']
regional_pop = row['Population mid-2018 (millions)']
else:
# Lowercase indicates a country. Use the previous region to figure out which region the country is in
temp_dict = {
'region': region,
'country': row['Geography'],
'regional_population': regional_pop
}
temp_list.append(temp_dict)
# Create a country region mapping dataframe
country_region_df = pd.DataFrame(temp_list)
country_region_df['regional_population'] = country_region_df['regional_population'].apply(lambda x: float(x.replace(',',''))*1e6)
country_region_df.head()
# Merge the mapping dataframe into my main dataframe
final_df = pd.merge(country_region_df, final_df, on='country')
final_df.head()
###Output
_____no_output_____
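###Markdown
The loop above can also be expressed with a forward fill (a sketch for comparison, assuming the same **population_df**): regions are the upper-case rows, so keeping only those values and forward-filling propagates each region name and population down to the countries beneath it.
###Code
# Sketch only: forward-fill the upper-case region rows onto the country rows below them.
pop_tmp = population_df.copy()
is_region = pop_tmp['Geography'].str.isupper()
pop_tmp['region'] = pop_tmp['Geography'].where(is_region).ffill()
pop_tmp['regional_population'] = pop_tmp['Population mid-2018 (millions)'].where(is_region).ffill()
country_region_check = pop_tmp[~is_region]
###Output
_____no_output_____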
###Markdown
Analysis
For the analysis, I need to find out the coverage and relative quality of articles in each country and each region. Coverage is the percent of articles per population. Relative quality is the percent of quality articles ('FA' or 'GA') out of total articles. A vectorized cross-check of both metrics is sketched after the next cell.
###Code
# Group by the country
group_df = final_df.groupby('country')
temp_list = []
for country, group in group_df:
# Total articles
articles_in_group = len(group)
# filter to quality articles
quality_articles = group[group['article_quality'].isin(['FA', 'GA'])]
# Country population
population = group['population'].iloc[0]
# Number of quality articles
quality_articles_count = len(quality_articles)
temp_dict = {
'country': country,
'articles_count': articles_in_group,
'population': population,
'quality_articles_count': quality_articles_count,
'coverage': (articles_in_group/population)*100.0,
'relative_quality': (quality_articles_count/articles_in_group)*100.0
}
temp_list.append(temp_dict)
# Create the analysis table per country
analysis_country_df = pd.DataFrame(temp_list)
analysis_country_df.to_csv('./data/final_analysis_data.csv', index=False)
analysis_country_df.head()
# Group by the region
group_df = final_df.groupby('region')
temp_list = []
for region, group in group_df:
# Total articles
articles_in_group = len(group)
# filter to quality articles
quality_articles = group[group['article_quality'].isin(['FA', 'GA'])]
# Regional population
population = group['regional_population'].iloc[0]
# Number of quality articles
quality_articles_count = len(quality_articles)
temp_dict = {
'region': region,
'articles_count': articles_in_group,
'population': population,
'quality_articles_count': quality_articles_count,
'coverage': (articles_in_group/population)*100.0,
'relative_quality': (quality_articles_count/articles_in_group)*100.0
}
temp_list.append(temp_dict)
# Create the analysis table per region
analysis_regional_df = pd.DataFrame(temp_list)
analysis_regional_df.to_csv('./data/final_analysis_data_regional.csv', index=False)
analysis_regional_df.head()
###Output
_____no_output_____
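###Markdown
As a cross-check of the per-country loop above (a sketch, assuming the merged **final_df**), both metrics can also be computed with a vectorized groupby:
###Code
# Sketch only: vectorized coverage and relative-quality computation per country.
check = (final_df.assign(is_quality=final_df['article_quality'].isin(['FA', 'GA']))
                 .groupby('country')
                 .agg(articles_count=('article_name', 'size'),
                      population=('population', 'first'),
                      quality_articles_count=('is_quality', 'sum')))
check['coverage'] = check['articles_count'] / check['population'] * 100.0
check['relative_quality'] = check['quality_articles_count'] / check['articles_count'] * 100.0
###Output
_____no_output_____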
###Markdown
Results
Top 10 countries by coverage
"10 highest-ranked countries in terms of number of politician articles as a proportion of country population"
###Code
analysis_country_df.sort_values('coverage', ascending=False).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage
"10 lowest-ranked countries in terms of number of politician articles as a proportion of country population"
###Code
analysis_country_df.sort_values('coverage', ascending=True).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality
"10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality"
###Code
analysis_country_df.sort_values('relative_quality', ascending=False).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality
"10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality"
Note that many countries have 0 as their percentage of quality articles. There are more than these 10 that have the same.
###Code
analysis_country_df.sort_values('relative_quality', ascending=True).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
Geographic regions by coverage
"Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population"
###Code
analysis_regional_df.sort_values('coverage', ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Geographic regions by relative quality
"Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality"
###Code
analysis_regional_df.sort_values('relative_quality', ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Assignment 2 - Bias in Data
Sean Miller ()
Overview
This notebook outlines an analysis of English Wikipedia articles on political figures from many countries. We seek to explore the ratio of articles compared to population of the country and the percent of those articles that are high quality to understand how the English Wikipedia might be biased.
Libraries
All of the following code was written and tested against the default packages present in **Anaconda3 v4.4.0**. You can find a download for Anaconda and its latest versions at .
Preparation
###Code
import json
import os
import pandas as pd
import requests
%matplotlib inline
###Output
_____no_output_____
###Markdown
First, we'll prepare our folder structure for our analysis. Any data sets we've downloaded or will scrape from the web will be stored in the *raw_data* folder, any data sets that have been processed by our code will be stored in *clean_data* and any visualizations or tables used for our final analysis will be stored in the *outputs* folder.
###Code
# If the folder raw_data doesn't already exist, create it
# raw_data is where any initial data sets are stored
if not os.path.exists("raw_data"):
os.makedirs("./raw_data")
# If the folder clean_data doesn't already exist, create it
# clean_data is where any processed data sets are stored
if not os.path.exists("clean_data"):
os.makedirs("./clean_data")
# If the folder outputs doesn't already exist, create it
# The outputs folder is where visualizations for our analysis will be stored
if not os.path.exists("outputs"):
os.makedirs("./outputs")
###Output
_____no_output_____
###Markdown
Reading in the Data
To perform this analysis, we'll be joining data from three different data sets. These data sets and relevant information are listed below.

|Data Set | File Name | URL | Documentation | License |
|---------|-----------|-----|---------------|---------|
| EN-Wikipedia Articles On Politicians | page_data.csv | [Figshare](https://figshare.com/articles/Untitled_Item/5513449) | Same as URL | [CC-BY-SA 4.0](https://figshare.com/articles/Untitled_Item/5513449) |
| Country Population Data (Mid-2015) | Population Mid-2015.csv | [Population Research Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14) | Same as URL | I have no idea |
| Wikipedia ORES | N/A | [ORES](https://www.mediawiki.org/wiki/ORES) | [ORES Swagger](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model) | [CC-BY-SA 3.0](https://wikimediafoundation.org/wiki/Terms_of_Use/en#7._Licensing_of_Content) |

For the first two data sets, we'll be manually downloading the data from the provided links, copying the files to the *raw_data* folder and reading in the csv files with the [pandas](http://pandas.pydata.org/) library.
###Code
# Paths to files
population_data_file = "./raw_data/Population Mid-2015.csv"
politician_file_path = "./raw_data/page_data.csv"
# Path for the processed copy of the population data (stored under clean_data per the folder layout above)
population_clean_file = "./clean_data/Population Mid-2015.csv"
# Read in population data
# We skip the first line using header=1 as we're uninterested in information before the column headers
population_df = pd.read_csv(population_data_file, header=1)
# Remove "," characters and cast the population column Data to a numeric value
population_df["Data"] = population_df["Data"].str.replace(",", "")
population_df["Data"] = population_df["Data"].apply(pd.to_numeric)
# Write the cleaned data out to a csv
population_df.to_csv(population_clean_file, index=False)
# Read in Wikipedia politician data
politician_df = pd.read_csv(politician_file_path)
# Print out sample of population DataFrame
population_df.head(4)
# Print out sample of politician DataFrame
politician_df.head(4)
###Output
_____no_output_____
###Markdown
ORES: After reading in our initial two data sets, we'll want to map the rev_id column of the politician DataFrame to a corresponding article quality using the [ORES API](https://www.mediawiki.org/wiki/ORES). The predicted article quality can map to one of the six following values. Documentation for how to format the URLs for this API can be found at the [ORES Swagger](https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model). [HCDS Fall 2017 - Assignment 2 - Article Ratings](https://wiki.communitydata.cc/HCDS_(Fall_2017%29/Assignments#Getting_article_quality_predictions)

1. FA - Featured article
2. GA - Good article
3. B - B-class article
4. C - C-class article
5. Start - Start-class article
6. Stub - Stub-class article

**To Note:** You can submit up to 50 articles at a time to be evaluated by the ORES API. If a page has been deleted, the ORES API will return "RevisionNotFound: Could not find revision". Within this function we handle that by outputting the JSON blob of the article that could not be found. As part of the Terms and conditions from the [Wikimedia REST API](https://www.mediawiki.org/wiki/REST_API), we agree to send a unique User-Agent header in our requests so Wikimedia can contact us if any problem arises from our script.
###Code
# ORES API endpoint Example code
endpoint = "https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}"
# Create user-agent header
headers = {"User-Agent": "https://github.com/awfuldynne", "From": "[email protected]"}
params = \
{
"project": "enwiki",
"model": "wp10",
"revids": "391862070"
}
api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
print(json.dumps(response, indent=4, sort_keys=True))
def get_ores_page_quality_prediction(rev_ids, batch_size=50):
"""Method to get the wp10 model"s prediction of page quality for a list of Wikipedia pages identified by revision ID
https://en.wikipedia.org/wiki/Wikipedia:WikiProject_assessment#Grades
:param rev_ids: List of revision IDs for Wikipedia pages.
:type rev_ids: list of int.
:param batch_size: Number of pages to send to ORES per iteration.
:type batch_size: int.
:returns: Pandas Dataframe -- DataFrame with columns rev_id and article_quality
"""
# ORES API endpoint
endpoint = "https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}"
# Create user-agent header
headers = {"User-Agent": "https://github.com/awfuldynne", "From": "[email protected]"}
# Create column list
columns = ["rev_id", "article_quality"]
# Create empty DataFrame for article quality result set
df = pd.DataFrame(columns=columns)
# Indexes to keep track of what subset in the rev_id list we should be processing
start_index = 0
end_index = start_index + batch_size
done_processing = False
# Iterate through our list of revision IDs appending to df as we process the results
while not done_processing:
params = \
{
"project": "enwiki",
"model": "wp10",
# Create a string of revision IDs like "123123|123124"
"revids": "|".join(str(rev) for rev in rev_ids[start_index:end_index])
}
api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
for quality_score in response["enwiki"]["scores"]:
# Create a new Series to append to the DataFrame
new_row = pd.Series(index=columns)
new_row.rev_id = quality_score
try:
new_row.article_quality = response["enwiki"]["scores"][quality_score]["wp10"]["score"]["prediction"]
df = df.append(new_row, ignore_index=True)
except:
# The target article no longer exists in wikipedia. Print each data point that
# couldn't be retrieved
print(response["enwiki"]["scores"][quality_score])
# Update indexes
start_index += batch_size
end_index += batch_size
        # If start_index is greater than the length of rev_ids we are finished processing our list
done_processing = start_index >= len(rev_ids)
return df
article_quality_df = get_ores_page_quality_prediction(politician_df.rev_id.tolist())
article_quality_df.to_csv("./raw_data/article_quality_data.csv", index=False)
###Output
{'wp10': {'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:807367030)', 'type': 'RevisionNotFound'}}}
{'wp10': {'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:807367166)', 'type': 'RevisionNotFound'}}}
###Markdown
After creating the mapping of revision ID to article quality, we then want to join this to the politician DataFrame.
###Code
def get_article_quality(rev_id):
"""Method used to map a Wikipedia revision ID to an article quality within article_quality_df
:param rev_id: Wikipedia Revision ID
:type rev_id: int.
:return: str -- Article quality from article_quality_df if exists, None if not
"""
article_quality = None
# If the revision ID exists in the article quality DataFrame, set article quality to the mapped value
if (article_quality_df.rev_id == rev_id).any():
article_quality = article_quality_df.loc[article_quality_df.rev_id == rev_id].article_quality.iloc[0]
return article_quality
# Join the politician DataFrame to the article quality DataFrame
politician_df["article_quality"] = politician_df.apply(lambda row: get_article_quality(row.rev_id), axis=1)
###Output
_____no_output_____
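###Markdown
An equivalent, typically faster approach (a sketch, not part of the original notebook) is to join on rev_id with a vectorized merge instead of a per-row apply. Note that article_quality_df stores rev_id as strings (the JSON keys), so it needs to be cast back to integers first.
###Code
# sketch of a vectorized alternative to the apply-based lookup above
# (it replaces the get_article_quality apply; do not run both, or pandas will create duplicate columns)
article_quality_df["rev_id"] = article_quality_df["rev_id"].astype(int)
politician_df = politician_df.merge(article_quality_df, on="rev_id", how="left")
###Output
_____no_output_____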
###Markdown
In a similar fashion, we also want to join the population data to the politician DataFrame.
###Code
def get_country_population(country_name):
"""Method used to map country name to a population within population_df
:param country_name: Country name
:type country_name: str.
:return: int -- Population value from population_df if exists, None if not
"""
population = None
# If the country exists in the population DataFrame, set population to the mapped value
if (population_df.Location == country_name).any():
population = population_df.loc[population_df.Location == country_name].Data.iloc[0]
return population
# Join the politician DataFrame to the country population DataFrame
politician_df["population"] = \
politician_df.apply(lambda row: get_country_population(row.country), axis=1)
###Output
_____no_output_____
###Markdown
Cleaning our Analysis DataFrame: To simplify our analysis, any row without a corresponding country population or a corresponding article quality will be removed from the data set. We perform some additional cleaning by ordering our rows, renaming our columns and representing population as an integer before writing it out to the *clean_data* directory. Our DataFrame will look like the following:

| Column | Value |
|--------|-------|
| country | Name of the Country the article belongs to |
| article_name | Name of the Wikipedia article |
| revision_id | Integer ID that maps to the given Wikipedia page's last edit |
| article_quality | Quality of the Article as determined by ORES |
| population | Number of people living in the country in mid-2015 |
###Code
# Filter out any countries without a population or without an article quality
df = politician_df[(pd.notnull(politician_df.population)) & (pd.notnull(politician_df.article_quality))]
print("{} rows were removed".format(politician_df.shape[0] - df.shape[0]))
# Reorder columns
df = df[["country", "page", "rev_id", "article_quality", "population"]]
# Rename columns to match assignment definition
df.columns = ["country", "article_name", "revision_id", "article_quality", "population"]
# Change population column to integer
df.loc[:, "population"] = df["population"].astype(int)
# Write analysis data set out to file
cleaned_data_file_path = "./clean_data/en-wikipedia_politician_article_quality.csv"
df.to_csv(cleaned_data_file_path, index=False)
# Print example of analysis DataFrame
df.head(4)
###Output
1400 rows were removed
###Markdown
Analysis: As mentioned at the start of this notebook, our analysis seeks to understand bias on Wikipedia through two metrics:

1. The percent of articles-per-population for each country
2. The percent of high quality articles for each country

We also output population and the number of articles within the aggregate DataFrame for readability.
###Code
# Group our DataFrame by country
country_group = df.groupby("country")
# Returns the number of articles as a percent of the population
def articles_per_population(group):
articles = group.article_name.nunique()
population = group.population.max()
return articles * 100 / float(population)
# Returns the proportion of articles which are ranked FA or GA in quality
def high_quality_articles(group):
high_quality_rating_list = ["FA", "GA"]
article_count = group.shape[0]
high_quality_article_count = group[group.article_quality.isin(high_quality_rating_list)].shape[0]
return high_quality_article_count * 100 / article_count
# Returns the population for a given country.
def population(group):
return group.population.max()
# Returns the number of articles a country has
def number_of_articles(group):
return group.shape[0]
# https://stackoverflow.com/questions/40532024/pandas-apply-multiple-functions-of-multiple-columns-to-groupby-object
# Aggregate method which generates our four aggregate metrics
def get_aggregate_stats(group):
return pd.Series({"articles_per_population_percent": articles_per_population(group),
"population": population(group),
"percent_high_quality_article": high_quality_articles(group),
"number_of_articles": number_of_articles(group)})
agg_df = country_group.apply(get_aggregate_stats)
agg_df.index.name = "Country"
# Print example of aggregate DataFrame
agg_df.head(4)
###Output
_____no_output_____
###Markdown
Next we create our four DataFrames to look at the top and bottom 10 countries for both of these metrics.
###Code
# Suppress scientific notation
# SO Post: https://stackoverflow.com/questions/21137150/format-suppress-scientific-notation-from-python-pandas-aggregation-results
pd.set_option('display.float_format', lambda x: '%.6f' % x)
# Top 10 of Articles per Population
print("Top 10 Countries - Percent of Articles-Per-Population")
top_10_article_per_pop = \
agg_df.sort_values(by=["articles_per_population_percent"], ascending=False).head(10)[["articles_per_population_percent"]]
top_10_article_per_pop.columns = ["Percent of Articles-Per-Population"]
print(top_10_article_per_pop)
print("\n")
# Bottom 10 of Articles per Population
print("Bottom 10 Countries - Percent of Articles-Per-Population")
bottom_10_article_per_pop = \
agg_df.sort_values(by=["articles_per_population_percent"], ascending=True).head(10)[["articles_per_population_percent"]]
bottom_10_article_per_pop.columns = ["Percent of Articles-Per-Population"]
print(bottom_10_article_per_pop)
print("\n")
# Top 10 of High Quality Articles
print("Top 10 Countries - Percent of Articles that are High Quality")
top_10_high_quality_articles = \
agg_df.sort_values(by=["percent_high_quality_article"], ascending=False).head(10)[["percent_high_quality_article"]]
top_10_high_quality_articles.columns = ["Percent of High Quality Articles"]
print(top_10_high_quality_articles)
print("\n")
# Bottom 10 of High Quality Articles
print("Bottom 10 Countries - Percent of Articles that are High Quality")
bottom_10_high_quality_articles = \
agg_df.sort_values(by=["percent_high_quality_article"], ascending=True).head(10)[["percent_high_quality_article"]]
bottom_10_high_quality_articles.columns = ["Percent of High Quality Articles"]
print(bottom_10_high_quality_articles)
print("\n")
###Output
Top 10 Countries - Percent of Articles-Per-Population
Percent of Articles-Per-Population
Country
Nauru 0.488029
Tuvalu 0.466102
San Marino 0.248485
Monaco 0.105020
Liechtenstein 0.077189
Marshall Islands 0.067273
Iceland 0.062268
Tonga 0.060987
Andorra 0.043590
Federated States of Micronesia 0.036893
Bottom 10 Countries - Percent of Articles-Per-Population
Percent of Articles-Per-Population
Country
India 0.000075
China 0.000083
Indonesia 0.000084
Uzbekistan 0.000093
Ethiopia 0.000107
Korea, North 0.000156
Zambia 0.000168
Thailand 0.000172
Congo, Dem. Rep. of 0.000194
Bangladesh 0.000202
Top 10 Countries - Percent of Articles that are High Quality
Percent of High Quality Articles
Country
Korea, North 23.076923
Romania 12.931034
Saudi Arabia 12.605042
Central African Republic 11.764706
Qatar 9.803922
Guinea-Bissau 9.523810
Vietnam 9.424084
Bhutan 9.090909
Ireland 8.136483
United States 7.832423
Bottom 10 Countries - Percent of Articles that are High Quality
Percent of High Quality Articles
Country
Sao Tome and Principe 0.000000
Turkmenistan 0.000000
Marshall Islands 0.000000
Guyana 0.000000
Comoros 0.000000
Tunisia 0.000000
Djibouti 0.000000
Dominica 0.000000
Macedonia 0.000000
Tonga 0.000000
###Markdown
Appendix: Population Data. For those that are interested in how to download the population data directly from the [Population Research Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14), the following code downloads the file and writes it out to the *raw_data* directory.
###Code
population_file_path = "./raw_data/Population Mid-2015.csv"
population_url = "http://www.prb.org/RawData.axd?ind=14&fmt=14&tf=76&loc=34235%2c249%2c250%2c251%2c252%2c253%2c254%2" \
"c34227%2c255%2c257%2c258%2c259%2c260%2c261%2c262%2c263%2c264%2c265%2c266%2c267%2c268%2c269%2c270%2" \
"c271%2c272%2c274%2c275%2c276%2c277%2c278%2c279%2c280%2c281%2c282%2c283%2c284%2c285%2c286%2c287%2c2" \
"88%2c289%2c290%2c291%2c292%2c294%2c295%2c296%2c297%2c298%2c299%2c300%2c301%2c302%2c304%2c305%2c306" \
"%2c307%2c308%2c311%2c312%2c315%2c316%2c317%2c318%2c319%2c320%2c321%2c322%2c324%2c325%2c326%2c327%2" \
"c328%2c34234%2c329%2c330%2c331%2c332%2c333%2c334%2c336%2c337%2c338%2c339%2c340%2c342%2c343%2c344%2" \
"c345%2c346%2c347%2c348%2c349%2c350%2c351%2c352%2c353%2c354%2c358%2c359%2c360%2c361%2c362%2c363%2c3" \
"64%2c365%2c366%2c367%2c368%2c369%2c370%2c371%2c372%2c373%2c374%2c375%2c377%2c378%2c379%2c380%2c381" \
"%2c382%2c383%2c384%2c385%2c386%2c387%2c388%2c389%2c390%2c392%2c393%2c394%2c395%2c396%2c397%2c398%2" \
"c399%2c400%2c401%2c402%2c404%2c405%2c406%2c407%2c408%2c409%2c410%2c411%2c415%2c416%2c417%2c418%2c4" \
"19%2c420%2c421%2c422%2c423%2c424%2c425%2c427%2c428%2c429%2c430%2c431%2c432%2c433%2c434%2c435%2c437" \
"%2c438%2c439%2c440%2c441%2c442%2c443%2c444%2c445%2c446%2c448%2c449%2c450%2c451%2c452%2c453%2c454%2" \
"c455%2c456%2c457%2c458%2c459%2c460%2c461%2c462%2c464%2c465%2c466%2c467%2c468%2c469%2c470%2c471%2c4" \
"72%2c473%2c474%2c475%2c476%2c477%2c478%2c479%2c480"
# Use pandas read_csv function to read the file directly from the website
# We skip the first line using header=1 as we're uninterested in information before the column headers
population_df = pd.read_csv(population_url, header=1)
# Remove "," characters and cast the population column Data to a numeric value
population_df["Data"] = population_df["Data"].str.replace(",", "")
population_df["Data"] = population_df["Data"].apply(pd.to_numeric)
# Write the data out to a csv
population_df.to_csv(population_file_path, index=False)
# Print a few lines of the data set
population_df.head(4)
###Output
_____no_output_____
###Markdown
DATA 512: Assignment 2: Bias in Data. Charles Duze - 11/02/2017. Objective: For this assignment the task is to analyze "what the nature of political articles on Wikipedia - both their existence, and their quality - can tell us about bias in Wikipedia's content"*. Process and Contents: We gather data from multiple sources to get information about a country's population, number of political articles and number of high quality political articles. High-Quality is based on the ORES API described below. We then calculate both the percentage of articles per population and the percentage of high quality articles per total articles. There is a write-up at the end summarizing my opinion on the findings. (* Content from https://wiki.communitydata.cc/HCDS_(Fall_2017)/Assignments) Step 1: Data Acquisition. Import Statements: Importing the modules we'll need.
###Code
import requests
import json
import csv
import pandas as pd
###Output
_____no_output_____
###Markdown
Function to Get ORES Data: This function takes in a set of revIds separated by "|" and returns a list of the Wikipedia scores. It is possible that some revIds do not have a score, in which case we denote it with "Error". It is calling a Wikimedia API endpoint for a machine learning system called ORES ("Objective Revision Evaluation Service"). More information about the API, including all the values returned and the ratings definitions, can be found here: https://www.mediawiki.org/wiki/ORES.
###Code
# Function to get scores from the ORES API in batches
def get_ores_data(revision_ids):
    # Generate endpoint string with the appended "|"-separated revIds
    endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/?models=wp10&revids=' + revision_ids
# Make the JSON call
api_call = requests.get(endpoint)
response = api_call.json()
# Extract scores from response and handle errors
scores = [response['enwiki']['scores'][x]['wp10']['score']['prediction'] if "score" in response['enwiki']['scores'][x]['wp10'] else "Error" for x in response['enwiki']['scores']]
return scores
###Output
_____no_output_____
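###Markdown
A quick usage sketch (hypothetical revision IDs; the returned grades depend on ORES) showing the "|"-separated input format the function expects:
###Code
# hypothetical example: score two revisions in a single batched call
example_scores = get_ores_data("391862070|355319463")
print(example_scores)  # a list with one prediction (or "Error") per revision ID, e.g. ['C', 'Stub']
###Output
_____no_output_____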
###Markdown
Read in the Page_Data csv file: This section loads in a CSV file, Politicians by Country from the English-language Wikipedia by Oliver Keyes, and stores it in the "data" list. The download as well as additional information can be found at: https://figshare.com/articles/Untitled_Item/5513449
###Code
## getting the data from the CSV files
data = []
with open('page_data.csv') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append([row[0],row[1],row[2]])
###Output
_____no_output_____
###Markdown
Get the ORES score for articles in the Page_Data csv file: This section obtains the scores from the ORES API for the articles in the Page_Data csv file in batches. The batch size is set as a parameter. I skip the first row of the csv since it's just the header. I loop through the Page_Data articles in batches and store the scores in the "ores_scores" list. It was initialized with placeholders ("None").
###Code
#Let's get the length of page_data
page_data_len = len(data)
#Create an empty array to store the values as we get them
ores_scores = [None] * page_data_len
#Define Batch Size
batchSize = 130
# Initial starting index. 0 is header.
currStart = 1
#Loop through the page_data get the scores
while currStart < page_data_len:
#initialize revIds for this iteration
revids = ""
# Calculate the end index
currEnd = currStart + batchSize
# Make sure currEnd is not out of bounds
if currEnd > page_data_len -1:
currEnd = page_data_len -1
# Progress update. It takes a while.
print("Getting ",currStart, "-", currEnd)
# Construct the list of revIds in this batch and append "|"
for x in range(currStart,currEnd+1):
revids = revids + data[x][2] + "|"
# Remove the last "|" otherwise will cause an error with the API
revids = revids[:-1]
# Call the ORES API via the function defined above
myresp = get_ores_data(revids)
# Store the scores we got back in the ores_scores list.
ores_scores[currStart:currEnd+1] = myresp
# Update starting index for the next iteration
currStart = currEnd + 1
# Report the number of records retrieved.
print ("Got back", len(ores_scores), "Records")
###Output
Getting 1 - 131
Getting 132 - 262
Getting 263 - 393
Getting 394 - 524
Getting 525 - 655
Getting 656 - 786
Getting 787 - 917
Getting 918 - 1048
Getting 1049 - 1179
Getting 1180 - 1310
Getting 1311 - 1441
Getting 1442 - 1572
Getting 1573 - 1703
Getting 1704 - 1834
Getting 1835 - 1965
Getting 1966 - 2096
Getting 2097 - 2227
Getting 2228 - 2358
Getting 2359 - 2489
Getting 2490 - 2620
Getting 2621 - 2751
Getting 2752 - 2882
Getting 2883 - 3013
Getting 3014 - 3144
Getting 3145 - 3275
Getting 3276 - 3406
Getting 3407 - 3537
Getting 3538 - 3668
Getting 3669 - 3799
Getting 3800 - 3930
Getting 3931 - 4061
Getting 4062 - 4192
Getting 4193 - 4323
Getting 4324 - 4454
Getting 4455 - 4585
Getting 4586 - 4716
Getting 4717 - 4847
Getting 4848 - 4978
Getting 4979 - 5109
Getting 5110 - 5240
Getting 5241 - 5371
Getting 5372 - 5502
Getting 5503 - 5633
Getting 5634 - 5764
Getting 5765 - 5895
Getting 5896 - 6026
Getting 6027 - 6157
Getting 6158 - 6288
Getting 6289 - 6419
Getting 6420 - 6550
Getting 6551 - 6681
Getting 6682 - 6812
Getting 6813 - 6943
Getting 6944 - 7074
Getting 7075 - 7205
Getting 7206 - 7336
Getting 7337 - 7467
Getting 7468 - 7598
Getting 7599 - 7729
Getting 7730 - 7860
Getting 7861 - 7991
Getting 7992 - 8122
Getting 8123 - 8253
Getting 8254 - 8384
Getting 8385 - 8515
Getting 8516 - 8646
Getting 8647 - 8777
Getting 8778 - 8908
Getting 8909 - 9039
Getting 9040 - 9170
Getting 9171 - 9301
Getting 9302 - 9432
Getting 9433 - 9563
Getting 9564 - 9694
Getting 9695 - 9825
Getting 9826 - 9956
Getting 9957 - 10087
Getting 10088 - 10218
Getting 10219 - 10349
Getting 10350 - 10480
Getting 10481 - 10611
Getting 10612 - 10742
Getting 10743 - 10873
Getting 10874 - 11004
Getting 11005 - 11135
Getting 11136 - 11266
Getting 11267 - 11397
Getting 11398 - 11528
Getting 11529 - 11659
Getting 11660 - 11790
Getting 11791 - 11921
Getting 11922 - 12052
Getting 12053 - 12183
Getting 12184 - 12314
Getting 12315 - 12445
Getting 12446 - 12576
Getting 12577 - 12707
Getting 12708 - 12838
Getting 12839 - 12969
Getting 12970 - 13100
Getting 13101 - 13231
Getting 13232 - 13362
Getting 13363 - 13493
Getting 13494 - 13624
Getting 13625 - 13755
Getting 13756 - 13886
Getting 13887 - 14017
Getting 14018 - 14148
Getting 14149 - 14279
Getting 14280 - 14410
Getting 14411 - 14541
Getting 14542 - 14672
Getting 14673 - 14803
Getting 14804 - 14934
Getting 14935 - 15065
Getting 15066 - 15196
Getting 15197 - 15327
Getting 15328 - 15458
Getting 15459 - 15589
Getting 15590 - 15720
Getting 15721 - 15851
Getting 15852 - 15982
Getting 15983 - 16113
Getting 16114 - 16244
Getting 16245 - 16375
Getting 16376 - 16506
Getting 16507 - 16637
Getting 16638 - 16768
Getting 16769 - 16899
Getting 16900 - 17030
Getting 17031 - 17161
Getting 17162 - 17292
Getting 17293 - 17423
Getting 17424 - 17554
Getting 17555 - 17685
Getting 17686 - 17816
Getting 17817 - 17947
Getting 17948 - 18078
Getting 18079 - 18209
Getting 18210 - 18340
Getting 18341 - 18471
Getting 18472 - 18602
Getting 18603 - 18733
Getting 18734 - 18864
Getting 18865 - 18995
Getting 18996 - 19126
Getting 19127 - 19257
Getting 19258 - 19388
Getting 19389 - 19519
Getting 19520 - 19650
Getting 19651 - 19781
Getting 19782 - 19912
Getting 19913 - 20043
Getting 20044 - 20174
Getting 20175 - 20305
Getting 20306 - 20436
Getting 20437 - 20567
Getting 20568 - 20698
Getting 20699 - 20829
Getting 20830 - 20960
Getting 20961 - 21091
Getting 21092 - 21222
Getting 21223 - 21353
Getting 21354 - 21484
Getting 21485 - 21615
Getting 21616 - 21746
Getting 21747 - 21877
Getting 21878 - 22008
Getting 22009 - 22139
Getting 22140 - 22270
Getting 22271 - 22401
Getting 22402 - 22532
Getting 22533 - 22663
Getting 22664 - 22794
Getting 22795 - 22925
Getting 22926 - 23056
Getting 23057 - 23187
Getting 23188 - 23318
Getting 23319 - 23449
Getting 23450 - 23580
Getting 23581 - 23711
Getting 23712 - 23842
Getting 23843 - 23973
Getting 23974 - 24104
Getting 24105 - 24235
Getting 24236 - 24366
Getting 24367 - 24497
Getting 24498 - 24628
Getting 24629 - 24759
Getting 24760 - 24890
Getting 24891 - 25021
Getting 25022 - 25152
Getting 25153 - 25283
Getting 25284 - 25414
Getting 25415 - 25545
Getting 25546 - 25676
Getting 25677 - 25807
Getting 25808 - 25938
Getting 25939 - 26069
Getting 26070 - 26200
Getting 26201 - 26331
Getting 26332 - 26462
Getting 26463 - 26593
Getting 26594 - 26724
Getting 26725 - 26855
Getting 26856 - 26986
Getting 26987 - 27117
Getting 27118 - 27248
Getting 27249 - 27379
Getting 27380 - 27510
Getting 27511 - 27641
Getting 27642 - 27772
Getting 27773 - 27903
Getting 27904 - 28034
Getting 28035 - 28165
Getting 28166 - 28296
Getting 28297 - 28427
Getting 28428 - 28558
Getting 28559 - 28689
Getting 28690 - 28820
Getting 28821 - 28951
Getting 28952 - 29082
Getting 29083 - 29213
Getting 29214 - 29344
Getting 29345 - 29475
Getting 29476 - 29606
Getting 29607 - 29737
Getting 29738 - 29868
Getting 29869 - 29999
Getting 30000 - 30130
Getting 30131 - 30261
Getting 30262 - 30392
Getting 30393 - 30523
Getting 30524 - 30654
Getting 30655 - 30785
Getting 30786 - 30916
Getting 30917 - 31047
Getting 31048 - 31178
Getting 31179 - 31309
Getting 31310 - 31440
Getting 31441 - 31571
Getting 31572 - 31702
Getting 31703 - 31833
Getting 31834 - 31964
Getting 31965 - 32095
Getting 32096 - 32226
Getting 32227 - 32357
Getting 32358 - 32488
Getting 32489 - 32619
Getting 32620 - 32750
Getting 32751 - 32881
Getting 32882 - 33012
Getting 33013 - 33143
Getting 33144 - 33274
Getting 33275 - 33405
Getting 33406 - 33536
Getting 33537 - 33667
Getting 33668 - 33798
Getting 33799 - 33929
Getting 33930 - 34060
Getting 34061 - 34191
Getting 34192 - 34322
Getting 34323 - 34453
Getting 34454 - 34584
Getting 34585 - 34715
Getting 34716 - 34846
Getting 34847 - 34977
Getting 34978 - 35108
Getting 35109 - 35239
Getting 35240 - 35370
Getting 35371 - 35501
Getting 35502 - 35632
Getting 35633 - 35763
Getting 35764 - 35894
Getting 35895 - 36025
Getting 36026 - 36156
Getting 36157 - 36287
Getting 36288 - 36418
Getting 36419 - 36549
Getting 36550 - 36680
Getting 36681 - 36811
Getting 36812 - 36942
Getting 36943 - 37073
Getting 37074 - 37204
Getting 37205 - 37335
Getting 37336 - 37466
Getting 37467 - 37597
Getting 37598 - 37728
Getting 37729 - 37859
Getting 37860 - 37990
Getting 37991 - 38121
Getting 38122 - 38252
Getting 38253 - 38383
Getting 38384 - 38514
Getting 38515 - 38645
Getting 38646 - 38776
Getting 38777 - 38907
Getting 38908 - 39038
Getting 39039 - 39169
Getting 39170 - 39300
Getting 39301 - 39431
Getting 39432 - 39562
Getting 39563 - 39693
Getting 39694 - 39824
Getting 39825 - 39955
Getting 39956 - 40086
Getting 40087 - 40217
Getting 40218 - 40348
Getting 40349 - 40479
Getting 40480 - 40610
Getting 40611 - 40741
Getting 40742 - 40872
Getting 40873 - 41003
Getting 41004 - 41134
Getting 41135 - 41265
Getting 41266 - 41396
Getting 41397 - 41527
Getting 41528 - 41658
Getting 41659 - 41789
Getting 41790 - 41920
Getting 41921 - 42051
Getting 42052 - 42182
Getting 42183 - 42313
Getting 42314 - 42444
Getting 42445 - 42575
Getting 42576 - 42706
Getting 42707 - 42837
Getting 42838 - 42968
Getting 42969 - 43099
Getting 43100 - 43230
Getting 43231 - 43361
Getting 43362 - 43492
Getting 43493 - 43623
Getting 43624 - 43754
Getting 43755 - 43885
Getting 43886 - 44016
Getting 44017 - 44147
Getting 44148 - 44278
Getting 44279 - 44409
Getting 44410 - 44540
Getting 44541 - 44671
Getting 44672 - 44802
Getting 44803 - 44933
Getting 44934 - 45064
Getting 45065 - 45195
Getting 45196 - 45326
Getting 45327 - 45457
Getting 45458 - 45588
Getting 45589 - 45719
Getting 45720 - 45850
Getting 45851 - 45981
Getting 45982 - 46112
Getting 46113 - 46243
Getting 46244 - 46374
Getting 46375 - 46505
Getting 46506 - 46636
Getting 46637 - 46767
Getting 46768 - 46898
Getting 46899 - 47029
Getting 47030 - 47160
Getting 47161 - 47197
Got back 47198 Records
###Markdown
Merge Page_Data and the ORES scores in a DataFrame: In this section, I push Page_Data into a DataFrame and set column names. I also append the retrieved ores_scores to the same DataFrame.
###Code
# Push Page_Data into a DataFrame and set column names
df = pd.DataFrame(data[1:], columns=['article_name','country', 'revision_id'])
# Append the ores_scores retrieved.
df["article_quality"] = ores_scores[1:]
# Display a sample
df[0:10]
###Output
_____no_output_____
###Markdown
Read in the "Population Mid-2015" csv fileThis section loads in a CSV file, "Population Mid-2015" from the Population Reference Bureau, and stores it the "pop_data" DataFrame. The download as well as additional information can be found at: http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14. I select the "country" and "population" columns and set the column names. Note: that I had to manually delete a few extra lines at the top of the file so that the headers are the first row before importing.
###Code
# Read csv into DataFrame
pop_data = pd.read_csv('Population Mid-2015.csv', thousands=',')
# Select columns
pop_data = pop_data.iloc[:,[0,4]]
# Name selected columns
pop_data.columns = ["country", "population" ]
# Display a sample
pop_data[0:10]
###Output
_____no_output_____
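###Markdown
As an alternative to manually deleting the preamble rows, pandas can skip them at read time. This is a sketch; the exact number of preamble lines in the PRB export is an assumption (the first notebook above used header=1 on the same file).
###Code
# hypothetical alternative: let pandas skip the preamble instead of editing the file by hand
pop_data = pd.read_csv('Population Mid-2015.csv', header=1, thousands=',')
###Output
_____no_output_____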
###Markdown
Merge the Page_Data (+ ORES score) with the Population Data: This section merges the Page_Data (+ ORES score) with the Population Data. I use an inner join so I only get back rows that have country values in both datasets.
###Code
# Merge data from both data sets
myData = df.merge(pop_data, on='country', how='inner')
# Display number of returned records
print ("Got", len(myData), "Records after merge")
# Display sample
myData[0:10]
###Output
Got 45799 Records after merge
###Markdown
Step 2: Data Processing (By Country Aggregation). Population for each Country: In this section, I'm getting a unique row for each country with the population data.
###Code
# Select country and population columns then dedupe.
myData_pop = myData.iloc[:,[1,4]].drop_duplicates()
# Print number of records
print ("Got", len(myData_pop), "Records after merge")
# Display a sample
myData_pop[0:10]
###Output
_____no_output_____
###Markdown
Article Count for each Country: In this section, I do a Group By on "country" and count the number of rows, which gives the total articles per country. I have to re-add the "country" column so I can join later.
###Code
# Select columns
myData_art = myData.iloc[:,[0,1]]
# Get counts after Group By
myData_art = myData_art.groupby("country").count()
# Re-add country as a column
myData_art["country"] = myData_art.index
# Set column names
myData_art.columns = ['articleCount', 'country']
# Display sample
myData_art[0:10]
###Output
_____no_output_____
###Markdown
High Quality Article Count for each Country: In this section, I first filter the dataset to just High Quality articles. I do a Group By on "country" and count the number of rows, which gives the total High Quality articles per country. I have to re-add the "country" column so I can join later.
###Code
# Filter to just High Quality articles ("GA" and "FA")
myData_HQ = myData[myData['article_quality'].isin(['GA','FA'])]
# Select columns
myData_HQ = myData_HQ.iloc[:,[0,1]]
# Get counts after Group By
myData_HQ = myData_HQ.groupby("country").count()
# Re-add country as a column
myData_HQ["country"] = myData_HQ.index
# Set column names
myData_HQ.columns = ['HQarticleCount', 'country']
# Display sample
myData_HQ.head()
###Output
_____no_output_____
###Markdown
Merge the 3 datasets and calculate percentages: In this section, I merge the 3 datasets from immediately above. I fill in zeros for countries without any High Quality articles. Then I calculate percentages for "articles-per-population-percent" and "FA-GA-articles-percent".
###Code
# Merge the population dataset with the total articles dataset.
myAnalysisData = myData_pop.merge(myData_art, on='country', how='left')
# Merge the combined dataset with the total High Quality articles dataset.
myAnalysisData = myAnalysisData.merge(myData_HQ, on='country', how='left')
# Fill in zeros for countries without any High Quality articles.
myAnalysisData['HQarticleCount'].fillna(0, inplace=True)
# Calculate percentages
myAnalysisData['articles-per-population-percent'] = (myAnalysisData['articleCount']/myAnalysisData['population'])*100
myAnalysisData['FA-GA-articles-percent'] = (myAnalysisData['HQarticleCount']/myAnalysisData['articleCount'])*100
# Display sample
myAnalysisData[0:10]
###Output
_____no_output_____
###Markdown
Step 3: Analysis. For this section, we generate four tables that show:

* 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
* 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
* 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
* 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country

10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
myAnalysisData.sort_values(by='articles-per-population-percent', ascending= False)[0:10]
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
myAnalysisData.sort_values(by='articles-per-population-percent', ascending= True)[0:10]
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
myAnalysisData.sort_values(by='FA-GA-articles-percent', ascending= False)[0:10]
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
myAnalysisData.sort_values(by='FA-GA-articles-percent', ascending= True)[0:10]
###Output
_____no_output_____
###Markdown
The sort order for the 0s may be arbitrary; this just shows 10. Step 4: Final Output File. Output the file to a csv (with some column re-ordering).
###Code
# Extract the key columns
final_output = myData.iloc[:,[1,0,2,3,4]]
#Write out the final output
filename = 'en-wikipedia_articles_country_population_and_ratings.csv'
final_output.to_csv(filename)
###Output
_____no_output_____
###Markdown
Bias on Wikipedia. Todd Schultz. Due: November 2, 2017. Bias is an increasingly important topic with today's reliance on data and algorithms. Here, bias in political articles on the English Wikipedia will be investigated in terms of the number of articles about politicians for each country normalized by population and the percentage of the total number of articles about politicians that are considered high-quality as predicted by a machine learning model. The results can then be reviewed to observe any biases or trends present. Imports: The Python libraries used in the analysis throughout this notebook are imported here.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests
import json
import copy
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Import data of politicians by country: Import the data of politicians by country provided by Oliver Keyes and found at https://figshare.com/articles/Untitled_Item/5513449. This data set contains the name of the country, the name of the politician as represented by the name of the English Wikipedia article about them, and the revision or article identification number in the English Wikipedia.
###Code
politicianFile = 'PolbyCountry_data.csv'
politicianNames = pd.read_csv(politicianFile)
# rename variables
politicianNames.rename(columns = {'page':'article_name'}, inplace = True)
politicianNames.rename(columns = {'rev_id':'revision_id'}, inplace = True)
politicianNames[0:4]
politicianNames.shape
###Output
_____no_output_____
###Markdown
Import population by country: Import the population by country provided by PRB and found at http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14. The data is from mid-2015 and includes the name of the country and the population estimate.
###Code
countryFile = 'Population Mid-2015.csv'
tempDF = pd.read_csv(countryFile, header=1)
# change population to a numeric value
a = np.zeros(tempDF.shape[0])
for idata in range(0,tempDF.shape[0]):
b = tempDF['Data'][idata]
a[idata] = float(b.replace(',', ''))
#countryPop = pd.DataFrame(data={'country': tempDF['Location'], 'population': tempDF['Data']})
countryPop = pd.DataFrame(data={'country': tempDF['Location'], 'population': a})
countryPop[0:5]
###Output
_____no_output_____
###Markdown
Combined data: Combine the data frames into a single data frame with the following columns: country, article_name, revision_id, article_quality, population. Make a placeholder, empty variable for article_quality to be filled in in the next section using the Wikipedia ORES API for predicting article quality. Merging the data sets here also eliminates any entries in the politician names whose country's population is unavailable and removes any countries that have no English Wikipedia articles about their politicians.
###Code
# First add placeholder to politicianNames dataframe for article quality
politicianNames = politicianNames.assign(article_quality = "")
# Next, join politicianNames with countryPop
politicData = politicianNames.merge(countryPop,how = 'inner')
#politicianNames[0:5]
politicData[0:5]
politicData.shape
###Output
_____no_output_____
###Markdown
ORES article quality data: Retrieve the predicted article quality using the ORES service. ORES ("Objective Revision Evaluation Service") is a machine learning system trained on pre-graded Wikipedia articles for the purpose of predicting article quality. The service is found at https://www.mediawiki.org/wiki/ORES and documentation is found at https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model. The output of the API service is a prediction of the probability of the article quality being assigned to one of six different classes, listed below from best to worst:

* FA - Featured article
* GA - Good article
* B - B-class article
* C - C-class article
* Start - Start-class article
* Stub - Stub-class article

The category with the highest probability is selected as the predicted quality grade.
###Code
# ORES
# Construct API call
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/{revid}/{model}'
headers = {'User-Agent' : 'https://github.com/your_github_username', 'From' : '[email protected]'}
# loop over all articles to retrieve predicted quality grades
for irevid in range(0, politicData.shape[0]):
revidstr = str(politicData['revision_id'][irevid])
#print(revidstr)
params = {'project' : 'enwiki',
'model' : 'wp10',
'revid' : revidstr
}
try:
        api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
#print(json.dumps(response, indent=4, sort_keys=True))
# Store article quality in the dataframe
politicData.loc[irevid,'article_quality'] = response['enwiki']['scores'][revidstr]['wp10']['score']['prediction']
except:
print('Error at ' + str(irevid))
if irevid % 500 == 0:
print(irevid)
# Write out csv file
politicData.to_csv('en-wikipedia_bias_2015.csv', index=False)
politicData[0:4]
# Drop the row without article quality scores
# politicData.drop(politicData.index[[14258,14259]])
#politicData['article_quality'][14258,14259]
print(politicData.shape)
politicData = politicData.loc[~(politicData['article_quality'] == '')]
print(politicData.shape)
# Read in csv file if needed
# The ORES calls to retrieve all the predicted article quality grades can be long, thus storing the
# results locally as a file can save time reloading if needed.
#politicData = pd.read_csv('en-wikipedia_bias_2015.csv')
#politicData[0:4]
###Output
_____no_output_____
###Markdown
Analysis: The data set is now processed to accumulate counts of the number of articles for each country and to consider the percentage of articles from each country that are predicted to be 'high-quality'. For the purpose of this analysis, high-quality articles are defined to be articles with a predicted ORES quality grade of either 'FA', a featured article, or 'GA', a good article. The total number of articles for each country is also normalized by the country's population. Visualizations: Along with generating the numeric analysis results, four visualizations are created to help better understand the data. The four visualizations are plots of the numeric results for one of the processed parameters, the number of articles for each country normalized by population, and the percentage of high-quality articles for each country, each for the top 10 and bottom 10 ranked countries. The results are then reviewed for any observed trends.
###Code
# Create dataframe variables
# Find all unique countries with politician articles
uniquecountries = copy.deepcopy(politicData.country.unique())
# Initialize dataframe for the results
countryData = pd.DataFrame(data={'country': uniquecountries})
countryData = countryData.assign(**{'article_per_pop_percent': np.zeros(uniquecountries.shape[0])})
countryData = countryData.assign(**{'highqual_art_percent': np.zeros(uniquecountries.shape[0])})
countryData = copy.deepcopy(countryData)
print(countryData.shape)
countryData[0:4]
# Compute the processed results
# disable warning about sliced variable assignment in the dataframe, found on stackoverflow.com
pd.options.mode.chained_assignment = None # default='warn'
# Compute articles-per-population for each country, and percent high-quality articles for each country
for icountry in range(0,countryData.shape[0]):
loopcountry = countryData['country'][icountry]
looppop = countryPop['population'][countryPop['country'] == loopcountry]
# find articles for politicians from loopcountry
Idxarts = politicData['country'] == loopcountry
looparticles = copy.copy(politicData['article_quality'][Idxarts])
IdxGA = looparticles == 'GA'
IdxFA = looparticles == 'FA'
nHQarts = sum(IdxGA) + sum(IdxFA)
#countryData.loc[icountry,'article_per_pop_percent'] = 100*sum(Idxarts)/looppop
#countryData.loc[icountry,'highqual_art_percent'] = 100*nHQarts/sum(Idxarts)
countryData['article_per_pop_percent'][icountry] = 100*sum(Idxarts)/looppop
countryData['highqual_art_percent'][icountry] = 100*nHQarts/sum(Idxarts)
countryData[0:4]
###Output
_____no_output_____
###Markdown
Create bar graphs for the top 10 and bottom 10 countries with respect to the number of politician articles normalized by population.
###Code
# sort countryData by article_per_pop_percent
cdsorted = countryData.sort_values(by='article_per_pop_percent', ascending=0)
cdsorted[0:4]
# 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['article_per_pop_percent'][0:10])
plt.title('Top 10 Countries for Articles per Population')
plt.ylabel('Politician Articles per Population (%)')
plt.xticks(range(0,10), cdsorted['country'][0:10], rotation=90)
plt.ylim((0,0.5))
plt.tight_layout()
plt.savefig('Top10ArticlesperPopulation.jpg')
# 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['article_per_pop_percent'][-10:])
plt.title('Bottom 10 Countries for Articles per Population')
plt.ylabel('Politician Articles per Population (%)')
plt.xticks(range(0,10), cdsorted['country'][-10:], rotation=90)
plt.ylim((0,0.0005))
plt.tight_layout()
plt.savefig('Bottom10ArticlesperPopulation.jpg')
###Output
_____no_output_____
###Markdown
Create bar graphs for the top 10 and bottom 10 countries with respect to the percentage of high-quality articles.
###Code
# sort countryData by article_per_pop_percent
cdsorted = countryData.sort_values(by='highqual_art_percent', ascending=0)
cdsorted[0:4]
# 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['highqual_art_percent'][0:10])
plt.title('Top 10 Countries for Percentage of High-quality Articles')
plt.ylabel('Percent of high-quality articles (%)')
plt.xticks(range(0,10), cdsorted['country'][0:10], rotation=90)
plt.ylim((0,15))
plt.tight_layout()
plt.savefig('Top10HQArticlespercent.jpg')
# 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['highqual_art_percent'][-10:])
plt.title('Bottom 10 Countries for Percentage of High-quality Articles')
plt.ylabel('Percent of high-quality articles (%)')
plt.xticks(range(0,10), cdsorted['country'][-10:], rotation=90)
plt.ylim((0,0.0005))
plt.tight_layout()
plt.savefig('Bottom10HQArticlespercent.jpg')
# Investigate bottom 10 for percentage of high-quality articles
cdsorted['highqual_art_percent'][-10:]
# Okay, they are all zero. So, let's find all the countries that have no high-quality articles.
InoHQ = countryData['highqual_art_percent']==0
print('Total number of countries without high-quality articles: ' + str(sum(InoHQ)))
countryData['country'][InoHQ]
###Output
Total number of countries without high-quality articles: 39
###Markdown
The goal: Throughout this notebook I will show that a clear bias exists in the Wikipedia dataset. Specifically, we will discuss the bias in terms of politician coverage through the number and quality of the articles in relation to their home countries' populations. We will gather the data from two sources: one will be our source for population data and the other contains the relevant article metadata. For each article, we will query a machine learning service to estimate the quality of the article. And finally, we will generate a few tables and visualizations to display the bias. Data Load: Wikipedia Articles - [Figshare](https://figshare.com/articles/Untitled_Item/5513449). This dataset contains the article metadata we need to estimate the number and quality of articles within a given country. I download the data directly to this notebook, extract the compressed file contents, and stream directly into a Pandas DataFrame.
###Code
# imports for the data download, parallel ORES scoring, and plotting used in the cells below
import io
import itertools
import multiprocessing as mul
from zipfile import ZipFile

import numpy as np
import pandas as pd
import requests

import plotly.graph_objs as go
import plotly.io as pio
from IPython.display import Image

# download the data from figshare
figshare = 'https://ndownloader.figshare.com/files/9614893'
r = requests.get(figshare)
# make sure the result is valid
if r.ok:
# feed a byte stream into a ZipFile
stream = io.BytesIO(r.content)
zf = ZipFile(stream)
# locate the csv file within the list of files embedded in the ZipFile generated above
# I make sure to not include the files within the 'MAX OS' directory
file = [
f for f in zf.filelist if f.filename.find('page_data.csv') > 0 and f.filename.find('MAC') == -1
][0]
# extract the csv file and read into a pandas dataframe
page_data = pd.read_csv(zf.extract(file))
# print this if the request failed for some reason
else:
    print(f'failed to download page data: {r.status_code}')
page_data.head()
###Output
_____no_output_____
###Markdown
Population Data - [DropBox](https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0) location. This is a random DropBox location provided for the assignment. It contains each country and their population estimated at some point in the middle of 2018. Again, I download the file directly from DropBox and feed the resulting byte stream into a Pandas DataFrame.
###Code
# download the data from Drop Box
dropbox = 'https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=1'
r = requests.get(dropbox)
# make sure the result is valid
if r.ok:
# this time, feed the csv byte stream into a pandas dataframe directly
stream = io.BytesIO(r.content)
pop = pd.read_csv(stream)
# print this if the request failed for some reason
else:
    print(f'failed to download population data: {r.status_code}')
pop.head()
###Output
_____no_output_____
###Markdown
Right at the start we see inconsistencies in the format of our joining keys - Geography and country. I run some quick string tools to ensure they're consistent through both datasets. The columns are renamed in order to make the joining easier to read. Additionally, I verify that the joining keys are, in fact, keys, then perform an inner join to include only the countries that have data in both sets.
###Code
# explicitly rename the columns
pop.rename(columns={
'Geography':'country',
'Population mid-2018 (millions)':'population'
}, inplace=True)
page_data.rename(columns={
'page':'article_name',
'rev_id':'revision_id'
}, inplace=True)
# enforce string format consistency
pop.country = pop.country.apply(str.title)
page_data.country = page_data.country.apply(str.title)
# double check the 'keys' in each dataframe are in fact joinable keys by verifying uniqueness
assert len(pd.unique(pop.country)) == len(pop)
assert len(pd.unique([(t.article_name, t.country) for t in page_data.itertuples()])) == len(page_data)
# convert the population to a float - I use replace to remove any commas in the string represenation
pop.population = pop.population.apply(lambda x: float(x.replace(',', '')))
# merge the data frames
df = pop.merge(page_data, on='country', how='inner')
df.head()
len(df)
###Output
_____no_output_____
###Markdown
Make API calls to get article quality predictions: Now that we have the two datasets prepared and merged into one, we use the metadata to query the ORES machine learning service to get article quality estimates. There are over 45K revision ids in the dataset above. Because of this, I write a function to handle a single query and process them in parallel using a process pool.
###Code
def get_ores_data(revision_ids):
"""Make an API call to ORES
Args:
revision_ids: [(str | int),] list of revision ids to include with the query
Returns:
[(int, str),] A list of tuples containing two elements - [(revision_id, quality),]
"""
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - joining the revision IDs together separated by | marks.
params = {
'project': 'enwiki',
'model': 'wp10',
'revids': '|'.join(str(x) for x in revision_ids)
}
# make the call and verify the response before proceeding
response = requests.get(endpoint.format(**params))
if response.ok:
# convert the response to json
response = response.json()
# return the scores as a list of tuples, taking only the prediction
results = []
for rid in revision_ids:
# start at the parent node we're interested in
parent = response['enwiki']['scores'][str(rid)]['wp10']
# check for any errors, append either the error or prediction
if 'error' in parent.keys():
results.append((rid, parent['error']['type']))
else:
results.append((rid, parent['score']['prediction']))
return results
# if the request failed, return the status code and list of revision ids so we can retry later
else:
        return dict(err=response.status_code, revision_ids=revision_ids)
###Output
_____no_output_____
###Markdown
Process the revision ids in batches of 50 to preclude Wikimedia blocking the request. I also run these requests in parallel and cache the results. To force the cell to run again you can do one of two things: comment out the cell magic in line 1 or delete the cached pkl file.
###Code
%%cache api_results.pkl results
# define how many rev_ids to include in each API call
step = 50
# start a task pool using all available cores
pool = mul.Pool(mul.cpu_count())
# process the calls in parallel
results = list(pool.map(
get_ores_data,
[df.revision_id[i:i+step] for i in range(0, len(df), step)]
))
# make sure to kill the children
pool.close(); pool.join()
###Output
[Skipped the cell's code and loaded variables results from file 'C:\Users\lukew\OneDrive\School\DATA 512 [HCDS - Ethics]\DATA512_A2\api_results.pkl'.]
###Markdown
Create the final dataset: I flatten the list of batched requests into a single list, then convert them to a dataframe and join with the main.
###Code
# flatten the batches into a single list
results = list(itertools.chain.from_iterable(results))
# convert to a dataframe
results = pd.DataFrame(results, columns=['revision_id', 'prediction'])
# join with the main
df_ = df.merge(results, on='revision_id', how='inner')
# verify that each revision id was processed before resetting the variable
assert len(df) == len(df_)
df = df_; del df_
# persist the final dataframe to disk
df.to_csv('data-512-a2.csv', index=None)
df.head()
###Output
_____no_output_____
###Markdown
Analysis: I will create two tables to show the bias, one to answer the question regarding the number of articles per country and the other for quality. The number of articles is calculated as a ratio over the population count in order to compare the countries. For quality, I create a ratio of the number of good articles over the total number of articles for the country.
###Code
# reset the population index to make the following computations more readable
pop.set_index('country', inplace=True)
# group the dataframe by country , count the number of articles in each
# additionally, reset the index inorder to collapse the dataframe back
# and finally, rename the prediction column to what it now represents
table12 = df.loc[:, ['country', 'prediction']]\
.groupby(['country'])\
.count()\
.reset_index()\
.rename(columns={'prediction':'num_articles'})
# calculate the articles per million as a new column
table12['articles_per_million'] = [
np.round(t.num_articles/pop.loc[t.country].population, 1)
for t in table12.itertuples()
]
# sort the table by the number of articles per million
table12 = table12.sort_values(by='articles_per_million', ascending=False)
# extract the highest and lowest ten countries into separate dataframes
t1 = table12.drop(columns='num_articles').iloc[:10]
t2 = table12.drop(columns='num_articles').iloc[-10:]
t1; t2
###Output
_____no_output_____
###Markdown
Just from the number of articles we can see a very clear bias. The top two countries are much higher than the remaining. Tuvalu and Nauru are tiny islands in the South Pacific with less than 15k people each. The remaining countries on the top ten list, besides Iceland, are tiny nation states in the Pacific or Europe. Conversely, we see densely populated and/or repressed countries in the bottom ten. The numbers in the table speak loudly, giving a very wide range - (0.7, 5500). However, to drive the numbers home a bit more I show a boxplot of the values. Notice how extreme the top countries are as outliers. It is skewed so badly that you can only distinguish the top countries. Running the notebook rather than viewing it directly on GitHub lets you interact with the plot if you want to zoom into the dense region or hover to see which country each dot represents.
###Code
# create the first boxplot
h1 = go.Box(
name='.',
x=table12.articles_per_million,
text=table12.country,
boxpoints='all',
jitter=.9,
xaxis='x1',
marker=dict(color='#2A3D54')
)
# show the plot
fig = go.Figure(
[h1],
go.Layout(
        title='Politician Articles Per Million People',
height=400,
width=1000,
),
)
# write it to disk
pio.write_image(fig, 'hcds-a2-counts.png')
# because Plotly images do not display in GitHub, I use a byte stream to display
# rather than the standard plotly iplot function
ibytes = pio.to_image(fig, format='png')
Image(ibytes)
# uncomment the next line to display the interactive plot
#iplot(fig)
###Output
_____no_output_____
###Markdown
Create the second table with a proportion of articles estimated as having high quality.
###Code
# first, extract only the articles deemed high quality by creating a mask
# then, group by country and count the number of articles in each and reset the index
# rename the column to what it now represents
# finally, merge with table12 so we have the full count of articles in the same frame
table34 = df.loc[[p in ['GA', 'FA'] for p in df.prediction], ['country', 'prediction']]\
.groupby(['country'])\
.count()\
.reset_index()\
.rename(columns={'prediction':'num_quality_articles'})\
.merge(
table12.drop(columns='articles_per_million'),
on='country'
)
# calculate the percent of articles deemed high quality as a new column
table34['percent_quality'] = [
np.round(t.num_quality_articles/t.num_articles, 3)
for t in table34.itertuples()
]
# drop the unnecessary columns and sort by the percentage of quality articles
table34 = table34.drop(columns=['num_quality_articles', 'num_articles'])\
.sort_values(by='percent_quality', ascending=False)
# extract the highest and lowest ten into new dataframes
t3 = table34.iloc[:10]
t4 = table34.iloc[-10:]
t3; t4
###Output
_____no_output_____
###Markdown
Again, we see an enormous disparity between the top and bottom ten countries. The most surprising thing to me was the quality of articles. We see that North Korea has only 1.5 articles per million people but of those articles, they have a high percentage of good quality. I don't know who wrote these articles. Maybe the reason for such a high percentage is because the originating authors are from western sources, or this is a result of North Korean governance. Both are assumptions and it would be fun to explore with more data. The ten lowest quality do not necessarily surprise me. These countries are neither English-speaking nor known to have great public education systems that would lead to high quality articles being written. Again, I show a boxplot to visualize the skew. This is more symmetric than the number of articles but still skewed. But the skew isn't the most interesting part, it's the content and location of styles of government within the list. Again, I'd like to align this dataset with the world indicators to bring out more interesting insights.
###Code
# create the second boxplot
h2 = go.Box(
name='.',
x=table34.percent_quality,
text=table34.country,
boxpoints='all',
jitter=0.3,
xaxis='x2',
marker=dict(color='#B26C10')
)
# create the figure
fig = go.Figure(
[h2],
go.Layout(
title='Percentage of Articles Deemed High Quality',
height=400,
width=1000
)
)
# write it to disk
pio.write_image(fig, 'hcds-a2-quality.png')
# because Plotly images do not display in GitHub, I use a byte stream to display
# rather than the standard plotly iplot function
ibytes = pio.to_image(fig, format='png')
Image(ibytes)
# uncomment the next line to display the interactive plot
#iplot(fig)
###Output
_____no_output_____
###Markdown
Hasnah Said A2: Bias in Data October 12, 2021 Step 1: Get the Article and Population Data The datasets used in this assignment, politicians by country and world population, are obtained from Figshare and the Population Reference Bureau (PRB).
###Code
import pandas as pd
import numpy as np
import requests
import json
from collections import defaultdict
# Load the data
page_data = pd.read_csv('raw_data/page_data.csv')
wpds_data = pd.read_csv('raw_data/WPDS_2020_data.csv')
page_data.head()
wpds_data.head()
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data The following will be done to prepare the datasets for the analysis:* **page_data.csv:** Remove rows that contain 'Template:' in the page's name.* **WPDS_2020_data.csv:** Remove rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS names.* Retain the country-to-region mapping for the analysis section
###Code
clean_page_data = page_data[~page_data.page.str.contains("Template:")]
clean_wpds_country_data = wpds_data[~wpds_data.Name.str.isupper()]
clean_wpds_region_data = wpds_data[wpds_data.Name.str.isupper()]
# create a df with country-region mapping
country_region_dic = {}
sub_region = ""
for i, row in wpds_data.iterrows():
if row['Type'] == 'Sub-Region':
sub_region = row['Name']
elif row['Type'] == 'Country':
country_region_dic[row['Name']] = sub_region
country_region_df = pd.DataFrame(country_region_dic.items(), columns=['country', 'sub_region'])
# export the clean data
clean_page_data.to_csv('clean_data/clean_page_data.csv')
clean_wpds_country_data.to_csv('clean_data/clean_wpds_country_data.csv')
clean_wpds_region_data.to_csv('clean_data/clean_wpds_region_data.csv')
country_region_df.to_csv('clean_data/country_region_mapping.csv')
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Predictions In this step, I will use ORES to get the predicted quality score for each article in the Wikipedia dataset using its REST API. ORES supports querying for up to 50 revisions per request, with up to 4 parallel requests. (Source: https://www.mediawiki.org/wiki/ORES) The article quality estimates are, from best to worst: 1. FA - Featured article 2. GA - Good article 3. B - B-class article 4. C - C-class article 5. Start - Start-class article 6. Stub - Stub-class article I made batch API calls, stored each prediction with its revision ID in a Python dictionary, converted the scores dictionary to a pandas DataFrame, and merged it with page_data by adding a column for the predictions.
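As a point of illustration only, the sketch below is a hedged, hypothetical helper (not part of the original assignment code) showing how the 50-revision batches could also be issued with up to 4 parallel requests via `concurrent.futures`; it assumes the same endpoint and the same response path (`['enwiki']['scores'][rev_id]['articlequality']['score']['prediction']`) used in the batch loop that follows.
###Code
# Hedged sketch: parallel ORES batch scoring (assumed helper, not the notebook's original code)
from concurrent.futures import ThreadPoolExecutor
import requests
ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={}"
def score_batch(rev_ids):
    """Return {rev_id: predicted quality class} for one batch of up to 50 revision IDs."""
    response = requests.get(ORES_URL.format("|".join(str(r) for r in rev_ids))).json()
    results = {}
    for rev_id, score in response['enwiki']['scores'].items():
        try:
            results[rev_id] = score['articlequality']['score']['prediction']
        except KeyError:
            results[rev_id] = 'error'  # no prediction available for this revision
    return results
def score_all(rev_ids, batch_size=50, workers=4):
    """Score all revision IDs in batches, using up to `workers` parallel requests."""
    batches = [rev_ids[i:i + batch_size] for i in range(0, len(rev_ids), batch_size)]
    all_scores = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for batch_result in pool.map(score_batch, batches):
            all_scores.update(batch_result)
    return all_scores
###Output
_____no_output_____
###Markdown
The sequential batch implementation actually used for this analysis follows.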
###Code
# Example of an ORES API call for batch revisions:
# http://ores.wmflabs.org/v3/scores/enwiki/?models=draftquality|wp10&revids=34854345|485104318
ores_get_scores_url = "https://ores.wikimedia.org/v3/scores/enwiki/?models=articlequality&revids={}"
# Get all the revision IDs from clean_data
all_rev_ids = clean_page_data['rev_id'].tolist()
# Break all_rev_ids into chunks of 50 to make the API calls and get article predictions
n = 50
rev_ids_chunks = [all_rev_ids[i:i + n] for i in range(0, len(all_rev_ids), n)]
# Make the API call and get predictions and store them in a dictionary
all_scores = {}
for chunk in rev_ids_chunks:
revids = "|".join([str(element) for element in chunk])
endpoint = ores_get_scores_url.format(revids)
req = (requests.get(endpoint)).json()
scores = req['enwiki']['scores']
for s in scores:
try:
all_scores[s] = scores[s]['articlequality']['score']['prediction']
except:
all_scores[s] = 'error'
# Write out all_scores dictionary so that I don't have to make another api call
with open('clean_data/all_scores.txt', 'w') as outfile:
json.dump(all_scores, outfile)
rev_pred = pd.DataFrame(all_scores.items(), columns=['rev_id', 'article_quality_est'])
rev_pred.rev_id = rev_pred.rev_id.astype(int)
clean_page_data_with_preds = pd.merge(clean_page_data, rev_pred, on='rev_id')
clean_page_data_with_preds.to_csv('clean_data/clean_page_data_with_preds.csv')
clean_page_data_with_preds.head()
###Output
_____no_output_____
###Markdown
Step 4: Combining the Datasets In this step, page_data and population_data are combined into one dataframe based on the country. After merging the data, there will be entries that can't be matched; these will be removed and stored separately. The data will then be exported to two CSV files: one with the rows that had no matches and one with the final combined data
###Code
final_combined_data = pd.merge(clean_page_data_with_preds, clean_wpds_country_data, left_on='country', right_on='Name', how='outer')
final_combined_data
# Create a dataframe with no null rows
wp_wpds_politicians_by_country = final_combined_data.dropna(how='any',axis=0)
# Create a dataframe with all the null rows
wp_wpds_countries_no_match = final_combined_data[final_combined_data.isnull().any(axis=1)]
# Check if numbers match up
(len(final_combined_data)) == (len(wp_wpds_politicians_by_country) + len(wp_wpds_countries_no_match))
# Drop extra columns, rename, and reorder the rest in the final dataframe
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country.drop(columns=['FIPS', 'Type', 'Name', 'TimeFrame', 'Data (M)'])
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country.rename(columns={'page': 'article_name', 'rev_id': 'revision_id', 'Population':'population'})
wp_wpds_politicians_by_country = wp_wpds_politicians_by_country[["country", "article_name", "revision_id", "article_quality_est", "population"]]
wp_wpds_politicians_by_country
# Export final dataframes to CSV files
wp_wpds_politicians_by_country.to_csv('clean_data/wp_wpds_politicians_by_country.csv')
wp_wpds_countries_no_match.to_csv('clean_data/wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Step 5: Analysis Calculate the proportion (percentage) of articles per population and high-quality articles for each country and each geographic region. * percentage of article coverage per country * percentage of high-quality articles per country **The steps I followed to get the articles-per-population proportion:** 1. create a dataframe with an article count column 2. merge the dataframe on country to get population 3. divide the count column by population to get the coverage percentage (articles_per_population) **The steps I followed to get the high-quality proportion:** 1. select articles with rating GA or FA 2. group the rows by country and add up the high-quality counts 3. divide the high-quality count by the total article count per country
###Code
# country_article_dic = defaultdict(int)
# country_article_rating = defaultdict(int)
# country_population_dic = {}
# for index, row in wp_wpds_politicians_by_country.iterrows():
# country = row.values[0]
# rating = row.values[3]
# population = row.values[4]
# country_article_dic[country] += 1
# country_population_dic[country] = population
# if rating == 'GA' or rating == 'FA':
# country_article_rating[country] += 1
df = wp_wpds_politicians_by_country
# count the number of high quality rows
high_quality = df[(df['article_quality_est']=='GA')|(df['article_quality_est']=='FA' )]
high_quality = high_quality.groupby(['country']).size().reset_index(name='high_quality_count')
high_quality.sort_values('high_quality_count')
# count the articles for each country
article_counts = wp_wpds_politicians_by_country.country.value_counts().reset_index().rename(columns={'index':'country', 'country':'article_count'})
# merge article counts with population and drop duplicates
article_count_population = pd.merge(article_counts, wp_wpds_politicians_by_country, on='country').drop_duplicates(subset=['country'])
# drop extra columns
article_count_population_clean = article_count_population.drop(columns=['article_name', 'revision_id', 'article_quality_est']).reset_index()
# create coverage column that has the articles_per_population percentage (count/population)
article_count_population_clean['coverage'] = (article_count_population_clean['article_count']/article_count_population_clean['population']) * 100
# merge article count and high quality count dataframes
final = pd.merge(article_count_population_clean, high_quality, on='country', how='left')
final
# calculate high quality article proportion
final['high_quality_count'] = final['high_quality_count'].fillna(0)
final['high_quality_proportion'] = (final['high_quality_count']/final['article_count']) * 100
# add subregion column
final = pd.merge(final, country_region_df, on='country', how='left')
final.sort_values('high_quality_proportion')
# drop unnecessary columns for step 6 analysis
final_data = final.drop(columns=['index', 'population'])
final_data.head()
###Output
_____no_output_____
###Markdown
Step 6: Results **1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population**
###Code
final_data.sort_values('coverage', ascending=False).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
**2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population**
###Code
final_data.sort_values('coverage', ascending=True).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
**3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality**
###Code
final_data.sort_values('high_quality_proportion', ascending=False).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
**4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality**
###Code
final_data.sort_values('high_quality_proportion', ascending=True).reset_index(drop=True).head(10)
###Output
_____no_output_____
###Markdown
**5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population** To answer this question, I'll add a column for each region then merge it with clean_wpds_region_data to get population
###Code
sub_region_article = final.groupby(['sub_region'])['article_count'].sum()
sub_region_article = pd.merge(sub_region_article, clean_wpds_region_data, left_on='sub_region', right_on='Name')
sub_region_article = sub_region_article.drop(columns=['FIPS', 'Type', 'TimeFrame', 'Data (M)'])
sub_region_article['region_coverage'] = (sub_region_article['article_count']/sub_region_article['Population']) * 100
sub_region_article = sub_region_article.rename(columns={'Name': 'sub_region', 'Population':'population'})
sub_region_article = sub_region_article[['sub_region', 'population', 'article_count', 'region_coverage']]
sub_region_article.sort_values('region_coverage', ascending=False)
###Output
_____no_output_____
###Markdown
**6. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality**
###Code
high_quality_count = final.groupby(['sub_region'])['high_quality_count'].sum()
sub_region_article = pd.merge(high_quality_count, sub_region_article, on='sub_region')
sub_region_article['high_quality_proportion'] = (sub_region_article['high_quality_count']/sub_region_article['article_count']) * 100
sub_region_article.sort_values('high_quality_proportion', ascending=False)
###Output
_____no_output_____
###Markdown
A2 - Bias in Data Assignment Data 512: Human Centered Data Science Aaliyah Hänni 10/7/2021 Project Overview The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. Wikipedia articles and country populations datasets are combined, and ORES is used to estimate the quality of each article by country. This notebook contains a step-by-step analysis, from data acquisition to results, of how the coverage of politicians on Wikipedia and the quality of articles about politicians vary between countries. The 'Results' section of this notebook contains tables that display: 1. the countries with the greatest and least coverage of politicians on Wikipedia compared to their population. 2. the countries with the highest and lowest proportion of high quality articles about politicians. 3. a ranking of geographic regions by articles-per-person and proportion of high quality articles. The 'Reflection' section contains a short reflection on the project that focuses on how both the findings from this analysis and the process we went through to reach them helped me to understand the causes and consequences of biased data in large, complex data science projects.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Data Acquisition There are two data sources used for this analysis, one for the Wikipedia articles and another for the world population. The Wikipedia politicians by country dataset can be found on Figshare. The population data is available in CSV format as WPDS_2020_data.csv. This dataset is drawn from the world population data sheet published by the Population Reference Bureau. Data Source 1: Politicians by Country from the English-language Wikipedia The data was extracted via the Wikimedia API using the associated code. It is formatted as a CSV and saved as page_data.csv in the "data" directory. Columns are: 1. "country", containing the sanitised country name, extracted from the category name; 2. "page", containing the unsanitised page title. 3. "last_edit", containing the edit ID of the last edit to the page. Data Source: https://figshare.com/articles/dataset/Untitled_Item/5513449 Keyes, Os (2017): Politicians by Country from the English-language Wikipedia. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5513449.v6
###Code
#importing wikipedia politicians pages and their countries
wiki_country_politician = pd.read_csv("data/page_data.csv")
wiki_country_politician.head()
###Output
_____no_output_____
###Markdown
Data Source 2: World Population Data Sheet This dataset was extracted from the Population Reference Bureau. It contains the world population counts by region for 2019. Columns are: 1. "FIPS", contains the Federal Information Processing Standards code for the place 2. "Name", contains the name of the place 3. "Type", contains the type of place: World, Sub-Region, or Country 4. "TimeFrame", contains the year (2019) 5. "Data (M)", contains the population count in millions 6. "Population", contains the population count. About the data: https://www.prb.org/international/indicator/population/table/ Data Source: https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/editgid=283125346
###Code
#importing the world population (2020) data sheet
world_population_2019 = pd.read_csv("data/WPDS_2020_data.csv")
world_population_2019.head()
###Output
_____no_output_____
###Markdown
Data Processing Both page_data.csv and WPDS_2020_data.csv contain some rows that need to be filtered out and/or ignored when combining the datasets below. In the case of page_data.csv, the dataset contains some page names that start with the string "Template:". Note that these pages are not Wikipedia articles, and should not be included in the analysis. Similarly, WPDS_2020_data.csv contains some rows that provide cumulative regional population counts, rather than country-level counts. These rows are distinguished by having ALL CAPS values in the 'geography' field (e.g. AFRICA, OCEANIA). These rows won't match the country values in page_data.csv, but will be retained (either in the original file, or a separate file) so that we can report coverage and quality by region in the analysis section.
###Code
#select row indices with "Template:"
templates_rows = wiki_country_politician[wiki_country_politician['page'].str.contains('Template:')].index
#removing pages that start with the string "Template:"
wiki_country_politician = wiki_country_politician.drop(index = templates_rows)
wiki_country_politician.head()
#removing non-country types from population dataset
#list of all non-country (=True) and country (=False) by indice
isupper = world_population_2019['Name'].str.isupper()
#get list of all regional population (not including countries)
regional_population = world_population_2019[isupper]
#get list of only country populations
country_population = world_population_2019[isupper == False]
country_population_full = country_population
country_population.head()
###Output
_____no_output_____
###Markdown
Getting Article Quality Predictions To get the predicted quality scores for each article in the Wikipedia dataset, we're using a machine learning system called ORES. This was originally an acronym for "Objective Revision Evaluation Service" but was simply renamed "ORES". ORES is a machine learning tool that can provide estimates of Wikipedia article quality. The article quality estimates are, from best to worst: 1. FA - Featured article 2. GA - Good article 3. B - B-class article 4. C - C-class article 5. Start - Start-class article 6. Stub - Stub-class article. These were learned based on articles in Wikipedia that were peer-reviewed using the Wikipedia content assessment procedures. These quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. In order to get article predictions for each article in the Wikipedia dataset, we will first need to read page_data.csv into Python, and then read through the dataset line by line, using the value of the rev_id column to make an API query. ORES REST API - Documentation: https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model Note: The ORES API returns a prediction value that contains the name of one category, as well as probability values for each of the 6 quality categories. For this assignment, we only need to capture and use the value for prediction. Missing Predictions It is important to mention that some Wikipedia articles are not able to receive a score from the ORES API. Due to limits in the ORES model, there are inconclusive predictions. The list of Wikipedia articles that were not able to receive a valid score is stored in a csv file labeled "wikipedia_politcian_pages_no_ORES_pred.csv", which contains the following columns: 1. page, the name of the Wikipedia article 2. country, the associated country that the article's politician represents 3. rev_id, the unique id used to identify the article
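As a minimal, hedged illustration (assuming the same endpoint and response layout used by the batch code below; the function name and the example revision ID are hypothetical), a single revision can be scored and only the prediction field kept like this:
###Code
# Hedged sketch: query ORES for one revision ID and keep only the predicted quality class
import requests
def get_single_prediction(rev_id):
    url = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={}'.format(rev_id)
    response = requests.get(url).json()
    score = response['enwiki']['scores'][str(rev_id)]['articlequality']
    # the API also returns per-class probabilities; only the prediction is captured here
    return score.get('score', {}).get('prediction', 'error')
# example usage with a hypothetical revision ID:
# get_single_prediction(755878193)
###Output
_____no_output_____
###Markdown
The batched implementation used for the full dataset follows.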
###Code
import json
import requests
#api endpoint for getting ores scores
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_id}'
# Customize these with your own information
headers = {
'User-Agent': 'https://github.com/aaliyahfiala42',
'From': '[email protected]'
}
#function to access api, given id parameter
def api_call(endpoint, ids):
call = requests.get(endpoint.format(rev_id=ids), headers=headers)
response = call.json()
return response
#getting predictions for all revsision_ids, in batches of 50
temp_predictions = [] #store nested predictions temporarily
rev_id = [] #variable to store rev_id
pred = [] #variable to store final predictions, including errors
batchsize = 50 #set the batch size
for i in range(0, len(wiki_country_politician), batchsize):
#get the batch of revision id's
batch = wiki_country_politician.rev_id.iloc[i:i+batchsize] # the result might be shorter than batchsize at the end
json_results = api_call(endpoint, '|'.join(str(x) for x in batch))
scores = json_results['enwiki']['scores']
#parse json from latest batch
for p_id, p_info in scores.items():
temp_predictions.append(p_info['articlequality'])
rev_id.append(p_id) #store rev_id
#get all predictions from latest batch
for p in temp_predictions:
for p_id, p_info in p.items():
if p_id == 'score':
#store predicted quality
pred.append(p_info['prediction'])
else:
#error: could not get the predicted quailty
pred.append('error')
#reset temp variable
temp_predictions = []
#len(pred) #validate expected number of predictions
#len(rev_id) #validate expected number of rev id's
#convert lists to pd dataframes
pred = pd.DataFrame(pred, columns = ['pred'])
rev_id = pd.DataFrame(rev_id, columns = ['rev_id'])
#merge rev_id's with associated predictions into a single dataframe
predictions = pd.concat([rev_id, pred], axis = 1)
predictions.head()
#format rev id to ints
predictions['rev_id'] = [int(item) for item in predictions['rev_id']]
#printing out all of the revision id's of wiki pages that the quality could not be predicted by the model
rev_no_prediction = predictions[predictions['pred'] == 'error']['rev_id'].tolist()
#printing 275 pages without predictions
wiki_no_pred = wiki_country_politician[wiki_country_politician['rev_id'].isin(rev_no_prediction)]
# use option_context as a context manager so the display limits actually take effect
with pd.option_context("display.max_rows", 300, "display.max_columns", 300):
    display(wiki_no_pred)
#save all no prediction values to a csv
wiki_no_pred.to_csv("wikipedia_politcian_pages_no_ORES_pred.csv")
#predictions[predictions['pred'] == 'error']
#get indice of error predictions
err_pred = predictions[predictions['pred'] == 'error'].index
#drop rev_ids with no prediction
predictions = predictions.drop(index = err_pred)
###Output
_____no_output_____
###Markdown
Combining Datasets Given the predictions and the two distinct datasets of Wikipedia articles and country populations, we now need to merge everything into a final dataset. This final dataset is labeled 'wp_wpds_politicians_by_country.csv' and contains the following columns: 1. country, the associated country name 2. article_name, the Wikipedia article name 3. revision_id, the unique revision id of the Wikipedia article 4. article_quality_est, the predicted ORES article quality class 5. population, the country population. As a result, there are several rows that cannot be matched because of missing predictions, null populations, or null articles. The rows that could not be merged are contained in the file labeled 'wp_wpds_countries-no_match.csv', which contains the following columns: 1. country, the associated country name 2. page, the Wikipedia article name 3. rev_id, the unique revision id of the Wikipedia article 4. pred, the predicted ORES article quality class 5. population, the country population 6. FIPS, contains the Federal Information Processing Standards codes for place 7. Name, contains the name of the place 8. TimeFrame, contains the year (2019) 9. Data (M), contains the population count in millions
###Code
#predictions.describe()
#wiki_country_politician.describe()
#country_population.describe()
#merge datasets by revision id
merged_revs = pd.merge(predictions, wiki_country_politician, how = 'left', on = 'rev_id')
#merge datasets by country
merged_pop = pd.merge(country_population, merged_revs, how = 'left', left_on = 'Name', right_on = 'country')
politician_by_country = merged_pop[['country', 'page', 'rev_id', 'pred', 'Population']]
politician_by_country.columns = ['country', 'article_name', 'revision_id', 'article_quality_est', 'population']
#politician_by_country.head()
#create csv of final merged dataset
politician_by_country.to_csv('wp_wpds_politicians_by_country.csv')
#get list of rows not merged
no_merged_revs = pd.merge(predictions, wiki_country_politician, how = 'outer', on = 'rev_id')
#get list of rows not merged
no_merged_pop = pd.merge(country_population, merged_revs, how = 'outer', left_on = 'Name', right_on = 'country')
#merge lists of no merges
no_match = pd.concat([no_merged_revs, no_merged_pop], axis = 0)
#list of no matches because pred was null, or population was null, or page was null
#no_match.tail()
#create a csv of data not matched
no_match.to_csv('wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Analysis The analysis below consists of several calculations and merges in order to get the concluding datasets used in the results section. Specifically, the code below calculates the proportion of articles-per-population and high-quality articles for each country AND for each geographic region. These calculations allow us to identify the countries with the best and worst coverage of articles relative to their population, and the proportion of good articles to total articles. We then do the same analysis by region. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes. Proportion of good/featured articles by total articles
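As a compact, hedged sketch of the same two per-country metrics (it assumes the merged `politician_by_country` frame from the previous step; the helper name is my own and this is not the notebook's original implementation):
###Code
# Hedged sketch: per-country coverage and quality proportions in a single groupby pass
def summarize_by_country(df):
    grouped = df.groupby('country')
    summary = grouped.agg(
        total_articles=('article_name', 'count'),
        good_articles=('article_quality_est', lambda q: q.isin(['FA', 'GA']).sum()),
        population=('population', 'first'),
    )
    summary['prop_good_to_total'] = summary['good_articles'] / summary['total_articles'] * 100
    summary['prop_articles_to_population'] = summary['total_articles'] / summary['population'] * 100
    return summary
# summarize_by_country(politician_by_country).head()
###Output
_____no_output_____
###Markdown
The notebook's original step-by-step construction follows.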
###Code
#get the number of pages per country
num_pgs_country = politician_by_country.groupby('country').size()
len(num_pgs_country)
#get the number of pages per country with featured (FA) or good (GA) articles
good_articles = politician_by_country[politician_by_country['article_quality_est'].isin(['FA','GA'])]
num_good_pgs_country = good_articles.groupby('country').size()
len(num_good_pgs_country)
#merge together values by country
total_pages_type = pd.concat([num_pgs_country,num_good_pgs_country], axis = 1)
total_pages_type.columns = ['total', 'good']
#fill in nans with 0's
total_pages_type = total_pages_type.fillna(0)
#total_pages_type
#get percentage of good to total articles by country
prop_good_total = total_pages_type['good']/total_pages_type['total'] * 100
#convert to dataframe
prop_good_art = prop_good_total.to_frame()
prop_good_art.columns = ['prop_good_articles']
good_to_total_results = pd.concat([prop_good_art, num_pgs_country, num_good_pgs_country], axis = 1)
good_to_total_results.columns = ['prop_good_to_total', 'total_articles', 'total_good_articles']
#replace nans with 0's (as they are 0 or approx 0)
good_to_total_results = good_to_total_results.fillna(0)
#final dataframe for proportion by total articles
display(good_to_total_results)
###Output
_____no_output_____
###Markdown
Proportion of articles by total population
###Code
#get population by country
country_population = politician_by_country[['country', 'population']].drop_duplicates()
country_population = country_population.dropna()
total_pages_type["country"] = total_pages_type.index
total_pages_type.index.names = ['index']
#total_pages_type
#country_population
#merge together values by country
total_pages_pop = pd.merge(country_population, total_pages_type, how = 'left', on = 'country')
#fill in nans with 0's
total_pages_pop = total_pages_pop.fillna(0)
#total_pages_pop
#get the percent of articles to population
total_pages_pop['prop'] = total_pages_pop['total']/total_pages_pop['population'] * 100
population_results = total_pages_pop[['country', 'population', 'total', 'prop']]
population_results.columns = ["country", "population", "total_articles", "prop_articles_to_population"]
#final dataframe for proportion by population
display(population_results)
###Output
_____no_output_____
###Markdown
Mapping country to regions By the layout of the world population dataset, we know that a country belongs to the region listed above it, when ordered by index. Below, we create a new column called 'region' that associates each country with its region in a table.
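As a hedged alternative to the index-walking loop used below (assuming the original `world_population_2019` ordering, where every country row appears beneath its ALL-CAPS region row), the same mapping can be expressed with a forward-fill; the helper name is my own:
###Code
# Hedged sketch: map each country to the ALL-CAPS region row that precedes it, via forward-fill
import numpy as np
def map_countries_to_regions(wpds):
    df = wpds.copy()
    # region rows have ALL-CAPS names; country rows inherit the last region seen above them
    df['region'] = np.where(df['Name'].str.isupper(), df['Name'], np.nan)
    df['region'] = df['region'].ffill()
    return df.loc[~df['Name'].str.isupper(), ['Name', 'region']]
# map_countries_to_regions(world_population_2019).head()
###Output
_____no_output_____
###Markdown
The index-based loop actually used in this notebook follows.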
###Code
region = []
temp = ''
#iterate through country indices, rows
for country_index, country_row in country_population_full.iterrows():
stop = 0
#iterate through region indices, rows (skip "WORLD")
for reg_index, reg_row in regional_population[1:].iterrows():
#if the region index is before the country, then the country belongs to that region
if reg_index > country_index and stop == 0:
country_row['region'] = reg_row['Name']
region.append(temp) #store region for each country in a variable
#country_population_by_region.append(reg_row['Name'])
stop = 1
temp = reg_row['Name']
#handle the issue that iterations cancel one iteration too early
for i in range(len(country_population_full) - len(region)):
region.append('OCEANIA')
#create a dataframe of regions
region = pd.Series(region).to_frame()
region.columns = ['region']
#variable to store all country populations by region
country_population_by_region = pd.concat([region, country_population_full.reset_index()], axis = 1)
###Output
_____no_output_____
###Markdown
Proportion of good to total articles, and total to population, by Region
###Code
#population count by region
population_by_region = country_population_by_region.groupby('region').sum('Population')
#population_by_region['Population']
#total number of articles and good articles by region
summary_articles_country = good_to_total_results
summary_articles_country['country'] = summary_articles_country.index #create country column to join on
summary_articles_country.index.names = ['index']
articles_by_region = pd.merge(country_population_by_region, summary_articles_country, how = 'left', left_on = 'Name', right_on = 'country')
#sum total articles & good articles by region
articles_by_region = articles_by_region.groupby('region').sum()
articles_by_region
#recompute the prop_good_to_total value so it is correct at the region level
articles_by_region['prop_good_to_total'] = articles_by_region['total_good_articles']/articles_by_region['total_articles']
#select columns of interest
articles_by_region = articles_by_region[['Population', 'prop_good_to_total', 'total_good_articles', 'total_articles']]
#adding in articles as a proportion of population column
articles_by_region['prop_articles_to_population'] = articles_by_region['total_articles']/articles_by_region['Population']
display(articles_by_region)
###Output
_____no_output_____
###Markdown
Results 1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
display(population_results.sort_values('prop_articles_to_population', ascending = 0)[:10])
###Output
_____no_output_____
###Markdown
2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
display(population_results.sort_values('prop_articles_to_population', ascending = 1)[:10])
###Output
_____no_output_____
###Markdown
3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
display(good_to_total_results.sort_values('prop_good_to_total', ascending = 0)[:10])
###Output
_____no_output_____
###Markdown
4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
display(good_to_total_results.sort_values('prop_good_to_total', ascending = 1)[:10])
###Output
_____no_output_____
###Markdown
5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
display(articles_by_region.sort_values('prop_articles_to_population', ascending = 0))
###Output
_____no_output_____
###Markdown
6. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
display(articles_by_region.sort_values('prop_good_to_total', ascending = 1))
###Output
_____no_output_____
###Markdown
Import Statements
###Code
import pandas as pd
import numpy as np
import requests
import json
from urllib.parse import urlencode
from IPython.core.interactiveshell import InteractiveShell
# This allows for multiple outputs in a single jupyter notebook codeblock.
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
Acquire Data The page_data.csv is the Wikimedia [politicians by country dataset](https://figshare.com/articles/Untitled_Item/5513449) and was downloaded from Figshare. I unzipped the folder and stored page_data.csv in the same working directory as my notebook. The [WPDS_2020_data.csv](https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/edit?usp=sharing) file is published by the Population Reference Bureau and can be drawn from the [world population data sheet](https://www.prb.org/international/indicator/population/table/). I downloaded WPDS_2020_data.csv from the Google spreadsheet and stored it in the same working directory as my notebook.
###Code
# use pandas to read the csv files as dataframes
# politicians by country (pbc)
pbc = pd.read_csv('raw_data/page_data.csv')
# world population data sheet (wpds)
wpds = pd.read_csv('raw_data/WPDS_2020_data.csv')
###Output
_____no_output_____
###Markdown
Data cleaning Clean pbc data by removing the Template data. These pages are not Wikipedia articles, and should not be included in the analysis.
###Code
pbc.shape
pbc = pbc[~pbc['page'].str.contains('Template:')]
pbc.shape
# sanity check data
pbc.head()
###Output
_____no_output_____
###Markdown
Clean WPDS data by separating out the cumulative regional population count rows. These rows are distinguished by having ALL CAPS values in the 'Name' field.
###Code
wpds.shape
wpds_orig = wpds
wpds_regional = wpds[wpds['Name'].str.isupper()]
wpds = wpds[~wpds['Name'].str.isupper()]
wpds.shape
###Output
_____no_output_____
###Markdown
Get Article Quality Predictions
###Code
# I use requests package to make the calls.
def api_call(endpoint):
call = requests.get(endpoint)
response = call.json()
return response
endpoint = 'https://ores.wikimedia.org/v3/scores/{context}?'
context_value = 'enwiki'
model = 'articlequality'
pbc[model] = np.NaN
pbc.set_index('rev_id', inplace=True)
revids = pbc.index.to_list()
#batch the calls into batches of 50. 50 works for me, feel free to change this.
batched_revids = list(map(list, np.array_split(revids, round(len(revids)/50))))
revids_batched = np.array_split(revids, round(len(revids)/50))
###Output
_____no_output_____
###Markdown
Make multiple batch calls until all revids have been passed to ORES for a prediction. This step takes about 10 minutes to run on my machine. YMMV.
###Code
no_match_revids = []
for batch in revids_batched:
parameters = {
'revids': '|'.join(str(int(x)) for x in batch),
'models': model
}
final_endpoint = endpoint.format(context=context_value) + urlencode(parameters)
response = api_call(final_endpoint)
try:
scores = response[context_value]['scores']
except:
continue
for revid in scores.keys():
try:
prediction = scores[revid][model]['score']['prediction']
except:
no_match_revids.append(revid)
continue
pbc.loc[int(revid), model] = prediction
###Output
_____no_output_____
###Markdown
We save the articles with missing revids under articles_with_missing_revids.csv
###Code
no_match_df = pd.DataFrame(no_match_revids, columns=['rev_id'])
no_match_df.to_csv('processed_data/articles_with_missing_revids.csv')
###Output
_____no_output_____
###Markdown
Combining the Datasets I remove any rows that do not have matching data, and output them to the CSV file called wp_wpds_countries-no_match.csv. I consolidate the remaining data into a single CSV file called wp_wpds_politicians_by_country.csv.
###Code
pbc.reset_index(inplace=True)
wpds_pbc = pd.merge(left=pbc, right=wpds, left_on='country', right_on='Name')
#rename columns to match requirements of the assignment
wpds_pbc.rename(columns={'page':'article_name',
'rev_id':'revision_id',
'articlequality': 'article_quality_est',
'Population':'population'}, inplace=True)
wpds_pbc.drop(columns=['FIPS', 'Name', 'Type', 'TimeFrame', 'Data (M)'], inplace=True)
# write the combined data to csv.
wpds_pbc.to_csv('processed_data/wp_wpds_politicians_by_country.csv')
pbc_country_no_match = pbc[~pbc['country'].isin(wpds_pbc['country'])].country.unique()
wpds_country_no_match = wpds[~wpds['Name'].isin(wpds_pbc['country'])].Name.unique()
all_country_no_match = np.unique(np.append(pbc_country_no_match, wpds_country_no_match))
all_country_no_match_df = pd.DataFrame(all_country_no_match, columns=['country'])
#write the no-match data to a csv.
all_country_no_match_df.to_csv('processed_data/wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Analysis Pivot tables are used to summarize the data. I'm primarily interested in the occurrence of articles and good articles compared to population size. * FA - Featured article * GA - Good article * B - B-class article * C - C-class article * Start - Start-class article * Stub - Stub-class article
###Code
# Article rankings are explained in the readme.
article_rankings = ['B', 'C', 'FA', 'GA', 'Start', 'Stub']
hq_article_rankings = ['FA', 'GA']
###Output
_____no_output_____
###Markdown
The pivot table below summarizes the number of articles in each ORES category for each country in the data.
###Code
analysis_df = pd.pivot_table(wpds_pbc,
index=['country'],
columns=['article_quality_est'],
aggfunc={'article_quality_est': 'count'},
fill_value=0
)
analysis_df.columns = analysis_df.columns.droplevel() #clean up multilevel index
analysis_df.head()
###Output
_____no_output_____
###Markdown
I add population data to the table and calculate the various metrics I want to observe in the Results section
###Code
country_pop = wpds_pbc.groupby(['country'])['population'].mean()
analysis_df = pd.merge(left=analysis_df,
right=country_pop,
left_index=True,
right_index=True)
analysis_df['article_count'] = analysis_df[article_rankings].sum(axis=1)
analysis_df['percent_articles_per_person'] = (analysis_df['article_count'] / analysis_df['population']) * 100
analysis_df['hq_article_count'] = analysis_df[hq_article_rankings].sum(axis=1)
analysis_df['percent_hq_articles_per_person'] = (analysis_df['hq_article_count'] / analysis_df['article_count']) * 100
analysis_df.head()
###Output
_____no_output_____
###Markdown
Next I want to repeat this analysis but for regions instead of countries.
###Code
# This links countries to their respective region.
# Need to set region to the first value in wpds_orig
region = wpds_orig.Name[0]
regions = []
for i in range(len(wpds_orig)):
if wpds_orig.iloc[i]['Type'] == 'Sub-Region':
region = wpds_orig.iloc[i]['Name']
regions.append(region)
wpds_orig['region'] = regions
# Merge the per country population and articles by country datatset
wpds_country_region = pd.merge(left=wpds_pbc,
right=wpds_orig,
left_on='country',
right_on='Name',
how='left')
#Drop unneeded columns
wpds_country_region.drop(columns={'FIPS', 'Name', 'Type', 'TimeFrame', 'Data (M)', 'Population'}, inplace=True)
###Output
_____no_output_____
###Markdown
I add regional population data to the table and calculate the various metrics I want to observe in the Results section
###Code
regional_analysis_df = pd.pivot_table(wpds_country_region,
index=['region'],
columns=['article_quality_est'],
aggfunc={'article_quality_est': 'count'},
fill_value=0
)
regional_analysis_df.columns = regional_analysis_df.columns.droplevel()
region_pop_lookup = dict(zip(wpds_regional.Name, wpds_regional.Population))
regional_analysis_df['population'] = regional_analysis_df.index.map(region_pop_lookup)
regional_analysis_df['article_count'] = regional_analysis_df[article_rankings].sum(axis=1)
regional_analysis_df['percent_articles_per_person'] = (regional_analysis_df['article_count'] / regional_analysis_df['population']) * 100
regional_analysis_df['hq_article_count'] = regional_analysis_df[hq_article_rankings].sum(axis=1)
regional_analysis_df['percent_hq_articles_per_person'] = (regional_analysis_df['hq_article_count'] / regional_analysis_df['article_count']) * 100
###Output
_____no_output_____
###Markdown
Results Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
analysis_df.nlargest(10,'percent_articles_per_person')[['population', 'article_count', 'percent_articles_per_person']]
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
analysis_df.nsmallest(10,'percent_articles_per_person')[['population', 'article_count', 'percent_articles_per_person']]
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
analysis_df.nlargest(10,'percent_hq_articles_per_person')[['population', 'article_count','hq_article_count', 'percent_hq_articles_per_person']]
analysis_df.nsmallest(10,'percent_hq_articles_per_person')[['population', 'article_count','hq_article_count', 'percent_hq_articles_per_person']]
###Output
_____no_output_____
###Markdown
Top regions by coverage (all articles): Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
regional_analysis_df.nlargest(len(regional_analysis_df),'percent_articles_per_person')[['population', 'article_count', 'percent_articles_per_person']]
###Output
_____no_output_____
###Markdown
Top regions by coverage (quality articles): Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
regional_analysis_df.nlargest(len(regional_analysis_df),'percent_hq_articles_per_person')[['population', 'article_count', 'percent_hq_articles_per_person']]
###Output
_____no_output_____
###Markdown
__Data Preprocessing__ 1. I read the page_data and WPDS csv files to preprocess the data. 2. I add a new column called keep that flags the data points where the word 'Template' appears in the page column; these articles are not to be processed. 3. I filter the data frame to keep only the records which do not have the word 'Template' in them.
###Code
page_data = pd.read_csv('page_data.csv')
world_population_data = pd.read_csv('WPDS_2018_data.csv')
#This function returns 1 if word is found in the row
#Here , row refers to the page column in the dataset and word refers to 'Template'
def filter_rows(row, word):
if word in row:
return(1)
else:
return(0)
#Adding a keep column to the page_data dataframe to decide which rows to keep and which rows to omit
page_data['keep'] = page_data.apply(lambda x: filter_rows(x['page'], 'Template'), axis=1)
#Filtering the page_data dataframe based on the values in the keep column
#Keep = 0: record does not contain 'Template', Keep = 1: record contains 'Template'
page_data = page_data[page_data['keep'] == 0]
###Output
_____no_output_____
###Markdown
__Predicting the article quality__ 1. Here, I will use the API to get the article quality predictions for each article in our preprocessed dataset. 2. We will add an 'article quality' column in the page_data dataframe that assigns an article quality to each rev_id. 3. For some rev_ids, no article quality is returned. We add a try/except block to keep track of these records.
###Code
headers = {'User-Agent' : 'https://github.com/yashkale94', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return(response)
rev_ids = page_data['rev_id'].values
low = 0
high = 100
scores = []
flag = 0
while(True):
#Get predictions for each rev_id in batches of 100
predictions = get_ores_data(rev_ids[low:high], headers)
#Access the scores for the given rev_ids
predictions = predictions['enwiki']['scores']
#Checks if we have reached the end of the list of rev_ids
if high > len(rev_ids):
high = len(rev_ids)
flag = 1
#Try catch block to check for rev_ids that do not return any score/prediction
for revid in rev_ids[low:high]:
try:
score = predictions[str(revid)]['wp10']['score']['prediction']
scores.append(score)
except:
scores.append('No score found')
if flag == 1:
break
low+=100
high+=100
#We add a new article quality column to our page_data dataframe
page_data['article_quality'] = scores
###Output
_____no_output_____
###Markdown
__Merging of the two dataframes for the final dataframe__ 1. We want to create a final dataframe for our analysis. For this, we merge the page_data and world_population_data dataframes. 2. We create two new columns in the page_data dataframe, namely 'Region' and 'Population'. These store the region and population of the country that each article falls in. 3. We use a try/except block to take care of those articles that do not have a country or population associated with them in the world_population_data file.
###Code
#Get the two columns of world_population_data dataframe in a list
region_country = world_population_data[['Geography','Population mid-2018 (millions)']].values
count = 0
#Create a dictionary to keep the map of every country to the region it belongs to
#If the name is in block letters, it is a region name and not a country name
country_to_region_map = {}
for i in region_country:
if i[0].isupper():
key = i[0]
else:
country_to_region_map[i[0]] = [key, i[1]]
region_list = []
population_list = []
#We iterate on the page_data dataframe to create two columns, that store the population and region name for each country
for index, row in page_data.iterrows():
try:
region, population = country_to_region_map[row.values[1]]
region_list.append(region)
population_list.append(population)
except:
#If we do not have a corresponding population for a country, we add 'Population not found'
region_list.append('Country not found')
population_list.append('Population not found')
page_data['Region'] = region_list
page_data['Population'] = population_list
page_data.drop(columns=['Region'], inplace=True)
page_data = page_data[['country', 'page','rev_id','article_quality', 'Population']]
data_not_found = page_data[(page_data['article_quality'] == 'No score found') | (page_data['Population'] == 'Population not found')]
page_data_complete = page_data[(page_data['article_quality'] != 'No score found') & (page_data['Population'] != 'Population not found')]
###Output
_____no_output_____
###Markdown
__Calculating the countries by coverage__ 1. Here, we use a dictionary to keep a map of number of articles per country.2. We then use the information we have about the population of each country to calculate the coverage per country.
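The dictionary-based computation below iterates row by row; as a hedged, vectorized sketch of the same coverage metric (it assumes the `page_data_complete` frame built above, whose Population column holds comma-formatted strings in millions; the helper name is my own):
###Code
# Hedged sketch: articles-per-population coverage via value_counts instead of a manual dictionary
def coverage_by_country(df):
    counts = df['country'].value_counts()                                  # number of articles per country
    pops = df.drop_duplicates('country').set_index('country')['Population']
    pops = pops.str.replace(',', '').astype(float) * 10**6                 # population strings are in millions
    return (counts / pops * 100).sort_values(ascending=False)              # coverage as a percentage
# coverage_by_country(page_data_complete).head(10)
###Output
_____no_output_____
###Markdown
The dictionary-based version used for the tables below follows.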
###Code
#creates a dictionary to store country:articles as key value pair
articles_per_country = defaultdict(int)
for i, row in page_data_complete.iterrows():
country = row.values[0]
articles_per_country[country]+=1
#creates a list to store the countries and their articles coverage
proportion_country = []
for country, value in articles_per_country.items():
country_population = country_to_region_map[country][1]
#Removes the character ',' from the population and converts it to float so that we can add the population value later
country_population = float(country_population.replace(',',''))
#Population values are in millions. Hence we multiply by 10^6
proportion_country.append((country, value/(country_population*10**6)))
#This gives us the top 10 countries by their coverage
top_countries = sorted(proportion_country, key = lambda x: (x[1]), reverse=True)[0:10]
###Output
_____no_output_____
###Markdown
__Top 10 countries by coverage percentage__
###Code
df = pd.DataFrame(top_countries, columns=['Country','Coverage'])
df['Coverage'] = df['Coverage']*100
df
###Output
_____no_output_____
###Markdown
__Bottom 10 countries by coverage percentage:__
###Code
bottom_countries = sorted(proportion_country, key = lambda x: (x[1]))[0:10]
df = pd.DataFrame(bottom_countries, columns=['Country','Coverage'])
df['Coverage'] = df['Coverage']*100
df
###Output
_____no_output_____
###Markdown
__Calculating countries based on the coverage of GA/FA articles__ 1. Here, we want to calculate the relative quality of countries, based on the number of GA/FA articles published in these countries.2. We create a dictionary that will store the GA/FA articles per country as a key value pair.
###Code
#We filter on the dataframe to keep only those records that have atleast one GA/FA article
page_data_high = page_data_complete[(page_data_complete['article_quality'] == 'GA') | (page_data_complete['article_quality'] == 'FA')]
#We create a dictionary to store the number of high quality articles per country
high_rated_articles = defaultdict(int)
for i, row in page_data_high.iterrows():
article_quality = row.values[3]
country = row.values[0]
high_rated_articles[country]+=1
high_rated_articles_ratio = []
for key, value in high_rated_articles.items():
articles_in_country = articles_per_country[key]
high_rated_articles_ratio.append((key, value/articles_in_country))
###Output
_____no_output_____
###Markdown
__Top 10 countries by relative quality:__
###Code
#We create a dataframe from a list that sorts the countries based on their relative article_quality
top_countries = sorted(high_rated_articles_ratio, key=lambda x: x[1], reverse=True)[0:10]
df = pd.DataFrame(top_countries, columns=['Country','article_quality'])
df['article_quality'] = df['article_quality']*100
df
###Output
_____no_output_____
###Markdown
1. To calculate the bottom countries, we make sure that we consider countries that have 0 GA/FA articles also.
###Code
#We keep a count of number of articles in each country.
#For each country, we keep a track of number of articles in each given article_quality possibility
articles_count_country = {}
for index, row in page_data_complete.iterrows():
if row.values[0] not in articles_count_country.keys():
articles_count_country[row.values[0]] = {}
articles_count_country[row.values[0]][row.values[3]] = 1
else:
if row.values[3] not in articles_count_country[row.values[0]].keys():
articles_count_country[row.values[0]][row.values[3]] = 1
else:
articles_count_country[row.values[0]][row.values[3]]+=1
#With this, we make sure we add 0 to those countries that do not have GA or FA in them
for key, value in articles_count_country.items():
if 'GA' not in value.keys():
articles_count_country[key]['GA'] = 0
if 'FA' not in value.keys():
articles_count_country[key]['FA'] = 0
###Output
_____no_output_____
###Markdown
__Bottom 10 countries by relative quality:__
###Code
high_rated_articles_ratio = []
for key, value in articles_per_country.items():
high_rated_articles = articles_count_country[key]['GA'] + articles_count_country[key]['FA']
high_rated_articles_ratio.append((key,high_rated_articles/articles_per_country[key]))
bottom_countries = sorted(high_rated_articles_ratio, key=lambda x: x[1])[0:10]
df = pd.DataFrame(bottom_countries, columns=['Country','article_quality'])
df['article_quality'] = df['article_quality']*100
df
###Output
_____no_output_____
###Markdown
1. Now, we want to calculate similar metrics at the geographic-region level. 2. For this, we require the number of articles in each geographical region. 3. We maintain a dictionary that stores the number of articles in a given region.
###Code
#We create a dictionary to store articles per region
articles_by_region = defaultdict(int)
for key, value in articles_per_country.items():
region_population = country_to_region_map[key][1]
region_articles = value
articles_by_region[country_to_region_map[key][0]]+=value
#We create a dictionary to store the population in every region
population_by_region = defaultdict(int)
for key, value in articles_per_country.items():
region_population = country_to_region_map[key][1]
region_population = region_population.replace(',','')
population_by_region[country_to_region_map[key][0]]+=(float(region_population))*10**6
#For each region, we store the population. We converted the string value to a float value and multiply by 10^6, as the population
#is in millions
africa_population = world_population_data[world_population_data['Geography'] == 'AFRICA']['Population mid-2018 (millions)'].values[0]
northern_america = world_population_data[world_population_data['Geography'] == 'NORTHERN AMERICA']['Population mid-2018 (millions)'].values[0]
latin_america = world_population_data[world_population_data['Geography'] == 'LATIN AMERICA AND THE CARIBBEAN']['Population mid-2018 (millions)'].values[0]
asia = world_population_data[world_population_data['Geography'] == 'ASIA']['Population mid-2018 (millions)'].values[0]
europe = world_population_data[world_population_data['Geography'] == 'EUROPE']['Population mid-2018 (millions)'].values[0]
oceania = world_population_data[world_population_data['Geography'] == 'OCEANIA']['Population mid-2018 (millions)'].values[0]
africa_population = float(africa_population.replace(',',''))*10**6
northern_america = float(northern_america.replace(',',''))*10**6
latin_america = float(latin_america.replace(',',''))*10**6
asia = float(asia.replace(',',''))*10**6
europe = float(europe.replace(',',''))*10**6
oceania = float(oceania.replace(',',''))*10**6
###Output
_____no_output_____
###Markdown
__Geographic regions by coverage:__
###Code
articles_regions_order = []
articles_regions_order.append(('AFRICA', articles_by_region['AFRICA']/ africa_population))
articles_regions_order.append(('NORTHERN AMERICA', articles_by_region['NORTHERN AMERICA']/ northern_america))
articles_regions_order.append(('LATIN AMERICA AND THE CARIBBEAN', articles_by_region['LATIN AMERICA AND THE CARIBBEAN']/ latin_america))
articles_regions_order.append(('ASIA', articles_by_region['ASIA']/ asia))
articles_regions_order.append(('EUROPE', articles_by_region['EUROPE']/ europe))
articles_regions_order.append(('OCEANIA', articles_by_region['OCEANIA']/ oceania))
###Output
_____no_output_____
###Markdown
__Geographic regions ranked by coverage percentage__
###Code
top_regions = sorted(articles_regions_order, key=lambda x: x[1], reverse=True)
df = pd.DataFrame(top_regions, columns=['Region','Coverage'])
df['Coverage'] = df['Coverage']*100
df
###Output
_____no_output_____
###Markdown
Similarly, we store the number of high quality articles per region in a dictionary.
###Code
#We create a dictionary to store the number of high quality articles per country
high_rated_articles = defaultdict(int)
for i, row in page_data_high.iterrows():
article_quality = row.values[3]
country = row.values[0]
high_rated_articles[country]+=1
high_articles_regionwise = defaultdict(int)
for key, value in high_rated_articles.items():
high_articles_regionwise[country_to_region_map[key][0]]+=value
high_articles_regionwise_order = []
high_articles_regionwise_order.append(('AFRICA',high_articles_regionwise['AFRICA']/ articles_by_region['AFRICA']))
high_articles_regionwise_order.append(('NORTHERN AMERICA',high_articles_regionwise['NORTHERN AMERICA']/ articles_by_region['NORTHERN AMERICA']))
high_articles_regionwise_order.append(('LATIN AMERICA AND THE CARIBBEAN',high_articles_regionwise['LATIN AMERICA AND THE CARIBBEAN']/ articles_by_region['LATIN AMERICA AND THE CARIBBEAN']))
high_articles_regionwise_order.append(('ASIA',high_articles_regionwise['ASIA']/ articles_by_region['ASIA']))
high_articles_regionwise_order.append(('EUROPE',high_articles_regionwise['EUROPE']/ articles_by_region['EUROPE']))
high_articles_regionwise_order.append(('OCEANIA',high_articles_regionwise['OCEANIA']/ articles_by_region['OCEANIA']))
###Output
_____no_output_____
###Markdown
__Geographic regions ranked by relative article quality__
###Code
top_regions = sorted(high_articles_regionwise_order, key=lambda x: x[1], reverse=True)
df = pd.DataFrame(top_regions, columns=['Region','article_quality'])
df['article_quality'] = df['article_quality']*100
df
###Output
_____no_output_____
###Markdown
__We store the records that do not have population data, or that do not have any articles, in a separate file__
###Code
page_data1 = pd.read_csv('page_data.csv')
world_population_data1 = pd.read_csv('WPDS_2018_data.csv')
no_population = page_data[page_data['Population'] == 'Population not found']
countries_article = set(page_data1['country'].values)
#We filter the countries who do not have corresponding article data
def filter_countries(country, countries):
if country not in countries:
return(1)
else:
return(0)
world_population_data1['keep'] = world_population_data1.apply(lambda x: filter_countries(x['Geography'], countries_article), axis=1)
world_population_data1 = world_population_data1[world_population_data1['keep'] == 1]
#We add another rev_id column in the population data so that we can merge this with the page_data dataframe
world_population_data1['rev_id'] = 'No Articles found'
world_population_data1 = world_population_data1[['Geography','Population mid-2018 (millions)','rev_id']]
no_population = no_population[['country', 'Population', 'rev_id']]
df_final = pd.concat([no_population, world_population_data1.rename(columns={'Geography':'country','Population mid-2018 (millions)'
:'Population'})], ignore_index=True)
#We then store these records that have no match in a file
df_final.to_csv('wp_wpds_countries-no_match.csv', index=False)
#We store the data for which we have performed analysis and have entire data available in another separate file.
page_data_complete.to_csv('wp_wpds_politicians_by_country.csv', index=False)
###Output
_____no_output_____
###Markdown
Stage 0: SETUPThe following libraries are used directly. For the full list of installed packages and versions, please see requirements.txt
###Code
# For accessing ORES API
import requests
# For processing
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Stage 1: Data AcquisitionData is downloaded as csv files, and is already available in this repository in the data folder. See the readme for details on the source of the data. Page DataPage data is downloaded from [this](https://figshare.com/articles/dataset/Untitled_Item/5513449) repository.
###Code
page_data = pd.read_csv('data/raw/page_data.csv')
page_data.head()
###Output
_____no_output_____
###Markdown
Population DataPopulation Data is downloaded from [this](https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/editgid=283125346) google doc.
###Code
pop_data = pd.read_csv('data/raw/WPDS_2020_data.csv')
pop_data.head()
###Output
_____no_output_____
###Markdown
Stage 2: Data ProcessingIn this stage we combine and clean the data, and use the [ORES](https://github.com/wikimedia/ores) client to get predicted article quality. Clean and combineRemove the templates from page data and regions from pop data
###Code
# Fix Channel Islands, which is incorrectly recorded as Sub-Region
pop_data.loc[168,'Type'] = 'Country'
# Add Region as column
regions = pop_data["Name"][pop_data["Type"] == "Sub-Region"]
regions.name = "region"
pop_data_with_region = pd.merge_asof(pop_data, regions, left_index=True, right_index=True)
# Filter out values
page_data_clean = page_data.loc[~page_data["page"].str.contains("^Template"), :]
pop_data_clean = pop_data_with_region.loc[pop_data_with_region["Type"] == "Country", :]
# Left join to keep countries without articles. Keep revid as int
combined_data = pop_data_clean.merge(page_data_clean, how="outer", left_on="Name", right_on="country")
# Record unmatched countries
unmatched_pop_data = combined_data[combined_data["page"].isna()].drop(columns=page_data_clean.columns)
unmatched_pop_data.to_csv('data/unmatched/wp_wpds_countries-no_match.csv')
print("{} Countries could not be matched".format(len(unmatched_pop_data)))
# Record unmatched pages
unmatched_page_data = combined_data[combined_data["Name"].isna()].drop(columns=pop_data_clean.columns)
unmatched_page_data.to_csv('data/unmatched/page_data-no_match.csv')
print("{} Pages could not be matched".format(len(unmatched_page_data)))
# Clean
combined_data_complete = combined_data.dropna() \
.drop(columns=["FIPS", "Name", "Type","TimeFrame", "Data (M)"]) \
.rename(columns={"country":"country", "name": "article_name", "rev_id":"revision_id", "Population": "population"})
combined_data_complete["revision_id"] = combined_data_complete["revision_id"].astype(int)
###Output
27 Countries could not be matched
1859 Pages could not be matched
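###Markdown
The `merge_asof` call above works because each country row sits directly below its Sub-Region row in the WPDS file, so an as-of merge on the row index attaches the nearest preceding Sub-Region to every row (the Sub-Region rows themselves are filtered out afterwards). A small illustration on made-up rows (hypothetical data, not from WPDS):
###Code
import pandas as pd

# Toy frame mimicking the WPDS layout: a Sub-Region row followed by its countries.
toy = pd.DataFrame({
    "Name": ["AFRICA", "Nigeria", "Kenya", "ASIA", "Japan"],
    "Type": ["Sub-Region", "Country", "Country", "Sub-Region", "Country"],
})
toy_regions = toy["Name"][toy["Type"] == "Sub-Region"]
toy_regions.name = "region"

# Each row receives the most recent Sub-Region name at or above its index position.
print(pd.merge_asof(toy, toy_regions, left_index=True, right_index=True))
###Output
_____no_output_____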
###Markdown
Get ORES DataData is acquired from the [ORES API](https://ores.wikimedia.org/v3/). We request the "articlequality" model from the "enwiki" context for batches of revids at a time. Max 50 per request.
###Code
# Takes a batch of revids
def api_call(revids, context='enwiki', model='articlequality'):
endpoint = "https://ores.wikimedia.org/v3/scores/{context}".format(context=context)
headers = {
'User-Agent': 'https://github.com/TheCaseca',
'From': '[email protected]'
}
call = requests.get(endpoint, headers=headers, params={"models":model, "revids": "|".join(revids)})
response = call.json()
return response
# Clean revid format to str
revids = combined_data_complete["revision_id"]
revids = revids.astype(int)
revids = revids.astype(str)
n = 50
preds = {}
# Iterate over full batches of n revids (any remainder smaller than n is not scored here)
for i in range(len(revids)//n):
    if i % 100 == 0:
        print("Collecting rows {} to {} of {}".format(i*n, min((i+100)*n, len(revids)), len(revids)))
api_data = api_call(list(revids[i*n: (i+1)*n]))
new_preds = {revid: score['articlequality'].get('score', {}).get('prediction') for revid, score in api_data['enwiki']['scores'].items()}
preds.update(new_preds)
# Update dataframe
pred_df = pd.DataFrame.from_dict(preds, orient='index', columns=['article_quality_est'])
pred_df.index = pred_df.index.astype(int)
combined_data_complete_preds = combined_data_complete.merge(pred_df, left_on='revision_id', right_index=True)
combined_data_complete_preds.head()
###Output
Collecting rows 0 to 5000 of 44680
Collecting rows 5000 to 505000 of 44680
Collecting rows 10000 to 1005000 of 44680
Collecting rows 15000 to 1505000 of 44680
Collecting rows 20000 to 2005000 of 44680
Collecting rows 25000 to 2505000 of 44680
Collecting rows 30000 to 3005000 of 44680
Collecting rows 35000 to 3505000 of 44680
Collecting rows 40000 to 4005000 of 44680
###Markdown
We filter out any missing predictions and record them. 274 articles did not have a prediction from ORES.
###Code
# Record missing predictions
missing_preds = combined_data_complete_preds[combined_data_complete_preds['article_quality_est'].isna()]
missing_preds.to_csv('data/unmatched/wp_wpds_politicians-no_prediction.csv')
print("{} Pages could not be predicted".format(len(missing_preds)))
# Remove from data, format and save
final_data = combined_data_complete_preds.dropna(subset=['article_quality_est'])
final_data.to_csv('wp_wpds_politicians_by_country.csv')
###Output
274 Pages could not be predicted
###Markdown
Stage 3: AnalysisWe analyze by comparing the number of articles per population and the share of high-quality articles among all articles. We define "high-quality" to be Good Article or Featured Article class.
###Code
data = pd.read_csv('wp_wpds_politicians_by_country.csv')
# High Quality column
data["High Quality"] = data["article_quality_est"].isin(["FA", "GA"]).astype(int)
# Group by country and take mean
data_by_country = data[["country", "region", "population", "page", "High Quality"]] \
.groupby(["country", "region"]) \
.agg({"population": "mean", "page":"size", "High Quality": "sum"}) \
.rename(columns={"population": "Pop", "page": "Article Count"})
# Add calculated fields
data_by_country["Proportion HQ"] = data_by_country["High Quality"] / data_by_country["Article Count"]
data_by_country["Article Per Mil. People"] = data_by_country["Article Count"] / data_by_country["Pop"] * 1000000
data_by_country.head()
# Styler for tables below
style_args = {
"precision":0,
"na_rep":'MISSING',
"thousands":",",
"formatter":{
"Proportion HQ": "{:.1%}",
"Article Per Mil. People": "{0:,.2f}",
}
}
###Output
_____no_output_____
###Markdown
Top 10 Countries by Coverage10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
data_by_country.sort_values(by="Article Per Mil. People", ascending=False).iloc[0:10,].style.format(**style_args)
###Output
_____no_output_____
###Markdown
Bottom 10 Countries by Coverage10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
data_by_country.sort_values(by="Article Per Mil. People").iloc[0:10,].style.format(**style_args)
###Output
_____no_output_____
###Markdown
Top 10 Countries by Relative Quality10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
data_by_country.sort_values(by="Proportion HQ", ascending=False).iloc[0:10,].style.format(**style_args)
###Output
_____no_output_____
###Markdown
Bottom 10 Countries by Relative Quality10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality. Note that 37 countries had no High Quality articles, so they are subsequently sorted by Article Count in descending order to produce the bottom 10.
###Code
data_by_country.sort_values(by=["Proportion HQ", "Article Count"], ascending=[True, False]).iloc[0:10,].style.format(**style_args)
# Group by region and take mean
data_by_region = data[["region", "page", "High Quality"]] \
.groupby("region") \
.agg({"page":"size", "High Quality": "sum"}) \
.rename(columns={"page": "Article Count"})
# Use region populations to account for missing countries
pop_data = pd.read_csv('data/raw/WPDS_2020_data.csv')
data_by_region_with_pop = data_by_region \
.reset_index() \
.merge(pop_data[["Name", "Population"]], left_on="region", right_on="Name") \
.rename(columns={'Population': "Pop"}) \
.drop(columns=["Name"]) \
.set_index("region")
# Add calculated fields
data_by_region_with_pop["Proportion HQ"] = data_by_region_with_pop["High Quality"] / data_by_region_with_pop["Article Count"]
data_by_region_with_pop["Article Per Mil. People"] = data_by_region_with_pop["Article Count"] / data_by_region_with_pop["Pop"] * 1000000
data_by_region_with_pop.head()
###Output
_____no_output_____
###Markdown
Regions by CoverageRegions terms of number of politician articles as a proportion of region population
###Code
data_by_region_with_pop.sort_values(by="Article Per Mil. People", ascending=False).style.format(**style_args)
###Output
_____no_output_____
###Markdown
Regions by Relative QualityRegions in terms of the relative proportion of politician articles that are of GA and FA-quality.
###Code
data_by_region_with_pop.sort_values(by="Proportion HQ", ascending=False).style.format(**style_args)
###Output
_____no_output_____
###Markdown
1. Article and Population dataThe datasets used in this assignment were provided at the following locations: This code assumes that these files are pre-downloaded into the 'data-raw' folder. article data: https://figshare.com/articles/Untitled_Item/5513449 population data: https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0 The article data contains the list of wikipedia articles, their country, and the revision id. The population data contains the populations of a list of geographical regions.
###Code
import pandas as pd
data_raw_path = './data-raw'
page_data = pd.read_csv(f'{data_raw_path}/page_data.csv')
wpds_data = pd.read_csv(f'{data_raw_path}/WPDS_2018_data.csv')
display(page_data.head())
display(wpds_data.head())
###Output
_____no_output_____
###Markdown
Joining the datasetsThe two pandas dataframes have country/geography in common (although not exactly in the same format). It is not clear if they have the same set of values.
###Code
set_page = set(page_data.country)
set_wpds = set(wpds_data.Geography)
print('regions only in article data:', len(set_page - set_wpds))
print('regions only in population data:', len(set_wpds - set_page))
print('regions in both:', len(set_page.intersection(set_wpds)))
###Output
regions only in article data: 39
regions only in population data: 27
regions in both: 180
###Markdown
This shows that both the page data and the population data contain places that do not appear in the other dataset. Also, the population column is formatted as a string, with some values using ',' as a thousands separator.
###Code
df = pd.merge(page_data, wpds_data, left_on='country', right_on='Geography', how='inner')
df = df.rename({'Population mid-2018 (millions)': 'population'}, axis=1)
df.population = df.population.str.replace(',', '').astype(float)
df = df.drop('Geography', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Adding ORES scores to the dataframeIn this step, we call the wikipedia Objective Revision Evaluation Service (ORES) API for all the articles in the dataframe. Some of the code used in this block is borrowed from the example code provided in the template repository for this assignment (https://github.com/Ironholds/data-512-a2)
###Code
import requests
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
headers = {
'user-agent' : 'https://github.com/viv-r',
'from' : '[email protected]'
}
params = {
'project' : 'enwiki',
'model' : 'wp10',
}
def get_ores_data(revision_ids):
params['revids'] = '|'.join(str(x) for x in revision_ids)
api_call = requests.get(endpoint.format(**params))
response = api_call.json()['enwiki']['scores']
predictions = []
for i in response:
try:
predictions.append(response[i]['wp10']['score']['prediction'])
except KeyError:
predictions.append('Error')
return predictions
###Output
_____no_output_____
###Markdown
We take 140 rows at a time (the `batch_size` below) from the data frame and construct a column with the scores from the above API
###Code
batch_size = 140
# df = df.sample(1000) # uncomment for testing, since this api takes a while
groups = df.set_index(df.index // batch_size).groupby(level=0)
# Note: this relies on the ORES response preserving the order of the requested rev_ids within each batch.
predictions = sum(groups.apply(lambda x: get_ores_data(x.rev_id)).values.tolist(), [])
df['prediction'] = predictions
df.head()
df.to_csv('data-processed/dataset.csv')
###Output
_____no_output_____
###Markdown
AnalysisArticles ranked GA or FA by the API above are categorized as high-quality. In this section, we create 4 tables showing the following information: 2 tables showing the highest and lowest ranked countries in terms of politician articles as a proportion of population, and 2 tables showing the highest and lowest ranked countries in terms of the proportion of high quality articles to the number of total articles.
###Code
df['good_quality'] = (df.prediction == 'GA') | (df.prediction == 'FA')
df['article_count'] = df.groupby('country').transform('count').page
summary = df.groupby('country').mean()
summary['ratio_article_to_pop'] = summary.article_count / summary.population
summary['ratio_quality_to_articles'] = 100 * summary.good_quality / summary.article_count
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries: Ratio of politician articles to country population
###Code
%matplotlib inline
table = summary[['ratio_article_to_pop']].sort_values('ratio_article_to_pop').head(10)
display(table); table.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10 highest-ranked countries: Ratio of politician articles to country population
###Code
table = summary[['ratio_article_to_pop']].sort_values('ratio_article_to_pop', ascending=False).head(10)
display(table); table.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries: ratio of article quality to article count
###Code
table = summary[['ratio_quality_to_articles']].sort_values('ratio_quality_to_articles').head(10)
table
###Output
_____no_output_____
###Markdown
10 highest-ranked countries: ratio of article quality to article count
###Code
table = summary[['ratio_quality_to_articles']].sort_values('ratio_quality_to_articles', ascending=False).head(10)
display(table); table.plot(kind='bar')
###Output
_____no_output_____
###Markdown
Bias on WikipediaFor this assignment (https://wiki.communitydata.cc/HCDS_(Fall_2017)/AssignmentsA2:_Bias_in_data), your job is to analyze what the nature of political articles on Wikipedia - both their existence, and their quality - can tell us about bias in Wikipedia's content. Getting the article and population dataThe first step is to load data files downloaded from different online resources. The data files are:1. page_data.csv: Wikipedia political articles data2. Population Mid-2015.csv: population data of a variety of countriesGetting the data from page_data.csv file
###Code
import csv
data = []
revid = []
with open('page_data.csv') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append([row[0],row[1],row[2]])
revid.append(row[2])
# Remove the first element ('rev_id') from revid so that the list only contains revision IDs.
revid.pop(0)
###Output
_____no_output_____
###Markdown
Getting the data (country and population) from the population file
###Code
from itertools import islice
import csv
import pandas as pd
population = []
with open('Population Mid-2015.csv') as population_file:
reader = csv.reader(population_file)
# note that first row is title; the second and last two rows are blank
# skip first and last two rows in the csv file
for row in islice(reader,2,213):
population.append([row[0],row[4]])
###Output
_____no_output_____
###Markdown
Getting article quality predictionsIn this step, we'll get article quality predictions by using ORES API. In order to avoid hitting the limits in ORES, we split all revision IDs into chunks of 50. The response from ORES for each article is in one of 6 categories:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class article Split revision IDs into chunks of 50
###Code
chunks = [revid[x:x+50] for x in range(0, len(revid), 50)]
###Output
_____no_output_____
###Markdown
Write a function to make a request with multiple revision IDs
###Code
import requests
import json
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
###Output
_____no_output_____
###Markdown
Request the values for prediction (the quality of an article) from ORES API.
###Code
headers = {'User-Agent' : 'https://github.com/yawen32', 'From' : '[email protected]'}
article_quality = []
for i in range(len(chunks)):
response = get_ores_data(chunks[i],headers)
aq = response['enwiki']['scores']
for j in range(len(chunks[i])):
for key in aq[chunks[i][j]]["wp10"]:
# Flag the articles that have been deleted
if key == "error":
article_quality.append("None")
else:
article_quality.append(aq[chunks[i][j]]['wp10']['score']['prediction'])
###Output
_____no_output_____
###Markdown
Save prediction values to a file
###Code
aq = open("article_quality.txt","w")
for item in article_quality:
aq.write("{}\n".format(item))
aq.close()
with open("article_quality.csv","w",newline="") as f:
aqcsv = csv.writer(f)
aqcsv.writerow(article_quality)
###Output
_____no_output_____
###Markdown
Read prediction values from the saved file
###Code
with open('article_quality.txt','r') as f:
articleQuality = f.read().splitlines()
###Output
_____no_output_____
###Markdown
Combining the datasetsIn this step, we'll combine the article quality data, article data and population data together. In addition, the rows without matching data will be removed in the process of combining the data. We write the merged data into a single CSV file containing five columns: country, article_name, revision_id, article_quality, population. First, add the ORES data into the Wikipedia data, then merge the Wikipedia data and population data together on the common key value (country).
###Code
wiki_data = pd.DataFrame(data[1:],columns=data[0])
wiki_data
len(pd.Series(articleQuality).values)
# Add the ORES data into the Wikipedia data
wiki_data["article_quality"] = pd.Series(articleQuality).values
# Rename columns of the Wikipedia data
wiki_data.columns = ["article_name","country","revision_id","article_quality"]
# Convert data (country and population) from the population file to dataframe
population_data = pd.DataFrame(population[1:],columns=population[0])
# Renames the columns with suitable names
population_data.columns = ["Location","population"]
# Merge the two datasets (wiki_data and population_data) based on the common key (country name). This step automatically
# removes the rows that do not have matching data.
merge_data = pd.merge(wiki_data, population_data, left_on = 'country', right_on = 'Location', how = 'inner')
merge_data = merge_data.drop('Location', axis=1)
# Swap first and second columns so that the dataframe follows the formatting conventions
merge_data = merge_data[["country","article_name","revision_id","article_quality","population"]]
###Output
_____no_output_____
###Markdown
Write merged data to a CSV file
###Code
merge_data.to_csv("final_data.csv")
###Output
_____no_output_____
###Markdown
AnalysisIn this step, we'll analyze the merged dataset ("final_data.csv") and understand how the coverage of politicians on Wikipedia and the quality of articles about politicians vary among different countries. Calculate the proportion (as a percentage) of articles-per-population
###Code
# Extract column "country" from merge data
merge_country = merge_data.iloc[:,0].tolist()
# Count the number of articles for each country
from collections import Counter
count_article = Counter(merge_country)
prop_article_per_population = []
df_prop_article_per_population = pd.DataFrame(columns=['country', 'population', 'num_articles','prop_article_per_population'])
num_country = 0
for country in count_article:
population = int(population_data.loc[population_data["Location"] == country, "population"].iloc[0].replace(",",""))
percentage = count_article[country] / population
prop_article_per_population.append("{:.10%}".format(percentage))
df_prop_article_per_population.loc[num_country] = [country,population,count_article[country],"{:.10%}".format(percentage)]
num_country += 1
# Show the table of the proportion of articles-per-population for each country
df_prop_article_per_population
###Output
_____no_output_____
###Markdown
Calculate the proportion (as a percentage) of high-quality articles for each country.
###Code
prop_high_quality_articles_each_country = []
df_prop_high_quality_articles_each_country = pd.DataFrame(columns=["country","num_high_quality_articles","num_articles","prop_high_quality_articles"])
num_country = 0
for country in count_article:
num_FA = Counter(merge_data.loc[merge_data['country'] == country].iloc[:,3].tolist())['FA']
num_GA = Counter(merge_data.loc[merge_data['country'] == country].iloc[:,3].tolist())['GA']
num_high_quality = num_FA + num_GA
percentage = num_high_quality / count_article[country]
prop_high_quality_articles_each_country.append("{:.10%}".format(percentage))
df_prop_high_quality_articles_each_country.loc[num_country] = [country,num_high_quality,count_article[country],"{:.10%}".format(percentage)]
num_country += 1
# Show the table of the proportion of high-quality articles for each country
df_prop_high_quality_articles_each_country
###Output
_____no_output_____
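###Markdown
The loop-based calculations above work; as a design note, the same two proportions can also be computed with a few vectorized pandas operations. A sketch of an equivalent approach (illustrative only, assuming the `merge_data` columns built earlier):
###Code
# Sketch: vectorized equivalent of the two per-country proportions computed above.
df_prop = merge_data.copy()
df_prop["population"] = df_prop["population"].str.replace(",", "").astype(float)
df_prop["high_quality"] = df_prop["article_quality"].isin(["FA", "GA"])

summary = df_prop.groupby("country").agg(
    population=("population", "first"),
    num_articles=("article_name", "size"),
    num_high_quality=("high_quality", "sum"),
)
summary["prop_article_per_population"] = summary["num_articles"] / summary["population"] * 100
summary["prop_high_quality_articles"] = summary["num_high_quality"] / summary["num_articles"] * 100
###Output
_____no_output_____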
###Markdown
TablesProduce four tables that show:1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# Get index of 10 highest-ranked countries
idx = df_prop_article_per_population["prop_article_per_population"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=False).index[0:10]
# Retrieve these rows by index values
highest_rank_10_prop_article_per_population = df_prop_article_per_population.loc[idx]
highest_rank_10_prop_article_per_population.to_csv("highest_rank_10_prop_article_per_population.csv")
highest_rank_10_prop_article_per_population
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# Get index of 10 lowest-ranked countries
idx = df_prop_article_per_population["prop_article_per_population"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=True).index[0:10]
# Retrieve these rows by index values
lowest_rank_10_prop_article_per_population = df_prop_article_per_population.loc[idx]
lowest_rank_10_prop_article_per_population.to_csv("lowest_rank_10_prop_article_per_population.csv")
lowest_rank_10_prop_article_per_population
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# Get index of 10 highest-ranked countries
idx = df_prop_high_quality_articles_each_country["prop_high_quality_articles"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=False).index[0:10]
# Retrieve these rows by index values
highest_rank_10_prop_high_quality_articles = df_prop_high_quality_articles_each_country.loc[idx]
highest_rank_10_prop_high_quality_articles.to_csv("highest_rank_10_prop_high_quality_articles.csv")
highest_rank_10_prop_high_quality_articles
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# Get index of 10 lowest-ranked countries
idx = df_prop_high_quality_articles_each_country["prop_high_quality_articles"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=True).index[0:10]
# Retrieve these rows by index values
lowest_rank_10_prop_high_quality_articles = df_prop_high_quality_articles_each_country.loc[idx]
lowest_rank_10_prop_high_quality_articles.to_csv("lowest_rank_10_prop_high_quality_articles_allzeros.csv")
lowest_rank_10_prop_high_quality_articles
# Get index of the 10 lowest-ranked countries whose proportion of high-quality articles is NOT equal to 0
idx = df_prop_high_quality_articles_each_country["prop_high_quality_articles"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=True)!=0
idx_not_zero = idx[idx == True].index[0:10]
lowest_rank_10_prop_high_quality_articles_not_zero = df_prop_high_quality_articles_each_country.loc[idx_not_zero]
lowest_rank_10_prop_high_quality_articles_not_zero.to_csv("lowest_rank_10_prop_high_quality_articles_notzeros.csv")
lowest_rank_10_prop_high_quality_articles_not_zero
###Output
_____no_output_____
###Markdown
DATA 512 Assignment 2: Bias in Data**DATA 512 Fall 2018****Ryan Bae**Due: November 1st, 2018The instructions to the assignment can be found in the following link:https://wiki.communitydata.cc/Human_Centered_Data_Science_(Fall_2018)/AssignmentsA2:_Bias_in_data
###Code
# import modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import csv
import requests
import json
###Output
_____no_output_____
###Markdown
Import Data and Call ORES API
###Code
# load datasets
page_data = pd.read_csv('page_data.csv')
wpds_2018 = pd.read_csv('WPDS_2018_data.csv')
###Output
_____no_output_____
###Markdown
Below is the function to call the Wikimedia ORES API to get the quality ratings of each article. The code below is taken from the course instructor Jonathan Morgan's github page in the link below:https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynbThe ORES API returns the quality of article in the following format, according to the assignment instructions:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class article
###Code
# function to call ORES API. API limit is ~290, so the rev_ids must be split into smaller chunks.
headers = {'User-Agent' : 'https://github.com/ryanbae89',
'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
# print(json.dumps(response, indent=4, sort_keys=True))
return response
###Output
_____no_output_____
###Markdown
Code in the cell below divides the revision ids from page_data into chunks of size 100 each. It then loops through each chunk and obtains the article quality from the ORES API. Each ORES API result (dict) for each revision id chunk is saved into the list `ores_ratings`.
###Code
# get ORES scores for each article
# get_ores_data must be called in chunks due to API limits
rev_ids_full = list(page_data['rev_id'])
n = 100
rev_ids_chunks = [rev_ids_full[i:i+n] for i in range(0, len(rev_ids_full), n)]
ores_ratings = []
# call API using the function in cell above for each rev_ids chunk
for rev_ids in rev_ids_chunks:
ores_results = get_ores_data(rev_ids, headers)
ores_ratings.append(ores_results['enwiki'])
###Output
_____no_output_____
###Markdown
Data Cleaning and EngineeringCell blocks below perform data engineering and cleaning to get the final table for analysis. The final dataframe has the following schema:

| country | article_name | revision_id | article_quality | population |
|---------:|:-------------:|-------------:|----------------:|-----------:|
| Chad | Bir I of Kanem | 355319463 | Stub | 15400000.0 |

First, the dict in `ores_ratings` is turned into pandas dataframes and concatenated.
###Code
# turn to pandas dataframes and concatenate each chunk
ores = pd.DataFrame()
for ores_ratings_chunk in ores_ratings:
ores_chunk = pd.DataFrame.from_dict(ores_ratings_chunk['scores'], orient='index')
ores = pd.concat([ores, ores_chunk])
ores = ores.reset_index()
ores.columns = ['rev_id', 'score']
###Output
_____no_output_____
###Markdown
The dict inside each `score` column of `ores` is further processed to get the article_quality feature. Rows without valid article_quality rating are dropped.
###Code
# function to get the prediction
def get_pred(row):
if 'score' in row.keys():
return row['score']['prediction']
else:
return 'NaN'
# apply to every row in the ores dataframe
ores['article_quality'] = ores['score'].apply(lambda x: get_pred(x))
ores = ores.drop('score', axis=1)
# change datatypes for joins
ores['rev_id'] = ores['rev_id'].apply(int)
# drop rows that do not have article_quality
ores = ores[ores['article_quality'] != 'NaN']
###Output
_____no_output_____
###Markdown
The `ores` dataframe is now inner-joined with `page_data` and `wpds_2018` to get population and article_name columns.
###Code
# join with page_data and wpds_2018 tables
ores = ores.merge(page_data, on='rev_id', how='inner')
ores = ores.merge(wpds_2018, left_on='country', right_on='Geography', how='inner')
# rename columns, drop unnecessary columns, and reorder columns
ores = ores.rename(index=str, columns={"page": "article_name",
"Population mid-2018 (millions)": "population",
"rev_id": "revision_id"})
ores = ores.drop('Geography', axis=1)
ores = ores[['country', 'article_name', 'revision_id', 'article_quality', 'population']]
###Output
_____no_output_____
###Markdown
The `population` feature is a string, so it must be processed and changed into a float.
###Code
# clean population feature
def clean_population_column(population):
population = population.replace(',', '')
return float(population)*1e6
ores['population'] = ores['population'].apply(clean_population_column)
###Output
_____no_output_____
###Markdown
The final `ores` dataframe is shown below:
###Code
print(ores.shape)
ores.head()
###Output
(44973, 5)
###Markdown
This is the final cleaned table containing the article, its revision id, article quality from ORES, country, and country's population. It is saved to a csv file in the cell below.
###Code
# save to csv
ores.to_csv('final_data.csv')
###Output
_____no_output_____
###Markdown
Data AnalysisCode below performs the data analysis using the cleaned final dataframe `ores`. Per assignment instructions, the following 4 tables are produced:1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# Get articles per population for each country
# start with copy of ores dataframe and get articles per country
articles_per_pop = ores.copy()
articles_per_country = ores.groupby('country')['revision_id'].count().to_frame().reset_index()
# get population for each country
pop_per_country = ores.groupby('country')['population'].mean().to_frame().reset_index()
# join the two tables on country and calculate articles per population for each country
articles_per_country = articles_per_country.merge(pop_per_country, on='country', how='inner')
articles_per_country['articles_per_population(%)'] = (articles_per_country['revision_id'] \
/ articles_per_country['population'])*100
# rename columns and sort
articles_per_country = articles_per_country.rename(index=str,
columns={'revision_id':'num_articles'})
articles_per_country = articles_per_country.sort_values('articles_per_population(%)',
ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# 10 highest-ranked countries in terms of number of politician articles as a
# proportion of country population
articles_per_country.head(10)
###Output
_____no_output_____
###Markdown
2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
# 10 lowest-ranked countries in terms of number of politician articles as a proportion
# of country population
articles_per_country[::-1].head(10)
# Get quality articles per country
# start with copy of ores dataframe
quality_articles = ores[ores['article_quality'].isin(['FA', 'GA'])]
quality_articles = quality_articles.groupby('country')['revision_id'].count().to_frame().reset_index()
# join with articles_per_country and rename columns
quality_per_country = articles_per_country.merge(quality_articles,
on='country',
how='inner')
quality_per_country = quality_per_country.rename(index=str,
columns={'revision_id':'num_quality_articles'})
# calculate quality articles percentage and sort
quality_per_country['quality_article_percentage'] = (quality_per_country['num_quality_articles'] \
/ quality_per_country['num_articles'])*100
quality_per_country = quality_per_country.sort_values('quality_article_percentage',
ascending=False).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# 10 highest-ranked countries in terms of number of GA and FA-quality articles as a
# proportion of all articles about politicians from that country
quality_per_country.head(10)
###Output
_____no_output_____
###Markdown
4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a
# proportion of all articles about politicians from that country
quality_per_country[::-1].head(10)
###Output
_____no_output_____
###Markdown
Step 0: Pre-processing
###Code
import requests
import json
import pandas as pd
from pandas.io.json import json_normalize
from datetime import datetime
import numpy as np
###Output
_____no_output_____
###Markdown
The base_path should be set to a location on your local machine from which you'd like the script to read the source data and to which it will write output files.
###Code
base_path = 'C:/Users/geoffc.REDMOND/OneDrive/Data512/A2/'
###Output
_____no_output_____
###Markdown
Step 1: Data acquisition First, we pull wikipedia article data and population reference bureau data from the CSV files we have in the base_path location.
###Code
#wikipedia data
wiki_page_data = pd.read_csv(base_path+'page_data.csv',header=0)
wiki_page_data = wiki_page_data.sort_values(by=['rev_id'],ascending = True) #The data appears to be pre-sorted but better safe than sorry.
#population reference bureau data
prb_data = pd.read_csv(base_path+'population_prb.csv',header=2)
prb_data = prb_data.drop(prb_data.columns[[1,2,3,5]],axis=1) #Drop location type, timeframe, data type, footnotes
prb_data.columns = ['country','population'] #Rename columns
###Output
_____no_output_____
###Markdown
Next, define a function to call the ORES API and return json containing, for the given revision ids, the ORES quality prediction and several other fields for the associated article. This code was (heavily) based on an example use of this API provided by Oliver Keyes.
###Code
def get_ores_data(revision_ids, headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
return json.dumps(json.loads(api_call.text))
###Output
_____no_output_____
###Markdown
Define a function that given the json returned by the ORES API will turn it into a sorted pandas dataframe object with just the revision ids and ORES quality prediction.
###Code
def process_ores_data(ores_json):
out = pd.read_json(pd.read_json(ores_json,'index',typ='frame')['scores'].to_json(),date_unit='us')['enwiki'].astype(str)
#take only the prediction from the json
out = out.str.split(',',expand=True)[0]
out = out.str.split("'",expand=True)[7]
out = out.to_frame()
out.columns = ['prediction']
out['rev_id'] = out.index
out = out.sort_values(by=['rev_id'],ascending = True)
return out
###Output
_____no_output_____
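###Markdown
The string-splitting in `process_ores_data` is tied to the exact textual layout of the serialized JSON. As a design note, the same information can be pulled out by walking the parsed response as nested dictionaries, which is less fragile if the field order ever changes. A sketch of such an alternative (not the version used for the results in this notebook):
###Code
def process_ores_data_dict(ores_json):
    # Alternative sketch: read predictions straight from the parsed JSON structure.
    scores = json.loads(ores_json)['enwiki']['scores']
    rows = []
    for rev_id, payload in scores.items():
        # Revisions without a score (e.g. deleted pages) simply get a None prediction.
        prediction = payload.get('wp10', {}).get('score', {}).get('prediction')
        rows.append({'rev_id': int(rev_id), 'prediction': prediction})
    return pd.DataFrame(rows).sort_values(by='rev_id')
###Output
_____no_output_____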
###Markdown
Define a function which, given headers and a sorted list of rev_ids, will process the rev_ids through ORES in batches of 50 and return a sorted dataframe of predictions.
###Code
def process_rev_id_list(revision_ids, headers):
start = revision_ids[0:1]
out = process_ores_data(get_ores_data(start, headers))
#print('out:', out)
index = 1
inc = 50
list_len = len(revision_ids)
while (index < list_len):
end = min(list_len,index+inc)
#print('end:',end)
lst = revision_ids[index:end]
res = process_ores_data(get_ores_data(lst, headers))
#print('res:',res)
out = out.append(res)
index = end
return out
###Output
_____no_output_____
###Markdown
Using our functions, process the list of revision_ids from the wikipedia article data.
###Code
headers = {'User-Agent' : 'https://github.com/gdc3000', 'From' : '[email protected]'}
input_ids = wiki_page_data['rev_id'].tolist()
output_df = process_rev_id_list(input_ids,headers)
###Output
_____no_output_____
###Markdown
Step 2: Data processing Now that we have the wikipedia page data, ORES quality scores and population data, we merge the data into one table for analysis. First, we join the resulting ORES dataframe with the other wikipedia page fields.
###Code
wiki_page_data_wPrediction = pd.merge(wiki_page_data,output_df,on='rev_id',how='outer')
###Output
_____no_output_____
###Markdown
Define a function which outputs a given dataframe to a CSV file with given name at the given path.
###Code
def expToCSV(path, filename, dataframe):
    # Write the given dataframe to a CSV file named filename at the given path.
    dataframe.to_csv(path_or_buf=path+filename, sep=',', encoding='utf-8', index=False)
###Output
_____no_output_____
###Markdown
Next, join the wikipedia and population data together on country. Where countries in the two datasets do not match, we will remove the row (i.e. we are doing an inner join).
###Code
#convert population to numeric type
prb_data['population'] = prb_data['population'].str.replace(',', '')
prb_data['population'] = pd.to_numeric(prb_data['population'])
#combine data
combined_data = pd.merge(wiki_page_data_wPrediction,prb_data,on='country',how='inner')
combined_data = combined_data[['country','page','rev_id','prediction','population']]
combined_data.columns = ['country','article_name','revision_id','article_quality','population']
###Output
_____no_output_____
###Markdown
There are a few rows in the data where the ORES API couldn't return a quality score. The quality score returned starts with "RevisionNotFound". We'll remove these rows from the data even though it only appears there are two of them.
###Code
print(combined_data[combined_data.article_quality.str.match('RevisionNotFound',case=False)].shape)
combined_data_clean = combined_data[~combined_data.article_quality.str.match('RevisionNotFound',case=False)]
###Output
(2, 5)
###Markdown
Before starting the analysis step, we will export the scrubbed, combined data to a CSV file.
###Code
expToCSV(base_path,'final_data_a2.csv',combined_data_clean)
###Output
_____no_output_____
###Markdown
Step 3: Analysis Taking the table resulting from step 2, flag any articles where article_quality is 'FA' or 'GA' as high quality and add these flags as a field to the table. The warnings shown below should not affect our analysis.
###Code
combined_data['high_quality'] = 0
combined_data['high_quality'][combined_data['article_quality'] == 'FA'] = 1
combined_data['high_quality'][combined_data['article_quality'] == 'GA'] = 1
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
This is separate from the ipykernel package so we can avoid doing imports until
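###Markdown
The chained indexing above is what triggers pandas' SettingWithCopyWarning. An equivalent, warning-free way to build the flag (a sketch with the same logic) is:
###Code
# Equivalent high-quality flag without chained indexing (avoids SettingWithCopyWarning).
combined_data['high_quality'] = combined_data['article_quality'].isin(['FA', 'GA']).astype(int)
###Output
_____no_output_____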
###Markdown
Compute the proportion of high quality articles in terms of the total number of politician articles per country.
###Code
qual_byarticlecount = combined_data.groupby('country',as_index=False).agg({'high_quality': np.sum, 'article_name': np.size})
qual_byarticlecount.columns = ['country','high_quality','article_count'] #fix column name
qual_byarticlecount['proportion'] = qual_byarticlecount['high_quality'] / qual_byarticlecount['article_count']
###Output
_____no_output_____
###Markdown
Next, compute the proportion of total articles in terms of the population of each country.
###Code
qual_bypop = combined_data.groupby(['country','population'],as_index=False).agg({'article_name': np.size})
qual_bypop.columns = ['country','population','article_count'] #fix column name
qual_bypop['proportion'] = qual_bypop['article_count'] / qual_bypop['population']
###Output
_____no_output_____
###Markdown
Next, sort these tables by the proportion.
###Code
qual_bypop = qual_bypop.sort_values(by=['proportion','population'],ascending = [False,True])
qual_byarticlecount = qual_byarticlecount.sort_values(by=['proportion','article_count'],ascending = [False,True])
###Output
_____no_output_____
###Markdown
Display the 10 highest-ranked countries in terms of proportion of politician articles to a country's population.
###Code
qual_bypop.head(10)
###Output
_____no_output_____
###Markdown
Display the 10 lowest-ranked countries in terms of proportion of politician articles to a country's population. For countries with an equivalent proportion (if that occurred), we show those with the highest population at the bottom.
###Code
qual_bypop.tail(10)
###Output
_____no_output_____
###Markdown
Display the 10 highest-ranked countries in terms of the proportion of high-quality politician articles to total articles.
###Code
qual_byarticlecount.head(10)
###Output
_____no_output_____
###Markdown
Display the 10 lowest-ranked countries in terms of the proportion of high-quality politician articles to total articles. For countries with an equivalent proportion of high quality articles, we show those with the highest total article_count at the bottom.
###Code
qual_byarticlecount.tail(10)
###Output
_____no_output_____
###Markdown
Bias on WikipediaThis ipython notebook is created for DATA512 at UW for this assignment: https://wiki.communitydata.cc/HCDS_(Fall_2017)/AssignmentsA2:_Bias_in_dataOur goal is to analyze the content of wikipedia to understand the biases of the site by looking at the content coverage for political members of countries. We look at how many pages there are (as a percent of the country's population) and how many of the pages are high quality (using scores from the ORES system, more info below).In the end, we show the top/bottom 10 countries for these 2 categories. Related Data Filesraw data files:- page_data.csv : raw wikipedia data- WPDS_2018_data.csv : raw country population dataOutput files:- ores_data.csv : articles scores from the ORES system- combined_data.csv : combined data (country population, ores data and wikipedia data) First, import necessary packages
###Code
import requests
import json
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Import the data, and print out the first few rows to see examples. Data comes from a few different sources. Wikipedia data is available via figshare (https://figshare.com/articles/Untitled_Item/5513449 , under country/data/) with license CC-BY-SA 4.0. This contains "most English-language Wikipedia articles within the category 'Category:Politicians by nationality' and subcategories". This data contains 3 columns, which are called out in the above link as follows:1. "country", containing the sanitised country name, extracted from the category name;2. "page", containing the unsanitised page title.3. "last_edit", containing the edit ID of the last edit to the page. Population data is available via https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0. This file contains the population in millions from mid-2018 along with the country name. A copy of the datasets, downloaded in October 2018, is available in this repo.
###Code
wiki_data = pd.read_csv('page_data.csv')
country_data = pd.read_csv('WPDS_2018_data.csv',thousands=',')
country_data.rename(columns={"Population mid-2018 (millions)": "population"},inplace=True)
wiki_data.head()
country_data.head()
###Output
_____no_output_____
###Markdown
Here we create a helper function for getting ores scoresThis function takes revision ids (and the headers needed to make the call) and scores the corresponding articles using the ORES system. The score and the revision id are appended to the ores_data list. ORES (Objective Revision Evaluation Service) is a machine learning service that ranks the quality of a given article. The ranks go from best to worst as FA, GA, B, C, Start and Stub. For the purposes of this analysis, we use only the predicted category (rather than the probabilities, which are also available). Link with more info: https://www.mediawiki.org/wiki/ORES
###Code
def get_ores_data(revision_ids, headers):
temp_data = []
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = pd.read_json(json.dumps(api_call.json(), indent=4, sort_keys=True))
    for id in response['enwiki']['scores']:
        try:
            ores_data.append([id, response['enwiki']['scores'][id]['wp10']['score']['prediction']])
        except:
            # Revisions without a score (e.g. deleted pages) raise a KeyError here; skip them.
            pass
#print(json.dumps(response, indent=4, sort_keys=True))
#return temp_data #response
###Output
_____no_output_____
###Markdown
Here we define the headers needed to call the above function and iterate over all of the revisions, calling the function in batches (of about 100, or 472 batches for slightly less than 47k revisions).
###Code
%%time
# So if we grab some example revision IDs and turn them into a list and then call get_ores_data...
ores_data = [] #pd.DataFrame(columns =['revid','category'])
#ores_data.append([['a','b']])
#print(ores_data)
headers = {'User-Agent' : 'https://github.com/your_github_username', 'From' : '[email protected]'}
for i in np.array_split(np.asarray(wiki_data['rev_id']),472): #, 472): #split into buckets of approximately 100
get_ores_data(i, headers)#,columns =['revid','category']
#temp_data = pd.DataFrame(get_ores_data(i, headers),columns =['revid','category'])
#print("here")
#print(ores_data)
#print(temp_data)
#ores_data.append(temp_data)
###Output
CPU times: user 14.1 s, sys: 664 ms, total: 14.7 s
Wall time: 2min 24s
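###Markdown
The number of splits above is hard-coded to 472. As a small design note, it could instead be derived from the data size so the batches stay near 100 revision ids; a sketch (illustrative only, since the loop above has already filled `ores_data`):
###Code
import math

# Derive the number of splits from the data size instead of hard-coding it.
batch_size = 100
n_batches = math.ceil(len(wiki_data) / batch_size)
print(n_batches)  # roughly 472 for a little under 47k articles
###Output
_____no_output_____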
###Markdown
Here we convert the ores_data into a pandas dataframe and save to a csv for reference.
###Code
ores_data = pd.DataFrame(ores_data,columns =['revision_id','article_quality'])#.set_index('revision_id')
ores_data.to_csv('ores_data.csv')
###Output
_____no_output_____
###Markdown
We convert revision_id to an int so we can join it to the wikipedia data.
###Code
#check out ores
ores_data['revision_id'] = ores_data['revision_id'].astype(int)
#ores_data.set_index('revid')
#ores_data.reset_index(inplace=True)
ores_data.head()
###Output
_____no_output_____
###Markdown
Here we merge the wikipedia data to the ores data on the revision id. We also merge onto the country data on the country/geography columns. There are 44,973 rows left after we inner join.
###Code
# Merge data
combined_data = wiki_data.merge(country_data,
how = 'inner',
left_on ='country',
right_on = 'Geography').merge(ores_data,
how = 'inner',
left_on = 'rev_id',
right_on = 'revision_id'
)
print(combined_data.shape)
###Output
(44973, 7)
###Markdown
Here is a preview of the US data:
###Code
combined_data[combined_data['country']=='United States'].head()
###Output
_____no_output_____
###Markdown
We filter the new dataset to remove duplicate columns and save this to a csv.
###Code
combined_data = combined_data[['country','page','revision_id','article_quality','population']]
combined_data.to_csv('combined_data.csv')
###Output
_____no_output_____
###Markdown
AnalysisHere we start analysing the data. First, we create a pivot table with population by country.
###Code
# Analysis
articles_and_population = combined_data.pivot_table(values = ['population'],
index = ['country'],
dropna = False,
#columns = ['article_quality'],
aggfunc = {'population': min,'country':'count'}
).rename(columns={"country": "num_articles"}).reset_index()
articles_and_population.shape
###Output
_____no_output_____
###Markdown
Next, we create a pivot table with number of high quality articles by country.
###Code
high_qual_articles = combined_data[combined_data['article_quality'].isin(['FA','GA'])].pivot_table(values = ['population'],
index = ['country'],
dropna = False,
#columns = ['article_quality'],
aggfunc = {'country':'count'}
).rename(columns={"country": "num_high_quality_articles"}).reset_index()
high_qual_articles.shape
###Output
_____no_output_____
###Markdown
We join the datasets and fill NAs with zeros. We change num_articles to be an int and population to be a float. We then calculate the articles_per_population (which is per million people) and the high quality article percentage for each country. Finally, we set the index as the country (as these are unique) and display the results.
###Code
dataset = articles_and_population.merge(high_qual_articles, how='left').fillna(0)
dataset['num_articles'] = dataset['num_articles'].astype(int)
dataset['population'] = dataset['population'].astype(float)
#dataset.dropna(inplace=True)
dataset['articles_per_population'] = dataset['num_articles'] / dataset['population']
dataset['high_qual_article_perc'] = dataset['num_high_quality_articles'] / dataset['num_articles']
dataset.set_index('country',inplace=True)
dataset
###Output
_____no_output_____
###Markdown
Finally, display the top and bottom countries by articles per million people. Tuvalu has the highest value, but does have an extremely small population. Of the represented countries, India has the smallest number of articles per million people.
###Code
dataset.sort_values(by = 'articles_per_population',ascending = False)[0:10]
dataset.sort_values(by = 'articles_per_population',ascending = True)[0:10]
###Output
_____no_output_____
###Markdown
And lastly, we look at the top and bottom countries by high quality article percentage. North Korea has the highest percentage at approximately 18% while Tanzania has the lowest at around .2%. Note that there are some countries that have been removed due to not having any high quality articles. The full list of these countries is at the end.
###Code
dataset.sort_values(by = 'high_qual_article_perc',ascending = False)[0:10]
#dataset.sort_values(by = 'high_qual_article_perc',ascending = True)[0:10]
dataset[dataset['high_qual_article_perc']>0].sort_values(by = 'high_qual_article_perc',ascending = True)[0:10]
###Output
_____no_output_____
###Markdown
Countries with 0 high quality articles:
###Code
dataset[dataset['high_qual_article_perc']==0].index
import matplotlib.pyplot as plt
plt.scatter(np.log(dataset['high_qual_article_perc']+.0001),  # the +.0001 offset avoids log(0) for countries with no high-quality articles
np.log(dataset['articles_per_population']),
c='r',
s=1
)
plt.show()
###Output
_____no_output_____
###Markdown
Data Cleaning Read the data into Pandas Dataframes.
###Code
page_data_file_path = "./page_data.csv"
wpds_data_file_path = "./WPDS_2018_data.csv"
page_data_df = pd.read_csv(page_data_file_path)
wpds_df = pd.read_csv(wpds_data_file_path)
page_data_df.head()
wpds_df.head()
###Output
_____no_output_____
###Markdown
Clean page_data by removing the pages which represent templates.
###Code
is_template = page_data_df['page'].str.match('Template:')
page_data_cleaned_df = page_data_df[~is_template]
###Output
_____no_output_____
###Markdown
Clean wpds data by removing the rows representing cumulative regions or continents.
###Code
wpds_df["is_continent"] = wpds_df.Geography.str.isupper()
wpds_countries_df = wpds_df[~wpds_df["is_continent"]]
wpds_continents_df = wpds_df[wpds_df["is_continent"]]
###Output
_____no_output_____
###Markdown
Showing the wpds rows corresponding to Cumulative regions (continents).
###Code
wpds_continents_df
###Output
_____no_output_____
###Markdown
Map each country to its region.
###Code
country_region_dict = {}
cur_region = None
for row in wpds_df.iterrows():
geography = row[1]["Geography"]
if geography.isupper():
cur_region = geography
else:
country_region_dict[geography] = cur_region
country_region_df = pd.DataFrame(list(country_region_dict.items()), columns=['country', 'region'])
country_region_df.head()
###Output
_____no_output_____
###Markdown
Getting article quality predictions from ORES. Making ORES requests using REST API. Alternatively, the ORES python package can be used, but it has additional dependencies which may cause trouble while installing.
###Code
# Copied from Demo: "https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb".
headers = {'User-Agent' : 'https://github.com/bhuvi3', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers, batch_size=100):
def chunker(seq, size):
"""
Taken from Stack Overflow answer by 'nosklo': https://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks.
"""
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
aggregated_response = {}
for rev_ids_group in chunker(revision_ids, batch_size):
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in rev_ids_group)
}
uri = endpoint.format(**params)
api_call = requests.get(uri)
cur_response = api_call.json()
aggregated_response.update(cur_response['enwiki']['scores'])
return aggregated_response
###Output
_____no_output_____
###Markdown
The API call over all the revision ids might take a few minutes. The ORES REST API was throwing errors when queried for more than approx. 200 revision ids in a single call. Hence, I am querying the revision ids in batches. Also, I am storing the query results in a local pickle file, so that we can avoid making API calls if running this multiple times.
###Code
# Note: This cell may take few minutes to run (~5 min)
# For each revision_id in our data, we get ORES quality class predictions.
ores_res_cache_file = "cached_ores_api_call_res.pickle"
if os.path.exists(ores_res_cache_file):
with open(ores_res_cache_file, "rb") as fp:
ores_call_res = pickle.load(fp)
else:
revision_ids = []
for row in page_data_cleaned_df.iterrows():
row_series = row[1]
revision_ids.append(int(row_series["rev_id"]))
    ores_call_res = get_ores_data(revision_ids, headers)
    # Cache the ORES responses locally so that re-running the notebook can skip the API calls.
    with open(ores_res_cache_file, "wb") as fp:
        pickle.dump(ores_call_res, fp)
###Output
_____no_output_____
###Markdown
Parse the API call result and add the article_quality to the page_data. Ignore the articles for which the ORES quality could not be retrieved, and store these article revision ids in a file locally.
###Code
quality_categories_dict = {}
missed_rev_ids = []
for key, value in ores_call_res.items():
try:
quality_categories_dict[key] = value["wp10"]["score"]['prediction']
except:
quality_categories_dict[key] = "missed"
missed_rev_ids.append(key)
missed_rev_ids_file = "ores_missed_rev_ids.txt"
with open(missed_rev_ids_file, "w") as fp:
for rev_id in missed_rev_ids:
fp.write("%s\n" % rev_id)
print("Total number of articles for which ORES quality could not be retrieved: %s. "
"The revision_ids of these articles have been written to %s"
% (len(missed_rev_ids), missed_rev_ids_file))
page_quality_df = pd.DataFrame(list(quality_categories_dict.items()), columns=['rev_id', 'article_quality']).astype({'rev_id': 'int64'})
page_data_joined_df = page_data_cleaned_df.merge(page_quality_df, on="rev_id", how="inner")
page_data_joined_filtered_df = page_data_joined_df[page_data_joined_df["article_quality"] != "missed"]
page_data_joined_filtered_df = page_data_joined_filtered_df.rename(columns={"rev_id": "revision_id", "page": "article_name"})
page_data_joined_filtered_df.head()
wpds_countries_df["Population mid-2018 (millions)"] = wpds_countries_df["Population mid-2018 (millions)"].str.replace(',', '')
wpds_countries_df = wpds_countries_df.astype({"Population mid-2018 (millions)": "float32"})
wpds_countries_df["population"] = wpds_countries_df["Population mid-2018 (millions)"] * 1000000
wpds_countries_df = wpds_countries_df.drop(columns=["is_continent", "Population mid-2018 (millions)"])
wpds_countries_df = wpds_countries_df.rename(columns={"Geography": "country"})
wpds_countries_df.head()
###Output
_____no_output_____
###Markdown
Combine the Wikipedia and Population data (from WPDS).
###Code
page_wpds_merged_df = page_data_joined_filtered_df.merge(wpds_countries_df, on="country", how="left")
is_no_match = page_wpds_merged_df["population"].isnull()
no_match_rows_file = "wp_wpds_countries-no_match.csv"
page_wpds_merged_df_no_match = page_wpds_merged_df[is_no_match]
page_wpds_merged_df_no_match.to_csv(no_match_rows_file, index=False)
print("Rows which did not match have been saved at %s" % no_match_rows_file)
page_wpds_merged_df_matched = page_wpds_merged_df[~is_no_match]
matched_rows_file = "wp_wpds_politicians_by_country.csv"
page_wpds_merged_df_matched.to_csv(matched_rows_file, index=False)
print("Rows matched have been saved at %s" % matched_rows_file)
# Rows where the countries did not match.
page_wpds_merged_df_no_match.head()
# Rows where countries matched.
page_wpds_merged_df_matched.head()
###Output
_____no_output_____
###Markdown
Analysis Create an analysis df with the following metrics for analyzing the bias.- coverage: The percentage of articles by population. If a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be .1%.- relative_quality: The percentage of high-quality articles. If a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.
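A quick sanity check of these two formulas, using only the hypothetical numbers from the examples above (not the actual dataset):
###Code
# Hypothetical example values taken from the text above.
example_population = 10000
example_articles = 10
example_high_quality_articles = 2
coverage_perc = (example_articles / example_population) * 100                      # 0.1 (%)
relative_quality_perc = (example_high_quality_articles / example_articles) * 100   # 20.0 (%)
print(coverage_perc, relative_quality_perc)
###Output
_____no_output_____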
###Code
# Find number of articles per country.
country_article_counts_df = page_wpds_merged_df_matched.groupby("country").size().reset_index(name='article_count')
# Find number of high quality articles per country.
is_high_quality = (page_wpds_merged_df_matched["article_quality"] == "FA") | (page_wpds_merged_df_matched["article_quality"] == "GA")
country_high_quality_article_count_df = page_wpds_merged_df_matched[is_high_quality].groupby("country").size().reset_index(name='high_quality_article_count')
# Make an analysis dataframe with computed metrics.
analysis_df = country_article_counts_df.merge(wpds_countries_df, on="country", how="inner")
analysis_df = analysis_df.merge(country_high_quality_article_count_df, on="country", how="left")
analysis_df['high_quality_article_count'] = analysis_df['high_quality_article_count'].fillna(value=0).astype("int64")
# Add the percentage metrics.
analysis_df["coverage_perc"] = (analysis_df["article_count"] / analysis_df["population"]) * 100
analysis_df["relative_quality"] = (analysis_df["high_quality_article_count"] / analysis_df["article_count"]) * 100
analysis_df.head()
###Output
_____no_output_____
###Markdown
Add region-wise metrics.
###Code
region_analysis_df = analysis_df.drop(columns=["coverage_perc", "relative_quality"]).merge(country_region_df, on="country", how="inner")
region_analysis_df = region_analysis_df.groupby("region").sum()
region_analysis_df["coverage_perc"] = (region_analysis_df["article_count"] / region_analysis_df["population"]) * 100
region_analysis_df["relative_quality"] = (region_analysis_df["high_quality_article_count"] / region_analysis_df["article_count"]) * 100
region_analysis_df
###Output
_____no_output_____
###Markdown
Analysis Results Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population.
###Code
# Additional columns have been retained to allow for observation.
analysis_df.sort_values("coverage_perc", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population.
###Code
analysis_df.sort_values("coverage_perc", ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality.
###Code
analysis_df.sort_values("relative_quality", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality.
###Code
analysis_df.sort_values("relative_quality", ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population.
###Code
region_analysis_df.sort_values("coverage_perc", ascending=False)
###Output
_____no_output_____
###Markdown
Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality.
###Code
region_analysis_df.sort_values("relative_quality", ascending=False)
###Output
_____no_output_____
###Markdown
Name : Sindhu Madhadi Assignment A2: The goal of this assignment is to explore the concept of bias through data on Wikipedia articles Step 1: Getting the Article and Population Data
###Code
import json
import pandas as pd
import numpy as np
import requests
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Wikipedia Politicians by country dataset: https://figshare.com/articles/dataset/Untitled_Item/5513449
###Code
df_article = pd.read_csv('A2_data/page_data.csv')
df_article.head()
###Output
_____no_output_____
###Markdown
The population data is available in CSV format as WPDS_2020_data.csv : https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/edit#gid=283125346
###Code
df_population = pd.read_csv('A2_data/WPDS_2020_data.csv')
df_population.head()
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data Ignore rows that provide cumulative regional population counts, rather than country-level counts, but retain them for future use (these are the rows whose Name is in capital letters).
###Code
df_population[df_population["Name"].str.isupper()]
#Save them for future use:
Capital_population_name = df_population[df_population["Name"].str.isupper()]
Capital_population_name
df_population.drop(df_population[df_population["Name"].str.isupper()].index,inplace=True)
###Output
_____no_output_____
###Markdown
The dataset of Wikipedia Politicians contains some page names that start with the string "Template:". These pages are not Wikipedia articles, and should not be included in your analysis.
###Code
template_article_name=df_article[df_article["page"].str.startswith("Template:")]
template_article_name
df_article.drop(df_article[df_article["page"].str.startswith("Template:")].index,inplace=True)
df_article
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Predictions Get the predicted quality scores for each article in the Wikipedia dataset. We're using a machine learning system called ORES. The article quality estimates are, from best to worst: FA - Featured article GA - Good article B - B-class article C - C-class article Start - Start-class article Stub - Stub-class article
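For reference, the ordering above (and the FA/GA cutoff treated as "high quality" later in this notebook) can be written down as a small helper; the names below are illustrative and are not used elsewhere:
###Code
# Quality classes ordered from best (0) to worst (5), following the list above.
QUALITY_ORDER = {"FA": 0, "GA": 1, "B": 2, "C": 3, "Start": 4, "Stub": 5}
def is_high_quality(prediction):
    # The analysis below counts FA and GA articles as high quality.
    return prediction in ("FA", "GA")
###Output
_____no_output_____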
###Code
def api_ores_data(revision_ids):
#Defining headers:
HEADERS = {'User-Agent': 'https://github.com/sindhumadhadi09', 'From': '[email protected]'}
#endpoint:
endpoint="https://ores.wikimedia.org/v3/scores/{context}/?models={model}&revids={revid}"
#parameters:
params = {'context': 'enwiki',
'model' : 'articlequality',
'revid' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params), headers=HEADERS)
response = api_call.json()
# Stripping out the predictions:
rev_prediction_arr = []
pred_notfound_arr = []
for rev_id in revision_ids:
try:
prediction = response['enwiki']["scores"][str(rev_id)]["articlequality"]["score"]["prediction"]
rev_prediction_arr.append({'rev_id':rev_id,
                                       'prediction':prediction})
except:
# Storing the rev_ids for which we couldn't get any prediction.
pred_notfound_arr.append(rev_id)
return rev_prediction_arr, pred_notfound_arr
# Call API:
revs_prediction_arr = []
log_pred_notfound = []
for i, rev_ids in enumerate(np.array_split(df_article, 1000)):
# Getting the prediction and storing the results in arrays.
rev_ids=rev_ids['rev_id'].tolist()
rev_prediction_value, pred_notfound_value = api_ores_data(rev_ids)
revs_prediction_arr.extend(rev_prediction_value)
log_pred_notfound.extend(pred_notfound_value)
len(revs_prediction_arr),len(log_pred_notfound)
###Output
_____no_output_____
###Markdown
Step 4: Combining the Datasets
###Code
# Convert the prediction array to a dataframe:
rev_prediction_df=pd.DataFrame(revs_prediction_arr)
rev_prediction_df.head()
# combine the article data with the prediction values:
df_article = df_article.merge(rev_prediction_df, on='rev_id')
df_article.head()
# Saving the data.
df_article.to_csv('A2_data/page_data_prediction.csv')
# Combine the two data frames:
output_data = df_article.merge(df_population, how='outer', left_on ="country" ,right_on="Name")
output_data
###Output
_____no_output_____
###Markdown
Considering the edge cases: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.
###Code
output_countries_no_match = output_data[(output_data["country"].isnull())|(output_data["Name"].isnull())]
#save to file:
output_countries_no_match.to_csv("A2_data/wp_wpds_countries-no_match.csv")
# remaining data:
remaining_data = output_data[(output_data["country"].isnull()==False)&(output_data["Name"].isnull()==False)]
# renaming column names:
remaining_data.columns = ['article_name', 'country', 'revision_id', 'article_quality_est.',
'FIPS', 'Name', 'Type', 'TimeFrame', 'Data (M)',
'population']
remaining_data
#Finalising Schema:
remaining_data = remaining_data[["country","article_name","revision_id","article_quality_est.","population"]]
remaining_data
# save to file :
remaining_data.to_csv("A2_data/wp_wpds_politicians_by_country.csv")
###Output
_____no_output_____
###Markdown
Step 5: Analysis articles-per-population and high-quality articles for each country AND for each geographic region. Articles-per-population
###Code
articles_per_population = remaining_data.groupby(["country"]).apply(lambda s: (s.article_name.count()/s.population.max())*100)
articles_per_population
###Output
_____no_output_____
###Markdown
High - Quality Articles:
###Code
remaining_data["hight_quality_article"]= (remaining_data["article_quality_est."]=="FA")|(remaining_data["article_quality_est."]=="GA")
high_quality_articles = remaining_data.groupby(["country"]).apply(lambda s: (s.hight_quality_article.sum()/s.article_name.count())*100)
high_quality_articles
###Output
_____no_output_____
###Markdown
Step 6: Results 1.Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
articles_per_population.to_frame("articles_per_population_per_country").reset_index().sort_values("articles_per_population_per_country",ascending = False).head(10)
###Output
_____no_output_____
###Markdown
2.Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
articles_per_population.to_frame("articles_per_population_per_country").reset_index().sort_values("articles_per_population_per_country",ascending = True).head(10)
###Output
_____no_output_____
###Markdown
3.Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
high_quality_articles.to_frame("high_articles_per_count").reset_index().sort_values("high_articles_per_count",ascending = False).head(10)
###Output
_____no_output_____
###Markdown
4.Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
high_quality_articles.to_frame("high_articles_per_count").reset_index().sort_values("high_articles_per_count",ascending = True).head(10)
# To answer further questions we need:
data_WPDS = pd.read_csv("A2_data/WPDS_2020_data.csv")
country_region = data_WPDS["Type"]
country_name = data_WPDS["Name"]
population = data_WPDS["Population"]
set_regions_country = {}
set_regions_population ={}
current = None
#create region country mapping
for p,cr,cn in zip(population,country_region,country_name):
if cr=="Sub-Region":
set_regions_country[cn]=[]
current = cn
set_regions_population[cn]=p
if cr=="Country":
if current!= None:
set_regions_country[current].append(cn)
#country---> region:
set_country_region ={}
for r,c in set_regions_country.items():
for i in c :
set_country_region[i]=r
remaining_data["region"] = remaining_data["country"].replace(set_country_region)
#create a region for region population
remaining_data["region_population"] = remaining_data["region"].replace(set_regions_population)
###Output
_____no_output_____
###Markdown
5.Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
remaining_data.groupby(["region"]).apply(lambda s: (s.article_name.count()/s.population.max())*100).to_frame("articles_per_regional_pop_per_country").sort_values("articles_per_regional_pop_per_country",ascending = False)
###Output
_____no_output_____
###Markdown
6.Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
remaining_data.groupby(["region"]).apply(lambda s: (s.high_quality_article.sum()/s.article_name.count())*100).to_frame("high_articles_per_count").sort_values("high_articles_per_count",ascending = False)
###Output
_____no_output_____
###Markdown
Assignment 2: Bias in data Getting article quality predictions
###Code
import csv
import requests
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# getting the article data from the CSV files
headers = {'User-Agent' : 'https://github.com/jingyany', 'From' : '[email protected]'}
data = []
with open('page_data.csv') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append([row[0],row[1],row[2]])
del data[0]
# Get the predicted quality scores for each article from ORES
def get_data(revision_ids, headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
return response
# Get the article quality for each article from ORES
def get_prediction(id_lst, headers):
res = []
start = 0
while start < len(id_lst):
end = min(start + 100, len(id_lst))
score = get_data(id_lst[start : end], headers)
for rev_id in score['enwiki']['scores']:
if 'score' in score['enwiki']['scores'][rev_id]['wp10']:
res.append(score['enwiki']['scores'][rev_id]['wp10']['score']['prediction'])
else:
res.append(None)
start = start + 100
return res
id_lst = [item[2] for item in data]
len(id_lst)
prediction = get_prediction(id_lst, headers)
len(prediction)
###Output
_____no_output_____
###Markdown
Combine the datasets
###Code
# Read article data to pandas from csv file
df = pd.read_csv('page_data.csv')
df.head()
# Add article quality to the dataset
pred = pd.Series(prediction)
df['article_quality'] = pred.values
df.head()
# Read population data to pandas from csv file
df2 = pd.read_csv('Population Mid-2015.csv')
df2.head()
# Merge article data and population data based on country
new_df = pd.merge(df, df2, on='country', how='inner')
new_df.head()
# Generate final dataset for the data analysis
final = pd.DataFrame({'country' : new_df.country,
'article_name' : new_df.page,
'revision_id': new_df.rev_id,
'article_quality': new_df.article_quality,
'population': new_df.Data})
final.head()
# Save the final dataset to a csv file
final.to_csv('final_data.csv')
###Output
_____no_output_____
###Markdown
Analysis Calculate the proportion of articles per population
###Code
# Calculate the total number of articles of each country
article_per_country = final[['article_name','country']].groupby(['country']).agg(['count'])
article_per_country = article_per_country.reset_index()
article_per_country.columns = article_per_country.columns.get_level_values(0)
article_per_country.head()
temp = pd.merge(article_per_country, df2, on='country', how='inner' )
temp.head()
temp['Data'] = temp['Data'].str.replace(',', '')
temp['Data'] = temp['Data'].astype(int)
temp.dtypes
# Calculate articles per population
temp['articles_per_population'] = temp.article_name/temp.Data
temp.head()
###Output
_____no_output_____
###Markdown
Calculate the percentage of high-quality articles for each country
###Code
# Select articles with high quality
temp2 = final[(final['article_quality'] == 'FA') | (final['article_quality'] == 'GA')]
temp2.head()
# Count number of high-quality articles per country
highquality_per_country = temp2[['article_name','country']].groupby(['country']).agg(['count'])
highquality_per_country = highquality_per_country.reset_index()
highquality_per_country.columns = highquality_per_country.columns.get_level_values(0)
highquality_per_country.head()
temp = pd.merge(highquality_per_country, temp, on='country', how='outer' )
temp.head()
# Calculate the percentage of high-quality articles per country
temp['high_quality_articles'] = temp.article_name_x/temp.article_name_y
temp.head()
analysis_data = pd.DataFrame({'country': temp.country,
                              'articles_per_population': temp.articles_per_population,
'high_quality_articles': temp.high_quality_articles})
analysis_data.head()
###Output
_____no_output_____
###Markdown
Visualization 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
proportion_asc = analysis_data.sort_values('articles_per_population', ascending=False)
del proportion_asc['high_quality_articles']
proportion_asc = proportion_asc.set_index('country')
proportion_asc = proportion_asc.head(10)
proportion_asc
ax = proportion_asc[['articles_per_population']].head(10).plot(kind='bar',
title ="10 highest-ranked countries in terms of number of politician articles as a proportion of country population",
figsize=(15, 10), legend=True, fontsize=12)
ax.set_xlabel("Countries", fontsize=12)
ax.set_ylabel("Percentage", fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
proportion_desc = analysis_data.sort_values('articles_per_population', ascending=True)
del proportion_desc['high_quality_articles']
proportion_desc = proportion_desc.set_index('country')
proportion_desc = proportion_desc.head(10)
proportion_desc
ax = proportion_desc[['articles_per_population']].head(10).plot(kind='bar',
title ="10 lowest-ranked countries in terms of number of politician articles as a proportion of country population",
figsize=(15, 10), fontsize=12)
ax.set_xlabel("Countries", fontsize=15)
ax.set_ylabel("Percentage", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
highquality_asc = analysis_data.sort_values('high_quality_articles', ascending=False)
del highquality_asc['articles_per_population']
highquality_asc = highquality_asc.set_index('country')
highquality_asc = highquality_asc.head(10)
highquality_asc
ax = highquality_asc[['high_quality_articles']].head(10).plot(kind='bar',
title ="10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country",
figsize=(15, 10), fontsize=12)
ax.set_xlabel("Countries", fontsize=15)
ax.set_ylabel("Percentage of high-quality articles", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
highquality_desc = analysis_data.sort_values('high_quality_articles', ascending=True)
del highquality_desc['articles_per_population']
highquality_desc = highquality_desc.set_index('country')
highquality_desc = highquality_desc.head(10)
highquality_desc
ax = highquality_desc[['high_quality_articles']].head(10).plot(kind='bar',
title ="10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country",
figsize=(15, 10), fontsize=12)
ax.set_xlabel("Countries", fontsize=15)
ax.set_ylabel("Percentage of high-quality articles", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
DATA 512 - A2 Bias in Data Daniel White - November 1, 2018 Overview This assignment explores the bias in the number and quality of Wikipedia articles on politicians published for each country. Data on the quality of Wikipedia articles is collected using the ORES API which has documentation available here: (https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model) The article quality data is collected and then merged with a dataset containing the population of each country. Potential bias is assessed by calculating the number of articles per capita and the percentage of high quality articles for each country. Data Collection First, the necessary libraries are imported and the csv file containing the article data is loaded into memory. The article dataset can be found on Figshare (https://figshare.com/articles/Untitled_Item/5513449) in the page_data.csv file. Population data from 2018 is also loaded into memory and I changed the column headers for easier future reference. The population data can be accessed via Dropbox here: (https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0).
###Code
#Import libaries
import requests
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Import page data from csv file, downloaded from the link here.
page_data = pd.read_csv('data/page_data.csv', index_col = None)
page_data.head()
pop_df = pd.read_csv('data/WPDS_2018_data.csv')
pop_df.rename(columns = {'Geography': 'country',
'Population mid-2018 (millions)': 'population'}, inplace = True)
pop_df.head()
###Output
_____no_output_____
###Markdown
A function is created in order to collect data on the article quality from the ORES API. Note that this code for the get_ores_data function was largely inspired by a similar function created in this repository (https://github.com/Ironholds/data-512-a2). This function takes a list of revision ids and passes them through the ORES API.
###Code
headers = {'User-Agent' : 'https://github.com/dwhite105', 'From' : '[email protected]'}
def get_ores_data(revision_ids, headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
    api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
return response
###Output
_____no_output_____
###Markdown
The revision IDs are taken from the page data dataset and converted into strings. There are 47,197 IDs to run through the API; however, there is a limit to how many can be run through the API at one time. The code below does three things:* Collects the rev_ids in increments of 100 to pass through the ORES API* Extracts the article quality prediction for each ID. For articles without a prediction, it logs a NaN.* Iterates through until all IDs are passed through the API, returning a list of all the predictions.
###Code
rev_ids = list(page_data.rev_id.apply(str))
rev_idx = 0
increment = 100
article_predictions = []
length = len(page_data)
while rev_idx < length:
if rev_idx + increment > length:
rev_ids_subset = rev_ids[rev_idx:len(page_data)]
else:
rev_ids_subset = rev_ids[rev_idx:rev_idx+increment]
ores = get_ores_data(rev_ids_subset,headers)
for i in rev_ids_subset:
try:
prediction = ores['enwiki']['scores'][i]['wp10']['score']['prediction']
except KeyError:
prediction = np.nan
article_predictions.append(prediction)
rev_idx = rev_idx + increment
article_predictions
###Output
_____no_output_____
###Markdown
Data Processing Next, a dataframe is constructed with each article name, article quality, revision ID, and country.
###Code
page_df = pd.DataFrame({'country': page_data.country,
'article_name':page_data.page ,
'revision_id': rev_ids,
'article_quality' : article_predictions})
page_df.head()
###Output
_____no_output_____
###Markdown
The article dataframe and population dataframe are then merged on a common country name. A left join is performed to preserve all the articles in the articles dataframe.
###Code
df = pd.merge(page_df, pop_df, how='left', on = 'country')
df.head()
###Output
_____no_output_____
###Markdown
The dataframe snapshot above shows that some articles do not have predictions for article quality or were not matched up with a country during the merge. The observations that contain any NAs or missing data are removed from the dataset and the dataframe is saved as a .csv file.
###Code
df1 = df.dropna()
df1.to_csv('data/wiki_article_quality_population_data.csv', index = False)
df1.head()
###Output
_____no_output_____
###Markdown
A new column named 'high_quality' is created which indicates a 1 if the article is rated FA or GA, and a 0 if it's not. A count of the values in this column shows that 980 articles were deemed high quality.
###Code
df1['high_quality'] = np.where((df1.article_quality == 'FA') | (df1.article_quality == 'GA'),1,0)
df1.head()
df1['high_quality'].value_counts()
###Output
_____no_output_____
###Markdown
The dataset is then grouped by country, counting the number of articles and the number of high quality articles. The number of articles per capita (millions) and the percentage of high quality articles are calculated as new columns in the dataframe.
###Code
# Group data and aggregate by country
country_articles = df1.groupby(['country'], as_index = False).agg({'article_name': 'count',
'high_quality': 'sum',
'population' : 'max'})
country_articles.rename(columns = {'article_name' : 'article_count',
'high_quality' : 'quality_article_count'}, inplace = True)
#Remove commas from population, convert to integer
country_articles['population'] = country_articles['population'].str.replace(',','')
country_articles['population'] = country_articles['population'].apply(pd.to_numeric)
#Calculation of new columns
country_articles['articles_per_population'] = country_articles['article_count'] / (country_articles['population'])
country_articles['percent_quality_article'] = country_articles['quality_article_count'] / country_articles['article_count']
country_articles.head()
###Output
_____no_output_____
###Markdown
Data Analysis I created some exploratory visualizations below to get a better sense of the distribution of the data. The first histogram shows the number of articles per country for all countries in the dataset. The data is right skewed, with most countries having fewer than 250 articles. The second histogram shows the number of high quality articles per country. Again, the data shows a strong right skew, with most countries having fewer than 10 high quality articles.
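If desired, the direction of the skew can also be checked numerically from the country_articles dataframe built above; a positive skewness corresponds to the long right tail described here (a minimal sketch, not part of the original analysis):
###Code
# Positive values indicate right (positive) skew in the per-country counts.
print(country_articles['article_count'].skew())
print(country_articles['quality_article_count'].skew())
###Output
_____no_output_____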
###Code
plt.hist(country_articles['article_count'], bins = 100)
plt.xlabel('Number of Articles')
plt.ylabel('Frequency')
plt.title('Number of Wikipedia Articles By Country Histogram')
plt.show()
plt.hist(country_articles['quality_article_count'], bins = 100)
plt.xlabel('Number of Quality Articles')
plt.ylabel('Frequency')
plt.title('Number of High Quality Articles By Country Histogram')
plt.show()
###Output
_____no_output_____
###Markdown
Four tables are then created by sorting the values in the dataframe. This includes:* 10 highest-ranked countries in terms of number of politician articles as a proportion of country population* 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population* 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country* 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country(Table descriptions taken from A2 Bias in Data assignment page)
###Code
#Total articles per capita (calculated per million people)
top10_article_per_population = country_articles[
['country','articles_per_population','article_count','population']].sort_values(
'articles_per_population', ascending = False).head(10)
bottom10_article_per_population = country_articles[
['country','articles_per_population','article_count','population']].sort_values(
'articles_per_population').head(10)
top10_article_per_population.index = np.arange(1, len(top10_article_per_population)+1)
bottom10_article_per_population.index = np.arange(1, len(bottom10_article_per_population)+1)
#Percent of quality articles of total articles per country
top10_percent_quality_article = country_articles[
['country','percent_quality_article', 'quality_article_count','article_count']].sort_values(
'percent_quality_article', ascending = False).head(10)
bottom10_percent_quality_article1 = country_articles[
['country','percent_quality_article', 'quality_article_count','article_count']].sort_values(
['percent_quality_article', 'article_count'], ascending = [True, False]).head(10)
bottom10_percent_quality_article2 = country_articles[
['country','percent_quality_article', 'quality_article_count','article_count']].sort_values(
['percent_quality_article', 'article_count'], ascending = [True, True]).head(10)
top10_percent_quality_article.index = np.arange(1, len(top10_percent_quality_article)+1)
bottom10_percent_quality_article1.index = np.arange(1, len(bottom10_percent_quality_article1)+1)
bottom10_percent_quality_article2.index = np.arange(1, len(bottom10_percent_quality_article1)+1)
###Output
_____no_output_____
###Markdown
Top 10 Countries - Wikipedia Articles per Capita The table below shows the number of Wikipedia articles per capita in the dataset. Note that the column named article_per_population indicates the number of articles per million people. Not surprisingly, the list is mostly comprised of countries with very small populations. Many of the top entries on the list are small islands in the South Pacific. These could have more Wikipedia articles if they are considered territories of more populous countries (i.e. Marshall Islands is a state associated with the United States). The other countries listed are very small European countries. A large portion of Wikipedia contributors are European which could explain the disproportionate number of these articles.
###Code
top10_article_per_population
###Output
_____no_output_____
###Markdown
Bottom 10 Countries - Wikipedia Articles per Capita As expected, many of the countries in the bottom 10 for Wikipedia articles per capita are highly populous countries such as China and India. Though each has many articles, their populations are so large that it brings down the ratio of articles per capita. Some more surprising inclusions on this list were Uzbekistan and Ethiopia. I am honestly not sure why these countries have so few articles in the dataset and it could be a question worth further investigation. North Korea was an interesting entry. Given the restrictions on communication with the outside world, it's not surprising that they have very few Wikipedia articles. It's notable that the countries with the lowest articles per capita are all in Asia or Africa. Wikipedia editing is most predominant in North America and Europe, which could explain this bias.
###Code
bottom10_article_per_population
###Output
_____no_output_____
###Markdown
Top 10 Countries - Percentage of High Quality Wikipedia Articles First, I constructed a histogram to better understand the distribution of the high quality article data. It appears the percentage of high quality articles is right skewed, with many countries having zero high quality articles.
###Code
plt.hist(country_articles['percent_quality_article'], bins = 50)
plt.xlabel('High Quality Article Percentage')
plt.ylabel('Frequency')
plt.title('Percentage of High Quality Articles By Country Histogram')
plt.show()
###Output
_____no_output_____
###Markdown
The table below shows the percentage of Wikipedia articles that are 'high quality' according to the ORES API. I expected the top entries to be countries with a well-educated population. However, I was surprised to see North Korea here, given that it appeared on the list with the fewest number of articles. North Korea could have a high proportion of high quality articles because it receives a lot of attention in world politics. Saudi Arabia and Romania were the most surprising entries. I would conduct further research to understand why they have such a high percentage of quality articles. Many other entries were countries with a few high quality articles and a low overall article count. The United States would have been my guess for the top country listed, and it is #9 overall.
###Code
top10_percent_quality_article
###Output
_____no_output_____
###Markdown
Bottom 10 Countries - Percentage of High Quality Wikipedia Articles As shown in the histogram, there are 37 countries with zero high quality articles. I made two separate tables to better assess this trend. Both tables contain the countries with no high quality articles -- but the first is ordered by countries with the most articles and the second is ordered by countries with the fewest articles. The first table is particularly interesting because Finland, Belgium, Moldova, Switzerland and Nepal are either populous or well-educated countries, but they all have no quality Wikipedia articles. This is very surprising given they have over 360 articles each in the dataset. It was interesting that many of these countries are in Europe. I expected that Wikipedia's North American and European bias would be evident in the number of high quality articles. The second table is mostly small countries, as expected. These countries have fewer articles written about them overall, so it is not surprising that they do not have many high quality articles.
###Code
bottom10_percent_quality_article1
bottom10_percent_quality_article2
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data
###Code
page_data = pd.read_csv('page_data.csv')
page_data_2 = page_data[page_data["page"].str.contains("Template") == False]
wpds_data = pd.read_csv('WPDS_2020_data.csv')
# WPDS_2020_data_country = WPDS_2020_data[WPDS_2020_data["Type"].str.contains("Sub-Region") == False]
wpds_2020_data_ctry = wpds_data[wpds_data.Type != 'Sub-Region']
wpds_2020_data_ctry_rmw = wpds_2020_data_ctry[wpds_2020_data_ctry.Type != 'World']
wpds_2020_data_ctry_rmw.tail()
wpds_2020_data_ctry_rmw.to_csv('wpds_2020_data_ctry_rmw.csv', index=False)
wpds_2020_data_sub = wpds_data[wpds_data.Type != 'Country']
wpds_2020_data_sub_rmw = wpds_2020_data_sub[wpds_2020_data_sub.Type != 'World']
wpds_2020_data_sub_rmw.tail()
wpds_2020_data_sub_rmw.to_csv('wpds_2020_data_sub_rmw.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 3 Getting Article Quality Predictions
###Code
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={rev_ids}'
# Customize these with your own information
headers = {
'User-Agent': 'https://github.com/lanfuli',
'From': '[email protected]'
}
# call the api function
def api_call(endpoint, rev_ids):
call = requests.get(endpoint.format(rev_ids = rev_ids), headers=headers)
response = call.json()
return response
def get_score(score_map, data, endpoint):
l = len(data)
    # send 50 rev_ids at a time to avoid crashing the API (information from class slack)
for i in range(0, l, 50):
if i + 50 <= l:
mini_batch_id = data['rev_id'].iloc[i:i+50]
else:
mini_batch_id = data['rev_id'].iloc[i:]
temp = api_call(endpoint, '|'.join(str(s) for s in mini_batch_id))
for j in temp['enwiki']['scores']:
if 'score' in temp['enwiki']['scores'][j]['articlequality']:
score_map[j] = temp['enwiki']['scores'][j]['articlequality']['score']['prediction']
else:
score_map[j] = 'NA'
# It takes time to finish
score_map = {}
get_score(score_map, page_data_2, endpoint)
score = page_data_2['rev_id'].astype(str).map(score_map)
page_data_2['article_score'] = score
na_score = page_data_2[page_data_2.article_score == 'NA']
print(len(na_score))
page_data_score_nna = page_data_2[page_data_2.article_score != 'NA']
print(len(page_data_score_nna))
# Save to csv log
page_data_score_nna.to_csv('page_data_score_nna.csv', index=False)
na_score.to_csv('na_score.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 4 Combining the Datasets
###Code
page_data_score_nna
data_merge = page_data_score_nna.merge(wpds_2020_data_ctry_rmw, how = 'outer', left_on = ['country'], right_on = ['Name'])
no_match = data_merge[data_merge.country.isna() | data_merge.Name.isna()]
no_match.to_csv('no_match.csv', index=False)
data_merge_2 = data_merge.dropna(subset = ['country', 'Name'])
data_merge_match = data_merge_2[['country', 'page', 'rev_id', 'article_score', 'Population']]
data_merge_match = data_merge_match.rename(columns={'page': 'article_name', 'rev_id' : 'revision_id',
'article_score' : 'article_quality_est.', 'Population' : 'population'})
data_merge_match.to_csv('data_merge_match.csv', index=False)
###Output
_____no_output_____
###Markdown
Step 5 Analysis
###Code
articles_per_country = data_merge_match.groupby('country').agg({'article_name':'count'})
articles_per_country
pop_data = data_merge_match.groupby('country').agg({'population':'mean'})
pop_data
# merge these two tables: population and article per country
articles_per_pop = articles_per_country.merge(pop_data, left_on='country', right_on='country', how='inner')
articles_per_pop
# 1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of
# politician articles as a proportion of country population
articles_per_pop['percentage'] = articles_per_pop['article_name'] * 100 / articles_per_pop['population']
articles_per_pop_rank =articles_per_pop.sort_values(['percentage'], ascending=[False])
articles_per_pop_rank.head(10)
# 2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
articles_per_pop_rank.tail(10)
high_qua = pd.concat([data_merge_match[data_merge_match['article_quality_est.'] == 'FA'],
data_merge_match[data_merge_match['article_quality_est.'] == 'GA']])
# 3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the
# relative proportion of politician articles that are of GA and FA-quality
high_articles_per_country = high_qua.groupby('country').agg({'article_name':'count'})
high_pop_data = high_qua.groupby('country').agg({'population':'mean'})
high_articles_per_pop = high_articles_per_country.merge(high_pop_data, left_on='country', right_on='country', how='inner')
high_articles_per_pop
high_articles_per_pop_2 = high_articles_per_pop.rename(columns={'article_name': 'high_article_count', 'population': 'high_qty_population'})
combine_all_article = high_articles_per_pop_2.merge(articles_per_pop_rank, left_on='country', right_on='country', how='inner')
combine_all_article = combine_all_article.drop(columns={'percentage', 'population'})
combine_all_article['percentage'] = combine_all_article['high_article_count'] * 100 / combine_all_article['article_name']
combine_all_article = combine_all_article.sort_values('percentage', ascending=False)
combine_all_article.head(10)
# 4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
combine_all_article.tail(10)
# 5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in
# terms of the total count of politician articles from countries in each region
# as a proportion of total regional population
df = wpds_data.drop(columns=['FIPS', 'TimeFrame', 'Data (M)', 'Population'])
# loop the dataframe, add country to sub-region dict, return a dict
def region_country_map(df, region_country_dict):
    # read each row; track the current sub-region and assign each country name to it
region_name = ''
# source from: https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas
for index, row in df.iterrows():
# remove the world
if row['Type'] == 'World':
continue
# check upper for the special case: channel island
elif row['Type'] == 'Sub-Region' and row['Name'].isupper():
region_name = row['Name']
continue
region_country_dict[row['Name']] = region_name
return region_country_dict
region_country_dict = {}
region_country = region_country_map(df,region_country_dict )
# map articles_per_pop with region_country_dict
articles_per_pop['Sub-Region'] = articles_per_pop.index.map(region_country)
articles_per_pop
articles_per_region = articles_per_pop.groupby('Sub-Region').agg({'article_name':'sum'})
articles_per_region.reset_index()
# merge the population by region and articles_per_region
pop_per_region = wpds_2020_data_sub_rmw[['Name', 'Population']]
articles_per_region = articles_per_region.merge(pop_per_region, how = 'left', left_on = 'Sub-Region', right_on = ['Name'] )
articles_per_region['article_region_percentage'] = articles_per_region['article_name'] *100 / articles_per_region['Population']
articles_per_region.sort_values(by = ['article_region_percentage'], axis=0, ascending=False, inplace=True)
articles_per_region
# 6. Geographic regions by relative quality: Ranking of geographic regions (in descending
# order) in terms of the relative proportion of politician articles from countries in each
# region that are of GA and FA-quality
high_article_pop = high_articles_per_pop.reset_index()
high_article_pop['Sub-Region'] = high_article_pop['country'].map(region_country)
high_article_region = high_article_pop.groupby('Sub-Region').agg({'article_name':'sum'})
articles_per_region_pop = high_article_region.merge(pop_per_region, how = 'left', left_on = 'Sub-Region', right_on = 'Name' )
articles_per_region_pop = articles_per_region_pop.rename(columns={'article_name': 'high_qty_count', 'Name': 'high_qty_Name', 'Population' : 'high_qty_pop'})
frames = [articles_per_region, articles_per_region_pop]
combine_all = pd.concat(frames, axis=1)
combine_all = combine_all.drop(columns=['article_region_percentage', 'Name', 'Population'])
combine_all['high_art_percentage'] = combine_all['high_qty_count'] *100 / combine_all['article_name']
combine_all.sort_values(by = ['high_art_percentage'], axis=0, ascending=False, inplace=True)
combine_all
###Output
_____no_output_____ |
examples/plugin-PolyLineOffset.ipynb | ###Markdown
**Note** : The examples presented below are a copy of the ones presented on https://github.com/bbecquet/Leaflet.PolylineOffset Basic Demo - The dashed line is the "model", with no offset applied. - The Red line is drawn with a -5px offset. - The Green line is drawn with a 10px offset. The three are distinct Polyline objects but use the same coordinate array
###Code
import folium
from folium import plugins
m = folium.Map(location=[58.0, -11.0], zoom_start=4, tiles="cartodbpositron")
coords = [
[58.44773, -28.65234],
[53, -23.33496],
[53, -14.32617],
[58.1707, -10.37109],
[59, -13],
[57, -15],
[57, -18],
[60, -18],
[63, -5],
[59, -7],
[58, -3],
[56, -3],
[60, -4],
]
plugins.PolyLineOffset(
coords, weight=2, dash_array="5,10", color="black", opacity=1
).add_to(m)
plugins.PolyLineOffset(coords, color="#f00", opacity=1, offset=-5).add_to(m)
plugins.PolyLineOffset(coords, color="#080", opacity=1, offset=10).add_to(m)
m
###Output
_____no_output_____
###Markdown
Bus Lines A more complex demo. Offsets are computed automatically depending on the number of bus lines using the same segment. Other non-offset polylines are used to achieve the white and black outline effect.
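Before the full example, here is a quick standalone check of the offset arithmetic used in the loop below, with illustrative numbers (a 6 px line weight and a segment shared by two bus lines):
###Code
# Illustrative check only; not needed to build the map below.
line_weight = 6
lines_on_segment = [0, 1]                                    # two bus lines share this segment
segment_width = len(lines_on_segment) * (line_weight + 1)    # 14 px
for j, line_number in enumerate(lines_on_segment):
    offset = j * (line_weight + 1) - (segment_width / 2) + ((line_weight + 1) / 2)
    print(line_number, offset)                               # -3.5 and 3.5: symmetric around the path
###Output
_____no_output_____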
###Code
m = folium.Map(location=[48.868, 2.365], zoom_start=15)
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {"lines": [0, 1]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.357919216156006, 48.87621773324153],
[2.357339859008789, 48.874834693731664],
[2.362983226776123, 48.86855408432749],
[2.362382411956787, 48.86796126699168],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.351503372192383, 48.86443950493823],
[2.361609935760498, 48.866775611250205],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.3724031448364253, 48.8626397112042],
[2.3728322982788086, 48.8616233285001],
[2.372767925262451, 48.86080456075567],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3647642135620117, 48.86630981023694],
[2.3666739463806152, 48.86314789481612],
[2.3673176765441895, 48.86066339254944],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0, 1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3633265495300293, 48.86735432768131],
[2.3647427558898926, 48.86653565369396],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3650002479553223, 48.86660622956524],
[2.365509867668152, 48.866987337550164],
[2.369627058506012, 48.86619159489603],
],
},
},
{
"type": "Feature",
"properties": {"lines": [3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.372349500656128, 48.865702850895744],
],
},
},
],
}
# manage overlays in groups to ease superposition order
outlines = folium.FeatureGroup("outlines")
line_bg = folium.FeatureGroup("lineBg")
bus_lines = folium.FeatureGroup("busLines")
bus_stops = folium.FeatureGroup("busStops")
line_weight = 6
line_colors = ["red", "#08f", "#0c0", "#f80"]
stops = []
for line_segment in geojson["features"]:
# Get every bus line coordinates
segment_coords = [[x[1], x[0]] for x in line_segment["geometry"]["coordinates"]]
# Get bus stops coordinates
stops.append(segment_coords[0])
stops.append(segment_coords[-1])
# Get number of bus lines sharing the same coordinates
lines_on_segment = line_segment["properties"]["lines"]
# Width of segment proportional to the number of bus lines
segment_width = len(lines_on_segment) * (line_weight + 1)
# For the white and black outline effect
folium.PolyLine(
segment_coords, color="#000", weight=segment_width + 5, opacity=1
).add_to(outlines)
folium.PolyLine(
segment_coords, color="#fff", weight=segment_width + 3, opacity=1
).add_to(line_bg)
# Draw parallel bus lines with different color and offset
for j, line_number in enumerate(lines_on_segment):
plugins.PolyLineOffset(
segment_coords,
color=line_colors[line_number],
weight=line_weight,
opacity=1,
offset=j * (line_weight + 1) - (segment_width / 2) + ((line_weight + 1) / 2),
).add_to(bus_lines)
# Draw bus stops
for stop in stops:
folium.CircleMarker(
stop,
color="#000",
fill_color="#ccc",
fill_opacity=1,
radius=10,
weight=4,
opacity=1,
).add_to(bus_stops)
outlines.add_to(m)
line_bg.add_to(m)
bus_lines.add_to(m)
bus_stops.add_to(m)
m
###Output
_____no_output_____
###Markdown
**Note** : The examples presented below are a copy of the ones presented on https://github.com/bbecquet/Leaflet.PolylineOffset Basic Demo - The dashed line is the "model", with no offset applied. - The Red line is drawn with a -5px offset. - The Green line is drawn with a 10px offset. The three are distinct Polyline objects but use the same coordinate array
###Code
import os

import folium
from folium import plugins
m = folium.Map(location=[58.0, -11.0], zoom_start=4, tiles="Mapbox Bright")
coords = [
[58.44773, -28.65234],
[53, -23.33496],
[53, -14.32617],
[58.1707, -10.37109],
[59, -13],
[57, -15],
[57, -18],
[60, -18],
[63, -5],
[59, -7],
[58, -3],
[56, -3],
[60, -4],
]
plugins.PolyLineOffset(
coords, weight=2, dash_array="5,10", color="black", opacity=1
).add_to(m)
plugins.PolyLineOffset(coords, color="#f00", opacity=1, offset=-5).add_to(m)
plugins.PolyLineOffset(coords, color="#080", opacity=1, offset=10).add_to(m)
m.save(os.path.join('results', "PolyLineOffset_simple.html"))
m
###Output
_____no_output_____
###Markdown
Bus Lines A more complex demo. Offsets are computed automatically depending on the number of bus lines using the same segment. Other non-offset polylines are used to achieve the white and black outline effect.
###Code
m = folium.Map(location=[48.868, 2.365], zoom_start=15)
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {"lines": [0, 1]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.357919216156006, 48.87621773324153],
[2.357339859008789, 48.874834693731664],
[2.362983226776123, 48.86855408432749],
[2.362382411956787, 48.86796126699168],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.351503372192383, 48.86443950493823],
[2.361609935760498, 48.866775611250205],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.3724031448364253, 48.8626397112042],
[2.3728322982788086, 48.8616233285001],
[2.372767925262451, 48.86080456075567],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3647642135620117, 48.86630981023694],
[2.3666739463806152, 48.86314789481612],
[2.3673176765441895, 48.86066339254944],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0, 1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3633265495300293, 48.86735432768131],
[2.3647427558898926, 48.86653565369396],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3650002479553223, 48.86660622956524],
[2.365509867668152, 48.866987337550164],
[2.369627058506012, 48.86619159489603],
],
},
},
{
"type": "Feature",
"properties": {"lines": [3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.372349500656128, 48.865702850895744],
],
},
},
],
}
# manage overlays in groups to ease superposition order
outlines = folium.FeatureGroup("outlines")
line_bg = folium.FeatureGroup("lineBg")
bus_lines = folium.FeatureGroup("busLines")
bus_stops = folium.FeatureGroup("busStops")
line_weight = 6
line_colors = ["red", "#08f", "#0c0", "#f80"]
stops = []
for line_segment in geojson["features"]:
# Get every bus line coordinates
segment_coords = [[x[1], x[0]] for x in line_segment["geometry"]["coordinates"]]
# Get bus stops coordinates
stops.append(segment_coords[0])
stops.append(segment_coords[-1])
# Get number of bus lines sharing the same coordinates
lines_on_segment = line_segment["properties"]["lines"]
# Width of segment proportional to the number of bus lines
segment_width = len(lines_on_segment) * (line_weight + 1)
# For the white and black outline effect
folium.PolyLine(
segment_coords, color="#000", weight=segment_width + 5, opacity=1
).add_to(outlines)
folium.PolyLine(
segment_coords, color="#fff", weight=segment_width + 3, opacity=1
).add_to(line_bg)
# Draw parallel bus lines with different color and offset
for j, line_number in enumerate(lines_on_segment):
plugins.PolyLineOffset(
segment_coords,
color=line_colors[line_number],
weight=line_weight,
opacity=1,
offset=j * (line_weight + 1) - (segment_width / 2) + ((line_weight + 1) / 2),
).add_to(bus_lines)
# Draw bus stops
for stop in stops:
folium.CircleMarker(
stop,
color="#000",
fill_color="#ccc",
fill_opacity=1,
radius=10,
weight=4,
opacity=1,
).add_to(bus_stops)
outlines.add_to(m)
line_bg.add_to(m)
bus_lines.add_to(m)
bus_stops.add_to(m)
m.save(os.path.join('results', "PolyLineOffset_bus.html"))
m
###Output
_____no_output_____
###Markdown
**Note** : The examples presented below are a copy of the ones presented on https://github.com/bbecquet/Leaflet.PolylineOffset Basic Demo - The dashed line is the "model", with no offset applied. - The Red line is drawn with a -5px offset. - The Green line is drawn with a 10px offset. The three are distinct Polyline objects but use the same coordinate array
###Code
import os

import folium
from folium import plugins
m = folium.Map(location=[58.0, -11.0], zoom_start=4, tiles="cartodbpositron")
coords = [
[58.44773, -28.65234],
[53, -23.33496],
[53, -14.32617],
[58.1707, -10.37109],
[59, -13],
[57, -15],
[57, -18],
[60, -18],
[63, -5],
[59, -7],
[58, -3],
[56, -3],
[60, -4],
]
plugins.PolyLineOffset(
coords, weight=2, dash_array="5,10", color="black", opacity=1
).add_to(m)
plugins.PolyLineOffset(coords, color="#f00", opacity=1, offset=-5).add_to(m)
plugins.PolyLineOffset(coords, color="#080", opacity=1, offset=10).add_to(m)
m.save(os.path.join('results', "PolyLineOffset_simple.html"))
m
###Output
_____no_output_____
###Markdown
Bus Lines A more complex demo. Offsets are computed automatically depending on the number of bus lines using the same segment. Other non-offset polylines are used to achieve the white and black outline effect.
###Code
m = folium.Map(location=[48.868, 2.365], zoom_start=15)
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {"lines": [0, 1]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.357919216156006, 48.87621773324153],
[2.357339859008789, 48.874834693731664],
[2.362983226776123, 48.86855408432749],
[2.362382411956787, 48.86796126699168],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.351503372192383, 48.86443950493823],
[2.361609935760498, 48.866775611250205],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.3724031448364253, 48.8626397112042],
[2.3728322982788086, 48.8616233285001],
[2.372767925262451, 48.86080456075567],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3647642135620117, 48.86630981023694],
[2.3666739463806152, 48.86314789481612],
[2.3673176765441895, 48.86066339254944],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0, 1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3633265495300293, 48.86735432768131],
[2.3647427558898926, 48.86653565369396],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3650002479553223, 48.86660622956524],
[2.365509867668152, 48.866987337550164],
[2.369627058506012, 48.86619159489603],
],
},
},
{
"type": "Feature",
"properties": {"lines": [3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.372349500656128, 48.865702850895744],
],
},
},
],
}
# manage overlays in groups to ease superposition order
outlines = folium.FeatureGroup("outlines")
line_bg = folium.FeatureGroup("lineBg")
bus_lines = folium.FeatureGroup("busLines")
bus_stops = folium.FeatureGroup("busStops")
line_weight = 6
line_colors = ["red", "#08f", "#0c0", "#f80"]
stops = []
for line_segment in geojson["features"]:
# Get every bus line coordinates
segment_coords = [[x[1], x[0]] for x in line_segment["geometry"]["coordinates"]]
# Get bus stops coordinates
stops.append(segment_coords[0])
stops.append(segment_coords[-1])
# Get number of bus lines sharing the same coordinates
lines_on_segment = line_segment["properties"]["lines"]
# Width of segment proportional to the number of bus lines
segment_width = len(lines_on_segment) * (line_weight + 1)
# For the white and black outline effect
folium.PolyLine(
segment_coords, color="#000", weight=segment_width + 5, opacity=1
).add_to(outlines)
folium.PolyLine(
segment_coords, color="#fff", weight=segment_width + 3, opacity=1
).add_to(line_bg)
# Draw parallel bus lines with different color and offset
for j, line_number in enumerate(lines_on_segment):
plugins.PolyLineOffset(
segment_coords,
color=line_colors[line_number],
weight=line_weight,
opacity=1,
offset=j * (line_weight + 1) - (segment_width / 2) + ((line_weight + 1) / 2),
).add_to(bus_lines)
# Draw bus stops
for stop in stops:
folium.CircleMarker(
stop,
color="#000",
fill_color="#ccc",
fill_opacity=1,
radius=10,
weight=4,
opacity=1,
).add_to(bus_stops)
outlines.add_to(m)
line_bg.add_to(m)
bus_lines.add_to(m)
bus_stops.add_to(m)
m.save(os.path.join('results', "PolyLineOffset_bus.html"))
m
###Output
_____no_output_____
###Markdown
**Note**: The examples presented below are a copy of the ones presented at https://github.com/bbecquet/Leaflet.PolylineOffset Basic Demo - The dashed line is the "model", with no offset applied. - The red line is drawn with a -5px offset. - The green line is drawn with a 10px offset. The three are distinct Polyline objects but use the same coordinate array
###Code
from folium import plugins
# "Mapbox Bright" tiles are no longer bundled with folium, so use a built-in tile set instead
m = folium.Map(location=[58.0, -11.0], zoom_start=4, tiles="cartodbpositron")
coords = [
[58.44773, -28.65234],
[53, -23.33496],
[53, -14.32617],
[58.1707, -10.37109],
[59, -13],
[57, -15],
[57, -18],
[60, -18],
[63, -5],
[59, -7],
[58, -3],
[56, -3],
[60, -4],
]
plugins.PolyLineOffset(
coords, weight=2, dash_array="5,10", color="black", opacity=1
).add_to(m)
plugins.PolyLineOffset(coords, color="#f00", opacity=1, offset=-5).add_to(m)
plugins.PolyLineOffset(coords, color="#080", opacity=1, offset=10).add_to(m)
m.save(os.path.join('results', "PolyLineOffset_simple.html"))
m
###Output
_____no_output_____
###Markdown
Bus Lines A more complex demo. Offsets are computed automatically depending on the number of bus lines using the same segment. Other non-offset polylines are used to achieve the white and black outline effect.
###Code
m = folium.Map(location=[48.868, 2.365], zoom_start=15)
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {"lines": [0, 1]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.357919216156006, 48.87621773324153],
[2.357339859008789, 48.874834693731664],
[2.362983226776123, 48.86855408432749],
[2.362382411956787, 48.86796126699168],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.351503372192383, 48.86443950493823],
[2.361609935760498, 48.866775611250205],
[2.3633265495300293, 48.86735432768131],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.3724031448364253, 48.8626397112042],
[2.3728322982788086, 48.8616233285001],
[2.372767925262451, 48.86080456075567],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3647642135620117, 48.86630981023694],
[2.3666739463806152, 48.86314789481612],
[2.3673176765441895, 48.86066339254944],
],
},
},
{
"type": "Feature",
"properties": {"lines": [0, 1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3633265495300293, 48.86735432768131],
[2.3647427558898926, 48.86653565369396],
],
},
},
{
"type": "Feature",
"properties": {"lines": [1, 2, 3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.3647427558898926, 48.86653565369396],
[2.3650002479553223, 48.86660622956524],
[2.365509867668152, 48.866987337550164],
[2.369627058506012, 48.86619159489603],
],
},
},
{
"type": "Feature",
"properties": {"lines": [3]},
"geometry": {
"type": "LineString",
"coordinates": [
[2.369627058506012, 48.86619159489603],
[2.372349500656128, 48.865702850895744],
],
},
},
],
}
# manage overlays in groups to ease superposition order
outlines = folium.FeatureGroup("outlines")
line_bg = folium.FeatureGroup("lineBg")
bus_lines = folium.FeatureGroup("busLines")
bus_stops = folium.FeatureGroup("busStops")
line_weight = 6
line_colors = ["red", "#08f", "#0c0", "#f80"]
stops = []
for line_segment in geojson["features"]:
# Get every bus line coordinates
segment_coords = [[x[1], x[0]] for x in line_segment["geometry"]["coordinates"]]
# Get bus stops coordinates
stops.append(segment_coords[0])
stops.append(segment_coords[-1])
# Get number of bus lines sharing the same coordinates
lines_on_segment = line_segment["properties"]["lines"]
# Width of segment proportional to the number of bus lines
segment_width = len(lines_on_segment) * (line_weight + 1)
# For the white and black outline effect
folium.PolyLine(
segment_coords, color="#000", weight=segment_width + 5, opacity=1
).add_to(outlines)
folium.PolyLine(
segment_coords, color="#fff", weight=segment_width + 3, opacity=1
).add_to(line_bg)
# Draw parallel bus lines with different color and offset
for j, line_number in enumerate(lines_on_segment):
plugins.PolyLineOffset(
segment_coords,
color=line_colors[line_number],
weight=line_weight,
opacity=1,
offset=j * (line_weight + 1) - (segment_width / 2) + ((line_weight + 1) / 2),
).add_to(bus_lines)
# Draw bus stops
for stop in stops:
folium.CircleMarker(
stop,
color="#000",
fill_color="#ccc",
fill_opacity=1,
radius=10,
weight=4,
opacity=1,
).add_to(bus_stops)
outlines.add_to(m)
line_bg.add_to(m)
bus_lines.add_to(m)
bus_stops.add_to(m)
m.save(os.path.join('results', "PolyLineOffset_bus.html"))
m
###Output
_____no_output_____ |
NASA/Python_codes/ML_Books/01_02_transfer_learning-ResultAnalysis_EVI.ipynb | ###Markdown
Make Prediction
###Code
model_dir = "/Users/hn/Documents/01_research_data/NASA/ML_Models/"
model = load_model(model_dir + '01_TL_SingleDouble.h5')
# load and prepare the image
def load_image(filename):
# load the image
img = load_img(filename, target_size=(224, 224))
# convert to array
img = img_to_array(img)
# reshape into a single sample with 3 channels
img = img.reshape(1, 224, 224, 3)
# center pixel data
img = img.astype('float32')
img = img - [123.68, 116.779, 103.939]
return img
# # load an image and predict the class
# def run_example():
# # load the image
# test_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/limitCrops_nonExpert_images_" + idx + "/"
# img = load_image(test_dir+'double_101682_WSDA_SF_2018.jpg')
# # load model
# model_dir = "/Users/hn/Documents/01_research_data/NASA/ML_Models/"
# model = load_model(model_dir + '01_TL_SingleDouble.h5')
# # predict the class
# result = model.predict(img)
# print(result[0])
# entry point, run the example
# run_example()
file_name = 'double_101163_WSDA_SF_2017.jpg'
test_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/limitCrops_nonExpert_images_" + idx + "/"
img = load_image(test_dir+file_name)
result = model.predict(img)
print ("probability of being single cropped is {}.".format(result[0]))
pyplot.subplot(111)
# define filename
filename = img
image = imread(test_dir+file_name)
pyplot.imshow(image)
pyplot.show()
###Output
probability of being single cropped is [0.00012851].
###Markdown
Test Phase
###Code
test_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/limitCrops_nonExpert_images_" + idx + "/"
test_filenames = os.listdir(test_dir)
test_df = pd.DataFrame({
'filename': test_filenames
})
nb_samples = test_df.shape[0]
test_df["human_predict"] = test_df.filename.str.split("_", expand=True)[0]
test_df["prob_single"]=-1.0
print (test_df.shape)
test_df.head(2)
# test_datagen = ImageDataGenerator(featurewise_center=True)
# Image_Size = (224, 224)
# test_generator = test_datagen.flow_from_directory(test_dir,
# target_size=Image_Size)
# We have done this once before, so it is commented out here; read the saved results below instead.
# for idx in test_df.index:
# img = load_image(test_dir + test_df.loc[idx, 'filename'])
# test_df.loc[idx, 'prob_single'] = model.predict(img)[0][0]
# for prob in [0.3, 0.4, 0.5, 0.6, 0.7]:
# colName = "prob_point"+str(prob)[2:]
# test_df.loc[test_df.prob_single<prob, colName] = 'double'
# test_df.loc[test_df.prob_single>=prob, colName] = 'single'
# out_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/01_transfer_learning_result/"
# out_name = out_dir + "01_TL_testSet_predictions_" + idx + ".csv"
# test_df.to_csv(out_name, index = False)
test_df.loc[40:50]
out_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/01_transfer_learning_result/"
test_df = pd.read_csv(out_dir + "01_TL_testSet_predictions_" + idx + ".csv")
test_df.head(2)
test_df.loc[30:40]
# pip show keras
# pip list --outdated
# !pip3 install --upgrade keras
for ii in [3, 4, 5, 6, 7]:
curr_prob = "prob_point"+str(ii)
curr_pred_type = "predType_point" + str(ii)
test_df[curr_pred_type]="a"
for idx in test_df.index:
if test_df.loc[idx, "human_predict"]==test_df.loc[idx, curr_prob]=="single":
test_df.loc[idx, curr_pred_type]="True Single"
elif test_df.loc[idx, "human_predict"]==test_df.loc[idx, curr_prob]=="double":
test_df.loc[idx, curr_pred_type]="True Double"
elif test_df.loc[idx, "human_predict"]=="double" and test_df.loc[idx, curr_prob]=="single":
test_df.loc[idx, curr_pred_type]="False Single"
elif test_df.loc[idx, "human_predict"]=="single" and test_df.loc[idx, curr_prob]=="double":
test_df.loc[idx, curr_pred_type]="False Double"
test_df.head(10)
needed_cols = ["predType_point3", "predType_point4",
"predType_point5",
"predType_point6", "predType_point7"]
test_df_trimmed = test_df[needed_cols].copy()
test_df_trimmed.head(2)
TFR=pd.DataFrame()
for col in test_df_trimmed.columns:
TFR[col]=test_df_trimmed[col].value_counts()
TFR
out_name = out_dir + "01_TL_predType_TFR.csv"
# TFR.to_csv(out_name, index = True)
test_df.head(2)
test_df["ID"] = test_df.filename.str.split("_", expand=True)[1]+ "_" + \
test_df.filename.str.split("_", expand=True)[2]+ "_" + \
test_df.filename.str.split("_", expand=True)[3]+ "_" + \
test_df.filename.str.split("_", expand=True)[4].str.split(".", expand=True)[0]
test_df.head(2)
###Output
_____no_output_____
###Markdown
Read field info to add areas
###Code
eval_set = pd.read_csv("/Users/hn/Documents/01_research_data/NASA/parameters/evaluation_set.csv")
eval_set.head(2)
test_df = pd.merge(test_df, eval_set, on=['ID'], how='left')
test_df.head(2)
acr_predTypes = pd.DataFrame(columns=['pred_type'])
lst = ['False Single', 'False Double', 'True Double', 'True Single']
acr_predTypes["pred_type"] = lst
for ii in [3, 4, 5, 6, 7]:
curr_col = "predType_point" + str(ii)
A = test_df[[curr_col, 'Acres']].groupby([curr_col]).sum()
A.rename(columns={"Acres": "Acres_point"+ str(ii)}, inplace=True)
acr_predTypes = pd.merge(acr_predTypes, A.reset_index(),
left_on='pred_type', right_on=curr_col,
how='left')
for ii in [3, 4, 5, 6, 7]:
curr_col = "predType_point" + str(ii)
acr_predTypes.drop(curr_col, axis="columns", inplace=True)
acr_predTypes
out_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/01_transfer_learning_result/"
out_name = out_dir + "01_TL_" + idx + "_Acreage_TFPR.csv"
acr_predTypes.to_csv(out_name, index = False)
###Output
_____no_output_____ |
Lecture Notebooks/Econ126_Class_12_blank.ipynb | ###Markdown
Class 12: Introduction to the `linearsolve` Module In general, dynamic stochastic general equilibrium (DSGE) models do not admit analytic (i.e., pencil-and-paper) solutions and they are time-consuming to work with. The `linearsolve` module approximates, solves, and simulates DSGE models and therefore makes DSGE models easier to use. Installing `linearsolve` `linearsolve` is not included in the Anaconda Python installation, so before you can import it, you need to download and install the `linearsolve` package from PyPI, the Python Package Index. On Windows, open the Anaconda Prompt and on Mac, open the Terminal and run the following command: pip install linearsolve You only have to install the package once.
###Code
# Import the linearsolve under the 'ls' namespace
###Output
_____no_output_____
###Markdown
Example: A One-Equation Model of TFPConsider the following AR(1) specification for $\log$ TFP:\begin{align}\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}\end{align}where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. Let's simulate the model with `linearsolve`. To do this we need to do several things:1. Create a Pandas series that stores the names of the parameters of the model.2. Define a function that returns the equilibrium conditions of the model solved for zero.3. Initialize an instance of the `linearsolve.model` class4. Compute and input the steady state of the model.5. Approximate and solve the model.6. Compute simulations of the model. **Step 1:** Create a variable called `parameters` that stores parameter values as a Pandas Series.
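For reference, a minimal sketch of Steps 1 and 2 for this AR(1) model is shown below, assuming $\rho = 0.95$; attribute access such as `cur.a` assumes, as in the provided skeleton further down, that `linearsolve` passes the variables as Pandas Series. The exercise cells that follow ask you to fill in these pieces yourself.

```python
# Minimal sketch of Steps 1-2 for the AR(1) TFP model (assumes rho = 0.95)
import numpy as np
import pandas as pd
import linearsolve as ls

# Step 1: store the model parameters in a Pandas Series
parameters = pd.Series({'rho': 0.95})

# Step 2: equilibrium conditions solved for zero (exogenous shock excluded)
def equilibrium_equations(variables_forward, variables_current, parameters):
    p, cur, fwd = parameters, variables_current, variables_forward
    # log A_{t+1} = rho*log A_t  =>  rho*log A_t - log A_{t+1} = 0
    tfp_proc = p.rho * np.log(cur.a) - np.log(fwd.a)
    return np.array([tfp_proc])
```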
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print stored parameter values
# Create variable called 'varNames' that stores the variable names in a list with state variables ordered first
# Create variable called 'shockNames' that stores an exogenous shock name for each state variable.
###Output
_____no_output_____
###Markdown
**Step 2:** Define a function called `equilibrium_equations` that evaluates the equilibrium equations of the model solved for zero. The function should accept three arguments: 1. `variables_forward`: Values of $t+1$-dated variables 2. `variables_current`: Values of $t$-dated variables 3. `parameters`: Pandas Series with the parameters for the model. The function should return a NumPy array of the model's equilibrium conditions solved for zero.
###Code
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
    # Create a variable called 'p' that stores the model's parameters. PROVIDED.
p = parameters
# Create variable called 'cur' that stores the values of current (date t) variables. PROVIDED.
cur = variables_current
# Create variable called 'fwd' that stores the values of one-period-ahead (date t+1) variables. PROVIDED.
fwd = variables_forward
# Create variable called 'tfp_proc' that returns the law of motion for TFP solved for zero (exogenous shock excluded)
# Return equilibrium conditions stacked in a numpy array. PROVIDED.
return np.array([
###Output
_____no_output_____
###Markdown
**Step 3:** Initialize a model instance using the `ls.model()` function. The function takes the following arguments: 1. `equations`: Name of the function that stores the equilibrium conditions 2. `nstates`: Number of *state* variables (i.e., variables that are *predetermined*) 3. `varNames`: List of the names of the endogenous variables 4. `shockNames`: List of the names of the exogenous shocks 5. `parameters`: Pandas Series of parameter values
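A sketch of the initialization call, using exactly the argument names listed above (these names changed to `n_states`/`var_names`/`shock_names` in later `linearsolve` releases):

```python
# Step 3: initialize the model (argument names follow the list above)
model = ls.model(equations=equilibrium_equations,
                 nstates=1,                # TFP is the only (state) variable
                 varNames=['a'],
                 shockNames=['e_a'],
                 parameters=parameters)
```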
###Code
# Initialize the model into a variable named 'rbc_model' using the ls.model() function
###Output
_____no_output_____
###Markdown
**Step 4:** Set the steady state of the model. Either use the `.compute_ss()` method which requires an initial guess of what the steady state is. Or set the steady state `.ss` attribute directly.
###Code
# Compute the steady state numerically using .compute_ss() method of model
# Set the .ss attribute of model directly
###Output
_____no_output_____
###Markdown
**Step 5:** Transform the model into a log-linear approximation around the nonstochastic steady state. Then rewrite the equilibrium conditions so that all endogenous variables are expressed as linear functions of state variables and exogenous shocks.
###Code
# Approximate and solve using the .approximate_and_solve() method of model
###Output
_____no_output_____
###Markdown
**Step 6:** Simulate the model using one of the following methods: 1. `.impulse()`: Compute impulse responses to a one-time shock to each exogenous variable. Results are stored in the `.irs` attribute. 2. `.stoch_sim()`: Compute a stochastic simulation. Results are stored in the `.simulated` attribute. First, we'll compute an impulse response simulation. Let's consider the effect of a one-time shock to $\epsilon$ of 0.01 in period 5. Simulate 41 periods.
###Code
# Compute impulse response of a to a one-time shock to e_a
###Output
_____no_output_____
###Markdown
The impulse response simulations are stored in the `.irs` attribute as a dictionary with keys equal to the names of the exogenous shocks.
###Code
# Print model.irs
###Output
_____no_output_____
###Markdown
Let's look at the first 10 rows of the `'e_a'` element of `model.irs`.
###Code
# Print first 10 rows of the element in model.irs that corresponds to the shock to TFP
# Plot simulated impulse response to e_a
###Output
_____no_output_____
###Markdown
Next, we'll use the `.stoch_sim()` method to compute a stochastic simulation. The method takes arguments:1. `seed`: Seed of NumPy RNG. (Optional)2. `T`: Number of periods to simulate3. `covMat`: Covariance matrix for exogenous shock processThe simulation should be for 201 periods.
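A sketch of the call for this one-shock model is given below, assuming $\sigma = 0.01$ so the covariance matrix is just the 1×1 matrix $[\sigma^2]$; the seed value is arbitrary and the `covMat` name follows the argument list above.

```python
# Step 6b: 201-period stochastic simulation; covMat follows the argument list above
import numpy as np
sigma = 0.01
model.stoch_sim(seed=126, T=201, covMat=np.array([[sigma**2]]))
model.simulated.plot()  # simulated series are stored in the .simulated attribute
```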
###Code
# Compute stochastic simulation
# Print the first 10 rows of `model.simulated`
###Output
_____no_output_____
###Markdown
The stochastic simulations are stored in the `.stoch_sim` attribute as a Pandas `DataFrame`.
###Code
# Plot the stochastic simulation
###Output
_____no_output_____
###Markdown
Example 2: The Stochastic Solow Growth Model Revisited Now consider the following system of equations:\begin{align}Y_t & = A_t K_t^{\alpha} \\K_{t+1} & = sY_t + (1-\delta) K_t\\\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}\end{align}where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. Let's simulate the model with `linearsolve`. Before proceeding, let's also go ahead and rewrite the model with all variables moved to the lefthand side of the equations:\begin{align}0 & = A_t K_t^{\alpha} - Y_t \\0 & = sY_t + (1-\delta) K_t - K_{t+1}\\0 & = \rho \log A_t + \epsilon_{t+1} - \log A_{t+1}\end{align}Capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output is called a *costate* or *control* variable. Note that the model has 3 endogenous variables with 2 state variables. Use the following values for the simulation:| $\rho$ | $\sigma$ | $s$ | $\alpha$ | $\delta $ | $T$ ||--------|----------|-----|----------|-----------|------|| 0.75 | 0.006 | 0.1 | 0.35 | 0.025 | 201 | Initialization, Approximation, and Solution The next several cells initialize the model in `linearsolve` and then approximate and solve it.
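As a reference for the exercise cells below, here is a hedged sketch of the completed equilibrium-conditions function for this model; the ordering of the return array matches the three conditions above, and, as before, attribute access on `cur`/`fwd` assumes the variables arrive as Pandas Series.

```python
# Sketch of the completed equilibrium conditions for the stochastic Solow model
import numpy as np

def equilibrium_equations(variables_forward, variables_current, parameters):
    p, cur, fwd = parameters, variables_current, variables_forward
    production_function = cur.a * cur.k**p.alpha - cur.y               # A_t K_t^alpha - Y_t
    capital_evolution   = p.s * cur.y + (1 - p.delta) * cur.k - fwd.k  # s Y_t + (1-delta) K_t - K_{t+1}
    tfp_process         = p.rho * np.log(cur.a) - np.log(fwd.a)        # rho log A_t - log A_{t+1}
    return np.array([production_function, capital_evolution, tfp_process])
```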
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print the model's parameters
# Create variable called 'varNames' that stores the variable names in a list with state variables ordered first
# Create variable called 'shockNames' that stores an exogenous shock name for each state variable.
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# Production function
# Capital evolution
# Exogenous tfp
# Stack equilibrium conditions into a numpy array
###Output
_____no_output_____
###Markdown
Next, initialize the model.
###Code
# Initialize the model into a variable named 'rbc_model'
# Compute the steady state numerically using .compute_ss() method of model
# Print the computed steady state
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of model
###Output
_____no_output_____
###Markdown
A Few Details About the Approximation (Optional)The previous step constructs a log-linear approximation of the model and then solves for the endogenous variables as functions of the state variables and exogenous shocks only.View the approximated model by calling the `.approximated()` method.
###Code
# Print the log-linear approximation to the models's equilibrium conditions
###Output
_____no_output_____
###Markdown
Each variable represents the log-deviation from the steady state of the respective variable in our model. For example, the variable `y[t]` means $\log(Y_t) - \log(Y)$ in terms of the stochastic Solow model. But how do these equations relate to the original model? The first equation appears to be:\begin{align}0 &= -2.1095\cdot a_t-0.7383\cdot k_t + 2.1095\cdot y_t\end{align}Note that dividing by 2.1095 and solving for $y_t$ yields:\begin{align}y_t &= a_t +0.3499\cdot k_t,\end{align}so the coefficient on $k_t$ appears to be close to $\alpha=0.35$. We can derive this linear equation directly. First, start with the production function:\begin{align}Y_t &= A_t K_t^{\alpha}.\end{align}Then divide both sides by steady state output:\begin{align}\frac{Y_t}{Y} &= \frac{A_t K_t^{\alpha}}{AK^{\alpha}} \, = \, \frac{A_t}{A}\frac{K_t^{\alpha}}{K^{\alpha}}.\end{align}Then, take the log of both sides:\begin{align}\log\left(\frac{Y_t}{Y} \right)&= \log\left(\frac{A_t}{A}\right) + \alpha\log\left(\frac{K_t}{K}\right).\end{align}Finally, letting $y_t = \log(Y_t/Y)$, $k_t = \log(K_t/K)$, and $a_t = \log(A_t/A)$, we have:\begin{align}y_t &= a_t + \alpha k_t.\end{align}However, understanding this process isn't as important as being able to interpret the graphs and statistics that we compute using the output of `linearsolve`. A Few Details About the Solution (Optional) It's also worth seeing what it means for a model to be *solved*. After `linearsolve` computes the log-linear approximation to the model, it solves for each endogenous variable as a function of state variables only. View the solved model by calling the `.solved()` method.
###Code
# Print the solved model
###Output
_____no_output_____
###Markdown
Impulse Responses Compute 41 periods of impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5.
###Code
# Compute impulse responses
# Print the first 10 rows of the computed impulse responses.
# Plot the computed impulse responses to a TFP shock
###Output
_____no_output_____
###Markdown
Stochastic SimulationCompute a 201 period stochastic simulation of the model's variables. Set the variance of $\epsilon_t$ to $\sigma^2$ and the variance of the shock to capital to 0 so that the covariance matrix for the shock process is:\begin{align}\text{Covariance matrix} & = \left[\begin{array}{cc} \sigma^2 & 0\\ 0 & 0\end{array} \right]\end{align}
###Code
# Compute stochastic simulation and print the simulated values.
# Print first 10 rows of model.simulated
# Plot the computed stochastic simulation
# Compute standard deviations of simulated TFP, output, and capital
# Compute correlation coefficients of simulated TFP, output, and capital
###Output
_____no_output_____
###Markdown
Class 12: Introduction to the `linearsolve` Module In general, dynamic stochastic general equilibrium (DSGE) models do not admit analytic (i.e., pencil-and-paper) solutions and they are time-consuming to work with. The `linearsolve` module approximates, solves, and simulates DSGE models and therefore makes DSGE models easier to use. Installing `linearsolve` `linearsolve` is not included in the Anaconda Python installation, so before you can import it, you need to download and install the `linearsolve` package from PyPI, the Python Package Index. On Windows, open the Anaconda Prompt and on Mac, open the Terminal and run the following command: pip install linearsolve You only have to install the package once.
###Code
# Import the linearsolve under the 'ls' namespace
###Output
_____no_output_____
###Markdown
Example: A One-Equation Model of TFPConsider the following AR(1) specification for $\log$ TFP:\begin{align}\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1},\end{align}where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. The goal is to simulate \begin{align}\log (A_{t}/\bar{A}) & \approx \frac{A_t - \bar{A}}{\bar{A}},\end{align}where $\bar{A}$ is the nonstochastic (i.e., $\epsilon_t = 0$) steady state value of $A_t$. Let's compute the model simulation with `linearsolve`. To do this we need to do several things:1. Create a Pandas series that stores the names of the parameters of the model.2. Define a function that returns the equilibrium conditions of the model solved for zero.3. Initialize an instance of the `linearsolve.model` class4. Compute and input the steady state of the model.5. Approximate and solve the model.6. Compute simulations of the model.Use the following values for the simulation:| $\rho$ | $\sigma$ ||--------|----------|| 0.95 | 0.01 | **Step 1:** Create a variable called `parameters` that stores parameter values as a Pandas Series.
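For reference, a compact sketch of Steps 3 through 6 with the argument names used in this version of the notebook is given below; the exact keyword names accepted by `.compute_ss()` and `.impulse()` vary across `linearsolve` releases, so treat them as assumptions rather than the definitive API.

```python
# Sketch of Steps 3-6 for the AR(1) model (keyword names are assumptions)
ar1_model = ls.model(equations=equilibrium_equations,
                     n_states=1,
                     var_names=['a'],
                     shock_names=['e_a'],
                     parameters=parameters)
ar1_model.compute_ss(guess=[1])     # nonstochastic steady state: A = 1
ar1_model.approximate_and_solve()   # log-linearize and solve
ar1_model.impulse(T=41, t0=5)       # one-time shock (0.01 assumed default) in period 5
ar1_model.irs['e_a'].plot()         # impulse responses stored in .irs
```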
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print stored parameter values
# Create a variable called 'sigma' that stores the value of sigma
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
###Output
_____no_output_____
###Markdown
**Step 2:** Define a function called `equilibrium_equations` that evaluates the equilibrium equations of model solved for zero. The function should accept three arguments:1. `variables_forward`: Values of $t+1$-dated variables2. `variables_current`: Values of $t$-dated variables3. `parameters`: Pandas Series with the parameters for the modelThe function should return a NumPy array of the model's equilibrium conditions solved for zero.
###Code
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
    # Create a variable called 'p' that stores the model's parameters. PROVIDED.
p = parameters
# Create variable called 'cur' that stores the values of current (date t) variables. PROVIDED.
cur = variables_current
# Create variable called 'fwd' that stores the values of one-period-ahead (date t+1) variables. PROVIDED.
fwd = variables_forward
# Create variable called 'tfp_proc' that returns the law of motion for TFP solved for zero (exogenous shock excluded)
# Return equilibrium conditions stacked in a numpy array. PROVIDED.
return np.array([
###Output
_____no_output_____
###Markdown
**Step 3:** Initialize a model instance using the `ls.model()` function. The function takes the following arguments: 1. `equations`: Name of the function that stores the equilibrium conditions 2. `n_states`: Number of *state* variables (i.e., variables that are *predetermined*) 3. `var_names`: List of the names of the endogenous variables 4. `shock_names`: List of the names of the exogenous shocks 5. `parameters`: Pandas Series of parameter values
###Code
# Initialize the model into a variable named 'ar1_model' using the ls.model() function
###Output
_____no_output_____
###Markdown
**Step 4:** Set the steady state of the model. Either use the `.compute_ss()` method which requires an initial guess of what the steady state is. Or set the steady state `.ss` attribute directly.
###Code
# Compute the steady state numerically using .compute_ss() method of ar1_model
# Set the .ss attribute of ar1_model directly
###Output
_____no_output_____
###Markdown
**Step 5:** Transform the model into a log-linear approximation around the nonstochastic steady state. Then rewrite the equilibrium conditions so that all endogenous variables are expressed as linear functions of state variables and exogenous shocks.
###Code
# Approximate and solve using the .approximate_and_solve() method of ar1_model
###Output
_____no_output_____
###Markdown
**Step 6:** Simulate the model using one of the following methods: 1. `.impulse()`: Compute impulse responses to a one-time shock to each exogenous variable. Results are stored in the `.irs` attribute. 2. `.stoch_sim()`: Compute a stochastic simulation. Results are stored in the `.simulated` attribute. First, we'll compute an impulse response simulation. Let's consider the effect of a one-time shock to $\epsilon$ of 0.01 in period 5. Simulate 41 periods.
###Code
# Compute impulse response of a to a one-time shock to e_a
###Output
_____no_output_____
###Markdown
The impulse response simulations are stored in the `.irs` attribute as a dictionary with keys equal to the names of the exogenous shocks.
###Code
# Print ar1_model.irs
###Output
_____no_output_____
###Markdown
Let's look at the first 10 rows of the `'e_a'` element of `ar1_model.irs`.
###Code
# Print first 10 rows of the element in ar1_model.irs that corresponds to the shock to TFP
# Plot simulated impulse response to e_a
###Output
_____no_output_____
###Markdown
Next, we'll use the `.stoch_sim()` method to compute a stochastic simulation. The method takes arguments:1. `seed`: Seed of NumPy RNG. (Optional)2. `T`: Number of periods to simulate3. `covMat`: Covariance matrix for exogenous shock processThe simulation should be for 201 periods.
###Code
# Compute stochastic simulation
# Print the first 10 rows of `model.simulated`
###Output
_____no_output_____
###Markdown
The stochastic simulations are stored in the `.simulated` attribute as a Pandas `DataFrame`.
###Code
# Plot the stochastic simulation
###Output
_____no_output_____
###Markdown
Example 2: The Stochastic Solow Growth Model Revisited Now consider the following system of equations:\begin{align}Y_t & = A_t K_t^{\alpha} \\K_{t+1} & = sY_t + (1-\delta) K_t\\\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}\end{align}where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. Let's simulate the model with `linearsolve`. Before proceeding, let's also go ahead and rewrite the model with all variables moved to the lefthand side of the equations:\begin{align}0 & = A_t K_t^{\alpha} - Y_t \\0 & = sY_t + (1-\delta) K_t - K_{t+1}\\0 & = \rho \log A_t + \epsilon_{t+1} - \log A_{t+1}\end{align}Capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output is called a *costate* or *control* variable. Note that the model has 3 endogenous variables with 2 state variables. Use the following values for the simulation:| $\rho$ | $\sigma$ | $s$ | $\alpha$ | $\delta $ | $T$ ||--------|----------|-----|----------|-----------|------|| 0.75 | 0.006 | 0.1 | 0.35 | 0.025 | 201 | Initialization, Approximation, and Solution The next several cells initialize the model in `linearsolve` and then approximate and solve it.
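When you get to Step 4 for this model, it helps to know roughly where the steady state lies: with $\bar{A}=1$, $\bar{K} = (s/\delta)^{1/(1-\alpha)} \approx 8.4$ and $\bar{Y} = \bar{K}^{\alpha} \approx 2.1$, so any guess in that neighbourhood should work. A sketch is below (the ordering follows `var_names`, states first; the exact `compute_ss` signature may differ across versions).

```python
# Sketch of Step 4 for the Solow model: compute the steady state from a rough guess
guess = [1, 8, 2]                 # [a, k, y], roughly [1, 8.4, 2.1] analytically
solow_model.compute_ss(guess)
print(solow_model.ss)
```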
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print the model's parameters
# Create a variable called 'sigma' that stores the value of sigma
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# Production function
# Capital evolution
# Exogenous tfp
# Stack equilibrium conditions into a numpy array
###Output
_____no_output_____
###Markdown
Next, initialize the model.
###Code
# Initialize the model into a variable named 'solow_model'
# Compute the steady state numerically using .compute_ss() method of solow_model
# Print the computed steady state
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of solow_model
###Output
_____no_output_____
###Markdown
A Few Details About the Approximation (Optional)The previous step constructs a log-linear approximation of the model and then solves for the endogenous variables as functions of the state variables and exogenous shocks only.View the approximated model by calling the `.approximated()` method.
###Code
# Print the log-linear approximation to the models's equilibrium conditions
###Output
_____no_output_____
###Markdown
Each variable represents the log-deviation from the steady state of the respective variable in our model. For example, the variable `y[t]` means $\log(Y_t) - \log(Y)$ in terms of the stochastic Solow model. But how do these equations relate to the original model? The first equation appears to be:\begin{align}0 &= -2.1095\cdot a_t-0.7383\cdot k_t + 2.1095\cdot y_t\end{align}Note that dividing by 2.1095 and solving for $y_t$ yields:\begin{align}y_t &= a_t +0.3499\cdot k_t,\end{align}so the coefficient on $k_t$ appears to be close to $\alpha=0.35$. We can derive this linear equation directly. First, start with the production function:\begin{align}Y_t &= A_t K_t^{\alpha}.\end{align}Then divide both sides by steady state output:\begin{align}\frac{Y_t}{Y} &= \frac{A_t K_t^{\alpha}}{AK^{\alpha}} \, = \, \frac{A_t}{A}\frac{K_t^{\alpha}}{K^{\alpha}}.\end{align}Then, take the log of both sides:\begin{align}\log\left(\frac{Y_t}{Y} \right)&= \log\left(\frac{A_t}{A}\right) + \alpha\log\left(\frac{K_t}{K}\right).\end{align}Finally, letting $y_t = \log(Y_t/Y)$, $k_t = \log(K_t/K)$, and $a_t = \log(A_t/A)$, we have:\begin{align}y_t &= a_t + \alpha k_t.\end{align}However, understanding this process isn't as important as being able to interpret the graphs and statistics that we compute using the output of `linearsolve`. A Few Details About the Solution (Optional) It's also worth seeing what it means for a model to be *solved*. After `linearsolve` computes the log-linear approximation to the model, it solves for each endogenous variable as a function of state variables only. View the solved model by calling the `.solved()` method.
###Code
# Print the solved model
###Output
_____no_output_____
###Markdown
Impulse Responses Compute 41 periods of impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5.
###Code
# Compute impulse responses
# Print the first 10 rows of the computed impulse responses.
# Plot the computed impulse responses to a TFP shock
###Output
_____no_output_____
###Markdown
Stochastic SimulationCompute a 201 period stochastic simulation of the model's variables. Set the variance of $\epsilon_t$ to $\sigma^2$ and the variance of the shock to capital to 0 so that the covariance matrix for the shock process is:\begin{align}\text{Covariance matrix} & = \left[\begin{array}{cc} \sigma^2 & 0\\ 0 & 0\end{array} \right]\end{align}
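A sketch of this call is shown below; only the TFP shock gets a nonzero variance, and the `covMat` argument name follows the description of `.stoch_sim()` earlier in the notebook, so treat the exact keywords as assumptions.

```python
# Sketch: 201-period stochastic simulation with the covariance matrix above
import numpy as np
covariance_matrix = np.array([[sigma**2, 0.],
                              [0.,       0.]])
solow_model.stoch_sim(seed=126, T=201, covMat=covariance_matrix)
solow_model.simulated[['a', 'k', 'y']].plot()
```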
###Code
# Compute stochastic simulation and print the simulated values.
# Print first 10 rows of model.simulated
# Plot the computed stochastic simulation
# Compute standard deviations of simulated TFP, output, and capital
# Compute correlation coefficients of simulated TFP, output, and capital
###Output
_____no_output_____
###Markdown
Class 12: Introduction to the `linearsolve` Module In general, dynamic stochastic general equilibrium (DSGE) models do not admit analytic (i.e., pencil-and-paper) solutions and they are time-consuming to work with. The `linearsolve` module approximates, solves, and simulates DSGE models and therefore makes DSGE models easier to use. Installing `linearsolve` `linearsolve` is not included in the Anaconda Python installation, so before you can import it, you need to download and install the `linearsolve` package from PyPI, the Python Package Index. On Windows, open the Anaconda Prompt and on Mac, open the Terminal and run the following command: pip install linearsolve You only have to install the package once.
###Code
# Import the linearsolve under the 'ls' namespace
###Output
_____no_output_____
###Markdown
Example: A One-Equation Model of TFPConsider the following AR(1) specification for $\log$ TFP:\begin{align}\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1},\end{align}where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. The goal is to simulate \begin{align}\log (A_{t}/\bar{A}) & \approx \frac{A_t - \bar{A}}{\bar{A}},\end{align}where $\bar{A}$ is the nonstochastic (i.e., $\epsilon_t = 0$) steady state value of $A_t$. Let's compute the model simulation with `linearsolve`. To do this we need to do several things:1. Create a Pandas series that stores the names of the parameters of the model.2. Define a function that returns the equilibrium conditions of the model solved for zero.3. Initialize an instance of the `linearsolve.model` class4. Compute and input the steady state of the model.5. Approximate and solve the model.6. Compute simulations of the model.Use the following values for the simulation:| $$\rho$$ | $$\sigma$$ ||----------|------------|| 0.95 | 0.01 | **Step 1:** Create a variable called `parameters` that stores parameter values as a Pandas Series.
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print stored parameter values
# Create a variable called 'sigma' that stores the value of sigma
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
###Output
_____no_output_____
###Markdown
**Step 2:** Define a function called `equilibrium_equations` that evaluates the equilibrium equations of model solved for zero. The function should accept three arguments:1. `variables_forward`: Values of $t+1$-dated variables2. `variables_current`: Values of $t$-dated variables3. `parameters`: Pandas Series with the parameters for the modelThe function should return a NumPy array of the model's equilibrium conditions solved for zero.
###Code
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
    # Create a variable called 'p' that stores the model's parameters. PROVIDED.
p = parameters
# Create variable called 'cur' that stores the values of current (date t) variables. PROVIDED.
cur = variables_current
# Create variable called 'fwd' that stores the values of one-period-ahead (date t+1) variables. PROVIDED.
fwd = variables_forward
# Create variable called 'tfp_proc' that returns the law of motion for TFP solved for zero (exogenous shock excluded)
# Return equilibrium conditions stacked in a numpy array. PROVIDED.
return np.array([
###Output
_____no_output_____
###Markdown
**Step 3:** Initialize a model instance using the `ls.model()` function. The function takes the following arguments: 1. `equations`: Name of the function that stores the equilibrium conditions 2. `n_states`: Number of *state* variables (i.e., variables that are *predetermined*) 3. `var_names`: List of the names of the endogenous variables 4. `shock_names`: List of the names of the exogenous shocks 5. `parameters`: Pandas Series of parameter values
###Code
# Initialize the model into a variable named 'ar1_model' using the ls.model() function
###Output
_____no_output_____
###Markdown
**Step 4:** Set the steady state of the model. Either use the `.compute_ss()` method which requires an initial guess of what the steady state is. Or set the steady state `.ss` attribute directly.
###Code
# Compute the steady state numerically using .compute_ss() method of ar1_model
# Set the .ss attribute of ar1_model directly
###Output
_____no_output_____
###Markdown
**Step 5:** Transform the model into a log-linear approximation around the nonstochastic steady state. Then rewrite the equilibrium conditions so that all endogenous variables are expressed as linear functions of state variables and exogenous shocks.
###Code
# Approximate and solve using the .approximate_and_solve() method of ar1_model
###Output
_____no_output_____
###Markdown
**Step 6:** Simulate the model using one of the following methods: 1. `.impulse()`: Compute impulse responses to a one-time shock to each exogenous variable. Results are stored in the `.irs` attribute. 2. `.stoch_sim()`: Compute a stochastic simulation. Results are stored in the `.simulated` attribute. First, we'll compute an impulse response simulation. Let's consider the effect of a one-time shock to $\epsilon$ of 0.01 in period 5. Simulate 41 periods.
###Code
# Compute impulse response of a to a one-time shock to e_a
###Output
_____no_output_____
###Markdown
The impulse response simulations are stored in the `.irs` attribute as a dictionary with keys equal to the names of the exogenous shocks.
###Code
# Print ar1_model.irs
###Output
_____no_output_____
###Markdown
Let's look at the first 10 rows of the `'e_a'` element of `ar1_model.irs`.
###Code
# Print first 10 rows of the element in ar1_model.irs that corresponds to the shock to TFP
# Plot simulated impulse response to e_a
###Output
_____no_output_____
###Markdown
Next, we'll use the `.stoch_sim()` method to compute a stochastic simulation. The method takes arguments:1. `seed`: Seed of NumPy RNG. (Optional)2. `T`: Number of periods to simulate3. `covMat`: Covariance matrix for exogenous shock processThe simulation should be for 201 periods.
###Code
# Compute stochastic simulation
# Print the first 10 rows of `model.simulated`
###Output
_____no_output_____
###Markdown
The stochastic simulations are stored in the `.simulated` attribute as a Pandas `DataFrame`.
###Code
# Plot the stochastic simulation
###Output
_____no_output_____
###Markdown
Example 2: The Stochastic Solow Growth Model Revisited Now consider the following system of equations:\begin{align}Y_t & = A_t K_t^{\alpha} \\K_{t+1} & = sY_t + (1-\delta) K_t\\\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}\end{align}where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. Let's simulate the model with `linearsolve`. Before proceeding, let's also go ahead and rewrite the model with all variables moved to the lefthand side of the equations:\begin{align}0 & = A_t K_t^{\alpha} - Y_t \\0 & = sY_t + (1-\delta) K_t - K_{t+1}\\0 & = \rho \log A_t + \epsilon_{t+1} - \log A_{t+1}\end{align}Capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output is called a *costate* or *control* variable. Note that the model has 3 endogenous variables with 2 state variables. Use the following values for the simulation:| $$\rho$$ | $$\sigma$$ | $$s$$ | $$\alpha$$ | $$\delta $$ | $$T$$ ||----------|------------|-------|------------|-------------|--------|| 0.75 | 0.006 | 0.1 | 0.35 | 0.025 | 201 | Initialization, Approximation, and Solution The next several cells initialize the model in `linearsolve` and then approximate and solve it.
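Before initializing the model, note that the nonstochastic steady state can also be found by hand, which is a useful check on `.compute_ss()`: setting $\epsilon_{t+1}=0$, $A_t = 1$, and $K_{t+1}=K_t=K$ gives
\begin{align}
K & = sK^{\alpha} + (1-\delta)K \quad \Longrightarrow \quad K = \left(\frac{s}{\delta}\right)^{\frac{1}{1-\alpha}} \approx 8.4, \qquad Y = K^{\alpha} \approx 2.1,
\end{align}
using $s=0.1$, $\delta=0.025$, and $\alpha = 0.35$ from the table above.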
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
# Print the model's parameters
# Create a variable called 'sigma' that stores the value of sigma
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# Production function
# Capital evolution
# Exogenous tfp
# Stack equilibrium conditions into a numpy array
###Output
_____no_output_____
###Markdown
Next, initialize the model.
###Code
# Initialize the model into a variable named 'solow_model'
# Compute the steady state numerically using .compute_ss() method of solow_model
# Print the computed steady state
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of solow_model
###Output
_____no_output_____
###Markdown
A Few Details About the Approximation (Optional)The previous step constructs a log-linear approximation of the model and then solves for the endogenous variables as functions of the state variables and exogenous shocks only.View the approximated model by calling the `.approximated()` method.
###Code
# Print the log-linear approximation to the models's equilibrium conditions
###Output
_____no_output_____
###Markdown
Each variable represents the log-deviation from the steady state of the respective variable in our model. For example, the variable `y[t]` means $\log(Y_t) - \log(Y)$ in terms of the stochastic Solow model. But how do these equations relate to the original model? The first equation appears to be:\begin{align}0 &= -2.1095\cdot a_t-0.7383\cdot k_t + 2.1095\cdot y_t\end{align}Note that dividing by 2.1095 and solving for $y_t$ yields:\begin{align}y_t &= a_t +0.3499\cdot k_t,\end{align}so the coefficient on $k_t$ appears to be close to $\alpha=0.35$. We can derive this linear equation directly. First, start with the production function:\begin{align}Y_t &= A_t K_t^{\alpha}.\end{align}Then divide both sides by steady state output:\begin{align}\frac{Y_t}{Y} &= \frac{A_t K_t^{\alpha}}{AK^{\alpha}} \, = \, \frac{A_t}{A}\frac{K_t^{\alpha}}{K^{\alpha}}.\end{align}Then, take the log of both sides:\begin{align}\log\left(\frac{Y_t}{Y} \right)&= \log\left(\frac{A_t}{A}\right) + \alpha\log\left(\frac{K_t}{K}\right).\end{align}Finally, letting $y_t = \log(Y_t/Y)$, $k_t = \log(K_t/K)$, and $a_t = \log(A_t/A)$, we have:\begin{align}y_t &= a_t + \alpha k_t.\end{align}However, understanding this process isn't as important as being able to interpret the graphs and statistics that we compute using the output of `linearsolve`. A Few Details About the Solution (Optional) It's also worth seeing what it means for a model to be *solved*. After `linearsolve` computes the log-linear approximation to the model, it solves for each endogenous variable as a function of state variables only. View the solved model by calling the `.solved()` method.
###Code
# Print the solved model
###Output
_____no_output_____
###Markdown
Impulse Responses Compute 41 periods of impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5.
###Code
# Compute impulse responses
# Print the first 10 rows of the computed impulse responses.
# Plot the computed impulse responses to a TFP shock
###Output
_____no_output_____
###Markdown
Stochastic SimulationCompute a 201 period stochastic simulation of the model's variables. Set the variance of $\epsilon_t$ to $\sigma^2$ and the variance of the shock to capital to 0 so that the covariance matrix for the shock process is:\begin{align}\text{Covariance matrix} & = \left[\begin{array}{cc} \sigma^2 & 0\\ 0 & 0\end{array} \right]\end{align}
###Code
# Compute stochastic simulation and print the simulated values.
# Print first 10 rows of model.simulated
# Plot the computed stochastic simulation
# Compute standard deviations of simulated TFP, output, and capital
# Compute correlation coefficients of simulated TFP, output, and capital
###Output
_____no_output_____ |
notebooks/Drexel_EVEREST_nPLD_example.ipynb | ###Markdown
After the application of PLD, we remove the overall slope added to the light curve by subtracting a linear fit from it. We also remove the first 3 days of the campaign on the grounds that the telescope is still adjusting to a new position relative to the sun, which affects the focus.
###Code
# -----------------do addtional cut and slope subtraction-----------------
# 30 mintue intervals between cadences
cutoff_day = 3*24*2
#cutoff = np.logical_and(cad>cutoff_day, cad<cad[-1]-5*cutoff_day)
cutoff = cad>cutoff_day
# finding linear fit
m,b = np.polyfit(cad[cutoff], flux_pld[cutoff], 1)
# subtracting it
flux_corrected = flux_pld[cutoff] - (m*cad[cutoff])
# time from the light curve
time = lc.time[cutoff]
plt.plot(time, flux_corrected)
plt.xlabel("Time - 2454833[BKJD days]")
plt.ylabel("nPLD Flux")
plt.title("EPIC %s"%epic)
# Calculating the PSD
f = np.fft.rfftfreq(80*48, 30.0*60)
# Compute the LS based power spectrum estimates
model = LombScargle(time*86400,flux_corrected)
power_ls = model.power(f[1:-1], method="fast", normalization="psd")
# >>> To get the LS based PSD in the correct units, normalize by N <<<
power_ls /= len(time)
freq = f[1:-1]
plt.plot(freq,power_ls)
plt.xlabel("frequency [Hz]")
plt.ylabel("power [$\mathrm{ppm}^2/\mathrm{Hz}$]")
plt.yscale("log")
plt.xscale("log")
###Output
_____no_output_____ |
Lesson 5.ipynb | ###Markdown
Course «Introduction to Neural Networks» Lesson 5. Recurrent Neural Networks Homework for Lesson 5
###Code
from __future__ import print_function
import numpy as np
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, Activation
from keras.layers import LSTM
from keras.layers.recurrent import SimpleRNN, LSTM, GRU
from keras.datasets import imdb
###Output
_____no_output_____
###Markdown
Task 1 Try changing the parameters of the neural network that works with the imdb dataset in order to improve its accuracy. Include an analysis. First we reproduce what was done in the lesson, then we change the parameters. Set the initial parameter values.
###Code
max_features = 20000
maxlen = 80
batch_size = 50
###Output
_____no_output_____
###Markdown
Load the data
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
###Markdown
Pad the subsequences to equal length.
###Code
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
###Markdown
Build the model
###Code
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
###Output
Построение модели...
###Markdown
Compile
###Code
# it is worth trying other optimizers and other optimizer configurations
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train
###Code
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
###Output
Процесс обучения...
500/500 [==============================] - 61s 120ms/step - loss: 0.5208 - accuracy: 0.7188 - val_loss: 0.3591 - val_accuracy: 0.8420
###Markdown
Evaluate the result
###Code
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc)
###Output
500/500 [==============================] - 7s 13ms/step - loss: 0.3591 - accuracy: 0.8420
Результат при тестировании: 0.3590894043445587
Тестовая точность: 0.8419600129127502
###Markdown
What we can play with: max_features, maxlen, batch_size, optimizer. Let's increase the number of epochs to 15; we will assume that up to this value the accuracy grows quickly and after it grows more slowly (based on the results of previous homework assignments). **num_words** - the word-ranking cutoff (only the top-ranked words are kept).
###Code
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1)
print(x_train[0])
(x_train, y_train), (x_test, y_test) = imdb.load_data()
print(x_train[0])
###Output
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 22665, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 21631, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 31050, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
###Markdown
Obviously, a small value of num_words will not give us anything. Let's find the word with the maximum rank.
###Code
max_value = 0
for i in range(len(x_train)):
if (max(x_train[i]) > max_value):
max_value = max(x_train[i])
print(max_value)
max_value = 0
for i in range(len(x_test)):
if (max(x_test[i]) > max_value):
max_value = max(x_test[i])
print(max_value)
###Output
88584
###Markdown
**Conclusion:** If we truncate this parameter, it may happen that our network keeps receiving the same values and no learning takes place, so we leave this parameter at its default, i.e. we call it with the default value None. **maxlen** - the truncation length, the maximum length of all sequences. If not specified, sequences will be padded to the length of the longest individual sequence.
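A tiny illustration of what `maxlen` does (using the `pad_sequences` defaults, which pad and truncate at the front of each sequence):

```python
# Illustration only: padding/truncation behaviour of pad_sequences with maxlen=4
from keras.preprocessing import sequence
seqs = [[1, 2, 3], [4, 5, 6, 7, 8, 9]]
print(sequence.pad_sequences(seqs, maxlen=4))
# [[0 1 2 3]     <- short sequence is zero-padded at the front
#  [6 7 8 9]]    <- long sequence is truncated from the front
```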
###Code
x_train_1 = sequence.pad_sequences(x_train)
x_test_1 = sequence.pad_sequences(x_test)
print('x_train shape:', x_train_1.shape)
print('x_test shape:', x_test_1.shape)
min_value = 25000
for i in range(len(x_train)):
if (len(x_train[i]) < min_value):
min_value = len(x_train[i])
print(min_value)
###Output
11
###Markdown
The subsequence lengths range from 11 to 2494. Let's consider the following values from this range: 10, 60, 110, 160, 210. We will not take larger values so that the network does not take too long to train.
###Code
max_features = 20000
maxlen = 80
batch_size = 50
for i in range(210, 9, -50):
print('maxlen:', i)
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=i)
x_test = sequence.pad_sequences(x_test, maxlen=i)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
    # it is worth trying other optimizers and other optimizer configurations
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
maxlen: 210
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 210)
x_test shape: (25000, 210)
Построение модели...
Процесс обучения...
500/500 [==============================] - 205s 407ms/step - loss: 0.5445 - accuracy: 0.6993 - val_loss: 0.3184 - val_accuracy: 0.8699
500/500 [==============================] - 19s 37ms/step - loss: 0.3184 - accuracy: 0.8699
Результат при тестировании: 0.31836071610450745
Тестовая точность: 0.869920015335083
maxlen: 160
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 160)
x_test shape: (25000, 160)
Построение модели...
Процесс обучения...
500/500 [==============================] - 154s 305ms/step - loss: 0.5038 - accuracy: 0.7346 - val_loss: 0.3287 - val_accuracy: 0.8582
500/500 [==============================] - 16s 33ms/step - loss: 0.3287 - accuracy: 0.8582
Результат при тестировании: 0.32870545983314514
Тестовая точность: 0.8581600189208984
maxlen: 110
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 110)
x_test shape: (25000, 110)
Построение модели...
Процесс обучения...
500/500 [==============================] - 96s 190ms/step - loss: 0.5620 - accuracy: 0.6880 - val_loss: 0.3493 - val_accuracy: 0.8486
500/500 [==============================] - 11s 22ms/step - loss: 0.3493 - accuracy: 0.8486
Результат при тестировании: 0.3493329584598541
Тестовая точность: 0.8485999703407288
maxlen: 60
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 60)
x_test shape: (25000, 60)
Построение модели...
Процесс обучения...
500/500 [==============================] - 57s 113ms/step - loss: 0.5218 - accuracy: 0.7155 - val_loss: 0.3983 - val_accuracy: 0.8212
500/500 [==============================] - 6s 13ms/step - loss: 0.3983 - accuracy: 0.8212
Результат при тестировании: 0.3983440101146698
Тестовая точность: 0.8211600184440613
maxlen: 10
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 10)
x_test shape: (25000, 10)
Построение модели...
Процесс обучения...
500/500 [==============================] - 25s 47ms/step - loss: 0.6097 - accuracy: 0.6464 - val_loss: 0.5266 - val_accuracy: 0.7257
500/500 [==============================] - 2s 4ms/step - loss: 0.5266 - accuracy: 0.7257
Результат при тестировании: 0.5266367793083191
Тестовая точность: 0.7257199883460999
###Markdown
**Conclusion:** with a short cut-off the accuracy drops; with a cut-off of around 100 words per review the accuracy essentially stops changing. Now let's choose an optimizer. SGD: gradient-descent optimizer (with momentum). RMSprop: optimizer implementing the RMSprop algorithm, which maintains a moving average of the squared gradients and divides the gradient by the root of that average. Adam: optimizer implementing stochastic gradient descent based on adaptive estimates of first- and second-order moments. Adadelta: optimizer implementing stochastic gradient descent with a per-dimension adaptive learning rate, addressing two drawbacks: the continual decay of the learning rate throughout training, and the need for a manually chosen global learning rate. Adagrad: optimizer with a parameter-specific learning rate that is adapted according to how frequently the parameter is updated during training; the more updates a parameter receives, the smaller the updates become. Adamax: a variant of Adam based on the infinity norm; the default parameters match those in the paper, and Adamax sometimes outperforms Adam, especially in models with embeddings. Nadam: Adam with Nesterov momentum. Ftrl: optimizer implementing the FTRL algorithm. **SGD**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 89s 176ms/step - loss: 0.6930 - accuracy: 0.5150 - val_loss: 0.6929 - val_accuracy: 0.5174
500/500 [==============================] - 12s 25ms/step - loss: 0.6929 - accuracy: 0.5174
Результат при тестировании: 0.6928873658180237
Тестовая точность: 0.5174000263214111
###Markdown
**RMSprop**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 91s 179ms/step - loss: 0.5258 - accuracy: 0.7333 - val_loss: 0.3392 - val_accuracy: 0.8541
500/500 [==============================] - 11s 23ms/step - loss: 0.3392 - accuracy: 0.8541
Результат при тестировании: 0.33921995759010315
Тестовая точность: 0.8540800213813782
###Markdown
**Adam**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 98s 194ms/step - loss: 0.5109 - accuracy: 0.7285 - val_loss: 0.3493 - val_accuracy: 0.8487
500/500 [==============================] - 10s 21ms/step - loss: 0.3493 - accuracy: 0.8487
Результат при тестировании: 0.3493058979511261
Тестовая точность: 0.8486800193786621
###Markdown
**Adadelta**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='Adadelta', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 81s 160ms/step - loss: 0.6932 - accuracy: 0.5016 - val_loss: 0.6932 - val_accuracy: 0.4974
500/500 [==============================] - 10s 20ms/step - loss: 0.6932 - accuracy: 0.4974
Результат при тестировании: 0.6931918263435364
Тестовая точность: 0.49744001030921936
###Markdown
**Adagrad**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='Adagrad', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 81s 160ms/step - loss: 0.6932 - accuracy: 0.5046 - val_loss: 0.6932 - val_accuracy: 0.4969
500/500 [==============================] - 10s 20ms/step - loss: 0.6932 - accuracy: 0.4969
Результат при тестировании: 0.6931880712509155
Тестовая точность: 0.49691998958587646
###Markdown
**Adamax**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='Adamax', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 81s 161ms/step - loss: 0.5713 - accuracy: 0.6780 - val_loss: 0.3684 - val_accuracy: 0.8381
500/500 [==============================] - 10s 20ms/step - loss: 0.3684 - accuracy: 0.8381
Результат при тестировании: 0.3684289753437042
Тестовая точность: 0.8380799889564514
###Markdown
**Nadam**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='Nadam', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 92s 182ms/step - loss: 0.5237 - accuracy: 0.7230 - val_loss: 0.3506 - val_accuracy: 0.8488
500/500 [==============================] - 10s 20ms/step - loss: 0.3506 - accuracy: 0.84880s - l
Результат при тестировании: 0.35064640641212463
Тестовая точность: 0.8487600088119507
###Markdown
**Ftrl**
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='Ftrl', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 79s 157ms/step - loss: 0.6931 - accuracy: 0.5054 - val_loss: 0.6931 - val_accuracy: 0.5000
500/500 [==============================] - 10s 20ms/step - loss: 0.6931 - accuracy: 0.5000
Результат при тестировании: 0.6931463479995728
Тестовая точность: 0.5
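###Markdown
All of the runs above pass the optimizer by its string name, which means default settings. The code comment about trying other optimizer configurations can be acted on by passing a configured optimizer object instead; a minimal sketch, assuming the `tensorflow.keras` backend (with standalone Keras the import path is `keras.optimizers`), and the learning-rate values here are illustrative assumptions rather than tuned choices.
###Code
from tensorflow.keras.optimizers import RMSprop, Adam

# same model as above, but with explicit optimizer settings instead of a string name
rmsprop_opt = RMSprop(learning_rate=1e-3, rho=0.9)
adam_opt = Adam(learning_rate=5e-4, beta_1=0.9, beta_2=0.999)
model.compile(loss='binary_crossentropy', optimizer=rmsprop_opt, metrics=['accuracy'])
###Output
_____no_output_____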
###Markdown
**Conclusion:** the worst result came from the Ftrl optimizer, and the best accuracy came from RMSprop (the slowest, but the most accurate). The Adam-family optimizers showed decent accuracy, on par with RMSprop. Now let's try changing the layers. 1 LSTM layer
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 94s 185ms/step - loss: 0.5193 - accuracy: 0.7441 - val_loss: 0.3703 - val_accuracy: 0.8470
500/500 [==============================] - 11s 22ms/step - loss: 0.3703 - accuracy: 0.8470
Результат при тестировании: 0.3703368604183197
Тестовая точность: 0.8469600081443787
###Markdown
2 LSTM layers
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 95s 188ms/step - loss: 102227507.4488 - accuracy: 0.6294 - val_loss: 1.9524 - val_accuracy: 0.7432
500/500 [==============================] - 20s 40ms/step - loss: 1.9524 - accuracy: 0.7432
Результат при тестировании: 1.9524117708206177
Тестовая точность: 0.7431600093841553
###Markdown
3 LSTM layers
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 148s 293ms/step - loss: nan - accuracy: 0.5035 - val_loss: nan - val_accuracy: 0.5000
500/500 [==============================] - 30s 59ms/step - loss: nan - accuracy: 0.5000
Результат при тестировании: nan
Тестовая точность: 0.5
###Markdown
4 LSTM layers
###Code
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=100)
x_test = sequence.pad_sequences(x_test, maxlen=100)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
500/500 [==============================] - 241s 479ms/step - loss: 41490029490217.2109 - accuracy: 0.5653 - val_loss: 8.9476 - val_accuracy: 0.6985
500/500 [==============================] - 42s 83ms/step - loss: 8.9476 - accuracy: 0.6985
Результат при тестировании: 8.947563171386719
Тестовая точность: 0.6984800100326538
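###Markdown
The huge and NaN losses in the stacked runs above most likely come from replacing the LSTM's default `tanh` activation with `relu`, which lets the recurrent activations grow without bound. A sketch of a stacked two-layer variant that keeps the default activation and the dropout settings from the single-layer model (an assumption about what should behave better; it was not run here):
###Code
model = Sequential()
model.add(Embedding(max_features, 128))
# keep the default tanh activation; only add return_sequences for stacking
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
###Output
_____no_output_____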
###Markdown
**Conclusion:** the more LSTM layers, the more the accuracy drops; a single layer is optimal. **Final tuned model:**
###Code
max_features = 20000
maxlen = 100
batch_size = 100
print('Загрузка данных...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'тренировочные последовательности')
print(len(x_test), 'тестовые последовательности')
print('Pad последовательности (примеров в x единицу времени)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Построение модели...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# стоит попробовать использовать другие оптимайзер и другие конфигурации оптимайзеров
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
print('Процесс обучения...')
model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Результат при тестировании:', score)
print('Тестовая точность:', acc, '\n')
###Output
Загрузка данных...
25000 тренировочные последовательности
25000 тестовые последовательности
Pad последовательности (примеров в x единицу времени)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
Построение модели...
Процесс обучения...
Epoch 1/15
250/250 [==============================] - 87s 345ms/step - loss: 0.5554 - accuracy: 0.7252 - val_loss: 0.3531 - val_accuracy: 0.8474
Epoch 2/15
250/250 [==============================] - 86s 346ms/step - loss: 0.2878 - accuracy: 0.8857 - val_loss: 0.3610 - val_accuracy: 0.8532
Epoch 3/15
250/250 [==============================] - 87s 347ms/step - loss: 0.2342 - accuracy: 0.9095 - val_loss: 0.3463 - val_accuracy: 0.8469
Epoch 4/15
250/250 [==============================] - 87s 346ms/step - loss: 0.1928 - accuracy: 0.9270 - val_loss: 0.4199 - val_accuracy: 0.8499
Epoch 5/15
250/250 [==============================] - 86s 345ms/step - loss: 0.1612 - accuracy: 0.9400 - val_loss: 0.3573 - val_accuracy: 0.8488
Epoch 6/15
250/250 [==============================] - 87s 346ms/step - loss: 0.1347 - accuracy: 0.9520 - val_loss: 0.3793 - val_accuracy: 0.8428
Epoch 7/15
250/250 [==============================] - 86s 343ms/step - loss: 0.1130 - accuracy: 0.9597 - val_loss: 0.4796 - val_accuracy: 0.8363
Epoch 8/15
250/250 [==============================] - 87s 346ms/step - loss: 0.0935 - accuracy: 0.9671 - val_loss: 0.4554 - val_accuracy: 0.8372
Epoch 9/15
250/250 [==============================] - 86s 346ms/step - loss: 0.0875 - accuracy: 0.9697 - val_loss: 0.4750 - val_accuracy: 0.8347
Epoch 10/15
250/250 [==============================] - 86s 345ms/step - loss: 0.0666 - accuracy: 0.9773 - val_loss: 0.5841 - val_accuracy: 0.8270
Epoch 11/15
250/250 [==============================] - 86s 346ms/step - loss: 0.0559 - accuracy: 0.9806 - val_loss: 0.5099 - val_accuracy: 0.8314
Epoch 12/15
250/250 [==============================] - 86s 342ms/step - loss: 0.0425 - accuracy: 0.9865 - val_loss: 0.5664 - val_accuracy: 0.8279
Epoch 13/15
250/250 [==============================] - 87s 346ms/step - loss: 0.0347 - accuracy: 0.9889 - val_loss: 0.7112 - val_accuracy: 0.8265
Epoch 14/15
250/250 [==============================] - 87s 348ms/step - loss: 0.0313 - accuracy: 0.9893 - val_loss: 0.6804 - val_accuracy: 0.8236
Epoch 15/15
250/250 [==============================] - 87s 348ms/step - loss: 0.0227 - accuracy: 0.9930 - val_loss: 0.9315 - val_accuracy: 0.8011
250/250 [==============================] - 11s 44ms/step - loss: 0.9315 - accuracy: 0.8011
Результат при тестировании: 0.9314731359481812
Тестовая точность: 0.8010799884796143
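###Markdown
In the log above the validation loss bottoms out within the first few epochs while the training accuracy keeps climbing. A hedged sketch of how early stopping could pick the number of epochs automatically (assumes the `tensorflow.keras` callback API; the `patience` value is an illustrative assumption):
###Code
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=15,
          validation_data=(x_test, y_test),
          callbacks=[early_stop])
###Output
_____no_output_____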
###Markdown
**Conclusions:** max_features cannot be pushed to its maximum, since values above 20000 raise an error, so we leave it as is. The optimal truncation length is 80-100 words per review. The best optimizer is RMSprop, with the Adam family close behind; the rest are mediocre. The optimal number of LSTM layers is 1-2; anything beyond that loses accuracy. The larger the batch_size, the faster training runs, but accuracy drops somewhat. The optimal number of training epochs is 14-15, after which accuracy no longer grows substantially and may even fall. Task 2 Try changing the parameters of the text-generating neural network so as to produce the most meaningful text possible. Submit the best text you obtained and describe what you did to get it. You may use the text of a different literary work. We will work with the optimal parameters right away. As the training corpus we take all of F. M. Dostoevsky's novels.
###Code
# построчное чтение текстов
for i in range(1, 12, 1):
lines = []
with open(f'dostoevsky/00{i}.txt' if i < 10 else f'dostoevsky/0{i}.txt', 'rb') as _in:
for line in _in:
line = line.strip().lower().decode()
if len(line) == 0:
continue
lines.append(line)
###Output
_____no_output_____
###Markdown
Combine the text into a single array.
###Code
text = " ".join(lines)
chars = set([c for c in text])
nb_chars = len(chars)
# создание индекса символов и reverse mapping чтобы передвигаться между значениями numerical
# ID и определенный символ. Numerical ID будет соответсвовать колонке
# число при использовании one-hot кодировки для представление входов символов
char2index = {c: i for i, c in enumerate(chars)}
index2char = {i: c for i, c in enumerate(chars)}
SEQLEN, STEP = 50, 1
input_chars, label_chars = [], []
# конвертация data в серии разных SEQLEN-length субпоследовательностей
for i in range(0, len(text) - SEQLEN, STEP):
input_chars.append(text[i: i + SEQLEN])
label_chars.append(text[i + SEQLEN])
# Вычисление one-hot encoding входных последовательностей X и следующего символа (the label) y
X = np.zeros((len(input_chars), SEQLEN, nb_chars), dtype=np.bool)
y = np.zeros((len(input_chars), nb_chars), dtype=np.bool)
for i, input_char in enumerate(input_chars):
for j, ch in enumerate(input_char):
X[i, j, char2index[ch]] = 1
y[i, char2index[label_chars[i]]] = 1
# установка ряда метапамертров для нейронной сети и процесса тренировки
BATCH_SIZE, HIDDEN_SIZE = 128, 128
NUM_ITERATIONS = 2
NUM_EPOCHS_PER_ITERATION = 10
NUM_PREDS_PER_EPOCH = 100
model = Sequential()
model.add(
GRU( # вы можете изменить эту часть на LSTM или SimpleRNN, чтобы попробовать альтернативы
HIDDEN_SIZE,
return_sequences=False,
input_shape=(SEQLEN, nb_chars),
unroll=True
)
)
model.add(Dense(nb_chars))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
# выполнение серий тренировочных и демонстрационных итераций
for iteration in range(NUM_ITERATIONS):
# для каждой итерации запуск передачи данных в модель
print("=" * 50)
print("Итерация #: %d" % (iteration))
model.fit(X, y, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS_PER_ITERATION)
# Select a random example input sequence.
test_idx = np.random.randint(len(input_chars))
test_chars = input_chars[test_idx]
# для числа шагов предсказаний использование текущей тренируемой модели
# конструирование one-hot encoding для тестирования input и добавление предсказания.
print("Генерация из посева: %s" % (test_chars))
print(test_chars, end="")
for i in range(NUM_PREDS_PER_EPOCH):
# здесь one-hot encoding.
X_test = np.zeros((1, SEQLEN, nb_chars))
for j, ch in enumerate(test_chars):
X_test[0, j, char2index[ch]] = 1
# осуществление предсказания с помощью текущей модели.
pred = model.predict(X_test, verbose=0)[0]
y_pred = index2char[np.argmax(pred)]
# вывод предсказания добавленного к тестовому примеру
print(y_pred, end="")
# инкрементация тестового примера содержащего предсказание
test_chars = test_chars[1:] + y_pred
print()
###Output
==================================================
Итерация #: 0
Epoch 1/10
4865/4865 [==============================] - 286s 58ms/step - loss: 2.5292
Epoch 2/10
4865/4865 [==============================] - 280s 58ms/step - loss: 1.9518
Epoch 3/10
4865/4865 [==============================] - 280s 58ms/step - loss: 1.7995
Epoch 4/10
4865/4865 [==============================] - 280s 58ms/step - loss: 1.7162
Epoch 5/10
4865/4865 [==============================] - 281s 58ms/step - loss: 1.6658
Epoch 6/10
4865/4865 [==============================] - 282s 58ms/step - loss: 1.6277
Epoch 7/10
4865/4865 [==============================] - 282s 58ms/step - loss: 1.6016
Epoch 8/10
4865/4865 [==============================] - 281s 58ms/step - loss: 1.5789
Epoch 9/10
4865/4865 [==============================] - 282s 58ms/step - loss: 1.5636
Epoch 10/10
4865/4865 [==============================] - 282s 58ms/step - loss: 1.5498
Генерация из посева: авненно ничтожнее, чем того, главного убийцу, жела
авненно ничтожнее, чем того, главного убийцу, желали в том же в постельки подозрения совсем не подумал он в том же в постельки подозрения совсем не по==================================================
Итерация #: 1
Epoch 1/10
4865/4865 [==============================] - 274s 56ms/step - loss: 1.5414
Epoch 2/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.5317
Epoch 3/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.5232
Epoch 4/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.5154
Epoch 5/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.5089
Epoch 6/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.5029
Epoch 7/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.4978
Epoch 8/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.4931
Epoch 9/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.4887
Epoch 10/10
4865/4865 [==============================] - 277s 57ms/step - loss: 1.4845
Генерация из посева: титулярного советника Федора Павловича Карамазова
титулярного советника Федора Павловича Карамазова в сердце своей страдание совсем не понимают пристально просто тогда пришел в самом деле в тот же се
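###Markdown
The generation loop above always takes `np.argmax` of the predicted distribution, which tends to lock the model into repetitive phrases. A common tweak for more varied, often more natural-looking text is temperature sampling. The sketch below assumes the trained `model`, the `char2index`/`index2char` maps, `input_chars`, `SEQLEN` and `nb_chars` from the cells above; the temperature value is an illustrative assumption.
###Code
def sample_char(preds, temperature=0.8):
    # rescale the predicted distribution and draw a character index from it
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds + 1e-8) / temperature
    preds = np.exp(preds)
    preds = preds / np.sum(preds)
    return int(np.random.choice(len(preds), p=preds))

seed_chars = input_chars[np.random.randint(len(input_chars))]
generated = seed_chars
for _ in range(200):
    X_sample = np.zeros((1, SEQLEN, nb_chars))
    for j, ch in enumerate(seed_chars):
        X_sample[0, j, char2index[ch]] = 1
    pred = model.predict(X_sample, verbose=0)[0]
    next_char = index2char[sample_char(pred, temperature=0.8)]
    generated += next_char
    seed_chars = seed_chars[1:] + next_char
print(generated)
###Output
_____no_output_____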
###Markdown
Course "Machine Learning in Business". Lesson 5. Case 1. Anomalies and artifacts. Homework for Lesson 5
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import statsmodels.api as sm
from statsmodels.tsa.arima_model import ARIMA
import itertools
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
import os
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error, median_absolute_error, r2_score
from sklearn.cluster import KMeans, DBSCAN
import warnings
def show_plot_data(data, field_name, title, xlabel):
plt.figure(figsize =(20,6))
plt.plot(data.index, data[field_name], 'b')
plt.title(title)
plt.xlabel(xlabel)
plt.show()
def show_difference(data, delta, field_name, label, title, xlabel):
result = (data[field_name].values[delta:]- data[field_name].values[:-delta])/np.absolute(2 * delta)
# Потому что производная.
s = np.std(result)**0.5*3
plt.figure(figsize=(20,6))
plt.plot(data.iloc[:-delta].index,result,'.', label=label)
plt.plot(data.iloc[[delta,-delta]].index,[s, s],'--k',label ='3 sig')
plt.plot(data.iloc[[delta,-delta]].index,[-s, -s],'--k')
plt.xlabel(xlabel)
plt.legend()
plt.title(title)
plt.show()
return result
def split_data_b(data, split_date, index_name):
return data.loc[data.index.get_level_values(index_name) <= split_date].copy(), \
data.loc[data.index.get_level_values(index_name) > split_date].copy()
def show_train_test_data(train, test, data, field_name, xlabel, ylabel, train_label, test_label, title):
plt.figure( figsize=( 15, 7 ))
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.plot(train.index.get_level_values(field_name),train[data.columns[0]], label=train_label)
plt.plot(test.index.get_level_values(field_name),test[data.columns[0]], label=test_label)
plt.title(title)
plt.legend()
plt.show()
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def show_train_test_data_h(test_b, X_test_pred_gb, y_test_b, data, filed, date, xlabel, ylabel, test_pred_label, test_label, data_label):
plt.figure(figsize=( 20, 6))
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.plot(test_b.index[h:],X_test_pred_gb, label=test_pred_label)
plt.plot(test_b.index[h:],y_test_b, label=test_label)
plt.plot(data.loc[date:].index, data.loc[date:][filed], label=data_label)
plt.legend()
plt.show()
def report_gb_mape(y_true, y_pred):
    # MAPE of the gradient-boosting forecast; call this once the Task 4 predictions exist
    er_g = mean_absolute_percentage_error(y_true=y_true, y_pred=y_pred)
    print('gradient boosting error: ', er_g, '%')
###Output
_____no_output_____
###Markdown
Task 1 Read the my_BRENT2019.csv data and convert the series to its first differences.
###Code
data = pd.read_csv('my_BRENT2019.csv', index_col=[0], parse_dates=[0])
data.head()
data.describe()
show_plot_data(data, 'Значение', 'Цена нефти Brent, USA dollar', 't')
d_data = show_difference(data, 1, 'Значение', 'd(BRENT)/dt', 'Цена нефти Brent, USA Dollar', 't')
###Output
_____no_output_____
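###Markdown
For reference, pandas can produce the plain first differences of the same series directly. A sketch, assuming the `Значение` column shown above; note that the helper `show_difference` additionally divides by 2*delta, so its values differ from `diff()` by a constant factor.
###Code
# plain first differences of the Brent price
d_brent = data['Значение'].diff().dropna()
d_brent.plot(figsize=(20, 6), title='First differences of the Brent price')
###Output
_____no_output_____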
###Markdown
Task 2 Add the first differences of the dollar and euro exchange rates against the ruble (you can also add the exchange rates themselves, i.e. their past values).
###Code
data_usd = pd.read_csv('USD_RUB.csv', index_col=[0], parse_dates=[0])
data_usd.head()
data_usd.describe()
show_plot_data(data_usd, 'value', 'USA — RUB', 't')
d_data_usd = show_difference(data_usd, 1, 'value', 'd(value)/dt', 'USD — RUB', 't')
data_eur = pd.read_csv('EUR_RUB.csv', index_col=[0], parse_dates=[0])
data_eur.head()
data_eur.describe()
show_plot_data(data_eur, 'value', 'EUR — RUB', 't')
d_data_eur = show_difference(data_eur, 1, 'value', 'd(value)/dt', 'EUR — RUB', 't')
###Output
_____no_output_____
###Markdown
Task 3 Resample the series to a weekly representation.
###Code
weekly_summary_usd = pd.DataFrame()
weekly_summary_usd['usd'] = data_usd['value'].resample('W').mean()
show_plot_data(weekly_summary_usd, 'usd', 'USA — RUB', 't')
d_weekly_summary_usd = show_difference(weekly_summary_usd, 1, 'usd', 'd(value)/dt', 'USD — RUB', 't')
weekly_summary_eur = pd.DataFrame()
weekly_summary_eur['eur'] = data_eur['value'].resample('W').mean()
show_plot_data(weekly_summary_eur, 'eur', 'EUR — RUB', 't')
d_weekly_summary_eur = show_difference(weekly_summary_eur, 1, 'eur', 'd(value)/dt', 'EUR — RUB', 't')
###Output
_____no_output_____
###Markdown
Task 4 Build a model that predicts the point one step ahead of the current one (h=1).
###Code
train_usd_b, test_usd_b = split_data_b(data_usd, '01-01-2017', 'date')
X_train_usd_b = train_usd_b.iloc[:-1,:]
y_train_usd_b = train_usd_b[data_usd.columns[0]].values[1:]
X_test_usd_b = test_usd_b.iloc[:-1,:]
y_test_usd_b = test_usd_b[data_usd.columns[0]].values[1:]
show_train_test_data(train_usd_b,
test_usd_b,
data_usd,
'date',
'Время',
'Цена USD',
'train data',
'test data',
'Тестовые и тренировочные данные')
h=1
X_train_usd_b = train_usd_b.iloc[:-h,:]
y_train_usd_b = train_usd_b[data_usd.columns[0]].values[h:]
X_test_usd_b = test_usd_b.iloc[:-h,:]
y_test_usd_b = test_usd_b[data_usd.columns[0]].values[h:]
model_gb_usd = GradientBoostingRegressor(max_depth=15, random_state=0, n_estimators=100)
model_gb_usd.fit(X_train_usd_b, y_train_usd_b)
X_test_usd_pred_gb = model_gb_usd.predict(X_test_usd_b)
show_train_test_data_h(test_usd_b,
X_test_usd_pred_gb,
y_test_usd_b,
data_usd,
'value',
'01-01-2017',
'Время',
'Цена USD',
'predict GB data',
'test data',
'исходный ряд')
train_eur_b, test_eur_b = split_data_b(data_eur, '01-01-2017', 'date')
X_train_eur_b = train_eur_b.iloc[:-1,:]
y_train_eur_b = train_eur_b[data_eur.columns[0]].values[1:]
X_test_eur_b = test_eur_b.iloc[:-1,:]
y_test_eur_b = test_eur_b[data_eur.columns[0]].values[1:]
show_train_test_data(train_eur_b,
test_eur_b,
data_eur,
'date',
'Время',
'Цена EUR',
'train data',
'test data',
'Тестовые и тренировочные данные')
h=1
X_train_eur_b = train_eur_b.iloc[:-h,:]
y_train_eur_b = train_eur_b[data_eur.columns[0]].values[h:]
X_test_eur_b = test_eur_b.iloc[:-h,:]
y_test_eur_b = test_eur_b[data_eur.columns[0]].values[h:]
model_gb_eur = GradientBoostingRegressor(max_depth=15, random_state=0, n_estimators=100)
model_gb_eur.fit(X_train_eur_b, y_train_eur_b)
X_test_eur_pred_gb = model_gb_eur.predict(X_test_eur_b)
show_train_test_data_h(test_eur_b,
                       X_test_eur_pred_gb,
                       y_test_eur_b,
                       data_eur,
                       'value',
                       '01-01-2017',
                       'Время',
                       'Цена EUR',
                       'predict GB data',
                       'test data',
                       'исходный ряд')
###Output
_____no_output_____
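###Markdown
A quick check of the forecast quality with the MAPE helper `report_gb_mape` defined in the setup cell (a sketch; the variables come from the Task 4 cells above and the output was not captured here).
###Code
report_gb_mape(y_test_usd_b, X_test_usd_pred_gb)
report_gb_mape(y_test_eur_b, X_test_eur_pred_gb)
###Output
_____no_output_____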
###Markdown
Review Lesson 4: import pygame and initialize pygame, add the game loop, add an exit event, change the title, logo, and background color of the game window. Lesson 5: get something on the screen. Anything. Add the Player. 1) Download the player icon: go to www.flaticon.com and download the PNG image; you can use any image you want, just make sure it is 64 x 64 px; add the .PNG file to your project at the root level. Laying out the game screen: our game screen is 800 wide by 600 high. Counting starts from the top left, which is position 0, 0. If we start in the top left corner and move right, the X value starts to increase; at the top right corner the X value is 800. If we start in the top left corner and move down, the Y value starts to increase; at the bottom, the Y value is 600. This is important, because we want to put our player at a specific position on the screen. Feel free to experiment with coordinates on your own, but for now we will put it on a specific part of the screen.
###Code
# Finally let's write some code
# Player
playerImg = pygame.image.load('player.png')  # the 64 x 64 px PNG downloaded above
playerX = 370
playerY = 480

# blit draws an image onto the screen at the given (x, y) position
def player():
    screen.blit(playerImg, (playerX, playerY))

# inside the game loop
player()
###Output
_____no_output_____
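###Markdown
To actually see the player, the drawing call has to sit inside the game loop built in Lesson 4. A minimal self-contained sketch of how the pieces fit together; the window size comes from the layout above, while the caption and background color are assumptions carried over from the earlier lesson.
###Code
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
pygame.display.set_caption("My Game")

playerImg = pygame.image.load('player.png')
playerX = 370
playerY = 480

def player():
    screen.blit(playerImg, (playerX, playerY))

running = True
while running:
    screen.fill((0, 0, 0))  # clear the frame with a black background
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    player()  # draw the player every frame
    pygame.display.update()
###Output
_____no_output_____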
###Markdown
Course "Data Analysis Algorithms". Lesson 5. Random Forest. Homework for Lesson 5
###Code
import matplotlib.pyplot as plt
import random
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn import model_selection
import numpy as np
def get_bootstrap(data, labels, N):
n_samples = data.shape[0]
bootstrap = []
for i in range(N):
b_data = np.zeros(data.shape)
b_labels = np.zeros(labels.shape)
for j in range(n_samples):
sample_index = random.randint(0, n_samples-1)
b_data[j] = data[sample_index]
b_labels[j] = labels[sample_index]
bootstrap.append((b_data, b_labels))
return bootstrap
def get_subsample(len_sample):
# будем сохранять не сами признаки, а их индексы
sample_indexes = [i for i in range(len_sample)]
len_subsample = int(np.sqrt(len_sample))
subsample = []
random.shuffle(sample_indexes)
for _ in range(len_subsample):
subsample.append(sample_indexes.pop())
return subsample
# Реализуем класс узла
class Node:
def __init__(self, index, t, true_branch, false_branch):
self.index = index # индекс признака, по которому ведется сравнение с порогом в этом узле
self.t = t # значение порога
self.true_branch = true_branch # поддерево, удовлетворяющее условию в узле
self.false_branch = false_branch # поддерево, не удовлетворяющее условию в узле
# И класс терминального узла (листа)
class Leaf:
def __init__(self, data, labels):
self.data = data
self.labels = labels
self.prediction = self.predict()
def predict(self):
# подсчет количества объектов разных классов
classes = {} # сформируем словарь "класс: количество объектов"
for label in self.labels:
if label not in classes:
classes[label] = 0
classes[label] += 1
# найдем класс, количество объектов которого будет максимальным в этом листе и вернем его
prediction = max(classes, key=classes.get)
return prediction
# Расчет критерия Джини
def gini(labels):
# подсчет количества объектов разных классов
classes = {}
for label in labels:
if label not in classes:
classes[label] = 0
classes[label] += 1
# расчет критерия
impurity = 1
for label in classes:
p = classes[label] / len(labels)
impurity -= p ** 2
return impurity
# Расчет качества
def quality(left_labels, right_labels, current_gini):
# доля выбоки, ушедшая в левое поддерево
p = float(left_labels.shape[0]) / (left_labels.shape[0] + right_labels.shape[0])
return current_gini - p * gini(left_labels) - (1 - p) * gini(right_labels)
# Разбиение датасета в узле
def split(data, labels, index, t):
left = np.where(data[:, index] <= t)
right = np.where(data[:, index] > t)
true_data = data[left]
false_data = data[right]
true_labels = labels[left]
false_labels = labels[right]
return true_data, false_data, true_labels, false_labels
# Нахождение наилучшего разбиения
def find_best_split(data, labels):
# обозначим минимальное количество объектов в узле
min_leaf = 1
current_gini = gini(labels)
best_quality = 0
best_t = None
best_index = None
n_features = data.shape[1]
# выбор индекса из подвыборки длиной sqrt(n_features)
subsample = get_subsample(n_features)
for index in subsample:
# будем проверять только уникальные значения признака, исключая повторения
t_values = np.unique([row[index] for row in data])
for t in t_values:
true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
# пропускаем разбиения, в которых в узле остается менее 5 объектов
if len(true_data) < min_leaf or len(false_data) < min_leaf:
continue
current_quality = quality(true_labels, false_labels, current_gini)
# выбираем порог, на котором получается максимальный прирост качества
if current_quality > best_quality:
best_quality, best_t, best_index = current_quality, t, index
return best_quality, best_t, best_index
# Построение дерева с помощью рекурсивной функции
def build_tree(data, labels):
quality, t, index = find_best_split(data, labels)
# Базовый случай - прекращаем рекурсию, когда нет прироста в качества
if quality == 0:
return Leaf(data, labels)
true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
# Рекурсивно строим два поддерева
true_branch = build_tree(true_data, true_labels)
false_branch = build_tree(false_data, false_labels)
# Возвращаем класс узла со всеми поддеревьями, то есть целого дерева
return Node(index, t, true_branch, false_branch)
def random_forest(data, labels, n_trees):
forest = []
bootstrap = get_bootstrap(data, labels, n_trees)
for b_data, b_labels in bootstrap:
forest.append(build_tree(b_data, b_labels))
return forest
# Функция классификации отдельного объекта
def classify_object(obj, node):
# Останавливаем рекурсию, если достигли листа
if isinstance(node, Leaf):
answer = node.prediction
return answer
if obj[node.index] <= node.t:
return classify_object(obj, node.true_branch)
else:
return classify_object(obj, node.false_branch)
# функция формирования предсказания по выборке на одном дереве
def predict(data, tree):
classes = []
for obj in data:
prediction = classify_object(obj, tree)
classes.append(prediction)
return classes
# предсказание голосованием деревьев
def tree_vote(forest, data):
# добавим предсказания всех деревьев в список
predictions = []
for tree in forest:
predictions.append(predict(data, tree))
# сформируем список с предсказаниями для каждого объекта
predictions_per_object = list(zip(*predictions))
# выберем в качестве итогового предсказания для каждого объекта то,
# за которое проголосовало большинство деревьев
voted_predictions = []
for obj in predictions_per_object:
voted_predictions.append(max(set(obj), key=obj.count))
return voted_predictions
# Введем функцию подсчета точности как доли правильных ответов
def accuracy_metric(actual, predicted):
correct = 0
for i in range(len(actual)):
if actual[i] == predicted[i]:
correct += 1
return correct / float(len(actual)) * 100.0
# Визуализируем дерево на графике
def get_meshgrid(data, step=.05, border=1.2):
x_min, x_max = data[:, 0].min() - border, data[:, 0].max() + border
y_min, y_max = data[:, 1].min() - border, data[:, 1].max() + border
return np.meshgrid(np.arange(x_min, x_max, step), np.arange(y_min, y_max, step))
# Напечатаем ход нашего дерева
def print_tree(node, spacing=""):
# Если лист, то выводим его прогноз
if isinstance(node, Leaf):
print(spacing + "Прогноз:", node.prediction)
return
# Выведем значение индекса и порога на этом узле
print(spacing + 'Индекс', str(node.index))
print(spacing + 'Порог', str(node.t))
# Рекурсионный вызов функции на положительном поддереве
print (spacing + '--> True:')
print_tree(node.true_branch, spacing + " ")
# Рекурсионный вызов функции на положительном поддереве
print (spacing + '--> False:')
print_tree(node.false_branch, spacing + " ")
# Печатаем лес
def print_forest(values):
for i in range(len(values)):
print (f'Дерево {i}')
print_tree(values[i], " ")
def build_forest(n_trees, train_data, train_labels, test_data, test_labels):
forest = random_forest(train_data, train_labels, n_trees)
# print_forest(forest)
# print()
train_answers = tree_vote(forest, train_data)
test_answers = tree_vote(forest, test_data)
train_accuracy = accuracy_metric(train_labels, train_answers)
print(f'Точность случайного леса из {n_trees} деревьев на обучающей выборке: {train_accuracy:.3f}')
test_accuracy = accuracy_metric(test_labels, test_answers)
print(f'Точность случайного леса из {n_trees} деревьев на тестовой выборке: {test_accuracy:.3f}')
plt.figure(figsize = (16, 7))
plt.subplot(1,2,1)
xx, yy = get_meshgrid(train_data)
mesh_predictions = np.array(tree_vote(forest, np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, mesh_predictions, cmap = light_colors)
plt.scatter(train_data[:, 0], train_data[:, 1], c = train_labels, cmap = colors)
plt.title(f'Train accuracy={train_accuracy:.2f}')
plt.subplot(1,2,2)
plt.pcolormesh(xx, yy, mesh_predictions, cmap = light_colors)
plt.scatter(test_data[:, 0], test_data[:, 1], c = test_labels, cmap = colors)
plt.title(f'Test accuracy={test_accuracy:.2f}')
# сгенерируем данные, представляющие собой 500 объектов с 5-ю признаками
classification_data, classification_labels = datasets.make_classification(n_samples=500,
n_features = 5, n_informative = 5,
n_classes = 2, n_redundant=0,
n_clusters_per_class=1, random_state=23)
# визуализируем сгенерированные данные
colors = ListedColormap(['red', 'blue'])
light_colors = ListedColormap(['lightcoral', 'lightblue'])
plt.figure(figsize=(8,8))
plt.scatter(list(map(lambda x: x[0], classification_data)), list(map(lambda x: x[1], classification_data)),
c=classification_labels, cmap=colors)
random.seed(42)
# Разобьем выборку на обучающую и тестовую
train_data, test_data, train_labels, test_labels = model_selection.train_test_split(classification_data,
classification_labels,
test_size = 0.3,
random_state = 1)
n_trees = 1
my_forest_1 = random_forest(train_data, train_labels, n_trees)
# Получим ответы для обучающей выборки
train_answers = tree_vote(my_forest_1, train_data)
# И получим ответы для тестовой выборки
test_answers = tree_vote(my_forest_1, test_data)
# Точность на обучающей выборке
train_accuracy = accuracy_metric(train_labels, train_answers)
print(f'Точность случайного леса из {n_trees} деревьев на обучающей выборке: {train_accuracy:.3f}')
# Точность на тестовой выборке
test_accuracy = accuracy_metric(test_labels, test_answers)
print(f'Точность случайного леса из {n_trees} деревьев на тестовой выборке: {test_accuracy:.3f}')
n_trees = 50
my_forest_50 = random_forest(train_data, train_labels, n_trees)
# Получим ответы для обучающей выборки
train_answers = tree_vote(my_forest_50, train_data)
# И получим ответы для тестовой выборки
test_answers = tree_vote(my_forest_50, test_data)
# Точность на обучающей выборке
train_accuracy = accuracy_metric(train_labels, train_answers)
print(f'Точность случайного леса из {n_trees} деревьев на обучающей выборке: {train_accuracy:.3f}')
# Точность на тестовой выборке
test_accuracy = accuracy_metric(test_labels, test_answers)
print(f'Точность случайного леса из {n_trees} деревьев на тестовой выборке: {test_accuracy:.3f}')
###Output
Точность случайного леса из 50 деревьев на тестовой выборке: 95.333
###Markdown
Task 1 Use sklearn.make_classification to build a dataset of 100 objects with two features, train random forests of 1, 3, 10 and 50 trees, and visualize their separating hyperplanes on plots (similar to the tree visualization from the previous lesson; you only need to replace the predict call with tree_vote). Draw conclusions about the resulting complexity of the hyperplane and about underfitting or overfitting of the random forest depending on the number of trees in it. Generate the data.
###Code
classification_data, classification_labels = datasets.make_classification(n_samples=100,
n_features = 2, n_informative = 2,
n_classes = 2, n_redundant=0,
n_clusters_per_class=1, random_state=23)
random.seed(42)
###Output
_____no_output_____
###Markdown
Split the dataset into two parts.
###Code
train_data, test_data, train_labels, test_labels = model_selection.train_test_split(classification_data,
classification_labels,
test_size = 0.3,
random_state = 1)
###Output
_____no_output_____
###Markdown
__Random forest with one tree__
###Code
build_forest(1, train_data, train_labels, test_data, test_labels)
###Output
Точность случайного леса из 1 деревьев на обучающей выборке: 97.143
Точность случайного леса из 1 деревьев на тестовой выборке: 80.000
###Markdown
The hyperplane is simple, and the model overfits. __Random forest with three trees__
###Code
build_forest(3, train_data, train_labels, test_data, test_labels)
###Output
Точность случайного леса из 3 деревьев на обучающей выборке: 97.143
Точность случайного леса из 3 деревьев на тестовой выборке: 80.000
###Markdown
The hyperplane is relatively simple, and the model overfits. __Random forest with ten trees__
###Code
build_forest(10, train_data, train_labels, test_data, test_labels)
###Output
Точность случайного леса из 10 деревьев на обучающей выборке: 100.000
Точность случайного леса из 10 деревьев на тестовой выборке: 86.667
###Markdown
The hyperplane has become more complex and the model still overfits, while the accuracy of the random forest on the training and test sets has reached its ceiling. __Random forest with fifty trees__
###Code
build_forest(50, train_data, train_labels, test_data, test_labels)
###Output
Точность случайного леса из 50 деревьев на обучающей выборке: 100.000
Точность случайного леса из 50 деревьев на тестовой выборке: 86.667
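###Markdown
For reference, the same train/test split can be fed to scikit-learn's built-in random forest. A sketch for comparison with the hand-written implementation; hyperparameters are defaults apart from the number of trees, and the scores were not run here.
###Code
from sklearn.ensemble import RandomForestClassifier

sk_forest = RandomForestClassifier(n_estimators=50, random_state=1)
sk_forest.fit(train_data, train_labels)
print(f'sklearn train accuracy: {sk_forest.score(train_data, train_labels) * 100:.3f}')
print(f'sklearn test accuracy: {sk_forest.score(test_data, test_labels) * 100:.3f}')
###Output
_____no_output_____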
|
lectures/not_yet_booked/adv0_chromosomes.ipynb | ###Markdown
Measuring chromatin fluorescence. Goal: we want to quantify the amount of a particular protein (red fluorescence) localized on the centromeres (green) versus the rest of the chromosome (blue). The main challenge here is the uneven illumination, which makes isolating the chromosomes a struggle.
###Code
import numpy as np
from matplotlib import cm, pyplot as plt
import skdemo
plt.rcParams['image.cmap'] = 'cubehelix'
plt.rcParams['image.interpolation'] = 'none'
from skimage import io
image = io.imread('../images/chromosomes.tif')
skdemo.imshow_with_histogram(image);
###Output
_____no_output_____
###Markdown
Let's separate the channels so we can work on each individually.
###Code
protein, centromeres, chromosomes = image.transpose((2, 0, 1))
###Output
_____no_output_____
###Markdown
Getting the centromeres is easy because the signal is so clean:
###Code
from skimage.filter import threshold_otsu
centromeres_binary = centromeres > threshold_otsu(centromeres)
skdemo.imshow_all(centromeres, centromeres_binary)
###Output
_____no_output_____
###Markdown
But getting the chromosomes is not so easy:
###Code
chromosomes_binary = chromosomes > threshold_otsu(chromosomes)
skdemo.imshow_all(chromosomes, chromosomes_binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Let's try using an adaptive threshold:
###Code
from skimage.filter import threshold_adaptive
chromosomes_adapt = threshold_adaptive(chromosomes, block_size=51)
# Question: how did I choose this block size?
skdemo.imshow_all(chromosomes, chromosomes_adapt)
###Output
_____no_output_____
###Markdown
Not only is the uneven illumination a problem, but there seem to be some artifacts due to the illumination pattern! **Exercise:** Can you think of a way to fix this? (Hint: in addition to everything you've learned so far, check out [`skimage.morphology.remove_small_objects`](http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.remove_small_objects)) Now that we have the centromeres and the chromosomes, it's time to do the science: get the distribution of intensities in the red channel using both centromere and chromosome locations.
###Code
# Replace "None" below with the right expressions!
centromere_intensities = None
chromosome_intensities = None
all_intensities = np.concatenate((centromere_intensities,
chromosome_intensities))
minint = np.min(all_intensities)
maxint = np.max(all_intensities)
bins = np.linspace(minint, maxint, 100)
plt.hist(centromere_intensities, bins=bins, color='blue',
alpha=0.5, label='centromeres')
plt.hist(chromosome_intensities, bins=bins, color='orange',
alpha=0.5, label='chromosomes')
plt.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
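###Markdown
One possible way to tackle the exercise above — a hedged sketch rather than the official solution: clean up the adaptive threshold with `remove_small_objects` (the `min_size` of 256 below is an arbitrary guess) and then index the red (protein) channel with the resulting masks to fill in the two "None" placeholders.
###Code
from skimage import morphology

# drop the small speckles left behind by the adaptive threshold
chromosomes_clean = morphology.remove_small_objects(chromosomes_adapt, min_size=256)

# red-channel intensities on the centromeres and on the rest of the chromosomes
centromere_intensities = protein[centromeres_binary]
chromosome_intensities = protein[chromosomes_clean & ~centromeres_binary]
###Output
_____no_output_____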
###Markdown
---
###Code
%reload_ext load_style
%load_style ../themes/tutorial.css
###Output
_____no_output_____ |
demos/montage_processing_serialem.ipynb | ###Markdown
PySerialEM - montaginghttps://github.com/instamatic-dev/pyserialemThis notebook shows how to process a grid montage acquired using `SerialEM`. The data for this demo were collected on a zeolite sample (2020-02-12), using a JEOL JEM-1400 @ 120 kV in combination with a TVIPS F-416 camera. The data are available from zenodo: https://doi.org/10.5281/zenodo.3923718These data were chosen, because the stitching from SerialEM was particularly bad. We will show an example of how `PySerialEM` can be used to try to get a better montage.For this demo to work, change the `work` directory below to point at the right location.
###Code
from pyserialem import Montage
import numpy as np
from pathlib import Path
np.set_printoptions(suppress=True)
# work directory
work = Path(r"C:\s\2020-02-12\serialem_montage")
work
###Output
_____no_output_____
###Markdown
Setting up the montageLoad the `gm.mrc` file and the associated images. For SerialEM data, the gridshape must be specified, because it cannot be obtained from the data or `.mdoc` direction. There are also several parameters to tune the orientation of the images to match them with the input of the stagematrix (if needed). First we must get the coordinate settings to match those of SerialEM. These variables ap
###Code
m = Montage.from_serialem_mrc(
work / 'gm.mrc',
gridshape=(5,5),
direction='updown',
zigzag=True,
flip=False,
image_rot90 = 3,
image_flipud = False,
image_fliplr = True,
)
m.gridspec
###Output
_____no_output_____
###Markdown
First, we can check what the data actually look like. To do so, we can simply `stitch` and `plot` the data using a `binning=4` to conserve a bit of memory. This naively plots the data at the expected positions. Although the stitching is not that great, it's enough to get a feeling for the data.Note that SerialEM includes the pixel coordinates in the `.mdoc` file, so it is not necessary to calculate these again. Instead, the `PieceCoordinates` are mapped to `m.coords`.
###Code
# Use `optimized = False` to prevent using the aligned piece coordinates
m.stitch(binning=4, optimized=False)
m.plot()
###Output
_____no_output_____
###Markdown
SerialEM has also already calculated the aligned image coordinates (`AlignedPieceCoordsVS`/`AlignedPieceCoords`). These can be accessed via the `.optimized_coords` attribute. To plot, you can do the following:
###Code
# optimized = True is the default, so it can be left out
m.stitch(binning=4, optimized=True)
m.plot()
montage_serialem = m.stitched # store reference for later
###Output
_____no_output_____
###Markdown
The stitching from SerialEM is particularly bad, so we can try to optimize it using the algorithm in `pyserialem`.First, we must ensure that we entered the gridspec correctly. If the layout of the tiles below does not look right (i.e. similar to above), go back to loading the `Montage` and fiddle with the rotation of the images. The operation below just places the tiles at the positions calculated by `pyserialem`.
###Code
m.calculate_montage_coords()
m.stitch(binning=4, optimized=False)
m.plot()
###Output
_____no_output_____
###Markdown
It is still possible to try to get better stitching using the algorithm in `pyserialem`: 1. Better estimate the difference vectors between each tile using cross correlation 2. Optimize the coordinates of the difference vectors using least-squares minimizationThis approach is based on *Globally optimal stitching of tiled 3D microscopic image acquisitions* by Preibish et al., Bioinformatics 25 (2009), 1463–1465 (https://doi.org/10.1093/bioinformatics/btp184).Some metrics, such as the obtained shifts and FFT scores are plotted to evaluate the stitching.
###Code
# Use cross correlation to get difference vectors
m.calculate_difference_vectors(
threshold=0.08,
# method='skimage',
plot=False
)
# plot the fft_scores
m.plot_fft_scores()
# plot the pixel shifts
m.plot_shifts()
# get coords optimized using cross correlation
m.optimize_montage_coords(plot=True)
# stitch image, use binning 4 for speed-up and memory conservation
m.stitch(binning=4)
# plot the stitched image
m.plot()
montage_pyserialem = m.stitched # store reference for later
###Output
_____no_output_____
###Markdown
It's not a perfect stitching, but much better than what SerialEM produced! I believe the reason is that SerialEM does some adjustments to the stageposition as it is moving. Below is an example of the same grid map collected with [instamatic](http://github.com/instamatic-dev/instamatic), using the same coordinates and imaging conditions, reconstructed with the same algorithm.
###Code
import tifffile
import matplotlib.pyplot as plt
with tifffile.TiffFile(work / 'stitched_instamatic.tiff') as f:
montage_instamatic = f.asarray()
fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(20,10))
ax0.imshow(montage_serialem)
ax0.set_title('Data: SerialEM\n'
'Stitching: SerialEM')
ax1.imshow(montage_pyserialem)
ax1.set_title('Data: SerialEM\n'
'Stitching: PySerialEM')
ax2.imshow(montage_instamatic)
ax2.set_title('Data: Instamatic\n'
'Stitching: PySerialEM');
###Output
_____no_output_____
###Markdown
When the image has been stitched (with or without optimization), we can look for the positions of the grid squares/squircles. First, we should tell `pyserialem` about the imaging conditions by setting the stagematrix to relate the pixelpositions back to stage coordinates. The easiest way to do it is to pass the `StageToCameraMatrix` directly. It can be found in `SerialEMcalibrations.txt`. Look for the last number, which gives the magnification.(They can also be set directly via `.set_stagematrix` and `.set_pixelsize`)
###Code
StageToCameraMatrix = "StageToCameraMatrix 10 0 8.797544 0.052175 0.239726 8.460119 0.741238 100"
m.set_stagematrix_from_serialem_calib(StageToCameraMatrix)
m.stagematrix
###Output
_____no_output_____
###Markdown
This also sets the pixelsize.
###Code
m.pixelsize
###Output
_____no_output_____
###Markdown
To find the holes, call the method `.find_holes`. The grid squares are identified as objects roughly sized `diameter` with a tolerance of 10%. The median as well as 5/95 percentiles are printed to evaluate the hole size distribution. By default the `otsu` method is used to define the threshold, but the threshold can be changed if the segmentation looks poor.
###Code
stagecoords, imagecoords = m.find_holes(
plot=True,
tolerance=0.2)
###Output
_____no_output_____
###Markdown
IF a `.nav` file was saved, the stage coordinates can be added and read back into `SerialEM`.
###Code
from pyserialem import read_nav_file, write_nav_file
nav = read_nav_file(work / "nav.nav")
map_item = nav[0]
items = map_item.add_marker_group(coords=stagecoords/1000, kind='stage')
write_nav_file("nav_new.nav", map_item, *items)
###Output
_____no_output_____
###Markdown
It is possible to optimize the stage coordinates for more efficient navigation. In this example, the total stage movement can be reduced by about 75%, which will save a lot of time. The function uses the _two-opt_ algorithm for finding the shortest path: https://en.wikipedia.org/wiki/2-opt.
###Code
from pyserialem.navigation import sort_nav_items_by_shortest_path
stagecoords = sort_nav_items_by_shortest_path(
stagecoords,
plot=True
)
###Output
_____no_output_____
###Markdown
We can re-run the command (or set the `threshold` to something like `0.01`) to try to get a better path. In this case it's possible to improve it a little bit more.
###Code
stagecoords = sort_nav_items_by_shortest_path(
stagecoords,
threshold=0.01,
plot=True
)
###Output
_____no_output_____
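###Markdown
For intuition only, here is a tiny standalone two-opt sketch in plain NumPy. It is *not* the pyserialem implementation — just the core idea of repeatedly reversing path segments while that shortens the total route. It expects an (N, 2) array of coordinates, and its `threshold` argument only loosely mirrors the one used above.
###Code
def path_length(coords, order):
    route = coords[order]
    return np.linalg.norm(np.diff(route, axis=0), axis=1).sum()

def two_opt(coords, threshold=0.05):
    coords = np.asarray(coords)
    order = np.arange(len(coords))
    best = path_length(coords, order)
    improvement = 1.0
    while improvement > threshold:
        previous = best
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                # reverse the segment between i and j and keep it if the route gets shorter
                candidate = np.concatenate([order[:i], order[i:j + 1][::-1], order[j + 1:]])
                length = path_length(coords, candidate)
                if length < best:
                    order, best = candidate, length
        improvement = 1 - best / previous
    return order

# e.g. reordered = stagecoords[two_opt(stagecoords)]  (assuming stagecoords is an (N, 2) NumPy array)
###Output
_____no_output_____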
###Markdown
PySerialEM - montaginghttps://github.com/stefsmeets/pyserialemThis notebook shows how to process a grid montage acquired using `SerialEM`. The data for this demo were collected on a zeolite sample (2020-02-12), using a JEOL JEM-1400 @ 120 kV in combination with a TVIPS F-416 camera. The data are available from zenodo: https://doi.org/10.5281/zenodo.3923718These data were chosen, because the stitching from SerialEM was particularly bad. We will show an example of how `PySerialEM` can be used to try to get a better montage.For this demo to work, change the `work` directory below to point at the right location.
###Code
from pyserialem import Montage
import numpy as np
from pathlib import Path
np.set_printoptions(suppress=True)
# work directory
work = Path(r"C:\s\2020-02-12\serialem_montage")
work
###Output
_____no_output_____
###Markdown
Setting up the montageLoad the `gm.mrc` file and the associated images. For SerialEM data, the gridshape must be specified, because it cannot be obtained from the data or `.mdoc` direction. There are also several parameters to tune the orientation of the images to match them with the input of the stagematrix (if needed). First we must get the coordinate settings to match those of SerialEM. These variables ap
###Code
m = Montage.from_serialem_mrc(
work / 'gm.mrc',
gridshape=(5,5),
direction='updown',
zigzag=True,
flip=False,
image_rot90 = 3,
image_flipud = False,
image_fliplr = True,
)
m.gridspec
###Output
_____no_output_____
###Markdown
First, we can check what the data actually look like. To do so, we can simply `stitch` and `plot` the data using a `binning=4` to conserve a bit of memory. This naively plots the data at the expected positions. Although the stitching is not that great, it's enough to get a feeling for the data.Note that SerialEM includes the pixel coordinates in the `.mdoc` file, so it is not necessary to calculate these again. Instead, the `PieceCoordinates` are mapped to `m.coords`.
###Code
# Use `optimized = False` to prevent using the aligned piece coordinates
m.stitch(binning=4, optimized=False)
m.plot()
###Output
_____no_output_____
###Markdown
SerialEM has also already calculated the aligned image coordinates (`AlignedPieceCoordsVS`/`AlignedPieceCoords`). These can be accessed via the `.optimized_coords` attribute. To plot, you can do the following:
###Code
# optimized = True is the default, so it can be left out
m.stitch(binning=4, optimized=True)
m.plot()
montage_serialem = m.stitched # store reference for later
###Output
_____no_output_____
###Markdown
The stitching from SerialEM is particularly bad, so we can try to optimize it using the algorithm in `pyserialem`.First, we must ensure that we entered the gridspec correctly. If the layout of the tiles below does not look right (i.e. similar to above), go back to loading the `Montage` and fiddle with the rotation of the images. The operation below just places the tiles at the positions calculated by `pyserialem`.
###Code
m.calculate_montage_coords()
m.stitch(binning=4, optimized=False)
m.plot()
###Output
_____no_output_____
###Markdown
It is still possible to try to get better stitching using the algorithm in `pyserialem`: 1. Better estimate the difference vectors between each tile using cross correlation 2. Optimize the coordinates of the difference vectors using least-squares minimizationThis approach is based on *Globally optimal stitching of tiled 3D microscopic image acquisitions* by Preibish et al., Bioinformatics 25 (2009), 1463–1465 (https://doi.org/10.1093/bioinformatics/btp184).Some metrics, such as the obtained shifts and FFT scores are plotted to evaluate the stitching.
###Code
# Use cross correlation to get difference vectors
m.calculate_difference_vectors(
threshold=0.08,
# method='skimage',
plot=False
)
# plot the fft_scores
m.plot_fft_scores()
# plot the pixel shifts
m.plot_shifts()
# get coords optimized using cross correlation
m.optimize_montage_coords(plot=True)
# stitch image, use binning 4 for speed-up and memory conservation
m.stitch(binning=4)
# plot the stitched image
m.plot()
montage_pyserialem = m.stitched # store reference for later
###Output
_____no_output_____
###Markdown
It's not a perfect stitching, but much better than what SerialEM produced! I believe the reason is that SerialEM does some adjustments to the stageposition as it is moving. Below is an example of the same grid map collected with [instamatic](http://github.com/stefsmeets/instamatic), using the same coordinates and imaging conditions, reconstructed with the same algorithm.
###Code
import tifffile
import matplotlib.pyplot as plt
with tifffile.TiffFile(work / 'stitched_instamatic.tiff') as f:
montage_instamatic = f.asarray()
fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(20,10))
ax0.imshow(montage_serialem)
ax0.set_title('Data: SerialEM\n'
'Stitching: SerialEM')
ax1.imshow(montage_pyserialem)
ax1.set_title('Data: SerialEM\n'
'Stitching: PySerialEM')
ax2.imshow(montage_instamatic)
ax2.set_title('Data: Instamatic\n'
'Stitching: PySerialEM');
###Output
_____no_output_____
###Markdown
When the image has been stitched (with or without optimization), we can look for the positions of the grid squares/squircles. First, we should tell `pyserialem` about the imaging conditions by setting the stagematrix to relate the pixelpositions back to stage coordinates. The easiest way to do it is to pass the `StageToCameraMatrix` directly. It can be found in `SerialEMcalibrations.txt`. Look for the last number, which gives the magnification.(They can also be set directly via `.set_stagematrix` and `.set_pixelsize`)
###Code
StageToCameraMatrix = "StageToCameraMatrix 10 0 8.797544 0.052175 0.239726 8.460119 0.741238 100"
m.set_stagematrix_from_serialem_calib(StageToCameraMatrix)
m.stagematrix
###Output
_____no_output_____
###Markdown
This also sets the pixelsize.
###Code
m.pixelsize
###Output
_____no_output_____
###Markdown
To find the holes, call the method `.find_holes`. The grid squares are identified as objects roughly sized `diameter` with a tolerance of 10%. The median as well as 5/95 percentiles are printed to evaluate the hole size distribution. By default the `otsu` method is used to define the threshold, but the threshold can be changed if the segmentation looks poor.
###Code
stagecoords, imagecoords = m.find_holes(
plot=True,
tolerance=0.2)
###Output
_____no_output_____
###Markdown
IF a `.nav` file was saved, the stage coordinates can be added and read back into `SerialEM`.
###Code
from pyserialem import read_nav_file, write_nav_file
nav = read_nav_file(work / "nav.nav")
map_item = nav[0]
items = map_item.add_marker_group(coords=stagecoords/1000, kind='stage')
write_nav_file("nav_new.nav", map_item, *items)
###Output
_____no_output_____
###Markdown
It is possible to optimize the stage coordinates for more efficient navigation. In this example, the total stage movement can be reduced by about 75%, which will save a lot of time. The function uses the _two-opt_ algorithm for finding the shortest path: https://en.wikipedia.org/wiki/2-opt.
###Code
from pyserialem.navigation import sort_nav_items_by_shortest_path
stagecoords = sort_nav_items_by_shortest_path(
stagecoords,
plot=True
)
###Output
_____no_output_____
###Markdown
We can re-run the command (or set the `threshold` to something like `0.01`) to try to get a better path. In this case it's possible to improve it a little bit more.
###Code
stagecoords = sort_nav_items_by_shortest_path(
stagecoords,
threshold=0.01,
plot=True
)
###Output
_____no_output_____ |
examples/03_catalog.ipynb | ###Markdown
Catalog Search
###Code
%load_ext autoreload
%autoreload 2
import up42
up42.authenticate(cfg_file="config.json")
catalog = up42.initialize_catalog()
catalog
###Output
_____no_output_____
###Markdown
Search available scenes within aoi
###Code
#aoi = up42.read_vector_file("data/aoi_washington.geojson",
# as_dataframe=False)
aoi = up42.get_example_aoi(location="Berlin", as_dataframe=True)
aoi
search_paramaters = catalog.construct_parameters(geometry=aoi,
start_date="2014-01-01",
end_date="2020-12-31",
sensors=["pleiades"],
max_cloudcover=20,
sortby="cloudCoverage",
limit=4)
search_results = catalog.search(search_paramaters=search_paramaters)
display(search_results.head())
catalog.plot_coverage(scenes=search_results,
aoi=aoi,
legend_column="scene_id")
###Output
_____no_output_____
###Markdown
Quicklooks
###Code
catalog.download_quicklooks(image_ids=search_results.id.to_list(), provider="sobloo")
catalog.plot_quicklooks(figsize=(20,20))
###Output
_____no_output_____ |
module4/NB_LS_DS9_224.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
If you have matplotlib version 3.1.1 then seaborn heatmaps will be cut offBecause of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This code checks your matplotlib version:
###Code
import matplotlib
print(matplotlib.__version__)
###Output
_____no_output_____
###Markdown
If you have version 3.1.1, you can downgrade if you want, but you don't have to, I just want you to be aware of the issue. Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn's [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.htmlconfusion-matrix) function just returns a matrix of numbers, which is hard to read.Scikit-learn docs have an example [plot_confusion_matrix](https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) function. The output looks good, but the code is long and hard to understand. It's written just with numpy and matplotlib.We can write our own function using pandas and seaborn. The code will be shorter and easier to understand. Let's write the function iteratively.
###Code
###Output
_____no_output_____
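###Markdown
One possible version of that helper — a sketch built only from pandas, seaborn and scikit-learn's `confusion_matrix` (the function developed live in the lesson may differ in details).
###Code
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels

def plot_confusion_matrix(y_true, y_pred):
    labels = unique_labels(y_true)
    columns = [f'Predicted {label}' for label in labels]
    index = [f'Actual {label}' for label in labels]
    table = pd.DataFrame(confusion_matrix(y_true, y_pred),
                         columns=columns, index=index)
    return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')

plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____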
###Markdown
How many correct predictions were made?
###Code
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
###Output
_____no_output_____
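###Markdown
A sketch answering the three questions above straight from the raw confusion matrix.
###Code
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_val, y_pred)
correct_predictions = cm.trace()  # sum of the diagonal
total_predictions = cm.sum()
accuracy = correct_predictions / total_predictions
print(correct_predictions, total_predictions, accuracy)
###Output
_____no_output_____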
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
###Output
_____no_output_____
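###Markdown
A minimal sketch for the cell above, using the classification report linked in the user guide.
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____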
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
###Output
_____no_output_____
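###Markdown
A sketch answering the five questions above from the confusion matrix. It assumes scikit-learn's default sorted label order, so "non functional" is the last row and column.
###Code
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_val, y_pred)  # rows = actual, columns = predicted

correct_non_functional = cm[-1, -1]
predicted_non_functional = cm[:, -1].sum()
actual_non_functional = cm[-1, :].sum()

precision_non_functional = correct_non_functional / predicted_non_functional
recall_non_functional = correct_non_functional / actual_non_functional
print(precision_non_functional, recall_non_functional)
###Output
_____no_output_____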
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
###Output
_____no_output_____
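###Markdown
A sketch for the two follow-along cells above, reusing the `plot_confusion_matrix` helper sketched earlier; since the target is now boolean, summing the predictions counts the `True`s.
###Code
plot_confusion_matrix(y_val, y_pred)

# y_pred is boolean here, so the sum is the number of "True" (needs attention) predictions
y_pred.sum()
###Output
_____no_output_____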
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
###Output
_____no_output_____
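###Markdown
A sketch for the cell above. The positive class is `True` (needs repair), i.e. the second column of `predict_proba`; the ROC cells near the end of this notebook reuse this same `y_pred_proba` variable.
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]  # probability of the positive class (True)

plt.hist(y_pred_proba, bins=50)
plt.title('Distribution of predicted probabilities');
###Output
_____no_output_____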
###Markdown
Change the threshold
###Code
###Output
_____no_output_____
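###Markdown
Raising the decision threshold above the default 0.5 shrinks the number of positive predictions. The 0.92 below is purely illustrative — in practice you would tune it until the count matches the budget.
###Code
threshold = 0.92  # hypothetical value, not from the lesson
y_pred_at_threshold = y_pred_proba > threshold
print(f'Positive predictions at threshold {threshold}: {y_pred_at_threshold.sum()}')
###Output
_____no_output_____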
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
###Output
_____no_output_____
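###Markdown
A sketch: rank the validation waterpumps by predicted probability and keep the 2,000 highest.
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000.head()
###Output
_____no_output_____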
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
###Output
_____no_output_____
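###Markdown
A one-liner for the spot check above, reusing `top2000` (the random_state is arbitrary).
###Code
top2000.sample(n=50, random_state=42)
###Output
_____no_output_____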
###Markdown
So how many of our recommendations were relevant? ...
###Code
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
###Output
_____no_output_____
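###Markdown
Putting the last two questions together — a sketch of precision@k with k=2,000, reusing `top2000`; the exact numbers will vary with the split and model.
###Code
relevant_recommendations = top2000['y_val'].sum()  # True means the pump really needs attention
precision_at_k = relevant_recommendations / len(top2000)
print(f'Relevant recommendations: {relevant_recommendations} of {len(top2000)}')
print(f'Precision@2000: {precision_at_k:.3f}')
###Output
_____no_output_____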
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____ |
code/0.about2cjc.ipynb | ###Markdown
****** Computational Journalism and Communication — Course Introduction ****** Wang Chengjun [email protected] Computational Communication Network http://computational-communication.com ![](images/cover.jpg) http://computational-communication.com
###Code
import mistune
mistune.__version__
mistune.markdown('\n <img src="link" align="right" width=100> \n', escape=False)
###Output
_____no_output_____
###Markdown
Fudan University School of Journalism, New Media master's course ****** "Computational Journalism and Communication" Course Introduction ****** Wang Chengjun [email protected] Computational Communication Network http://computational-communication.com http://computational-communication.com Contents - Schedule - Course materials - Teaching plan - Pre-class preparation Schedule - 36 class hours, two credits | Date | Morning | Afternoon | Evening | Hours || -------------|:-------------:|:-------------:|:-------------:|-----:|| 2016-05-13 Fri | 9:00-12:00 | 15:30-17:30 | homework & Q&A | 5 hrs | 2016-05-14 Sat | 9:00-12:00 | 14:00-17:00 | 18:00-21:00 | 9 hrs || 2016-05-15 Sun | 9:00-12:00 | 14:00-17:00 | homework & Q&A | 6 hrs || 2016-05-19 Thu | | | 18:00-21:00 | 3 hrs || 2016-05-20 Fri | 9:00-12:00 | 14:00-17:00 | homework & Q&A | 6 hrs || 2016-05-21 Sat | 9:00-12:00 | 14:00-17:00 | 18:00-21:00 | 9 hrs || 2016-05-22 Sun | 9:00-12:00 | homework & Q&A | | 3 hrs |
###Code
5+9+6+3+6+9+3
###Output
_____no_output_____
|
_notebooks/2020-03-17-data-cleaning-checklist.ipynb | ###Markdown
Data cleaning checklist- hide: false- toc: true- comments: true- categories: [pandas] A step-by-step recipe for getting to know a new dataset.
###Code
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
df = pd.read_csv('data/competitor_prices.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
A first quick look
###Code
def inspect(df, nrows=2):
print('({:,}, {})'.format(*df.shape))
display(df.head(nrows))
inspect(df)
df.info()
df.describe()
df.Competitor_id.describe()
###Output
_____no_output_____
###Markdown
Check data integrity Missing values TODO: adopt best practices for missing-data inspection from https://github.com/ResidentMario/missingno (a hedged sketch is added at the end of this notebook). Duplicates
###Code
def dups(df):
d = df.duplicated().sum()
print(f'{d} of {len(df)} rows ({d/len(df):.1%}) are duplicates.')
dups(df)
###Output
500 of 15395 rows (3.2%) are duplicates.
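###Markdown
Returning to the missing-values TODO above — a minimal sketch, assuming the optional `missingno` package is installed (the pandas line needs nothing extra).
###Code
import missingno as msno

msno.matrix(df)  # visual overview of missingness per column
df.isnull().mean().sort_values(ascending=False).head()  # share of missing values per column
###Output
_____no_output_____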
|
notebooks/00-preproceso_masa.ipynb | ###Markdown
Preprocessing the CSVsIn the data folder we have one CSV per month of the year; we will read and process them to obtain a single file with all of the year's tweets for the Mexico City (CDMX) area. Since this is a lot of data, we clip each month to the analysis zone so we do not have to carry all the tweets we are not interested in.
###Code
%load_ext autoreload
%autoreload 2
import geopandas as gpd
from shapely.geometry import LineString, Point
from datetime import datetime, timedelta
import pandas as pd
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import glob
import multiprocessing
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from src.preprocess import preprocesa_archivo
###Output
_____no_output_____
###Markdown
We start by reading a single file and working out the full preprocessing
###Code
un_mes = pd.read_csv("../data/enero2018.csv")
un_mes
###Output
_____no_output_____
###Markdown
We keep only the columns of interest and convert to a GeoDataFrame
###Code
un_mes = un_mes.loc[:,['ID', 'Usuario', 'Fecha_tweet', 'Latitud', 'Longitud']]
un_mes = (gpd.GeoDataFrame(un_mes, geometry=gpd.points_from_xy (un_mes.Longitud, un_mes.Latitud))
.drop(['Latitud', 'Longitud'], axis=1)
.set_crs(4326)
)
un_mes
###Output
_____no_output_____
###Markdown
We filter with the analysis districts (and assign the district id while we are at it)
###Code
distritos = gpd.read_file("../data/shapes/DistritosEODHogaresZMVM2017.shp")
un_mes = (gpd.sjoin(un_mes, distritos.to_crs(4326))
.drop(['index_right', 'Descripcio'], axis=1)
)
un_mes
###Output
_____no_output_____
###Markdown
Now we convert `Fecha_tweet` into a timestamp so we can work with it later
###Code
un_mes['fecha_hora_dt'] = pd.to_datetime(
un_mes['Fecha_tweet'],
format='%Y-%m-%d %H:%M:%S',
utc=False)
# un_mes.fecha_hora_dt = un_mes.fecha_hora_dt.dt.tz_convert(
# 'America/Mexico_City')
un_mes
###Output
_____no_output_____
###Markdown
This is all the preprocessing needed for each data file. To read and process all the files efficiently we will need to parallelize the reads, so we wrap the workflow in a function that takes the path to each file as input
###Code
def preprocesa_archivo(file_path):
un_mes = pd.read_csv(file_path)
distritos = gpd.read_file("../data/shapes/DistritosEODHogaresZMVM2017.shp")
un_mes = un_mes.loc[:,['ID', 'Usuario', 'Fecha_tweet', 'Latitud', 'Longitud']]
un_mes = (gpd.GeoDataFrame(un_mes, geometry=gpd.points_from_xy (un_mes.Longitud, un_mes.Latitud))
.drop(['Latitud', 'Longitud'], axis=1)
.set_crs(4326)
)
un_mes = (gpd.sjoin(un_mes, distritos.to_crs(4326))
.drop(['index_right', 'Descripcio'], axis=1)
)
un_mes['fecha_hora_dt'] = pd.to_datetime(
un_mes['Fecha_tweet'],
format='%Y-%m-%d %H:%M:%S',
utc=False)
return un_mes
un_mes = preprocesa_archivo("../data/enero2018.csv")
un_mes.head()
###Output
_____no_output_____
###Markdown
We now have the function that preprocesses each file; next we need to list all the data files
###Code
archivos = glob.glob("../data/*.csv")
archivos
###Output
_____no_output_____
###Markdown
Now we use `multiprocessing.Pool` to iterate over the list in parallel
###Code
a_pool = multiprocessing.Pool()
result = a_pool.map(preprocesa_archivo, archivos)
###Output
_____no_output_____
###Markdown
We concatenate the results and export them
###Code
final = pd.concat(result, axis=0)
final
final.to_file("../output/tuits_2018.gpkg", layer='tuits', driver="GPKG")
###Output
_____no_output_____ |
notebook/Sample_Churn_Data_ETL.ipynb | ###Markdown
Set up S3 Bucket
###Code
account_id = boto3.client('sts').get_caller_identity()["Account"]
region = sagemaker.Session().boto_region_name
S3_BUCKET_NAME = f"train-inference-pipeline-{account_id}"
GLUE_CRAWLER_NAME = "glue-crawler-tif"
DATABASE = S3_BUCKET_NAME
REGION = "ap-southeast-2"
try:
s3_client = boto3.client('s3', region_name=region)
s3_client.create_bucket(Bucket=S3_BUCKET_NAME,
ACL='private',
CreateBucketConfiguration={'LocationConstraint': region})
print(f'Create S3 bucket {S3_BUCKET_NAME}: SUCCESS')
except Exception as e:
if e.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
print(f'Using existing bucket: {S3_BUCKET_NAME}')
else:
raise(e)
###Output
_____no_output_____
###Markdown
Fetch Synthetic Sample Data
###Code
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt ./data/
###Output
_____no_output_____
###Markdown
Split Data
###Code
import os
os.makedirs("./data/train/", exist_ok=True)
os.makedirs("./data/infer/", exist_ok=True)
import pandas as pd
df_churn = pd.read_csv("../data/churn.txt", header=0)
df_churn = df_churn.sample(frac=1).reset_index(drop=True)
df_train, df_test = df_churn[:100], df_churn[100:]
df_train.to_csv("./data/train/churn_train.txt", index=False)
df_test = df_test.drop(df_test.columns[-1], axis=1)
df_test.to_csv("./data/infer/churn_test.txt", index=False)
###Output
_____no_output_____
###Markdown
Upload Data to S3
###Code
!cd .. && aws s3 sync ./data s3://{S3_BUCKET_NAME}/demo/
###Output
_____no_output_____
###Markdown
Setup Athena
###Code
paras = [
{
"ParameterKey": "DataBucketName",
"ParameterValue": S3_BUCKET_NAME,
},
]
import json
with open('paras.json', 'w') as fp:
json.dump(paras, fp)
!cat paras.json
!aws cloudformation --region {REGION} create-change-set \
--stack-name "tip" \
--change-set-name ImportChangeSet \
--change-set-type IMPORT \
--resources-to-import "[{\"ResourceType\":\"AWS::Athena::WorkGroup\",\"LogicalResourceId\":\"AthenaPrimaryWorkGroup\",\"ResourceIdentifier\":{\"Name\":\"primary\"}}]" \
--parameters file://paras.json \
--template-body file://../cfn/01-athena.yaml
!rm paras.json
###Output
_____no_output_____
###Markdown
**Wait for the cloudformation stack creation complete before executing the following command**
###Code
!aws cloudformation --region {REGION} execute-change-set --change-set-name ImportChangeSet --stack-name "tip"
###Output
_____no_output_____
###Markdown
Setup Glue
###Code
cfn_stack_name = "tip-glue"
!aws cloudformation --region "ap-southeast-2" create-stack \
--stack-name {cfn_stack_name} \
--template-body file://../cfn/02-crawler.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameters ParameterKey=RawDataBucketName,ParameterValue={S3_BUCKET_NAME}\
ParameterKey=CrawlerName,ParameterValue={GLUE_CRAWLER_NAME}
###Output
_____no_output_____
###Markdown
Start Glue Crawler **Wait for the glue Crawler Creation Complete before starting the crawler with the following command.**
###Code
!aws glue --region {REGION} start-crawler --name {GLUE_CRAWLER_NAME}
###Output
_____no_output_____
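###Markdown
The next cells assume the crawler has finished. Boto3 has no built-in waiter for Glue crawlers, so a hedged polling sketch looks like this (the 15-second interval is arbitrary).
###Code
import time
import boto3

glue = boto3.client('glue', region_name=REGION)
while glue.get_crawler(Name=GLUE_CRAWLER_NAME)['Crawler']['State'] != 'READY':
    time.sleep(15)  # crawler states cycle RUNNING -> STOPPING -> READY
print('Crawler finished')
###Output
_____no_output_____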
###Markdown
**Wait for the Glue Crawler being stopped before querying the Athena Database**
###Code
query_exec_id = !aws athena --region {REGION} start-query-execution --query-string "SELECT * FROM train limit 3;" --query-execution-context Database={DATABASE}
query_exec_id = eval(" ".join(query_exec_id))["QueryExecutionId"]
query_exec_id
!aws athena --region {REGION} get-query-results --query-execution-id {query_exec_id}
###Output
_____no_output_____
###Markdown
Upload Scripts to S3For different dataset, update - `../script/preprocessing.py`- `../script/inferpreprocessing.py`
###Code
!cd .. && aws s3 sync ./scripts s3://{S3_BUCKET_NAME}/script/
###Output
_____no_output_____
###Markdown
Deployment
###Code
!aws s3 cp ../cfn/pipeline.yaml s3://{S3_BUCKET_NAME}/cfn/
!aws --region {REGION} cloudformation create-stack \
--stack-name "tip-syd" \
--template-url https://{S3_BUCKET_NAME}.s3-{REGION}.amazonaws.com/cfn/pipeline.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameters ParameterKey=AthenaDatabaseName,ParameterValue={DATABASE} \
ParameterKey=PipelineBucketName,ParameterValue={S3_BUCKET_NAME} \
--disable-rollback
###Output
_____no_output_____
###Markdown
Trigger Training
###Code
!aws lambda --region "ap-southeast-2" invoke --function-name invokeTrainingStepFunction --payload '{ "": ""}' out
###Output
_____no_output_____
###Markdown
Trigger Inference
###Code
!aws lambda --region "ap-southeast-2" invoke --function-name invokeInferStepFunction --payload '{ "": ""}' out
###Output
_____no_output_____
###Markdown
Set up S3 Bucket
###Code
account_id = boto3.client('sts').get_caller_identity()["Account"]
region = sagemaker.Session().boto_region_name
S3_BUCKET_NAME = f"train-inference-pipeline-{account_id}"
GLUE_CRAWLER_NAME = "glue-crawler-tif"
DATABASE = S3_BUCKET_NAME
REGION = "ap-southeast-2"
try:
s3_client = boto3.client('s3', region_name=region)
s3_client.create_bucket(Bucket=S3_BUCKET_NAME,
ACL='private',
CreateBucketConfiguration={'LocationConstraint': region})
print(f'Create S3 bucket {S3_BUCKET_NAME}: SUCCESS')
except Exception as e:
if e.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
print(f'Using existing bucket: {S3_BUCKET_NAME}')
else:
raise(e)
###Output
_____no_output_____
###Markdown
Fetch Synthetic Sample Data
###Code
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt ./data/
###Output
_____no_output_____
###Markdown
Split Data
###Code
import os
os.makedirs("./data/train/", exist_ok=True)
os.makedirs("./data/infer/", exist_ok=True)
import pandas as pd
from sklearn.model_selection import train_test_split
df_churn = pd.read_csv("../data/churn.txt", header=0)
df_train, df_test = train_test_split(df_churn, test_size=0.20, random_state=333)
df_train.to_csv("./data/train/churn_train.txt", index=False)
df_test = df_test.drop(df_test.columns[-1], axis=1)
df_test.to_csv("./data/infer/churn_test.txt", index=False)
print(df_churn.shape)
print(df_train.shape)
print(df_test.shape)
###Output
(5000, 21)
(4000, 21)
(1000, 20)
###Markdown
Upload Data to S3
###Code
!cd .. && aws s3 sync ./data s3://{S3_BUCKET_NAME}/demo/
###Output
_____no_output_____
###Markdown
Setup Athena
###Code
paras = [
{
"ParameterKey": "DataBucketName",
"ParameterValue": S3_BUCKET_NAME,
},
]
import json
with open('paras.json', 'w') as fp:
json.dump(paras, fp)
!cat paras.json
!aws cloudformation --region {REGION} create-change-set \
--stack-name "tip" \
--change-set-name ImportChangeSet \
--change-set-type IMPORT \
--resources-to-import "[{\"ResourceType\":\"AWS::Athena::WorkGroup\",\"LogicalResourceId\":\"AthenaPrimaryWorkGroup\",\"ResourceIdentifier\":{\"Name\":\"primary\"}}]" \
--parameters file://paras.json \
--template-body file://../cfn/01-athena.yaml
!rm paras.json
###Output
_____no_output_____
###Markdown
**Wait for the cloudformation stack creation complete before executing the following command**
###Code
!aws cloudformation --region {REGION} execute-change-set --change-set-name ImportChangeSet --stack-name "tip"
###Output
_____no_output_____
###Markdown
Setup Glue
###Code
cfn_stack_name = "tip-glue"
!aws cloudformation --region "ap-southeast-2" create-stack \
--stack-name {cfn_stack_name} \
--template-body file://../cfn/02-crawler.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameters ParameterKey=RawDataBucketName,ParameterValue={S3_BUCKET_NAME}\
ParameterKey=CrawlerName,ParameterValue={GLUE_CRAWLER_NAME}
###Output
_____no_output_____
###Markdown
Start Glue Crawler **Wait for the glue Crawler Creation Complete before starting the crawler with the following command.**
###Code
!aws glue --region {REGION} start-crawler --name {GLUE_CRAWLER_NAME}
###Output
_____no_output_____
###Markdown
**Wait for the Glue Crawler being stopped before querying the Athena Database**
###Code
query_exec_id = !aws athena --region {REGION} start-query-execution --query-string "SELECT * FROM train limit 3;" --query-execution-context Database={DATABASE}
query_exec_id = eval(" ".join(query_exec_id))["QueryExecutionId"]
query_exec_id
!aws athena --region {REGION} get-query-results --query-execution-id {query_exec_id}
###Output
_____no_output_____
###Markdown
Upload Scripts to S3For different dataset, update - `../script/preprocessing.py`- `../script/inferpreprocessing.py`
###Code
!cd .. && aws s3 sync ./scripts s3://{S3_BUCKET_NAME}/script/
###Output
_____no_output_____
###Markdown
Deployment
###Code
!aws s3 cp ../cfn/pipeline.yaml s3://{S3_BUCKET_NAME}/cfn/
!aws --region {REGION} cloudformation create-stack \
--stack-name "tip-syd" \
--template-url https://{S3_BUCKET_NAME}.s3-{REGION}.amazonaws.com/cfn/pipeline.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameters ParameterKey=AthenaDatabaseName,ParameterValue={DATABASE} \
ParameterKey=PipelineBucketName,ParameterValue={S3_BUCKET_NAME} \
--disable-rollback
###Output
_____no_output_____
###Markdown
Trigger Training
###Code
!aws lambda --region "ap-southeast-2" invoke --function-name invokeTrainingStepFunction --payload '{ "": ""}' out
###Output
_____no_output_____
###Markdown
Trigger Inference
###Code
!aws lambda --region "ap-southeast-2" invoke --function-name invokeInferStepFunction --payload '{ "": ""}' out
###Output
_____no_output_____ |
Vine_review_analysis.ipynb | ###Markdown
###Code
import os
# Find the latest version of spark 3.0 from http://www.apache.org/dist/spark/ and enter as the spark version
# For example:
# spark_version = 'spark-3.0.3'
spark_version = 'spark-3.0.3'
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
!apt-get update
!apt-get install openjdk-11-jdk-headless -qq > /dev/null
!wget -q http://www.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
!tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
!pip install -q findspark
# Set Environment Variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
# Download the Postgres driver that will allow Spark to interact with Postgres.
!wget https://jdbc.postgresql.org/download/postgresql-42.2.16.jar
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("M16-Amazon-Challenge").config("spark.driver.extraClassPath","/content/postgresql-42.2.16.jar").getOrCreate()
from pyspark import SparkFiles
url = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Wireless_v1_00.tsv.gz"
spark.sparkContext.addFile(url)
df = spark.read.option("encoding", "UTF-8").csv(SparkFiles.get("amazon_reviews_us_Wireless_v1_00.tsv.gz"), sep="\t", header=True, inferSchema=True)
df.show(5)
from pyspark.sql.functions import to_date
# Filter the review DataFrame for rows with more than 20 total votes
vine_df=df.filter(df.total_votes>20)
vine_df.show(5)
votes_df=df.filter((vine_df.helpful_votes / vine_df.total_votes) >= 0.5)
votes_df.show(5)
vine1_df=votes_df.filter(votes_df.vine=='Y')
vine1_df.show(5)
vine2_df=votes_df.filter(votes_df.vine=='N')
vine2_df.show(5)
reviewstotal1=(vine1_df.count())
number5star=(vine1_df.filter(vine1_df.star_rating==5).count())
reviewperc=(number5star/reviewstotal1)*100
print(reviewperc)
reviewstotal2=(vine2_df.count())
number5star=(vine2_df.filter(vine2_df.star_rating==5).count())
reviewperc=(number5star/reviewstotal2)*100
print(reviewperc)
###Output
49.643182629130386
|
src/Hospital_Billing/Alpha+.ipynb | ###Markdown
Alpha+ Miner Step 1: Handling and import event data
###Code
import pm4py
from pm4py.objects.log.importer.xes import importer as xes_importer
log = xes_importer.apply('Hospital Billing - Event Log.xes')
###Output
_____no_output_____
###Markdown
Step 2: Mining event log - Process Discovery
###Code
net, initial_marking, final_marking = pm4py.discover_petri_net_alpha_plus(log)
###Output
_____no_output_____
###Markdown
Step 3: Visualize Petri of Mined Process from log
###Code
pm4py.view_petri_net(net, initial_marking, final_marking)
###Output
_____no_output_____
###Markdown
Step 4: Convert Petri Net to BPMN
###Code
bpmn_graph = pm4py.convert_to_bpmn(*[net, initial_marking, final_marking])
pm4py.view_bpmn(bpmn_graph, "png")
###Output
_____no_output_____
###Markdown
Step 5: Log-Model Evaluation Replay Fitness
###Code
# The calculation of the replay fitness aim to calculate how much of the behavior in the log is admitted by the process model. We propose two methods to calculate replay fitness, based on token-based replay and alignments respectively.
# The two variants of replay fitness are implemented as Variants.TOKEN_BASED and Variants.ALIGNMENT_BASED respectively.
# To calculate the replay fitness between an event log and a Petri net model, using the token-based replay method, the code on the right side can be used. The resulting value is a number between 0 and 1.
from pm4py.algo.evaluation.replay_fitness import algorithm as replay_fitness_evaluator
fitness = replay_fitness_evaluator.apply(log, net, initial_marking, final_marking, variant=replay_fitness_evaluator.Variants.TOKEN_BASED)
fitness
###Output
_____no_output_____
###Markdown
Precision
###Code
# We propose two approaches for the measurement of precision in PM4Py:
# ETConformance (using token-based replay): the reference paper is Muñoz-Gama, Jorge, and Josep Carmona. "A fresh look at precision in process conformance." International Conference on Business Process Management. Springer, Berlin, Heidelberg, 2010.
# Align-ETConformance (using alignments): the reference paper is Adriansyah, Arya, et al. "Measuring precision of modeled behavior." Information systems and e-Business Management 13.1 (2015): 37-67.
from pm4py.algo.evaluation.precision import algorithm as precision_evaluator
prec = precision_evaluator.apply(log, net, initial_marking, final_marking, variant=precision_evaluator.Variants.ETCONFORMANCE_TOKEN)
prec
###Output
_____no_output_____
###Markdown
F-Measure
###Code
def f_measure(f, p):
return (2*f*p)/(f+p)
f_measure(fitness['average_trace_fitness'], prec)
###Output
_____no_output_____ |
NHL_data_shape_notebooks/Approach_2_cumul_1seas_Pischedda_data_process/Fixing date, season, team names/Tools for fixing dates, team names, seasons/Fix_Seasons_and_Cut_down.ipynb | ###Markdown
CUT DOWN SEASONS TO 2008-09 to 2019-20
###Code
data_frames = [df_mp_teams, df_mp_teams_all, df_betting, df_game]
# Note: this loop only rebinds the local name X, so the original DataFrames
# are left unchanged -- hence the explicit per-DataFrame filters below.
for X in data_frames:
    X = X.loc[X['season'].isin(seasons), :].copy()
    X.reset_index(drop = True, inplace = True)
#restrict seasons:
df_betting = df_betting.loc[df_game['season'].isin(seasons), :].copy()
df_game = df_game.loc[df_game['season'].isin(seasons), :].copy()
df_mp_teams = df_mp_teams.loc[df_mp_teams['season'].isin(seasons), :].copy()
df_mp_teams_all = df_mp_teams_all.loc[df_mp_teams_all['season'].isin(seasons), :].copy()
## next investigate game overlaps
len(set(df_mp_teams_all['gameId']).symmetric_difference(set(df_game['game_id'])))
len(set(df_mp_teams_all['gameId']).intersection(set(df_game['game_id'])))
###Output
_____no_output_____ |
514/hwk1/Introduction.ipynb | ###Markdown
Introduction to Python ... and Jupyter notebooks
###Code
def velocity(t):
return t**2/2
velocity(1)
###Output
_____no_output_____ |
multiple_linear_regression.ipynb | ###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1]
y = dataset.iloc[:,-1]
###Output
_____no_output_____
###Markdown
Encoding categorical data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [3])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
print(X_test)
###Output
_____no_output_____
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.to_numpy().reshape(len(y_test),1)),1))
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
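###Markdown
A natural follow-up (not part of the original template) is to score the fit, for example with R².
###Code
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
###Output
_____no_output_____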
###Markdown
Multiple Linear Regression 1 - Load Packages and Data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from MLR_utils import computeCost
%matplotlib inline
df = pd.read_pickle('./Data/train_final')
m = df.shape[0]
df.head()
###Output
_____no_output_____
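###Markdown
`computeCost` is imported above from the local `MLR_utils` module, which is not shown in this notebook. Judging by how it is called below (theta, a design matrix with a leading column of ones, and a column-vector y), a plausible stand-in is the conventional half-mean-squared-error cost — this is an assumption; the real helper may differ, so the sketch uses a different name to avoid shadowing the import.
###Code
def compute_cost_sketch(theta, X, y):
    # hypothetical reimplementation: J(theta) = (1/2m) * ||X @ theta - y||^2
    m = len(y)
    residuals = X @ theta - y
    return ((residuals.T @ residuals) / (2 * m)).item()
###Output
_____no_output_____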
###Markdown
2 - Split into Training and Validation SetsWe will split the training data into a smaller training set (70%) and a validation set (30%). **Note that the validation set is denoted here as X_val and y_val.**
###Code
from sklearn.model_selection import train_test_split
X = df.drop('logSalePrice',axis=1)
y = df['logSalePrice']
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=101)
###Output
_____no_output_____
###Markdown
3 - Fit Model
###Code
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)
theta_hat = np.r_[lm.intercept_,lm.coef_]
X_train_1 = np.c_[np.ones((X_train.shape[0],1)),X_train]
theta_hat = theta_hat.reshape((len(theta_hat),1))
y_tr = y_train.values.reshape((len(y_train),1))
X_val_1 = np.c_[np.ones((X_val.shape[0],1)), X_val]
y_te = y_val.values.reshape((len(y_val),1))
error_train = computeCost(theta_hat,X_train_1,y_tr)
error_val = computeCost(theta_hat,X_val_1,y_te)
print("Training Error: " + str(error_train))
print("Validation Error: " + str(error_val))
###Output
Training Error: 0.007568033439315971
Validation Error: 0.011185798456694027
###Markdown
4 - Prediction and Residual Plots (Validation Set)Make sure to use the **unscaled** outputs in this section!
###Code
predictions = lm.predict(X_val)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
axes[0].scatter(y_val,predictions)
axes[0].set_title("Predictions vs Actual logSalePrice")
sns.distplot((y_val-predictions),bins=50);
axes[1].set_title("Distribution Plot");
###Output
_____no_output_____
###Markdown
4 - Performance Evaluation Metrics (Validation Set)
###Code
from sklearn import metrics
print('log-scale RMSE:', round(np.sqrt(metrics.mean_squared_error(y_val, predictions)),10))
preds = pd.Series(data=predictions)
y_val_ri = y_val.reset_index()
y_val_ri.drop('index',axis=1,inplace=True)
compare = pd.concat([y_val_ri['logSalePrice'],preds],axis=1)
compare.rename(columns={0: 'prediction'}, inplace=True)
compare = compare.apply(lambda x: round(np.exp(x),1))
compare['SalePrice'] = compare['logSalePrice']
compare.drop('logSalePrice', axis=1, inplace=True)
compare['difference'] = compare['SalePrice']-compare['prediction']
#print(compare.head())
from sklearn import metrics
mse = round(metrics.mean_squared_error(np.exp(y_val), np.exp(predictions)),1)
print('Validation MSE:', "{:,}".format(mse))
pd.options.display.float_format = '{:,.2f}'.format
print(compare.head(10))
###Output
prediction SalePrice difference
0 251,229.50 255,000.00 3,770.50
1 137,790.70 145,000.00 7,209.30
2 141,322.40 150,500.00 9,177.60
3 401,586.30 412,500.00 10,913.70
4 331,098.20 402,861.00 71,762.80
5 106,560.10 113,000.00 6,439.90
6 137,950.30 136,000.00 -1,950.30
7 150,397.40 144,152.00 -6,245.40
8 151,361.10 145,250.00 -6,111.10
9 146,118.30 135,000.00 -11,118.30
###Markdown
5 - Test Predictions
###Code
#test set features
X_test = pd.read_pickle('./Data/test_final')
test_prediction = lm.predict(X_test)
test_price = np.exp(test_prediction)
#get ID's
id_test = pd.read_csv('./Data/test')
id_test['SalePrice'] = test_price
id_test[['Id','SalePrice']].head()
test_submit = id_test[['Id','SalePrice']]
test_submit.to_csv('./Data/testsaleprice.txt', header = True, index = False, sep=',')
###Output
_____no_output_____
###Markdown
Ball throwing dataEstimating the ball-throwing record (distance) from grip strength, height, and weight
###Code
ball<-read.table("ball.dat",header=TRUE)
print(ball)
x <- ball[,2:4]
y <- ball[,1:1]
print(x)
print(y)
reg <- lm(y~.,data=x)
summary(reg)
plot(reg)
###Output
_____no_output_____
###Markdown
The "Duncan" data from statsmodels has 45 rows and 4 columns and contains data on the 'prestige' and other characteristics of some U.S. occupations in the year 1950:
###Code
# imports required by the code in this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from statsmodels.compat import lzip
from statsmodels.stats.diagnostic import normal_ad, het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson
duncan_prestige = sm.datasets.get_rdataset("Duncan", "carData")
Y = duncan_prestige.data['income']
X = duncan_prestige.data[['education', 'prestige']]
X = sm.add_constant(X)
duncan_prestige.data.head()
class OLS():
def __init__(self, X, y):
self.X = X
self.y = y
self.reg = LinearRegression().fit(self.X, self.y)
# self.reg = sm.OLS(self.y, self.X)
self.summary = sm.OLS(self.y, self.X).fit().summary()
predictions = self.reg.predict(self.X)
# predictions = self.reg.fit()
self.df_results = pd.DataFrame({'Actual': self.y, 'Predicted': predictions})
self.df_results['Residuals'] = abs(self.df_results['Actual']) - abs(self.df_results['Predicted'])
def predict(self, X):
return self.reg.predict(X)
def results(self):
print('R^2: {0}'.format(self.reg.score(self.X, self.y)))
def linear_assumption(self):
df_results = self.df_results
print("Assumption 1: Linear Relationship exists between the DV and the IV(s).", '\n')
print("Checking with a scatter plot of actual vs. predicted.",
'Preditions should follow the diagonal line.')
sns.lmplot(x='Actual', y='Predicted', data=df_results, fit_reg=False, height=7)
line_coords = np.arange(df_results.min().min(), df_results.max().max())
plt.plot(line_coords, line_coords,
color='darkorange', linestyle='--')
plt.title('Actual vs. Predicted')
plt.show()
def normal_errors_assumption(self):
print("Assumption 2: The error terms are normally distributed.", '\n')
print('Using the Anderson-Darling test for normal distribution:')
p_value = normal_ad(self.df_results['Residuals'])[1]
print('p-value from the test - below 0.05 generally means non-normal:', round(p_value, 2))
if round(p_value, 2) < 0.05:
print('Residuals are not normally distributed.')
else:
print('Residuals are normally distributed.')
plt.subplots(figsize=(12, 6))
plt.title('Distribution of Residuals')
sns.histplot(self.df_results['Residuals'])
plt.show()
if p_value > 0.05:
print('Assumption satisfied.')
else:
print('Assumption not satisfied.')
print('Confidence intervals will likely be affected.')
print('Try performing nonlinear transformations on variables.')
def multicollinearity_assumption(self, feature_names=None):
print("Assumption 3: Little to no multicollinearity among predictors.")
plt.figure(figsize=(10, 8))
sns.heatmap(pd.DataFrame(self.X, columns=feature_names).corr(), annot=True)
plt.title('Correlation of Variables')
plt.show()
print('Variance Inflation Factors (VIF):')
print('>10: An indication that multicollinearity may be present.')
print('>100: Certain multicollinearity among the variables.')
print('------------------------------------')
VIF = [variance_inflation_factor(self.X.values, i) for i in range(len(self.X.columns))]
for idx, vif in enumerate(VIF):
print('{0}: {1}'.format(idx, vif))
possible_multicollinearity = sum([1 for vif in VIF if vif > 10])
definite_multicollinearity = sum([1 for vif in VIF if vif > 100])
print('{0} cases of possible multicollinearity.'.format(possible_multicollinearity))
print('{0} cases of definite multicollinearity.'.format(definite_multicollinearity))
if definite_multicollinearity == 0:
if possible_multicollinearity == 0:
print('Assumption satisfied.')
else:
print("Assumption possibly satisfied.")
print('Coefficient interpretability may be problematic.')
print('Consider removing variables with a high Variance Inflation Factor (VIF).')
else:
print('Assumption not satisfied.')
print('Coefficient interpretability will be problematic.')
print('Consider removing variables with a high Variance Inflation Factor (VIF).')
def autocorrelation_assumption(self):
print('Assumption 4: No Autocorrelation.', '\n')
print('\nPerforming Durbin-Watson Test...')
print('Values of 1.5 < d < 2.5 generally show that there is no autocorrelation in the data.')
print('0 to 2 < is positive autocorrelation.')
print('>2 to 4 is negative autocorrelation.')
print('---------------------------------------')
durbinWatson = durbin_watson(self.df_results['Residuals'])
print('Durbin-Watson:', durbinWatson)
if durbinWatson < 1.5:
print('Signs of positive autocorrelation.', '\n')
print('Assumption not satisfied.')
elif durbinWatson > 2.5:
print('Signs of negative autocorrelation.', '\n')
print('Assumption not satisfied.')
else:
print('Little to no autocorrelation.', '\n')
print('Assumption satisfied.')
def homoskedasticity_assumption(self):
print('Assumption 5: Homoskedasticity of Error Terms.', '\n')
print('Residuals should have relative constant variance.')
plt.subplots(figsize=(12, 6))
ax = plt.subplot(111)
plt.scatter(x=self.df_results.index, y=self.df_results.Residuals, alpha=0.5)
plt.plot(np.repeat(0, int(len(self.df_results))), color='darkorange', linestyle='--')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.title('Residuals')
plt.show()
names = ['Lagrange multiplier statistic', 'p-value',
'f-value', 'f p-value']
test = het_breuschpagan(self.df_results['Residuals'], self.X)
results = lzip(names, test)
print(results, '\n')
print(results[1][1])
if results[1][1] >= 0.05:
print("Because the p-value is not less than 0.05, we fail to reject the null hypothesis.\n")
print(
"We do not have sufficient evidence to say that heteroskedasticity is present in the regression model."
)
else:
print("Because the p-value is less than 0.05, we reject the null hypothesis in favor of\n")
print("the alternative hypothesis.\n")
print("We have sufficient evidence to say that heteroskedasticity is present in the regression model.")
print("\nWe should try transforming the DV by taking it's log instead or find a new definition of")
print("the dependent variable. One way to do this would be to take the rate instead of the raw value.")
def all_assumptions(self):
self.linear_assumption()
print('-' * 120)
self.normal_errors_assumption()
print('-' * 120)
self.multicollinearity_assumption()
print('-' * 120)
self.autocorrelation_assumption()
print('-' * 120)
self.homoskedasticity_assumption()
print('-' * 120)
ols = OLS(X, Y)
ols.predict(X)
ols.results()
ols.reg.intercept_
ols.reg.coef_
ols.summary
ols.linear_assumption()
ols.normal_errors_assumption()
ols.multicollinearity_assumption()
ols.autocorrelation_assumption()
ols.homoskedasticity_assumption()
###Output
Assumption 5: Homoskedasticity of Error Terms.
Residuals should have relative constant variance.
###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(X)
###Output
[[165349.2 136897.8 471784.1 'New York']
[162597.7 151377.59 443898.53 'California']
[153441.51 101145.55 407934.54 'Florida']
[144372.41 118671.85 383199.62 'New York']
[142107.34 91391.77 366168.42 'Florida']
[131876.9 99814.71 362861.36 'New York']
[134615.46 147198.87 127716.82 'California']
[130298.13 145530.06 323876.68 'Florida']
[120542.52 148718.95 311613.29 'New York']
[123334.88 108679.17 304981.62 'California']
[101913.08 110594.11 229160.95 'Florida']
[100671.96 91790.61 249744.55 'California']
[93863.75 127320.38 249839.44 'Florida']
[91992.39 135495.07 252664.93 'California']
[119943.24 156547.42 256512.92 'Florida']
[114523.61 122616.84 261776.23 'New York']
[78013.11 121597.55 264346.06 'California']
[94657.16 145077.58 282574.31 'New York']
[91749.16 114175.79 294919.57 'Florida']
[86419.7 153514.11 0.0 'New York']
[76253.86 113867.3 298664.47 'California']
[78389.47 153773.43 299737.29 'New York']
[73994.56 122782.75 303319.26 'Florida']
[67532.53 105751.03 304768.73 'Florida']
[77044.01 99281.34 140574.81 'New York']
[64664.71 139553.16 137962.62 'California']
[75328.87 144135.98 134050.07 'Florida']
[72107.6 127864.55 353183.81 'New York']
[66051.52 182645.56 118148.2 'Florida']
[65605.48 153032.06 107138.38 'New York']
[61994.48 115641.28 91131.24 'Florida']
[61136.38 152701.92 88218.23 'New York']
[63408.86 129219.61 46085.25 'California']
[55493.95 103057.49 214634.81 'Florida']
[46426.07 157693.92 210797.67 'California']
[46014.02 85047.44 205517.64 'New York']
[28663.76 127056.21 201126.82 'Florida']
[44069.95 51283.14 197029.42 'California']
[20229.59 65947.93 185265.1 'New York']
[38558.51 82982.09 174999.3 'California']
[28754.33 118546.05 172795.67 'California']
[27892.92 84710.77 164470.71 'Florida']
[23640.93 96189.63 148001.11 'California']
[15505.73 127382.3 35534.17 'New York']
[22177.74 154806.14 28334.72 'California']
[1000.23 124153.04 1903.93 'New York']
[1315.46 115816.21 297114.46 'Florida']
[0.0 135426.92 0.0 'California']
[542.05 51743.15 0.0 'New York']
[0.0 116983.8 45173.06 'California']]
###Markdown
Encoding categorical data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [3])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
###Output
[[0.0 0.0 1.0 165349.2 136897.8 471784.1]
[1.0 0.0 0.0 162597.7 151377.59 443898.53]
[0.0 1.0 0.0 153441.51 101145.55 407934.54]
[0.0 0.0 1.0 144372.41 118671.85 383199.62]
[0.0 1.0 0.0 142107.34 91391.77 366168.42]
[0.0 0.0 1.0 131876.9 99814.71 362861.36]
[1.0 0.0 0.0 134615.46 147198.87 127716.82]
[0.0 1.0 0.0 130298.13 145530.06 323876.68]
[0.0 0.0 1.0 120542.52 148718.95 311613.29]
[1.0 0.0 0.0 123334.88 108679.17 304981.62]
[0.0 1.0 0.0 101913.08 110594.11 229160.95]
[1.0 0.0 0.0 100671.96 91790.61 249744.55]
[0.0 1.0 0.0 93863.75 127320.38 249839.44]
[1.0 0.0 0.0 91992.39 135495.07 252664.93]
[0.0 1.0 0.0 119943.24 156547.42 256512.92]
[0.0 0.0 1.0 114523.61 122616.84 261776.23]
[1.0 0.0 0.0 78013.11 121597.55 264346.06]
[0.0 0.0 1.0 94657.16 145077.58 282574.31]
[0.0 1.0 0.0 91749.16 114175.79 294919.57]
[0.0 0.0 1.0 86419.7 153514.11 0.0]
[1.0 0.0 0.0 76253.86 113867.3 298664.47]
[0.0 0.0 1.0 78389.47 153773.43 299737.29]
[0.0 1.0 0.0 73994.56 122782.75 303319.26]
[0.0 1.0 0.0 67532.53 105751.03 304768.73]
[0.0 0.0 1.0 77044.01 99281.34 140574.81]
[1.0 0.0 0.0 64664.71 139553.16 137962.62]
[0.0 1.0 0.0 75328.87 144135.98 134050.07]
[0.0 0.0 1.0 72107.6 127864.55 353183.81]
[0.0 1.0 0.0 66051.52 182645.56 118148.2]
[0.0 0.0 1.0 65605.48 153032.06 107138.38]
[0.0 1.0 0.0 61994.48 115641.28 91131.24]
[0.0 0.0 1.0 61136.38 152701.92 88218.23]
[1.0 0.0 0.0 63408.86 129219.61 46085.25]
[0.0 1.0 0.0 55493.95 103057.49 214634.81]
[1.0 0.0 0.0 46426.07 157693.92 210797.67]
[0.0 0.0 1.0 46014.02 85047.44 205517.64]
[0.0 1.0 0.0 28663.76 127056.21 201126.82]
[1.0 0.0 0.0 44069.95 51283.14 197029.42]
[0.0 0.0 1.0 20229.59 65947.93 185265.1]
[1.0 0.0 0.0 38558.51 82982.09 174999.3]
[1.0 0.0 0.0 28754.33 118546.05 172795.67]
[0.0 1.0 0.0 27892.92 84710.77 164470.71]
[1.0 0.0 0.0 23640.93 96189.63 148001.11]
[0.0 0.0 1.0 15505.73 127382.3 35534.17]
[1.0 0.0 0.0 22177.74 154806.14 28334.72]
[0.0 0.0 1.0 1000.23 124153.04 1903.93]
[0.0 1.0 0.0 1315.46 115816.21 297114.46]
[1.0 0.0 0.0 0.0 135426.92 0.0]
[0.0 0.0 1.0 542.05 51743.15 0.0]
[1.0 0.0 0.0 0.0 116983.8 45173.06]]
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('ENTER_THE_NAME_OF_YOUR_DATASET_HERE.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
_____no_output_____
###Markdown
Evaluating the Model Performance
###Code
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv("50_Startups.csv")
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(x)
###Output
[[165349.2 136897.8 471784.1 'New York']
[162597.7 151377.59 443898.53 'California']
[153441.51 101145.55 407934.54 'Florida']
[144372.41 118671.85 383199.62 'New York']
[142107.34 91391.77 366168.42 'Florida']
[131876.9 99814.71 362861.36 'New York']
[134615.46 147198.87 127716.82 'California']
[130298.13 145530.06 323876.68 'Florida']
[120542.52 148718.95 311613.29 'New York']
[123334.88 108679.17 304981.62 'California']
[101913.08 110594.11 229160.95 'Florida']
[100671.96 91790.61 249744.55 'California']
[93863.75 127320.38 249839.44 'Florida']
[91992.39 135495.07 252664.93 'California']
[119943.24 156547.42 256512.92 'Florida']
[114523.61 122616.84 261776.23 'New York']
[78013.11 121597.55 264346.06 'California']
[94657.16 145077.58 282574.31 'New York']
[91749.16 114175.79 294919.57 'Florida']
[86419.7 153514.11 0.0 'New York']
[76253.86 113867.3 298664.47 'California']
[78389.47 153773.43 299737.29 'New York']
[73994.56 122782.75 303319.26 'Florida']
[67532.53 105751.03 304768.73 'Florida']
[77044.01 99281.34 140574.81 'New York']
[64664.71 139553.16 137962.62 'California']
[75328.87 144135.98 134050.07 'Florida']
[72107.6 127864.55 353183.81 'New York']
[66051.52 182645.56 118148.2 'Florida']
[65605.48 153032.06 107138.38 'New York']
[61994.48 115641.28 91131.24 'Florida']
[61136.38 152701.92 88218.23 'New York']
[63408.86 129219.61 46085.25 'California']
[55493.95 103057.49 214634.81 'Florida']
[46426.07 157693.92 210797.67 'California']
[46014.02 85047.44 205517.64 'New York']
[28663.76 127056.21 201126.82 'Florida']
[44069.95 51283.14 197029.42 'California']
[20229.59 65947.93 185265.1 'New York']
[38558.51 82982.09 174999.3 'California']
[28754.33 118546.05 172795.67 'California']
[27892.92 84710.77 164470.71 'Florida']
[23640.93 96189.63 148001.11 'California']
[15505.73 127382.3 35534.17 'New York']
[22177.74 154806.14 28334.72 'California']
[1000.23 124153.04 1903.93 'New York']
[1315.46 115816.21 297114.46 'Florida']
[0.0 135426.92 0.0 'California']
[542.05 51743.15 0.0 'New York']
[0.0 116983.8 45173.06 'California']]
###Markdown
Encoding categorical data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [3])], remainder='passthrough')
x = np.array(ct.fit_transform(x))
print(x)
###Output
[[0.0 0.0 1.0 165349.2 136897.8 471784.1]
[1.0 0.0 0.0 162597.7 151377.59 443898.53]
[0.0 1.0 0.0 153441.51 101145.55 407934.54]
[0.0 0.0 1.0 144372.41 118671.85 383199.62]
[0.0 1.0 0.0 142107.34 91391.77 366168.42]
[0.0 0.0 1.0 131876.9 99814.71 362861.36]
[1.0 0.0 0.0 134615.46 147198.87 127716.82]
[0.0 1.0 0.0 130298.13 145530.06 323876.68]
[0.0 0.0 1.0 120542.52 148718.95 311613.29]
[1.0 0.0 0.0 123334.88 108679.17 304981.62]
[0.0 1.0 0.0 101913.08 110594.11 229160.95]
[1.0 0.0 0.0 100671.96 91790.61 249744.55]
[0.0 1.0 0.0 93863.75 127320.38 249839.44]
[1.0 0.0 0.0 91992.39 135495.07 252664.93]
[0.0 1.0 0.0 119943.24 156547.42 256512.92]
[0.0 0.0 1.0 114523.61 122616.84 261776.23]
[1.0 0.0 0.0 78013.11 121597.55 264346.06]
[0.0 0.0 1.0 94657.16 145077.58 282574.31]
[0.0 1.0 0.0 91749.16 114175.79 294919.57]
[0.0 0.0 1.0 86419.7 153514.11 0.0]
[1.0 0.0 0.0 76253.86 113867.3 298664.47]
[0.0 0.0 1.0 78389.47 153773.43 299737.29]
[0.0 1.0 0.0 73994.56 122782.75 303319.26]
[0.0 1.0 0.0 67532.53 105751.03 304768.73]
[0.0 0.0 1.0 77044.01 99281.34 140574.81]
[1.0 0.0 0.0 64664.71 139553.16 137962.62]
[0.0 1.0 0.0 75328.87 144135.98 134050.07]
[0.0 0.0 1.0 72107.6 127864.55 353183.81]
[0.0 1.0 0.0 66051.52 182645.56 118148.2]
[0.0 0.0 1.0 65605.48 153032.06 107138.38]
[0.0 1.0 0.0 61994.48 115641.28 91131.24]
[0.0 0.0 1.0 61136.38 152701.92 88218.23]
[1.0 0.0 0.0 63408.86 129219.61 46085.25]
[0.0 1.0 0.0 55493.95 103057.49 214634.81]
[1.0 0.0 0.0 46426.07 157693.92 210797.67]
[0.0 0.0 1.0 46014.02 85047.44 205517.64]
[0.0 1.0 0.0 28663.76 127056.21 201126.82]
[1.0 0.0 0.0 44069.95 51283.14 197029.42]
[0.0 0.0 1.0 20229.59 65947.93 185265.1]
[1.0 0.0 0.0 38558.51 82982.09 174999.3]
[1.0 0.0 0.0 28754.33 118546.05 172795.67]
[0.0 1.0 0.0 27892.92 84710.77 164470.71]
[1.0 0.0 0.0 23640.93 96189.63 148001.11]
[0.0 0.0 1.0 15505.73 127382.3 35534.17]
[1.0 0.0 0.0 22177.74 154806.14 28334.72]
[0.0 0.0 1.0 1000.23 124153.04 1903.93]
[0.0 1.0 0.0 1315.46 115816.21 297114.46]
[1.0 0.0 0.0 0.0 135426.92 0.0]
[0.0 0.0 1.0 542.05 51743.15 0.0]
[1.0 0.0 0.0 0.0 116983.8 45173.06]]
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(x_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
###Markdown
Making a single prediction (for example the profit of a startup with R&D Spend = 160000, Administration Spend = 130000, Marketing Spend = 300000 and State = 'California')
###Code
print(regressor.predict([[1, 0, 0, 160000, 130000, 300000]]))
###Output
[181566.92]
###Markdown
Therefore, our model predicts that the profit of a Californian startup which spent 160000 in R&D, 130000 in Administration and 300000 in Marketing is $ 181566,92.**Important note 1:** Notice that the values of the features were all input in a double pair of square brackets. That's because the "predict" method always expects a 2D array as the format of its inputs. And putting our values into a double pair of square brackets makes the input exactly a 2D array. Simply put:$1, 0, 0, 160000, 130000, 300000 \rightarrow \textrm{scalars}$$[1, 0, 0, 160000, 130000, 300000] \rightarrow \textrm{1D array}$$[[1, 0, 0, 160000, 130000, 300000]] \rightarrow \textrm{2D array}$**Important note 2:** Notice also that the "California" state was not input as a string in the last column but as "1, 0, 0" in the first three columns. That's because of course the predict method expects the one-hot-encoded values of the state, and as we see in the second row of the matrix of features X, "California" was encoded as "1, 0, 0". And be careful to include these values in the first three columns, not the last three ones, because the dummy variables are always created in the first columns. Getting the final linear regression equation with the values of the coefficients
###Code
print(regressor.coef_)
print(regressor.intercept_)
###Output
[ 8.66e+01 -8.73e+02 7.86e+02 7.73e-01 3.29e-02 3.66e-02]
42467.52924853204
###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('50_Startups.csv')
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(x)
print(y)
###Output
[192261.83 191792.06 191050.39 182901.99 166187.94 156991.12 156122.51
155752.6 152211.77 149759.96 146121.95 144259.4 141585.52 134307.35
132602.65 129917.04 126992.93 125370.37 124266.9 122776.86 118474.03
111313.02 110352.25 108733.99 108552.04 107404.34 105733.54 105008.31
103282.38 101004.64 99937.59 97483.56 97427.84 96778.92 96712.8
96479.51 90708.19 89949.14 81229.06 81005.76 78239.91 77798.83
71498.49 69758.98 65200.33 64926.08 49490.75 42559.73 35673.41
14681.4 ]
###Markdown
Encoding categorical data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers =[('encoder', OneHotEncoder(),[3])], remainder = 'passthrough')
X = np.array(ct.fit_transform(x))
print(X)
###Output
[[0.0 0.0 1.0 165349.2 136897.8 471784.1]
[1.0 0.0 0.0 162597.7 151377.59 443898.53]
[0.0 1.0 0.0 153441.51 101145.55 407934.54]
[0.0 0.0 1.0 144372.41 118671.85 383199.62]
[0.0 1.0 0.0 142107.34 91391.77 366168.42]
[0.0 0.0 1.0 131876.9 99814.71 362861.36]
[1.0 0.0 0.0 134615.46 147198.87 127716.82]
[0.0 1.0 0.0 130298.13 145530.06 323876.68]
[0.0 0.0 1.0 120542.52 148718.95 311613.29]
[1.0 0.0 0.0 123334.88 108679.17 304981.62]
[0.0 1.0 0.0 101913.08 110594.11 229160.95]
[1.0 0.0 0.0 100671.96 91790.61 249744.55]
[0.0 1.0 0.0 93863.75 127320.38 249839.44]
[1.0 0.0 0.0 91992.39 135495.07 252664.93]
[0.0 1.0 0.0 119943.24 156547.42 256512.92]
[0.0 0.0 1.0 114523.61 122616.84 261776.23]
[1.0 0.0 0.0 78013.11 121597.55 264346.06]
[0.0 0.0 1.0 94657.16 145077.58 282574.31]
[0.0 1.0 0.0 91749.16 114175.79 294919.57]
[0.0 0.0 1.0 86419.7 153514.11 0.0]
[1.0 0.0 0.0 76253.86 113867.3 298664.47]
[0.0 0.0 1.0 78389.47 153773.43 299737.29]
[0.0 1.0 0.0 73994.56 122782.75 303319.26]
[0.0 1.0 0.0 67532.53 105751.03 304768.73]
[0.0 0.0 1.0 77044.01 99281.34 140574.81]
[1.0 0.0 0.0 64664.71 139553.16 137962.62]
[0.0 1.0 0.0 75328.87 144135.98 134050.07]
[0.0 0.0 1.0 72107.6 127864.55 353183.81]
[0.0 1.0 0.0 66051.52 182645.56 118148.2]
[0.0 0.0 1.0 65605.48 153032.06 107138.38]
[0.0 1.0 0.0 61994.48 115641.28 91131.24]
[0.0 0.0 1.0 61136.38 152701.92 88218.23]
[1.0 0.0 0.0 63408.86 129219.61 46085.25]
[0.0 1.0 0.0 55493.95 103057.49 214634.81]
[1.0 0.0 0.0 46426.07 157693.92 210797.67]
[0.0 0.0 1.0 46014.02 85047.44 205517.64]
[0.0 1.0 0.0 28663.76 127056.21 201126.82]
[1.0 0.0 0.0 44069.95 51283.14 197029.42]
[0.0 0.0 1.0 20229.59 65947.93 185265.1]
[1.0 0.0 0.0 38558.51 82982.09 174999.3]
[1.0 0.0 0.0 28754.33 118546.05 172795.67]
[0.0 1.0 0.0 27892.92 84710.77 164470.71]
[1.0 0.0 0.0 23640.93 96189.63 148001.11]
[0.0 0.0 1.0 15505.73 127382.3 35534.17]
[1.0 0.0 0.0 22177.74 154806.14 28334.72]
[0.0 0.0 1.0 1000.23 124153.04 1903.93]
[0.0 1.0 0.0 1315.46 115816.21 297114.46]
[1.0 0.0 0.0 0.0 135426.92 0.0]
[0.0 0.0 1.0 542.05 51743.15 0.0]
[1.0 0.0 0.0 0.0 116983.8 45173.06]]
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
print(X_train)
print(X_test)
print(y_train)
print(y_test)
###Output
[103282.38 144259.4 146121.95 77798.83 191050.39 105008.31 81229.06
97483.56 110352.25 166187.94]
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2) # precision=2 will display any numerical value with only two decimal places
print(np.concatenate((y_pred.reshape(len(y_pred),1),y_test.reshape(len(y_test),1)),1))
# print here will display two columns: the predicted profit and the real profit
# the concatenate function expects a tuple of the arrays we want to concatenate
# reshape is used to convert the horizontal arrays into vertical ones; axis is 1
# len(y_pred) is the number of rows and 1 is the number of columns
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
###Markdown
Making a single prediction (for example the profit of a startup with R&D Spend = 160000, Administration Spend = 130000, Marketing Spend = 300000 and State = 'California')
###Code
print(regressor.predict([[1, 0, 0, 160000, 130000, 300000]]))
###Output
[181566.92]
###Markdown
Therefore, our model predicts that the profit of a Californian startup which spent 160000 in R&D, 130000 in Administration and 300000 in Marketing is $ 181566,92.Important note 1: Notice that the values of the features were all input in a double pair of square brackets. That's because the "predict" method always expects a 2D array as the format of its inputs. And putting our values into a double pair of square brackets makes the input exactly a 2D array. Simply put:1,0,0,160000,130000,300000→scalars [1,0,0,160000,130000,300000]→1D array [[1,0,0,160000,130000,300000]]→2D array Important note 2: Notice also that the "California" state was not input as a string in the last column but as "1, 0, 0" in the first three columns. That's because of course the predict method expects the one-hot-encoded values of the state, and as we see in the second row of the matrix of features X, "California" was encoded as "1, 0, 0". And be careful to include these values in the first three columns, not the last three ones, because the dummy variables are always created in the first columns. Getting the final linear regression equation with the values of the coefficients
###Code
print(regressor.coef_)
print(regressor.intercept_)
###Output
[ 8.66e+01 -8.73e+02 7.86e+02 7.73e-01 3.29e-02 3.66e-02]
42467.52924853204
###Markdown
Therefore, the equation of our multiple linear regression model is:Profit=86.6×Dummy State 1−873×Dummy State 2+786×Dummy State 3+0.773×R&D Spend+0.0329×Administration+0.0366×Marketing Spend+42467.53Important Note: To get these coefficients we called the "coef_" and "intercept_" attributes from our regressor object. Attributes in Python are different from methods and usually return a simple value or an array of values.
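As a quick sanity check (a small sketch added here, assuming the `regressor` fitted above is still in memory), the equation should reproduce `regressor.predict` for any one-hot-encoded row:
###Code
import numpy as np
# rebuild the prediction for the California example (1, 0, 0 encodes California) by hand
row = np.array([1, 0, 0, 160000, 130000, 300000])
manual = regressor.intercept_ + row.dot(regressor.coef_)
print(manual)  # should match regressor.predict([[1, 0, 0, 160000, 130000, 300000]])
###Output
_____no_output_____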
###Code
###Output
_____no_output_____
###Markdown
Multiple Linear Regression Importing The Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing The Dataset
###Code
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(X)
###Output
[[165349.2 136897.8 471784.1 'New York']
[162597.7 151377.59 443898.53 'California']
[153441.51 101145.55 407934.54 'Florida']
[144372.41 118671.85 383199.62 'New York']
[142107.34 91391.77 366168.42 'Florida']
[131876.9 99814.71 362861.36 'New York']
[134615.46 147198.87 127716.82 'California']
[130298.13 145530.06 323876.68 'Florida']
[120542.52 148718.95 311613.29 'New York']
[123334.88 108679.17 304981.62 'California']
[101913.08 110594.11 229160.95 'Florida']
[100671.96 91790.61 249744.55 'California']
[93863.75 127320.38 249839.44 'Florida']
[91992.39 135495.07 252664.93 'California']
[119943.24 156547.42 256512.92 'Florida']
[114523.61 122616.84 261776.23 'New York']
[78013.11 121597.55 264346.06 'California']
[94657.16 145077.58 282574.31 'New York']
[91749.16 114175.79 294919.57 'Florida']
[86419.7 153514.11 0.0 'New York']
[76253.86 113867.3 298664.47 'California']
[78389.47 153773.43 299737.29 'New York']
[73994.56 122782.75 303319.26 'Florida']
[67532.53 105751.03 304768.73 'Florida']
[77044.01 99281.34 140574.81 'New York']
[64664.71 139553.16 137962.62 'California']
[75328.87 144135.98 134050.07 'Florida']
[72107.6 127864.55 353183.81 'New York']
[66051.52 182645.56 118148.2 'Florida']
[65605.48 153032.06 107138.38 'New York']
[61994.48 115641.28 91131.24 'Florida']
[61136.38 152701.92 88218.23 'New York']
[63408.86 129219.61 46085.25 'California']
[55493.95 103057.49 214634.81 'Florida']
[46426.07 157693.92 210797.67 'California']
[46014.02 85047.44 205517.64 'New York']
[28663.76 127056.21 201126.82 'Florida']
[44069.95 51283.14 197029.42 'California']
[20229.59 65947.93 185265.1 'New York']
[38558.51 82982.09 174999.3 'California']
[28754.33 118546.05 172795.67 'California']
[27892.92 84710.77 164470.71 'Florida']
[23640.93 96189.63 148001.11 'California']
[15505.73 127382.3 35534.17 'New York']
[22177.74 154806.14 28334.72 'California']
[1000.23 124153.04 1903.93 'New York']
[1315.46 115816.21 297114.46 'Florida']
[0.0 135426.92 0.0 'California']
[542.05 51743.15 0.0 'New York']
[0.0 116983.8 45173.06 'California']]
###Markdown
Encoding Categorical Data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [3])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
###Output
[[0.0 0.0 1.0 165349.2 136897.8 471784.1]
[1.0 0.0 0.0 162597.7 151377.59 443898.53]
[0.0 1.0 0.0 153441.51 101145.55 407934.54]
[0.0 0.0 1.0 144372.41 118671.85 383199.62]
[0.0 1.0 0.0 142107.34 91391.77 366168.42]
[0.0 0.0 1.0 131876.9 99814.71 362861.36]
[1.0 0.0 0.0 134615.46 147198.87 127716.82]
[0.0 1.0 0.0 130298.13 145530.06 323876.68]
[0.0 0.0 1.0 120542.52 148718.95 311613.29]
[1.0 0.0 0.0 123334.88 108679.17 304981.62]
[0.0 1.0 0.0 101913.08 110594.11 229160.95]
[1.0 0.0 0.0 100671.96 91790.61 249744.55]
[0.0 1.0 0.0 93863.75 127320.38 249839.44]
[1.0 0.0 0.0 91992.39 135495.07 252664.93]
[0.0 1.0 0.0 119943.24 156547.42 256512.92]
[0.0 0.0 1.0 114523.61 122616.84 261776.23]
[1.0 0.0 0.0 78013.11 121597.55 264346.06]
[0.0 0.0 1.0 94657.16 145077.58 282574.31]
[0.0 1.0 0.0 91749.16 114175.79 294919.57]
[0.0 0.0 1.0 86419.7 153514.11 0.0]
[1.0 0.0 0.0 76253.86 113867.3 298664.47]
[0.0 0.0 1.0 78389.47 153773.43 299737.29]
[0.0 1.0 0.0 73994.56 122782.75 303319.26]
[0.0 1.0 0.0 67532.53 105751.03 304768.73]
[0.0 0.0 1.0 77044.01 99281.34 140574.81]
[1.0 0.0 0.0 64664.71 139553.16 137962.62]
[0.0 1.0 0.0 75328.87 144135.98 134050.07]
[0.0 0.0 1.0 72107.6 127864.55 353183.81]
[0.0 1.0 0.0 66051.52 182645.56 118148.2]
[0.0 0.0 1.0 65605.48 153032.06 107138.38]
[0.0 1.0 0.0 61994.48 115641.28 91131.24]
[0.0 0.0 1.0 61136.38 152701.92 88218.23]
[1.0 0.0 0.0 63408.86 129219.61 46085.25]
[0.0 1.0 0.0 55493.95 103057.49 214634.81]
[1.0 0.0 0.0 46426.07 157693.92 210797.67]
[0.0 0.0 1.0 46014.02 85047.44 205517.64]
[0.0 1.0 0.0 28663.76 127056.21 201126.82]
[1.0 0.0 0.0 44069.95 51283.14 197029.42]
[0.0 0.0 1.0 20229.59 65947.93 185265.1]
[1.0 0.0 0.0 38558.51 82982.09 174999.3]
[1.0 0.0 0.0 28754.33 118546.05 172795.67]
[0.0 1.0 0.0 27892.92 84710.77 164470.71]
[1.0 0.0 0.0 23640.93 96189.63 148001.11]
[0.0 0.0 1.0 15505.73 127382.3 35534.17]
[1.0 0.0 0.0 22177.74 154806.14 28334.72]
[0.0 0.0 1.0 1000.23 124153.04 1903.93]
[0.0 1.0 0.0 1315.46 115816.21 297114.46]
[1.0 0.0 0.0 0.0 135426.92 0.0]
[0.0 0.0 1.0 542.05 51743.15 0.0]
[1.0 0.0 0.0 0.0 116983.8 45173.06]]
###Markdown
Splitting The Dataset Into The Training Set & The Test Set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training The Multiple Linear Regression Model On The Training Set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting The Test Set Results
###Code
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(X)
###Output
[[165349.2 136897.8 471784.1 'New York']
[162597.7 151377.59 443898.53 'California']
[153441.51 101145.55 407934.54 'Florida']
[144372.41 118671.85 383199.62 'New York']
[142107.34 91391.77 366168.42 'Florida']
[131876.9 99814.71 362861.36 'New York']
[134615.46 147198.87 127716.82 'California']
[130298.13 145530.06 323876.68 'Florida']
[120542.52 148718.95 311613.29 'New York']
[123334.88 108679.17 304981.62 'California']
[101913.08 110594.11 229160.95 'Florida']
[100671.96 91790.61 249744.55 'California']
[93863.75 127320.38 249839.44 'Florida']
[91992.39 135495.07 252664.93 'California']
[119943.24 156547.42 256512.92 'Florida']
[114523.61 122616.84 261776.23 'New York']
[78013.11 121597.55 264346.06 'California']
[94657.16 145077.58 282574.31 'New York']
[91749.16 114175.79 294919.57 'Florida']
[86419.7 153514.11 0.0 'New York']
[76253.86 113867.3 298664.47 'California']
[78389.47 153773.43 299737.29 'New York']
[73994.56 122782.75 303319.26 'Florida']
[67532.53 105751.03 304768.73 'Florida']
[77044.01 99281.34 140574.81 'New York']
[64664.71 139553.16 137962.62 'California']
[75328.87 144135.98 134050.07 'Florida']
[72107.6 127864.55 353183.81 'New York']
[66051.52 182645.56 118148.2 'Florida']
[65605.48 153032.06 107138.38 'New York']
[61994.48 115641.28 91131.24 'Florida']
[61136.38 152701.92 88218.23 'New York']
[63408.86 129219.61 46085.25 'California']
[55493.95 103057.49 214634.81 'Florida']
[46426.07 157693.92 210797.67 'California']
[46014.02 85047.44 205517.64 'New York']
[28663.76 127056.21 201126.82 'Florida']
[44069.95 51283.14 197029.42 'California']
[20229.59 65947.93 185265.1 'New York']
[38558.51 82982.09 174999.3 'California']
[28754.33 118546.05 172795.67 'California']
[27892.92 84710.77 164470.71 'Florida']
[23640.93 96189.63 148001.11 'California']
[15505.73 127382.3 35534.17 'New York']
[22177.74 154806.14 28334.72 'California']
[1000.23 124153.04 1903.93 'New York']
[1315.46 115816.21 297114.46 'Florida']
[0.0 135426.92 0.0 'California']
[542.05 51743.15 0.0 'New York']
[0.0 116983.8 45173.06 'California']]
###Markdown
Encoding categorical data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [3])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
###Output
[[0.0 0.0 1.0 165349.2 136897.8 471784.1]
[1.0 0.0 0.0 162597.7 151377.59 443898.53]
[0.0 1.0 0.0 153441.51 101145.55 407934.54]
[0.0 0.0 1.0 144372.41 118671.85 383199.62]
[0.0 1.0 0.0 142107.34 91391.77 366168.42]
[0.0 0.0 1.0 131876.9 99814.71 362861.36]
[1.0 0.0 0.0 134615.46 147198.87 127716.82]
[0.0 1.0 0.0 130298.13 145530.06 323876.68]
[0.0 0.0 1.0 120542.52 148718.95 311613.29]
[1.0 0.0 0.0 123334.88 108679.17 304981.62]
[0.0 1.0 0.0 101913.08 110594.11 229160.95]
[1.0 0.0 0.0 100671.96 91790.61 249744.55]
[0.0 1.0 0.0 93863.75 127320.38 249839.44]
[1.0 0.0 0.0 91992.39 135495.07 252664.93]
[0.0 1.0 0.0 119943.24 156547.42 256512.92]
[0.0 0.0 1.0 114523.61 122616.84 261776.23]
[1.0 0.0 0.0 78013.11 121597.55 264346.06]
[0.0 0.0 1.0 94657.16 145077.58 282574.31]
[0.0 1.0 0.0 91749.16 114175.79 294919.57]
[0.0 0.0 1.0 86419.7 153514.11 0.0]
[1.0 0.0 0.0 76253.86 113867.3 298664.47]
[0.0 0.0 1.0 78389.47 153773.43 299737.29]
[0.0 1.0 0.0 73994.56 122782.75 303319.26]
[0.0 1.0 0.0 67532.53 105751.03 304768.73]
[0.0 0.0 1.0 77044.01 99281.34 140574.81]
[1.0 0.0 0.0 64664.71 139553.16 137962.62]
[0.0 1.0 0.0 75328.87 144135.98 134050.07]
[0.0 0.0 1.0 72107.6 127864.55 353183.81]
[0.0 1.0 0.0 66051.52 182645.56 118148.2]
[0.0 0.0 1.0 65605.48 153032.06 107138.38]
[0.0 1.0 0.0 61994.48 115641.28 91131.24]
[0.0 0.0 1.0 61136.38 152701.92 88218.23]
[1.0 0.0 0.0 63408.86 129219.61 46085.25]
[0.0 1.0 0.0 55493.95 103057.49 214634.81]
[1.0 0.0 0.0 46426.07 157693.92 210797.67]
[0.0 0.0 1.0 46014.02 85047.44 205517.64]
[0.0 1.0 0.0 28663.76 127056.21 201126.82]
[1.0 0.0 0.0 44069.95 51283.14 197029.42]
[0.0 0.0 1.0 20229.59 65947.93 185265.1]
[1.0 0.0 0.0 38558.51 82982.09 174999.3]
[1.0 0.0 0.0 28754.33 118546.05 172795.67]
[0.0 1.0 0.0 27892.92 84710.77 164470.71]
[1.0 0.0 0.0 23640.93 96189.63 148001.11]
[0.0 0.0 1.0 15505.73 127382.3 35534.17]
[1.0 0.0 0.0 22177.74 154806.14 28334.72]
[0.0 0.0 1.0 1000.23 124153.04 1903.93]
[0.0 1.0 0.0 1315.46 115816.21 297114.46]
[1.0 0.0 0.0 0.0 135426.92 0.0]
[0.0 0.0 1.0 542.05 51743.15 0.0]
[1.0 0.0 0.0 0.0 116983.8 45173.06]]
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred), 1), y_test.reshape(len(y_test), 1)), 1))
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
###Markdown
Multiple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Encoding categorical data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [3])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
print(X_train)
print(y_train)
###Output
[[0.0 1.0 0.0 55493.95 103057.49 214634.81]
[0.0 0.0 1.0 46014.02 85047.44 205517.64]
[0.0 1.0 0.0 75328.87 144135.98 134050.07]
[1.0 0.0 0.0 46426.07 157693.92 210797.67]
[0.0 1.0 0.0 91749.16 114175.79 294919.57]
[0.0 1.0 0.0 130298.13 145530.06 323876.68]
[0.0 1.0 0.0 119943.24 156547.42 256512.92]
[0.0 0.0 1.0 1000.23 124153.04 1903.93]
[0.0 0.0 1.0 542.05 51743.15 0.0]
[0.0 0.0 1.0 65605.48 153032.06 107138.38]
[0.0 0.0 1.0 114523.61 122616.84 261776.23]
[0.0 1.0 0.0 61994.48 115641.28 91131.24]
[1.0 0.0 0.0 63408.86 129219.61 46085.25]
[1.0 0.0 0.0 78013.11 121597.55 264346.06]
[1.0 0.0 0.0 23640.93 96189.63 148001.11]
[1.0 0.0 0.0 76253.86 113867.3 298664.47]
[0.0 0.0 1.0 15505.73 127382.3 35534.17]
[0.0 0.0 1.0 120542.52 148718.95 311613.29]
[1.0 0.0 0.0 91992.39 135495.07 252664.93]
[1.0 0.0 0.0 64664.71 139553.16 137962.62]
[0.0 0.0 1.0 131876.9 99814.71 362861.36]
[0.0 0.0 1.0 94657.16 145077.58 282574.31]
[1.0 0.0 0.0 28754.33 118546.05 172795.67]
[1.0 0.0 0.0 0.0 116983.8 45173.06]
[1.0 0.0 0.0 162597.7 151377.59 443898.53]
[0.0 1.0 0.0 93863.75 127320.38 249839.44]
[1.0 0.0 0.0 44069.95 51283.14 197029.42]
[0.0 0.0 1.0 77044.01 99281.34 140574.81]
[1.0 0.0 0.0 134615.46 147198.87 127716.82]
[0.0 1.0 0.0 67532.53 105751.03 304768.73]
[0.0 1.0 0.0 28663.76 127056.21 201126.82]
[0.0 0.0 1.0 78389.47 153773.43 299737.29]
[0.0 0.0 1.0 86419.7 153514.11 0.0]
[1.0 0.0 0.0 123334.88 108679.17 304981.62]
[1.0 0.0 0.0 38558.51 82982.09 174999.3]
[0.0 1.0 0.0 1315.46 115816.21 297114.46]
[0.0 0.0 1.0 144372.41 118671.85 383199.62]
[0.0 0.0 1.0 165349.2 136897.8 471784.1]
[1.0 0.0 0.0 0.0 135426.92 0.0]
[1.0 0.0 0.0 22177.74 154806.14 28334.72]]
[ 96778.92 96479.51 105733.54 96712.8 124266.9 155752.6 132602.65
64926.08 35673.41 101004.64 129917.04 99937.59 97427.84 126992.93
71498.49 118474.03 69758.98 152211.77 134307.35 107404.34 156991.12
125370.37 78239.91 14681.4 191792.06 141585.52 89949.14 108552.04
156122.51 108733.99 90708.19 111313.02 122776.86 149759.96 81005.76
49490.75 182901.99 192261.83 42559.73 65200.33]
###Markdown
Training the Multiple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
rg = LinearRegression()
rg.fit(X_train, y_train)
predict = rg.predict(X_test)
print(rg.coef_)
print(rg.intercept_)
###Output
[103015.20159795 132582.27760816 132447.73845175 71976.09851258
178537.48221057 116161.24230167 67851.69209676 98791.73374687
113969.43533014 167921.06569552]
[ 8.66383692e+01 -8.72645791e+02 7.86007422e+02 7.73467193e-01
3.28845975e-02 3.66100259e-02]
42467.52924853204
###Markdown
Predicting the Test set results
###Code
np.set_printoptions(precision=2)
print(np.concatenate((predict.reshape(len(predict), 1),y_test.reshape(len(y_test), 1)),axis=1))
###Output
[[103015.2 103282.38]
[132582.28 144259.4 ]
[132447.74 146121.95]
[ 71976.1 77798.83]
[178537.48 191050.39]
[116161.24 105008.31]
[ 67851.69 81229.06]
[ 98791.73 97483.56]
[113969.44 110352.25]
[167921.07 166187.94]]
|
5/z2.ipynb | ###Markdown
IntroductionThis script shows how to use the SciKit package for data classification. Two examples are considered: the IRIS dataset and the TITANIC dataset (to be downloaded from https://www.kaggle.com/c/titanic; specifically, the file https://www.kaggle.com/c/titanic/download/train.csv is needed).
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import tree
###Output
_____no_output_____
###Markdown
1. First datasetThe IRIS dataset
###Code
# load the dataset
from sklearn import datasets
iris = datasets.load_iris()
data = pd.DataFrame(iris.data, columns=iris.feature_names)
data['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)
data.head()
# split the dataset into flower features (X) and the class label (y)
y = data['species']
X = data.drop('species', axis = 1)
# build the decision tree classifier
t = tree.DecisionTreeClassifier()
t = t.fit(X, y)
# save the decision tree to a .dot file
# this file can be converted to a .pdf with graphviz using the command:
# dot -Tpdf iris.dot -o iris.pdf
with open("output/iris.dot", "w") as f:
tree.export_graphviz(t, out_file=f, feature_names=X.columns)
# evaluate the classifier on the training data
t.score(X, y)
# It would be fairer to evaluate the classifier on data that was not used to build the
# classifier. That is why it is worth splitting the whole dataset into two parts: training data
# and test data.
data['train'] = np.random.uniform(0, 1, len(data))
data_train = data[data['train'] <= 0.65]
data_test = data[data['train'] > 0.65]
y = data_train['species']
X = data_train.drop('species', axis = 1)
t = tree.DecisionTreeClassifier()
t = t.fit(X, y)
print(t.score(X, y))
y = data_test['species']
X = data_test.drop('species', axis = 1)
print(t.score(X, y))
###Output
1.0
0.9692307692307692
###Markdown
2. Second datasetThe TITANIC dataset (to be downloaded from https://www.kaggle.com/c/titanic; specifically, the file https://www.kaggle.com/c/titanic/download/train.csv is needed).
###Code
# load the dataset from a file
data = pd.read_csv("data/titanic.csv")
data.head()
# remove attributes irrelevant for classification from the dataset
data = data.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis = 1)
data = data.dropna()
data.head()
# re-encode the nominal attributes in the dataset as numeric codes
data['Sex'] = pd.Categorical(data['Sex']).codes
data['Embarked'] = pd.Categorical(data['Embarked']).codes
data.head()
###Output
_____no_output_____
###Markdown
a) By default the Gini index is used, but entropy can also be selected as the split criterion.
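For reference (standard definitions, not something computed in this notebook), for a node with class proportions $p_i$ the two criteria are $Gini = 1 - \sum_i p_i^2$ and $Entropy = -\sum_i p_i \log_2 p_i$.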
###Code
# split the dataset into passenger features (X) and the class label (y)
y = data['Survived']
X = data.drop('Survived', axis = 1)
# build the decision tree classifiers
tGini = tree.DecisionTreeClassifier()
tGini = tGini.fit(X, y)
tEnt = tree.DecisionTreeClassifier(criterion='entropy')
tEnt = tEnt.fit(X, y)
# save the decision trees to .dot files
# these files can be converted to .pdf with graphviz using the command:
# dot -Tpdf titanic.dot -o titanic.pdf
with open("output/titanicGini.dot", "w") as f:
tree.export_graphviz(tGini, out_file=f, feature_names=X.columns)
with open("output/titanicEnt.dot", "w") as f:
tree.export_graphviz(tEnt, out_file=f, feature_names=X.columns)
# evaluate the classifiers on the training data
print(tGini.score(X, y))
print(tEnt.score(X, y))
###Output
0.9859550561797753
0.9859550561797753
###Markdown
b)
###Code
# It would be fairer to evaluate the classifier on data that was not used to build the
# classifier. That is why it is worth splitting the whole dataset into two parts: training data
# and test data (list 5, exercise 2b).
data['train'] = np.random.uniform(0, 1, len(data))
data_train = data[data['train'] <= 0.65]
data_test = data[data['train'] > 0.65]
y = data_train['Survived']
X = data_train.drop('Survived', axis = 1)
tGini = tree.DecisionTreeClassifier()
tGini = tGini.fit(X, y)
tEnt = tree.DecisionTreeClassifier(criterion='entropy')
tEnt = tEnt.fit(X, y)
print(tGini.score(X, y))
print(tEnt.score(X, y))
y = data_test['Survived']
X = data_test.drop('Survived', axis = 1)
print(tGini.score(X, y))
print(tEnt.score(X, y))
###Output
1.0
1.0
0.39147286821705424
0.5348837209302325
###Markdown
c)
###Code
print(tGini.tree_.max_depth)
print(tEnt.tree_.max_depth)
for i in range(5, 15, 2):
data['train'] = np.random.uniform(0, 1, len(data))
data_train = data[data['train'] <= 0.65]
data_test = data[data['train'] > 0.65]
y = data_train['Survived']
X = data_train.drop('Survived', axis = 1)
tGini = tree.DecisionTreeClassifier(max_depth=i)
tGini = tGini.fit(X, y)
tEnt = tree.DecisionTreeClassifier(criterion='entropy', max_depth=i)
tEnt = tEnt.fit(X, y)
print("\ni: ", i)
print('Train Gini: \t', tGini.score(X, y))
print('Train Ent: \t',tEnt.score(X, y))
y = data_test['Survived']
X = data_test.drop('Survived', axis = 1)
print('Test Gini: \t',tGini.score(X, y))
print('Test Ent: \t',tEnt.score(X, y))
###Output
i: 5
Train Gini: 0.8742004264392325
Train Ent: 0.8784648187633263
Test Gini: 0.7818930041152263
Test Ent: 0.7860082304526749
i: 7
Train Gini: 0.9209401709401709
Train Ent: 0.9252136752136753
Test Gini: 0.7459016393442623
Test Ent: 0.7704918032786885
i: 9
Train Gini: 0.9215686274509803
Train Ent: 0.9281045751633987
Test Gini: 0.7351778656126482
Test Ent: 0.83399209486166
i: 11
Train Gini: 0.9710467706013363
Train Ent: 0.9576837416481069
Test Gini: 0.7262357414448669
Test Ent: 0.7376425855513308
i: 13
Train Gini: 0.995475113122172
Train Ent: 0.9773755656108597
Test Gini: 0.6851851851851852
Test Ent: 0.7518518518518519
###Markdown
d)
###Code
help(tree.DecisionTreeClassifier)
data['train'] = np.random.uniform(0, 1, len(data))
data_train = data[data['train'] <= 0.65]
data_test = data[data['train'] > 0.65]
y = data_train['Survived']
X = data_train.drop('Survived', axis = 1)
tGini = tree.DecisionTreeClassifier(max_depth=7, min_samples_split=16, min_samples_leaf=8)
tGini = tGini.fit(X, y)
tEnt = tree.DecisionTreeClassifier(criterion='entropy', max_depth=7, min_samples_split=16, min_samples_leaf=8)
tEnt = tEnt.fit(X, y)
print("\ni: ", i)
print('Train Gini: \t', tGini.score(X, y))
print('Train Ent: \t',tEnt.score(X, y))
y = data_test['Survived']
X = data_test.drop('Survived', axis = 1)
print('Test Gini: \t',tGini.score(X, y))
print('Test Ent: \t',tEnt.score(X, y))
###Output
i: 13
Train Gini: 0.8768898488120951
Train Ent: 0.8725701943844493
Test Gini: 0.7710843373493976
Test Ent: 0.7670682730923695
###Markdown
e)
###Code
def titanicCrossValidate(data, model, name=''):
data['train'] = (np.random.uniform(0,1, len(data)) * 10).astype(int)
err = 0
print(f"\n*** {name} ***")
for i in range(10):
data_train = data[data['train'] != i]
data_test = data[data['train'] == i]
y_train = data_train['Survived']
X_train = data_train.drop('Survived', axis = 1)
model.fit(X_train, y_train)
y_test = data_test['Survived']
X_test = data_test.drop('Survived', axis = 1)
err += model.score(X_test, y_test)
# print(f'\t{model.score(X_test, y_test)}')
print(err/10)
return err
models = [
(tree.DecisionTreeClassifier(), 'Gini default'),
(tree.DecisionTreeClassifier(criterion='entropy'), 'Entropy default'),
(tree.DecisionTreeClassifier(max_depth=9), 'Gini, max_depth = 9'),
(tree.DecisionTreeClassifier(criterion='entropy', max_depth=9), 'Entropy, max_depth = 9'),
(
tree.DecisionTreeClassifier(max_depth=7, min_samples_split=16, min_samples_leaf=8),
'Gini, max_depth = 9, min_samples_split = 16, min_samples_leaf = 8'
),
(
tree.DecisionTreeClassifier(criterion='entropy', max_depth=7, min_samples_split=16, min_samples_leaf=8),
'Entropy, max_depth = 9, min_samples_split = 16, min_samples_leaf = 8'
)
]
for model, name in models:
titanicCrossValidate(data, model, name)
###Output
*** Gini default ***
0.7504595818738656
*** Entropy default ***
0.7630843465935913
*** Gini, max_depth = 9 ***
0.7559372729120015
*** Entropy, max_depth = 9 ***
0.7835942631158017
*** Gini, max_depth = 9, min_samples_split = 16, min_samples_leaf = 8 ***
0.8182552073318036
*** Entropy, max_depth = 9, min_samples_split = 16, min_samples_leaf = 8 ***
0.8023882273516479
|
notebooks/chap15.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 15Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problemI'll use a `State` object to store the initial temperature.
###Code
init = State(T=90)
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
###Output
_____no_output_____
###Markdown
The update function implements Newton's law of cooling.
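In discrete form, each step applies $T_{n+1} = T_n - r \, (T_n - T_{env}) \, dt$, which is exactly the update implemented below.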
###Code
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update_func(init, 0, coffee)
###Output
_____no_output_____
###Markdown
Here's a version of `run_simulation` that uses `linrange` to make an array of time steps.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
results = run_simulation(coffee, update_func)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
And here's the final temperature:
###Code
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
EncapsulationBefore we go on, let's define a function to initialize `System` objects with relevant parameters:
###Code
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.By trial and error, find a value for `r` that makes the final temperature close to 20 C.
###Code
milk = make_system(5,0.135, 50, 15)
results = run_simulation(milk, update_func)
T_final = get_last_value(results.T)
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
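###Markdown
To confirm the trial-and-error value of `r` (a small check added here, assuming `results` from the cell above), we can print the final temperature directly:
###Code
T_final = get_last_value(results.T)
print(T_final)  # should be close to 20 C with r = 0.135
###Output
_____no_output_____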
###Markdown
Mark and Recapture Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "mark and recapture" experiments, in which we sample individuals from a population, mark them somehow, and then take a second sample from the same population. Seeing how many individuals in the second sample are marked, we can estimate the size of the population.Experiments like this were originally used in ecology, but turn out to be useful in many other fields. Examples in this chapter include software engineering and epidemiology.Also, in this chapter we'll work with models that have three parameters, so we'll extend the joint distributions we've been using to three dimensions.But first, grizzly bears. The Grizzly Bear ProblemIn 1996 and 1997 researchers deployed bear traps in locations in British Columbia and Alberta, Canada, in an effort to estimate the population of grizzly bears. They describe the experiment in [this article](https://www.researchgate.net/publication/229195465_Estimating_Population_Size_of_Grizzly_Bears_Using_Hair_Capture_DNA_Profiling_and_Mark-Recapture_Analysis).The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.During the first session, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each session. With these assumptions we can compute the probability of the data for a range of possible populations.As an example, let's suppose that the actual population of bears is 100.After the first session, 23 of the 100 bears have been identified.During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified? I'll define* $N$: actual population size, 100.* $K$: number of bears identified in the first session, 23.* $n$: number of bears observed in the second session, 19 in the example.* $k$: number of bears in the second session that were previously identified, 4.For given values of $N$, $K$, and $n$, the probability of finding $k$ previously-identified bears is given by the [hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution):$$\binom{K}{k} \binom{N-K}{n-k}/ \binom{N}{n}$$where the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), $\binom{K}{k}$, is the number of subsets of size $k$ we can choose from a population of size $K$. To understand why, consider: * The denominator, $\binom{N}{n}$, is the number of subsets of $n$ we could choose from a population of $N$ bears.* The numerator is the number of subsets that contain $k$ bears from the previously identified $K$ and $n-k$ from the previously unseen $N-K$.SciPy provides `hypergeom`, which we can use to compute this probability for a range of values of $k$.
###Code
import numpy as np
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
###Output
_____no_output_____
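###Markdown
As a sanity check (added here, not part of the original text), we can evaluate the binomial-coefficient formula directly for $k=4$ and confirm that it agrees with `hypergeom.pmf`.
###Code
# Added check: the hypergeometric probability computed from binomial
# coefficients should match SciPy's hypergeom for k=4.
from math import comb
k = 4
manual = comb(K, k) * comb(N - K, n - k) / comb(N, n)
manual, hypergeom(N, K, n).pmf(k)
###Output
_____no_output_____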
###Markdown
The result is the distribution of $k$ with given parameters $N$, $K$, and $n$.Here's what it looks like.
###Code
import matplotlib.pyplot as plt
from utils import decorate
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 100)')
###Output
_____no_output_____
###Markdown
The most likely value of $k$ is 4, which is the value actually observed in the experiment. That suggests that $N=100$ is a reasonable estimate of the population, given this data.We've computed the distribution of $k$ given $N$, $K$, and $n$.Now let's go the other way: given $K$, $n$, and $k$, how can we estimate the total population, $N$? The UpdateAs a starting place, let's suppose that, prior to this study, an expert estimates that the local bear population is between 50 and 500, and equally likely to be any value in that range.I'll use `make_uniform` to make a uniform distribution of integers in this range.
###Code
import numpy as np
from utils import make_uniform
qs = np.arange(50, 501)
prior_N = make_uniform(qs, name='N')
prior_N.shape
###Output
_____no_output_____
###Markdown
So that's our prior.To compute the likelihood of the data, we can use `hypergeom` with constants `K` and `n`, and a range of values of `N`.
###Code
Ns = prior_N.qs
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
###Output
_____no_output_____
###Markdown
The most likely value is 109.
###Code
posterior_N.max_prob()
###Output
_____no_output_____
###Markdown
But the distribution is skewed to the right, so the posterior mean is substantially higher.
###Code
posterior_N.mean()
###Output
_____no_output_____
###Markdown
And the credible interval is quite wide.
###Code
posterior_N.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
This solution is relatively simple, but it turns out we can do a little better if we model the unknown probability of observing a bear explicitly. Two Parameter ModelNext we'll try a model with two parameters: the number of bears, `N`, and the probability of observing a bear, `p`.We'll assume that the probability is the same in both rounds, which is probably reasonable in this case because it is the same kind of trap in the same place.We'll also assume that the probabilities are independent; that is, the probability a bear is observed in the second round does not depend on whether it was observed in the first round. This assumption might be less reasonable, but for now it is a necessary simplification.Here are the counts again:
###Code
K = 23
n = 19
k = 4
###Output
_____no_output_____
###Markdown
For this model, I'll express the data in a notation that will make it easier to generalize to more than two rounds: * `k10` is the number of bears observed in the first round but not the second,* `k01` is the number of bears observed in the second round but not the first, and* `k11` is the number of bears observed in both rounds.Here are their values.
###Code
k10 = 23 - 4
k01 = 19 - 4
k11 = 4
###Output
_____no_output_____
###Markdown
Suppose we know the actual values of `N` and `p`. We can use them to compute the likelihood of this data.For example, suppose we know that `N=100` and `p=0.2`.We can use `N` to compute `k00`, which is the number of unobserved bears.
###Code
N = 100
observed = k01 + k10 + k11
k00 = N - observed
k00
###Output
_____no_output_____
###Markdown
For the update, it will be convenient to store the data as a list that represents the number of bears in each category.
###Code
x = [k00, k01, k10, k11]
x
###Output
_____no_output_____
###Markdown
Now, if we know `p=0.2`, we can compute the probability a bear falls in each category. For example, the probability of being observed in both rounds is `p*p`, and the probability of being unobserved in both rounds is `q*q` (where `q=1-p`).
###Code
p = 0.2
q = 1-p
y = [q*q, q*p, p*q, p*p]
y
###Output
_____no_output_____
###Markdown
Now the probability of the data is given by the [multinomial distribution](https://en.wikipedia.org/wiki/Multinomial_distribution):$$\frac{N!}{\prod x_i!} \prod y_i^{x_i}$$where $N$ is actual population, $x$ is a sequence with the counts in each category, and $y$ is a sequence of probabilities for each category.SciPy provides `multinomial`, which provides `pmf`, which computes this probability.Here is the probability of the data for these values of `N` and `p`.
###Code
from scipy.stats import multinomial
likelihood = multinomial.pmf(x, N, y)
likelihood
###Output
_____no_output_____
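###Markdown
As another added check, we can evaluate the multinomial formula directly from factorials and confirm that it agrees with `multinomial.pmf`.
###Code
# Added check: compute N! / prod(x_i!) * prod(y_i ** x_i) by hand and
# compare with SciPy's multinomial PMF.
from math import factorial, prod
coef = factorial(N)
for xi in x:
    coef //= factorial(xi)
coef * prod(yi**xi for xi, yi in zip(x, y)), multinomial.pmf(x, N, y)
###Output
_____no_output_____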
###Markdown
That's the likelihood if we know `N` and `p`, but of course we don't. So we'll choose prior distributions for `N` and `p`, and use the likelihoods to update it. The PriorWe'll use `prior_N` again for the prior distribution of `N`, and a uniform prior for the probability of observing a bear, `p`:
###Code
qs = np.linspace(0, 0.99, num=100)
prior_p = make_uniform(qs, name='p')
###Output
_____no_output_____
###Markdown
We can make a joint distribution in the usual way.
###Code
from utils import make_joint
joint_prior = make_joint(prior_p, prior_N)
joint_prior.shape
###Output
_____no_output_____
###Markdown
The result is a Pandas `DataFrame` with values of `N` down the rows and values of `p` across the columns.However, for this problem it will be convenient to represent the prior distribution as a 1-D `Series` rather than a 2-D `DataFrame`.We can convert from one format to the other using `stack`.
###Code
from empiricaldist import Pmf
joint_pmf = Pmf(joint_prior.stack())
joint_pmf.head(3)
type(joint_pmf)
type(joint_pmf.index)
joint_pmf.shape
###Output
_____no_output_____
###Markdown
The result is a `Pmf` whose index is a `MultiIndex`.A `MultiIndex` can have more than one column; in this example, the first column contains values of `N` and the second column contains values of `p`.The `Pmf` has one row (and one prior probability) for each possible pair of parameters `N` and `p`.So the total number of rows is the product of the lengths of `prior_N` and `prior_p`.Now we have to compute the likelihood of the data for each pair of parameters. The UpdateTo allocate space for the likelihoods, it is convenient to make a copy of `joint_pmf`:
###Code
likelihood = joint_pmf.copy()
###Output
_____no_output_____
###Markdown
As we loop through the pairs of parameters, we compute the likelihood of the data as in the previous section, and then store the result as an element of `likelihood`.
###Code
observed = k01 + k10 + k11
for N, p in joint_pmf.index:
k00 = N - observed
x = [k00, k01, k10, k11]
q = 1-p
y = [q*q, q*p, p*q, p*p]
likelihood[N, p] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
Now we can compute the posterior in the usual way.
###Code
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
We'll use `plot_contour` again to visualize the joint posterior distribution.But remember that the posterior distribution we just computed is represented as a `Pmf`, which is a `Series`, and `plot_contour` expects a `DataFrame`.Since we used `stack` to convert from a `DataFrame` to a `Series`, we can use `unstack` to go the other way.
###Code
joint_posterior = posterior_pmf.unstack()
###Output
_____no_output_____
###Markdown
And here's what the result looks like.
###Code
from utils import plot_contour
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
###Output
_____no_output_____
###Markdown
The most likely values of `N` are near 100, as in the previous model. The most likely values of `p` are near 0.2.The shape of this contour indicates that these parameters are correlated. If `p` is near the low end of the range, the most likely values of `N` are higher; if `p` is near the high end of the range, `N` is lower. Now that we have a posterior `DataFrame`, we can extract the marginal distributions in the usual way.
###Code
from utils import marginal
posterior2_p = marginal(joint_posterior, 0)
posterior2_N = marginal(joint_posterior, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior distribution for `p`:
###Code
posterior2_p.plot(color='C1')
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
###Output
_____no_output_____
###Markdown
The most likely values are near 0.2. Here's the posterior distribution for `N` based on the two-parameter model, along with the posterior we got using the one-parameter (hypergeometric) model.
###Code
posterior_N.plot(label='one-parameter model', color='C4')
posterior2_N.plot(label='two-parameter model', color='C1')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
###Output
_____no_output_____
###Markdown
With the two-parameter model, the mean is a little lower and the 90% credible interval is a little narrower.
###Code
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
print(posterior2_N.mean(),
posterior2_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
The two-parameter model yields a narrower posterior distribution for `N`, compared to the one-parameter model, because it takes advantage of an additional source of information: the consistency of the two observations.To see how this helps, consider a scenario where `N` is relatively low, like 138 (the posterior mean of the two-parameter model).
###Code
N1 = 138
###Output
_____no_output_____
###Markdown
Given that we saw 23 bears during the first trial and 19 during the second, we can estimate the corresponding value of `p`.
###Code
mean = (23 + 19) / 2
p = mean/N1
p
###Output
_____no_output_____
###Markdown
With these parameters, how much variability do you expect in the number of bears from one trial to the next? We can quantify that by computing the standard deviation of the binomial distribution with these parameters.
###Code
from scipy.stats import binom
binom(N1, p).std()
###Output
_____no_output_____
###Markdown
Now let's consider a second scenario where `N` is 173, the posterior mean of the one-parameter model. The corresponding value of `p` is lower.
###Code
N2 = 173
p = mean/N2
p
###Output
_____no_output_____
###Markdown
In this scenario, the variation we expect to see from one trial to the next is higher.
###Code
binom(N2, p).std()
###Output
_____no_output_____
###Markdown
So if the number of bears we observe is the same in both trials, that would be evidence for lower values of `N`, where we expect more consistency.If the number of bears is substantially different between the two trials, that would be evidence for higher values of `N`.In the actual data, the difference between the two trials is low, which is why the posterior mean of the two-parameter model is lower.The two-parameter model takes advantage of additional information, which is why the credible interval is narrower. Joint and Marginal DistributionsMarginal distributions are called "marginal" because in a common visualization they appear in the margins of the plot.Seaborn provides a class called `JointGrid` that creates this visualization.The following function uses it to show the joint and marginal distributions in a single plot.
###Code
import pandas as pd
from seaborn import JointGrid
def joint_plot(joint, **options):
"""Show joint and marginal distributions.
joint: DataFrame that represents a joint distribution
options: passed to JointGrid
"""
# get the names of the parameters
x = joint.columns.name
x = 'x' if x is None else x
y = joint.index.name
y = 'y' if y is None else y
# make a JointGrid with minimal data
data = pd.DataFrame({x:[0], y:[0]})
g = JointGrid(x=x, y=y, data=data, **options)
# replace the contour plot
g.ax_joint.contour(joint.columns,
joint.index,
joint,
cmap='viridis')
# replace the marginals
marginal_x = marginal(joint, 0)
g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
marginal_y = marginal(joint, 1)
g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)
joint_plot(joint_posterior)
###Output
_____no_output_____
###Markdown
A `JointGrid` is a concise way to represent the joint and marginal distributions visually. The Lincoln Index ProblemIn [an excellent blog post](http://www.johndcook.com/blog/2010/07/13/lincoln-index/), John D. Cook wrote about the Lincoln index, which is a way to estimate thenumber of errors in a document (or program) by comparing results fromtwo independent testers.Here's his presentation of the problem:> "Suppose you have a tester who finds 20 bugs in your program. Youwant to estimate how many bugs are really in the program. You knowthere are at least 20 bugs, and if you have supreme confidence in yourtester, you may suppose there are around 20 bugs. But maybe yourtester isn't very good. Maybe there are hundreds of bugs. How can youhave any idea how many bugs there are? There's no way to know with onetester. But if you have two testers, you can get a good idea, even ifyou don't know how skilled the testers are."Suppose the first tester finds 20 bugs, the second finds 15, and theyfind 3 in common; how can we estimate the number of bugs?This problem is similar to the Grizzly Bear problem, so I'll represent the data in the same way.
###Code
k10 = 20 - 3
k01 = 15 - 3
k11 = 3
###Output
_____no_output_____
###Markdown
But in this case it is probably not reasonable to assume that the testers have the same probability of finding a bug.So I'll define two parameters, `p0` for the probability that the first tester finds a bug, and `p1` for the probability that the second tester finds a bug.I will continue to assume that the probabilities are independent, which is like assuming that all bugs are equally easy to find. That might not be a good assumption, but let's stick with it for now.As an example, suppose we know that the probabilities are 0.2 and 0.15.
###Code
p0, p1 = 0.2, 0.15
###Output
_____no_output_____
###Markdown
We can compute the array of probabilities, `y`, like this:
###Code
def compute_probs(p0, p1):
"""Computes the probability for each of 4 categories."""
q0 = 1-p0
q1 = 1-p1
return [q0*q1, q0*p1, p0*q1, p0*p1]
y = compute_probs(p0, p1)
y
###Output
_____no_output_____
###Markdown
With these probabilities, there is a 68% chance that neither tester finds the bug and a 3% chance that both do. Pretending that these probabilities are known, we can compute the posterior distribution for `N`. Here's a prior distribution that's uniform from 32 to 350 bugs (32 is the number of distinct bugs already observed, so the total cannot be smaller).
###Code
qs = np.arange(32, 350, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
###Output
_____no_output_____
###Markdown
I'll put the data in an array, with 0 as a place-keeper for the unknown value `k00`.
###Code
data = np.array([0, k01, k10, k11])
###Output
_____no_output_____
###Markdown
And here are the likelihoods for each value of `N`, with `ps` as a constant.
###Code
likelihood = prior_N.copy()
observed = data.sum()
x = data.copy()
for N in prior_N.qs:
x[0] = N - observed
likelihood[N] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PMF',
title='Posterior marginal distribution of n with known p1, p2')
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
With the assumption that `p0` and `p1` are known to be `0.2` and `0.15`, the posterior mean is 102 with 90% credible interval (77, 127).But this result is based on the assumption that we know the probabilities, and we don't. Three-parameter ModelWhat we need is a model with three parameters: `N`, `p0`, and `p1`.We'll use `prior_N` again for the prior distribution of `N`, and here are the priors for `p0` and `p1`:
###Code
qs = np.linspace(0, 1, num=51)
prior_p0 = make_uniform(qs, name='p0')
prior_p1 = make_uniform(qs, name='p1')
###Output
_____no_output_____
###Markdown
Now we have to assemble them into a joint prior with three dimensions.I'll start by putting the first two into a `DataFrame`.
###Code
joint2 = make_joint(prior_p0, prior_N)
joint2.shape
###Output
_____no_output_____
###Markdown
Now I'll stack them, as in the previous example, and put the result in a `Pmf`.
###Code
joint2_pmf = Pmf(joint2.stack())
joint2_pmf.head(3)
###Output
_____no_output_____
###Markdown
We can use `make_joint` again to add in the third parameter.
###Code
joint3 = make_joint(prior_p1, joint2_pmf)
joint3.shape
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with values of `N` and `p0` in a `MultiIndex` that goes down the rows and values of `p1` in an index that goes across the columns.
###Code
joint3.head(3)
###Output
_____no_output_____
###Markdown
Now I'll apply `stack` again:
###Code
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head(3)
###Output
_____no_output_____
###Markdown
The result is a `Pmf` with a three-column `MultiIndex` containing all possible triplets of parameters.The number of rows is the product of the number of values in all three priors, which is almost 170,000.
###Code
joint3_pmf.shape
###Output
_____no_output_____
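###Markdown
As a quick added check, the number of rows should equal the product of the lengths of the three priors.
###Code
# Added check: one row per combination of N, p0, and p1.
len(prior_N) * len(prior_p0) * len(prior_p1)
###Output
_____no_output_____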
###Markdown
That's still small enough to be practical, but it will take longer to compute the likelihoods than in the previous examples.Here's the loop that computes the likelihoods; it's similar to the one in the previous section:
###Code
likelihood = joint3_pmf.copy()
observed = data.sum()
x = data.copy()
for N, p0, p1 in joint3_pmf.index:
x[0] = N - observed
y = compute_probs(p0, p1)
likelihood[N, p0, p1] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_pmf = joint3_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
Now, to extract the marginal distributions, we could unstack the joint posterior as we did in the previous section.But `Pmf` provides a version of `marginal` that works with a `Pmf` rather than a `DataFrame`.Here's how we use it to get the posterior distribution for `N`.
###Code
posterior_N = posterior_pmf.marginal(0)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PDF',
title='Posterior marginal distributions of N')
posterior_N.mean()
###Output
_____no_output_____
###Markdown
The posterior mean is 105 bugs, which suggests that there are still many bugs the testers have not found.Here are the posteriors for `p0` and `p1`.
###Code
posterior_p1 = posterior_pmf.marginal(1)
posterior_p2 = posterior_pmf.marginal(2)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
posterior_p2.mean(), posterior_p2.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
Comparing the posterior distributions, the tester who found more bugs probably has a higher probability of finding bugs. The posterior means are about 23% and 18%. But the distributions overlap, so we should not be too sure. This is the first example we've seen with three parameters.As the number of parameters increases, the number of combinations increases quickly.The method we've been using so far, enumerating all possible combinations, becomes impractical if the number of parameters is more than 3 or 4.However there are other methods that can handle models with many more parameters, as we'll see in >. SummaryThe problems in this chapter are examples of [mark and recapture](https://en.wikipedia.org/wiki/Mark_and_recapture) experiments, which are used in ecology to estimate animal populations. They also have applications in engineering, as in the Lincoln index problem. And in the exercises you'll see that they are used in epidemiology, too.This chapter introduces two new probability distributions:* The hypergeometric distribution is a variation of the binomial distribution in which samples are drawn from the population without replacement. * The multinomial distribution is a generalization of the binomial distribution where there are more than two possible outcomes.Also in this chapter, we saw the first example of a model with three parameters. We'll see more in subsequent chapters. Exercises **Exercise:** [In an excellent paper](http://chao.stat.nthu.edu.tw/wordpress/paper/110.pdf), Anne Chao explains how mark and recapture experiments are used in epidemiology to estimate the prevalence of a disease in a human population based on multiple incomplete lists of cases.One of the examples in that paper is a study "to estimate the number of people who were infected by hepatitis in an outbreak that occurred in and around a college in northern Taiwan from April to July 1995."Three lists of cases were available:1. 135 cases identified using a serum test. 2. 122 cases reported by local hospitals. 3. 126 cases reported on questionnaires collected by epidemiologists.In this exercise, we'll use only the first two lists; in the next exercise we'll bring in the third list.Make a joint prior and update it using this data, then compute the posterior mean of `N` and a 90% credible interval. The following array contains 0 as a place-holder for the unknown value of `k00`, followed by known values of `k01`, `k10`, and `k11`.
###Code
data2 = np.array([0, 73, 86, 49])
###Output
_____no_output_____
###Markdown
These data indicate that there are 73 cases on the second list that are not on the first, 86 cases on the first list that are not on the second, and 49 cases on both lists.To keep things simple, we'll assume that each case has the same probability of appearing on each list. So we'll use a two-parameter model where `N` is the total number of cases and `p` is the probability that any case appears on any list.Here are priors you can start with (but feel free to modify them).
###Code
qs = np.arange(200, 500, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
qs = np.linspace(0, 0.98, num=50)
prior_p = make_uniform(qs, name='p')
prior_p.head(3)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Now let's do the version of the problem with all three lists. Here's the data from Chao's paper:

```
Hepatitis A virus list
P    Q    E    Data
1    1    1    k111 = 28
1    1    0    k110 = 21
1    0    1    k101 = 17
1    0    0    k100 = 69
0    1    1    k011 = 18
0    1    0    k010 = 55
0    0    1    k001 = 63
0    0    0    k000 = ??
```

Write a loop that computes the likelihood of the data for each pair of parameters, then update the prior and compute the posterior mean of `N`. How does it compare to the results using only the first two lists? Here's the data in a NumPy array (in reverse order).
###Code
data3 = np.array([0, 63, 55, 18, 69, 17, 21, 28])
###Output
_____no_output_____
###Markdown
Again, the first value is a place-keeper for the unknown `k000`. The second value is `k001`, which means there are 63 cases that appear on the third list but not the first two. And the last value is `k111`, which means there are 28 cases that appear on all three lists.In the two-list version of the problem we computed `ps` by enumerating the combinations of `p` and `q`.
###Code
q = 1-p
ps = [q*q, q*p, p*q, p*p]
###Output
_____no_output_____
###Markdown
We could do the same thing for the three-list version, computing the probability for each of the eight categories. But we can generalize it by recognizing that we are computing the cartesian product of `p` and `q`, repeated once for each list. And we can use the following function (based on [this StackOverflow answer](https://stackoverflow.com/questions/58242078/cartesian-product-of-arbitrary-lists-in-pandas/58242079)) to compute Cartesian products:
###Code
def cartesian_product(*args, **options):
"""Cartesian product of sequences.
args: any number of sequences
options: passes to `MultiIndex.from_product`
returns: DataFrame with one column per sequence
"""
index = pd.MultiIndex.from_product(args, **options)
return pd.DataFrame(index=index).reset_index()
###Output
_____no_output_____
###Markdown
Here's an example with `p=0.2`:
###Code
p = 0.2
t = (1-p, p)
df = cartesian_product(t, t, t)
df
###Output
_____no_output_____
###Markdown
To compute the probability for each category, we take the product across the columns:
###Code
y = df.prod(axis=1)
y
###Output
_____no_output_____
###Markdown
Now you finish it off from there.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 15Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) Assignment: Final Exam Part I Completed by: Philip Tanofsky
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problemI'll use a `State` object to store the initial temperature.
###Code
init = State(T=90)
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
###Output
_____no_output_____
###Markdown
The update function implements Newton's law of cooling.
###Code
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update_func(init, 0, coffee)
###Output
_____no_output_____
###Markdown
Here's a version of `run_simulation` that uses `linrange` to make an array of time steps.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: TimeFrame with the simulated state at each time step
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
results = run_simulation(coffee, update_func)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
And here's the final temperature:
###Code
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
EncapsulationBefore we go on, let's define a function to initialize `System` objects with relevant parameters:
###Code
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.By trial and error, find a value for `r` that makes the final temperature close to 20 C.
###Code
# Solution goes here
coffee = make_system(T_init=5, r=0.01, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
Trial and Error Below
###Code
coffee = make_system(T_init=5, r=0.05, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.08, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.2, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.18, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.17, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.15, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.13, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.14, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
coffee = make_system(T_init=5, r=0.133, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 15Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problemI'll use a `State` object to store the initial temperature.
###Code
init = State(T=90)
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
###Output
_____no_output_____
###Markdown
The update function implements Newton's law of cooling.
###Code
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update_func(init, 0, coffee)
###Output
_____no_output_____
###Markdown
Here's a version of `run_simulation` that uses `linrange` to make an array of time steps.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
results = run_simulation(coffee, update_func)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
And here's the final temperature:
###Code
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
EncapsulationBefore we go on, let's define a function to initialize `System` objects with relevant parameters:
###Code
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.By trial and error, find a value for `r` that makes the final temperature close to 20 C.
###Code
coffee2 = make_system(T_init=5, r=0.01, volume=50, t_end=15)
results = run_simulation(coffee2, update_func)
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
from modsim import *
sweep = SweepSeries()  # record the final temperature for each value of r
for i in linspace(0.1325, 0.135, 11):
    coffee3 = make_system(T_init=5, r=i, volume=50, t_end=15)
    results = run_simulation(coffee3, update_func)
    sweep[i] = get_last_value(results.T)
    print(i, sweep[i])
###Output
0.1325 19.983997003354382
0.13275 19.992694155542228
0.133 20.00135627897414
0.13325 20.00998350469278
0.1335 20.01857596328815
0.13375 20.027133784899036
0.134 20.03565709921443
0.13425 20.044146035474974
0.1345 20.052600722474356
0.13475 20.06102128856076
0.135 20.069407861638226
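###Markdown
Instead of reading the answer off the sweep, we could solve for `r` with a root finder. The following is an added sketch that assumes SciPy is available; it brackets the root between r=0.1 and r=0.2, where the final temperature falls below and above 20 C respectively.
###Code
# Added sketch (assumes scipy is installed): find the value of r that
# makes the final temperature exactly 20 C.
from scipy.optimize import brentq
def error_func(r):
    """Return the final temperature minus the 20 C target for a given r."""
    system = make_system(T_init=5, r=r, volume=50, t_end=15)
    results = run_simulation(system, update_func)
    return get_last_value(results.T) - 20
brentq(error_func, 0.1, 0.2)
###Output
_____no_output_____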
###Markdown
The optimal r value is 0.133
###Code
coffee4 = make_system(T_init=5, r=0.133, volume=50, t_end=15)
results = run_simulation(coffee4, update_func)
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 15Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problemI'll use a `State` object to store the initial temperature.
###Code
init = State(T=90)
import pandas as pd  # pandas may already be available via modsim, but import it explicitly
coffee_test = pd.Series({'init':init,'volume':300,'r':0.01,'T_env':22,'t_0':0,'t_end':30,'dt':1})
coffee_test['init'].T  # the System object `coffee` is not defined yet, so inspect the test Series instead
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
###Output
_____no_output_____
###Markdown
The update function implements Newton's law of cooling.
###Code
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update_func(init, 0, coffee)
###Output
_____no_output_____
###Markdown
Here's a version of `run_simulation` that uses `linrange` to make an array of time steps.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: TimeFrame with the simulated state at each time step
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
results = run_simulation(coffee, update_func)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
And here's the final temperature:
###Code
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
EncapsulationBefore we go on, let's define a function to initialize `System` objects with relevant parameters:
###Code
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.By trial and error, find a value for `r` that makes the final temperature close to 20 C.
###Code
# Solution goes here
milk = make_system(T_init=5, r=0.133, volume=50, t_end=15)  # r=0.133 brings the final temperature close to 20 C
results = run_simulation(milk, update_func)
T_final = get_last_value(results.T)
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
         ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
Mark and Recapture Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "mark and recapture" experiments, in which we sample individuals from a population, mark them somehow, and then take a second sample from the same population. Seeing how many individuals in the second sample are marked, we can estimate the size of the population.Experiments like this were originally used in ecology, but turn out to be useful in many other fields. Examples in this chapter include software engineering and epidemiology.Also, in this chapter we'll work with models that have three parameters, so we'll extend the joint distributions we've been using to three dimensions.But first, grizzly bears. The Grizzly Bear ProblemIn 1996 and 1997 researchers deployed bear traps in locations in British Columbia and Alberta, Canada, in an effort to estimate the population of grizzly bears. They describe the experiment in "[Estimating Population Size of Grizzly Bears Using Hair Capture, DNA Profiling, and Mark-Recapture Analysis](https://www.researchgate.net/publication/229195465_Estimating_Population_Size_of_Grizzly_Bears_Using_Hair_Capture_DNA_Profiling_and_Mark-Recapture_Analysis)".The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.During the first session, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each session. With these assumptions we can compute the probability of the data for a range of possible populations.As an example, let's suppose that the actual population of bears is 100.After the first session, 23 of the 100 bears have been identified.During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified? I'll define* $N$: actual population size, 100.* $K$: number of bears identified in the first session, 23.* $n$: number of bears observed in the second session, 19 in the example.* $k$: the number of bears in the second session that had previously been identified, 4.For given values of $N$, $K$, and $n$, the probability of finding $k$ previously-identified bears is given by the [hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution):$${K \choose k}{N-K \choose n-k}/{N \choose n}$$where the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), ${K \choose k}$, is the number of subsets of size $k$ we can choose from a population of size $K$. To understand why, consider: * The denominator, ${ N \choose n}$, is the number of subsets of $n$ we could choose from a population of $N$ bears.* The numerator is the number of subsets that contain $k$ bears from the previously identified $K$ and $n-k$ from the previously unseen $N-K$.SciPy provides `hypergeom`, which we can use to compute this probability for a range of values of $k$.
###Code
import numpy as np
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
###Output
_____no_output_____
###Markdown
The result is the distribution of $k$ with given parameters $N$, $K$, and $n$.Here's what it looks like.
###Code
import matplotlib.pyplot as plt
from utils import decorate
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 100)')
###Output
_____no_output_____
###Markdown
The most likely value of $k$ is 4, which is the value actually observed in the experiment. That suggests that $N=100$ is a reasonable estimate of the population, given this data.We've computed the distribution of $k$ given $N$, $K$, and $n$.Now let's go the other way: given $K$, $n$, and $k$, how can we estimate the total population, $N$? The UpdateAs a starting place, let's suppose that, prior to this study, an expert estimates that the local bear population is between 50 and 500, and equally likely to be any value in that range.I'll use `make_uniform` to make a uniform distribution of integers in this range.
###Code
import numpy as np
from utils import make_uniform
qs = np.arange(50, 501)
prior_N = make_uniform(qs, name='N')
prior_N.shape
###Output
_____no_output_____
###Markdown
So that's our prior.To compute the likelihood of the data, we can use `hypergeom` with constants `K` and `n`, and a range of values of `N`.
###Code
Ns = prior_N.qs
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4', label='_nolegend')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
###Output
_____no_output_____
###Markdown
The most likely value is 109.
###Code
posterior_N.max_prob()
###Output
_____no_output_____
###Markdown
But the distribution is skewed to the right, so the posterior mean is substantially higher.
###Code
posterior_N.mean()
###Output
_____no_output_____
###Markdown
And the credible interval is quite wide.
###Code
posterior_N.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
This solution is relatively simple, but it turns out we can do a little better if we model the unknown probability of observing a bear explicitly. Two Parameter ModelNext we'll try a model with two parameters: the number of bears, `N`, and the probability of observing a bear, `p`.We'll assume that the probability is the same in both rounds, which is probably reasonable in this case because it is the same kind of trap in the same place.We'll also assume that the probabilities are independent; that is, the probability a bear is observed in the second round does not depend on whether it was observed in the first round. This assumption might be less reasonable, but for now it is a necessary simplification.Here are the counts again:
###Code
K = 23
n = 19
k = 4
###Output
_____no_output_____
###Markdown
For this model, I'll express the data in different notation: * `k10` is the number of bears observed in the first round but not the second,* `k01` is the number of bears observed in the second round but not the first, and* `k11` is the number of bears observed in both rounds.Here are their values.
###Code
k10 = 23 - 4
k01 = 19 - 4
k11 = 4
###Output
_____no_output_____
###Markdown
Suppose we know the actual values of `N` and `p`. We can use them to compute the likelihood of this data.For example, suppose we know that `N=100` and `p=0.2`.We can use `N` to compute `k00`, which is the number of unobserved bears.
###Code
N = 100
observed = k01 + k10 + k11
k00 = N - observed
k00
###Output
_____no_output_____
###Markdown
For the update, it will be convenient to store the data as a list that represents the number of bears in each category.
###Code
x = [k00, k01, k10, k11]
x
###Output
_____no_output_____
###Markdown
Now, if we know `p=0.2`, we can compute the probability a bear falls in each category. For example, the probability of being observed in both rounds is `p*p`, and the probability of being unobserved in both rounds is `q*q` (where `q=1-p`).
###Code
p = 0.2
q = 1-p
y = [q*q, q*p, p*q, p*p]
y
###Output
_____no_output_____
###Markdown
Now the probability of the data is given by the [multinomial distribution](https://en.wikipedia.org/wiki/Multinomial_distribution):$$\frac{N!}{\prod x_i!} \prod y_i^{x_i}$$where $N$ is actual population, $x$ is a sequence with the counts in each category, and $y$ is a sequence of probabilities for each category.SciPy provides `multinomial`, which provides `pmf`, which computes this probability.Here is the probability of the data for these values of `N` and `p`.
###Code
from scipy.stats import multinomial
likelihood = multinomial.pmf(x, N, y)
likelihood
###Output
_____no_output_____
###Markdown
That's the likelihood if we know `N` and `p`, but of course we don't. So we'll choose prior distributions for `N` and `p`, and use the likelihoods to update it. The PriorWe'll use `prior_N` again for the prior distribution of `N`, and a uniform prior for the probability of observing a bear, `p`:
###Code
qs = np.linspace(0, 0.99, num=100)
prior_p = make_uniform(qs, name='p')
###Output
_____no_output_____
###Markdown
We can make a joint distribution in the usual way.
###Code
from utils import make_joint
joint_prior = make_joint(prior_p, prior_N)
joint_prior.shape
###Output
_____no_output_____
###Markdown
The result is a Pandas `DataFrame` with values of `N` down the rows and values of `p` across the columns.However, for this problem it will be convenient to represent the prior distribution as a 1-D `Series` rather than a 2-D `DataFrame`.We can convert from one format to the other using `stack`.
###Code
from empiricaldist import Pmf
joint_pmf = Pmf(joint_prior.stack())
joint_pmf.head(3)
type(joint_pmf)
type(joint_pmf.index)
joint_pmf.shape
###Output
_____no_output_____
###Markdown
The result is a `Pmf` whose index is a `MultiIndex`.A `MultiIndex` can have more than one column; in this example, the first column contains values of `N` and the second column contains values of `p`.The `Pmf` has one row (and one prior probability) for each possible pair of parameters `N` and `p`.So the total number of rows is the product of the lengths of `prior_N` and `prior_p`.Now we have to compute the likelihood of the data for each pair of parameters. The UpdateTo allocate space for the likelihoods, it is convenient to make a copy of `joint_pmf`:
###Code
likelihood = joint_pmf.copy()
###Output
_____no_output_____
###Markdown
As we loop through the pairs of parameters, we compute the likelihood of the data as in the previous section, and then store the result as an element of `likelihood`.
###Code
observed = k01 + k10 + k11
for N, p in joint_pmf.index:
k00 = N - observed
x = [k00, k01, k10, k11]
q = 1-p
y = [q*q, q*p, p*q, p*p]
likelihood[N, p] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
Now we can compute the posterior in the usual way.
###Code
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
We'll use `plot_contour` again to visualize the joint posterior distribution.But remember that the posterior distribution we just computed is represented as a `Pmf`, which is a `Series`, and `plot_contour` expects a `DataFrame`.Since we used `stack` to convert from a `DataFrame` to a `Series`, we can use `unstack` to go the other way.
###Code
joint_posterior = posterior_pmf.unstack()
###Output
_____no_output_____
###Markdown
And here's what the result looks like.
###Code
from utils import plot_contour
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
###Output
_____no_output_____
###Markdown
The most likely values of `N` are near 100, as in the previous model. The most likely values of `p` are near 0.2.The shape of this contour indicates that these parameters are correlated. If `p` is near the low end of the range, the most likely values of `N` are higher; if `p` is near the high end of the range, `N` is lower. Now that we have a posterior `DataFrame`, we can extract the marginal distributions in the usual way.
###Code
from utils import marginal
posterior2_p = marginal(joint_posterior, 0)
posterior2_N = marginal(joint_posterior, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior distribution for `p`:
###Code
posterior2_p.plot(color='C1')
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
###Output
_____no_output_____
###Markdown
The most likely values are near 0.2. Here's the posterior distribution for `N` based on the two-parameter model, along with the posterior we got using the one-parameter (hypergeometric) model.
###Code
posterior_N.plot(label='one-parameter model', color='C4')
posterior2_N.plot(label='two-parameter model', color='C1')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
###Output
_____no_output_____
###Markdown
The mean is a little lower and the 90% credible interval is a little narrower.
###Code
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
print(posterior2_N.mean(),
posterior2_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
The two-parameter model yields a narrower posterior distribution for `N`, compared to the one-parameter model, because it takes advantage of an additional source of information: the consistency of the two observations.To see how this helps, consider a scenario where `N` is relatively low, like 138 (the posterior mean of the two-parameter model).
###Code
N1 = 138
###Output
_____no_output_____
###Markdown
Given that we saw 23 bears during the first trial and 19 during the second, we can estimate the corresponding value of `p`.
###Code
mean = (23 + 19) / 2
p = mean/N1
p
###Output
_____no_output_____
###Markdown
With these parameters, how much variability do you expect in the number of bears from one trial to the next? We can quantify that by computing the standard deviation of the binomial distribution with these parameters.
###Code
from scipy.stats import binom
binom(N1, p).std()
###Output
_____no_output_____
###Markdown
Now let's consider a second scenario where `N` is 173, the posterior mean of the one-parameter model. The corresponding value of `p` is lower.
###Code
N2 = 173
p = mean/N2
p
###Output
_____no_output_____
###Markdown
In this scenario, the variation we expect to see from one trial to the next is higher.
###Code
binom(N2, p).std()
###Output
_____no_output_____
###Markdown
So if the number of bears we observe is the same in both trials, that would be evidence for lower values of `N`, where we expect more consistency.If the number of bears is substantially different between the two trials, that would be evidence for higher values of `N`.In the actual data, the difference between the two trials is low, which is why the posterior mean of the two-parameter model is lower.The two-parameter model takes advantage of additional information, which is why the credible interval is narrower. Joint and marginal distributionsMarginal distributions are called "marginal" because in a common visualization they appear in the margins of the plot.Seaborn provides a class called `JointGrid` that creates this visualization.The following function uses it to show the joint and marginal distributions in a single plot.
###Code
import pandas as pd
from seaborn import JointGrid
def joint_plot(joint, **options):
"""Show joint and marginal distributions.
joint: DataFrame that represents a joint distribution
options: passed to JointGrid
"""
# get the names of the parameters
x = joint.columns.name
x = 'x' if x is None else x
y = joint.index.name
y = 'y' if y is None else y
# make a JointGrid with minimal data
data = pd.DataFrame({x:[0], y:[0]})
g = JointGrid(x=x, y=y, data=data, **options)
# replace the contour plot
g.ax_joint.contour(joint.columns,
joint.index,
joint,
cmap='viridis')
# replace the marginals
marginal_x = marginal(joint, 0)
g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
marginal_y = marginal(joint, 1)
g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)
joint_plot(joint_posterior)
###Output
_____no_output_____
###Markdown
A `JointGrid` is a concise way to represent the joint and marginal distributions visually. The Lincoln index problemIn [an excellent blog post](http://www.johndcook.com/blog/2010/07/13/lincoln-index/), John D. Cook wrote about the Lincoln index, which is a way to estimate the number of errors in a document (or program) by comparing results from two independent testers. Here's his presentation of the problem:>"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There's no way to know with one tester. But if you have two testers, you can get a good idea, even if you don't know how skilled the testers are."Suppose the first tester finds 20 bugs, the second finds 15, and they find 3 in common; how can we estimate the number of bugs?This problem is similar to the Grizzly Bear problem, so I'll represent the data in the same way.
###Code
k10 = 20 - 3
k01 = 15 - 3
k11 = 3
###Output
_____no_output_____
###Markdown
But in this case it is probably not reasonable to assume that the testers have the same probability of finding a bug.So I'll define two parameters, `p0` for the probability that the first tester finds a bug, and `p1` for the probability that the second tester finds a bug.I will continue to assume that the probabilities are independent, which is like assuming that all bugs are equally easy to find. That might not be a good assumption, but let's stick with it for now.As an example, suppose we know that the probabilities are 0.2 and 0.15.
###Code
p0, p1 = 0.2, 0.15
###Output
_____no_output_____
###Markdown
We can compute the array of probabilities, `y`, like this:
###Code
def compute_probs(p0, p1):
"""Computes the probability for each of 4 categories."""
q0 = 1-p0
q1 = 1-p1
return [q0*q1, q0*p1, p0*q1, p0*p1]
y = compute_probs(p0, p1)
y
###Output
_____no_output_____
###Markdown
With these probabilities, there is a 68% chance that neither tester finds the bug and a 3% chance that both do. Pretending that these probabilities are known, we can compute the posterior distribution for `N`.Here's a prior distribution that's uniform from 32 to 350 bugs.
###Code
qs = np.arange(32, 350, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
###Output
_____no_output_____
###Markdown
I'll put the data in an array, with 0 as a place-keeper for the unknown value `k00`.
###Code
data = np.array([0, k01, k10, k11])
###Output
_____no_output_____
###Markdown
And here are the likelihoods for each value of `N`, with `ps` as a constant.
###Code
likelihood = prior_N.copy()
observed = data.sum()
x = data.copy()
for N in prior_N.qs:
x[0] = N - observed
likelihood[N] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PMF',
title='Posterior marginal distribution of N with known p0, p1')
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
With the assumption that `p0` and `p1` are known to be `0.2` and `0.15`, the posterior mean is 102 with 90% credible interval (77, 127).But this result is based on the assumption that we know the probabilities, and we don't. Three-parameter modelWhat we need is a model with three parameters: `N`, `p0`, and `p1`.We'll use `prior_N` again for the prior distribution of `N`, and here are the priors for `p0` and `p1`:
###Code
qs = np.linspace(0, 1, num=51)
prior_p0 = make_uniform(qs, name='p0')
prior_p1 = make_uniform(qs, name='p1')
###Output
_____no_output_____
###Markdown
Now we have to assemble them into a joint prior with three dimensions.I'll start by putting the first two into a `DataFrame`.
###Code
joint2 = make_joint(prior_p0, prior_N)
joint2.shape
###Output
_____no_output_____
###Markdown
Now I'll stack them, as in the previous example, and put the result in a `Pmf`.
###Code
joint2_pmf = Pmf(joint2.stack())
joint2_pmf.head(3)
###Output
_____no_output_____
###Markdown
We can use `make_joint` again to add in the third parameter.
###Code
joint3 = make_joint(prior_p1, joint2_pmf)
joint3.shape
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with values of `N` and `p0` in a `MultiIndex` that goes down the rows and values of `p1` in an index that goes across the columns.
###Code
joint3.head(3)
###Output
_____no_output_____
###Markdown
Now I'll apply `stack` again:
###Code
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head(3)
###Output
_____no_output_____
###Markdown
The result is a `Pmf` with a three-column `MultiIndex` containing all possible triplets of parameters.The number of rows is the product of the number of values in all three priors, which is almost 170,000.
###Code
joint3_pmf.shape
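# A quick consistency check (not in the original text, just an assumption
# worth verifying): the number of rows should equal the product of the
# three prior lengths, since the MultiIndex holds every (N, p0, p1) triplet.
len(prior_N) * len(prior_p0) * len(prior_p1)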
###Output
_____no_output_____
###Markdown
That's still small enough to be practical, but it will take longer to compute the likelihoods than in the previous examples.Here's the loop that computes the likelihoods; it's similar to the one in the previous section:
###Code
likelihood = joint3_pmf.copy()
observed = data.sum()
x = data.copy()
for N, p0, p1 in joint3_pmf.index:
x[0] = N - observed
y = compute_probs(p0, p1)
likelihood[N, p0, p1] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_pmf = joint3_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
Now, to extract the marginal distributions, we could unstack the joint posterior as we did in the previous section.But `Pmf` provides a version of `marginal` that works with a `Pmf` rather than a `DataFrame`.Here's how we use it to get the posterior distribution for `N`.
###Code
posterior_N = posterior_pmf.marginal(0)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PDF',
title='Posterior marginal distributions of N')
posterior_N.mean()
###Output
_____no_output_____
###Markdown
The posterior mean is 105 bugs, which suggests that there are still many bugs the testers have not found.Here are the posteriors for `p0` and `p1`.
###Code
posterior_p1 = posterior_pmf.marginal(1)
posterior_p2 = posterior_pmf.marginal(2)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
posterior_p2.mean(), posterior_p2.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
Comparing the posterior distributions, the tester who found more bugs probably has a higher probability of finding bugs. The posterior means are about 23% and 18%. But the distributions overlap, so we should not be too sure. This is the first example we've seen with three parameters.As the number of parameters increases, the number of combinations increases quickly.The method we've been using so far, enumerating all possible combinations, becomes impractical if the number of parameters is more than 3 or 4.However there are other methods that can handle models with many more parameters, as we'll see in Chapter xxx. SummaryThe problems in this chapter are examples of "[mark and recapture](https://en.wikipedia.org/wiki/Mark_and_recapture)" experiments, which are used in ecology to estimate animal populations. They also have applications in engineering, as in the Lincoln index problem. And in the exercises you'll see that they are used in epidemiology, too.This chapter introduces two new probability distributions:* The hypergeometric distribution is a variation of the binomial distribution in which samples are drawn from the population without replacement. * The multinomial distribution is a generalization of the binomial distribution where there are more than two possible outcomes.Also in this chapter, we saw the first example of a model with three parameters. We'll see more in subsequent chapters. Exercises **Exercise:** [In an excellent paper](http://chao.stat.nthu.edu.tw/wordpress/paper/110.pdf), Anne Chao explains how mark and recapture experiments are used in epidemiology to estimate the prevalence of a disease in a human population based on multiple incomplete lists of cases.One of the examples in that paper is a study "to estimate the number of people who were infected by hepatitis in an outbreak that occurred in and around a college in northern Taiwan from April to July 1995."Three lists of cases were available:1. 135 cases identified using a serum test. 2. 122 cases reported by local hospitals. 3. 126 cases reported on questionnaires collected by epidemiologists.In this exercise, we'll use only the first two lists; in the next exercise we'll bring in the third list.Make a joint prior and update it using this data, then compute the posterior mean of `N` and a 90% credible interval. The following array contains 0 as a place-holder for the unknown value of `k00`, followed by known values of `k01`, `k10`, and `k11`.
###Code
data2 = np.array([0, 73, 86, 49])
###Output
_____no_output_____
###Markdown
These data indicate that there are 73 cases on the second list that are not on the first, 86 cases on the first list that are not on the second, and 49 cases on both lists.To keep things simple, we'll assume that each case has the same probability of appearing on each list. So we'll use a two-parameter model where `N` is the total number of cases and `p` is the probability that any case appears on any list.Here are priors you can start with (but feel free to modify them).
###Code
qs = np.arange(200, 500, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
qs = np.linspace(0, 0.98, num=50)
prior_p = make_uniform(qs, name='p')
prior_p.head(3)
# Solution goes here
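# One possible sketch (an assumption, not the book's official solution):
# mirror the two-parameter update used for the grizzly bears above,
# using the hepatitis priors and data2 defined in this cell.
joint_pmf2 = Pmf(make_joint(prior_p, prior_N).stack())
likelihood2 = joint_pmf2.copy()
observed = data2.sum()
x = data2.copy()
for N, p in joint_pmf2.index:
    x[0] = N - observed
    q = 1 - p
    y = [q*q, q*p, p*q, p*p]
    # guard against candidate N smaller than the number of observed cases
    likelihood2[N, p] = multinomial.pmf(x, N, y) if x[0] >= 0 else 0
posterior_pmf2 = joint_pmf2 * likelihood2
posterior_pmf2.normalize()
posterior_N2 = posterior_pmf2.marginal(0)
posterior_N2.mean(), posterior_N2.credible_interval(0.9)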
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Now let's do the version of the problem with all three lists. Here's the data from Chao's paper:

```
Hepatitis A virus list
P Q E   Data
1 1 1   k111 = 28
1 1 0   k110 = 21
1 0 1   k101 = 17
1 0 0   k100 = 69
0 1 1   k011 = 18
0 1 0   k010 = 55
0 0 1   k001 = 63
0 0 0   k000 = ??
```

Write a loop that computes the likelihood of the data for each pair of parameters, then update the prior and compute the posterior mean of `N`. How does it compare to the results using only the first two lists? Here's the data in a NumPy array (in reverse order).
###Code
data3 = np.array([0, 63, 55, 18, 69, 17, 21, 28])
###Output
_____no_output_____
###Markdown
Again, the first value is a place-keeper for the unknown `k000`. The second value is `k001`, which means there are 63 cases that appear on the third list but not the first two. And the last value is `k111`, which means there are 28 cases that appear on all three lists.In the two-list version of the problem we computed `ps` by enumerating the combinations of `p` and `q`.
###Code
q = 1-p
ps = [q*q, q*p, p*q, p*p]
###Output
_____no_output_____
###Markdown
We could do the same thing for the three-list version, computing the probability for each of the eight categories. But we can generalize it by recognizing that we are computing the cartesian product of `p` and `q`, repeated once for each list.And we can use the following function (based on [this StackOverflow answer](https://stackoverflow.com/questions/58242078/cartesian-product-of-arbitrary-lists-in-pandas/58242079)) to compute Cartesian products:
###Code
def cartesian_product(*args, **options):
"""Cartesian product of sequences.
args: any number of sequences
options: passed to `MultiIndex.from_product`
returns: DataFrame with one column per sequence
"""
index = pd.MultiIndex.from_product(args, **options)
return pd.DataFrame(index=index).reset_index()
###Output
_____no_output_____
###Markdown
Here's an example with `p=0.2`:
###Code
p = 0.2
t = (1-p, p)
df = cartesian_product(t, t, t)
df
###Output
_____no_output_____
###Markdown
To compute the probability for each category, we take the product across the columns:
###Code
y = df.prod(axis=1)
y
###Output
_____no_output_____
###Markdown
Now you finish it off from there.
###Code
# Solution goes here
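# One possible sketch (an assumption, not the book's official solution):
# reuse the two-parameter hepatitis priors and use cartesian_product to get
# the eight category probabilities for each candidate p.
joint_pmf3 = Pmf(make_joint(prior_p, prior_N).stack())
likelihood3 = joint_pmf3.copy()
observed = data3.sum()
x = data3.copy()
for N, p in joint_pmf3.index:
    x[0] = N - observed
    t = (1-p, p)
    y = cartesian_product(t, t, t).prod(axis=1)
    # guard against candidate N smaller than the number of observed cases
    likelihood3[N, p] = multinomial.pmf(x, N, y) if x[0] >= 0 else 0
posterior_pmf3 = joint_pmf3 * likelihood3
posterior_pmf3.normalize()
posterior_pmf3.marginal(0).mean()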
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 15. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problemI'll use a `State` object to store the initial temperature.
###Code
init = State(T=90)
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
###Output
_____no_output_____
###Markdown
The update function implements Newton's law of cooling.
###Code
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update_func(init, 0, coffee)
###Output
_____no_output_____
###Markdown
Here's a version of `run_simulation` that uses `linrange` to make an array of time steps.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
results = run_simulation(coffee, update_func)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
And here's the final temperature:
###Code
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
EncapsulationBefore we go on, let's define a function to initialize `System` objects with relevant parameters:
###Code
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.By trial and error, find a value for `r` that makes the final temperature close to 20 C.
###Code
# Solution goes here
coffee = make_system(T_init=5, r=0.01, volume=50, t_end=15)
results = run_simulation(coffee, update_func);
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
coffee = make_system(T_init=5, r=0.133, volume=50, t_end=15)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
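# Optional extra (not in the original): sweep a few candidate values of r
# to confirm which one brings the final milk temperature closest to 20 C.
for r_candidate in [0.1, 0.12, 0.13, 0.133, 0.14]:
    system = make_system(T_init=5, r=r_candidate, volume=50, t_end=15)
    sweep_results = run_simulation(system, update_func)
    print(r_candidate, get_last_value(sweep_results.T))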
###Output
_____no_output_____
###Markdown
Mark and Recapture. Think Bayes, Second Edition. Copyright 2020 Allen B. Downey. License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "mark and recapture" experiments, in which we sample individuals from a population, mark them somehow, and then take a second sample from the same population. Seeing how many individuals in the second sample are marked, we can estimate the size of the population.Experiments like this were originally used in ecology, but turn out to be useful in many other fields. Examples in this chapter include software engineering and epidemiology.Also, in this chapter we'll work with models that have three parameters, so we'll extend the joint distributions we've been using to three dimensions.But first, grizzly bears. The Grizzly Bear ProblemIn 1996 and 1997 researchers deployed bear traps in locations in British Columbia and Alberta, Canada, in an effort to estimate the population of grizzly bears. They describe the experiment in [this article](https://www.researchgate.net/publication/229195465_Estimating_Population_Size_of_Grizzly_Bears_Using_Hair_Capture_DNA_Profiling_and_Mark-Recapture_Analysis).The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.During the first session, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each session. With these assumptions we can compute the probability of the data for a range of possible populations.As an example, let's suppose that the actual population of bears is 100.After the first session, 23 of the 100 bears have been identified.During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified? I'll define* $N$: actual population size, 100.* $K$: number of bears identified in the first session, 23.* $n$: number of bears observed in the second session, 19 in the example.* $k$: number of bears in the second session that were previously identified, 4.For given values of $N$, $K$, and $n$, the probability of finding $k$ previously-identified bears is given by the [hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution):$$\binom{K}{k} \binom{N-K}{n-k}/ \binom{N}{n}$$where the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), $\binom{K}{k}$, is the number of subsets of size $k$ we can choose from a population of size $K$. To understand why, consider: * The denominator, $\binom{N}{n}$, is the number of subsets of $n$ we could choose from a population of $N$ bears.* The numerator is the number of subsets that contain $k$ bears from the previously identified $K$ and $n-k$ from the previously unseen $N-K$.SciPy provides `hypergeom`, which we can use to compute this probability for a range of values of $k$.
###Code
import numpy as np
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
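# Cross-check (not in the original text): compute the k=4 probability
# directly from binomial coefficients; it should match hypergeom's pmf at 4.
from scipy.special import comb
comb(K, 4) * comb(N - K, n - 4) / comb(N, n)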
###Output
_____no_output_____
###Markdown
The result is the distribution of $k$ with given parameters $N$, $K$, and $n$.Here's what it looks like.
###Code
import matplotlib.pyplot as plt
from utils import decorate
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 100)')
###Output
_____no_output_____
###Markdown
The most likely value of $k$ is 4, which is the value actually observed in the experiment. That suggests that $N=100$ is a reasonable estimate of the population, given this data.We've computed the distribution of $k$ given $N$, $K$, and $n$.Now let's go the other way: given $K$, $n$, and $k$, how can we estimate the total population, $N$? The UpdateAs a starting place, let's suppose that, prior to this study, an expert estimates that the local bear population is between 50 and 500, and equally likely to be any value in that range.I'll use `make_uniform` to make a uniform distribution of integers in this range.
###Code
import numpy as np
from utils import make_uniform
qs = np.arange(50, 501)
prior_N = make_uniform(qs, name='N')
prior_N.shape
###Output
_____no_output_____
###Markdown
So that's our prior.To compute the likelihood of the data, we can use `hypergeom` with constants `K` and `n`, and a range of values of `N`.
###Code
Ns = prior_N.qs
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
###Output
_____no_output_____
###Markdown
The most likely value is 109.
###Code
posterior_N.max_prob()
###Output
_____no_output_____
###Markdown
But the distribution is skewed to the right, so the posterior mean is substantially higher.
###Code
posterior_N.mean()
###Output
_____no_output_____
###Markdown
And the credible interval is quite wide.
###Code
posterior_N.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
This solution is relatively simple, but it turns out we can do a little better if we model the unknown probability of observing a bear explicitly. Two-Parameter ModelNext we'll try a model with two parameters: the number of bears, `N`, and the probability of observing a bear, `p`.We'll assume that the probability is the same in both rounds, which is probably reasonable in this case because it is the same kind of trap in the same place.We'll also assume that the probabilities are independent; that is, the probability a bear is observed in the second round does not depend on whether it was observed in the first round. This assumption might be less reasonable, but for now it is a necessary simplification.Here are the counts again:
###Code
K = 23
n = 19
k = 4
###Output
_____no_output_____
###Markdown
For this model, I'll express the data in a notation that will make it easier to generalize to more than two rounds: * `k10` is the number of bears observed in the first round but not the second,* `k01` is the number of bears observed in the second round but not the first, and* `k11` is the number of bears observed in both rounds.Here are their values.
###Code
k10 = 23 - 4
k01 = 19 - 4
k11 = 4
###Output
_____no_output_____
###Markdown
Suppose we know the actual values of `N` and `p`. We can use them to compute the likelihood of this data.For example, suppose we know that `N=100` and `p=0.2`.We can use `N` to compute `k00`, which is the number of unobserved bears.
###Code
N = 100
observed = k01 + k10 + k11
k00 = N - observed
k00
###Output
_____no_output_____
###Markdown
For the update, it will be convenient to store the data as a list that represents the number of bears in each category.
###Code
x = [k00, k01, k10, k11]
x
###Output
_____no_output_____
###Markdown
Now, if we know `p=0.2`, we can compute the probability a bear falls in each category. For example, the probability of being observed in both rounds is `p*p`, and the probability of being unobserved in both rounds is `q*q` (where `q=1-p`).
###Code
p = 0.2
q = 1-p
y = [q*q, q*p, p*q, p*p]
y
###Output
_____no_output_____
###Markdown
Now the probability of the data is given by the [multinomial distribution](https://en.wikipedia.org/wiki/Multinomial_distribution):$$\frac{N!}{\prod x_i!} \prod y_i^{x_i}$$where $N$ is the actual population, $x$ is a sequence with the counts in each category, and $y$ is a sequence of probabilities for each category.SciPy provides `multinomial`, which provides `pmf`, which computes this probability.Here is the probability of the data for these values of `N` and `p`.
###Code
from scipy.stats import multinomial
likelihood = multinomial.pmf(x, N, y)
likelihood
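# Cross-check (not in the original text): evaluate the multinomial formula
# directly from factorials; it should agree with multinomial.pmf above.
from math import factorial, prod
coef = factorial(N) / prod(factorial(xi) for xi in x)
coef * prod(yi**xi for xi, yi in zip(x, y))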
###Output
_____no_output_____
###Markdown
That's the likelihood if we know `N` and `p`, but of course we don't. So we'll choose prior distributions for `N` and `p`, and use the likelihoods to update it. The PriorWe'll use `prior_N` again for the prior distribution of `N`, and a uniform prior for the probability of observing a bear, `p`:
###Code
qs = np.linspace(0, 0.99, num=100)
prior_p = make_uniform(qs, name='p')
###Output
_____no_output_____
###Markdown
We can make a joint distribution in the usual way.
###Code
from utils import make_joint
joint_prior = make_joint(prior_p, prior_N)
joint_prior.shape
###Output
_____no_output_____
###Markdown
The result is a Pandas `DataFrame` with values of `N` down the rows and values of `p` across the columns.However, for this problem it will be convenient to represent the prior distribution as a 1-D `Series` rather than a 2-D `DataFrame`.We can convert from one format to the other using `stack`.
###Code
from empiricaldist import Pmf
joint_pmf = Pmf(joint_prior.stack())
joint_pmf.head(3)
type(joint_pmf)
type(joint_pmf.index)
joint_pmf.shape
###Output
_____no_output_____
###Markdown
The result is a `Pmf` whose index is a `MultiIndex`.A `MultiIndex` can have more than one column; in this example, the first column contains values of `N` and the second column contains values of `p`.The `Pmf` has one row (and one prior probability) for each possible pair of parameters `N` and `p`.So the total number of rows is the product of the lengths of `prior_N` and `prior_p`.Now we have to compute the likelihood of the data for each pair of parameters. The UpdateTo allocate space for the likelihoods, it is convenient to make a copy of `joint_pmf`:
###Code
likelihood = joint_pmf.copy()
###Output
_____no_output_____
###Markdown
As we loop through the pairs of parameters, we compute the likelihood of the data as in the previous section, and then store the result as an element of `likelihood`.
###Code
observed = k01 + k10 + k11
for N, p in joint_pmf.index:
k00 = N - observed
x = [k00, k01, k10, k11]
q = 1-p
y = [q*q, q*p, p*q, p*p]
likelihood[N, p] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
Now we can compute the posterior in the usual way.
###Code
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
We'll use `plot_contour` again to visualize the joint posterior distribution.But remember that the posterior distribution we just computed is represented as a `Pmf`, which is a `Series`, and `plot_contour` expects a `DataFrame`.Since we used `stack` to convert from a `DataFrame` to a `Series`, we can use `unstack` to go the other way.
###Code
joint_posterior = posterior_pmf.unstack()
###Output
_____no_output_____
###Markdown
And here's what the result looks like.
###Code
from utils import plot_contour
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
###Output
_____no_output_____
###Markdown
The most likely values of `N` are near 100, as in the previous model. The most likely values of `p` are near 0.2.The shape of this contour indicates that these parameters are correlated. If `p` is near the low end of the range, the most likely values of `N` are higher; if `p` is near the high end of the range, `N` is lower. Now that we have a posterior `DataFrame`, we can extract the marginal distributions in the usual way.
###Code
from utils import marginal
posterior2_p = marginal(joint_posterior, 0)
posterior2_N = marginal(joint_posterior, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior distribution for `p`:
###Code
posterior2_p.plot(color='C1')
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
###Output
_____no_output_____
###Markdown
The most likely values are near 0.2. Here's the posterior distribution for `N` based on the two-parameter model, along with the posterior we got using the one-parameter (hypergeometric) model.
###Code
posterior_N.plot(label='one-parameter model', color='C4')
posterior2_N.plot(label='two-parameter model', color='C1')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
###Output
_____no_output_____
###Markdown
With the two-parameter model, the mean is a little lower and the 90% credible interval is a little narrower.
###Code
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
print(posterior2_N.mean(),
posterior2_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
The two-parameter model yields a narrower posterior distribution for `N`, compared to the one-parameter model, because it takes advantage of an additional source of information: the consistency of the two observations.To see how this helps, consider a scenario where `N` is relatively low, like 138 (the posterior mean of the two-parameter model).
###Code
N1 = 138
###Output
_____no_output_____
###Markdown
Given that we saw 23 bears during the first trial and 19 during the second, we can estimate the corresponding value of `p`.
###Code
mean = (23 + 19) / 2
p = mean/N1
p
###Output
_____no_output_____
###Markdown
With these parameters, how much variability do you expect in the number of bears from one trial to the next? We can quantify that by computing the standard deviation of the binomial distribution with these parameters.
###Code
from scipy.stats import binom
binom(N1, p).std()
###Output
_____no_output_____
###Markdown
Now let's consider a second scenario where `N` is 173, the posterior mean of the one-parameter model. The corresponding value of `p` is lower.
###Code
N2 = 173
p = mean/N2
p
###Output
_____no_output_____
###Markdown
In this scenario, the variation we expect to see from one trial to the next is higher.
###Code
binom(N2, p).std()
###Output
_____no_output_____
###Markdown
So if the number of bears we observe is the same in both trials, that would be evidence for lower values of `N`, where we expect more consistency.If the number of bears is substantially different between the two trials, that would be evidence for higher values of `N`.In the actual data, the difference between the two trials is low, which is why the posterior mean of the two-parameter model is lower.The two-parameter model takes advantage of additional information, which is why the credible interval is narrower. Joint and Marginal DistributionsMarginal distributions are called "marginal" because in a common visualization they appear in the margins of the plot.Seaborn provides a class called `JointGrid` that creates this visualization.The following function uses it to show the joint and marginal distributions in a single plot.
###Code
import pandas as pd
from seaborn import JointGrid
def joint_plot(joint, **options):
"""Show joint and marginal distributions.
joint: DataFrame that represents a joint distribution
options: passed to JointGrid
"""
# get the names of the parameters
x = joint.columns.name
x = 'x' if x is None else x
y = joint.index.name
y = 'y' if y is None else y
# make a JointGrid with minimal data
data = pd.DataFrame({x:[0], y:[0]})
g = JointGrid(x=x, y=y, data=data, **options)
# replace the contour plot
g.ax_joint.contour(joint.columns,
joint.index,
joint,
cmap='viridis')
# replace the marginals
marginal_x = marginal(joint, 0)
g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
marginal_y = marginal(joint, 1)
g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)
joint_plot(joint_posterior)
###Output
_____no_output_____
###Markdown
A `JointGrid` is a concise way to represent the joint and marginal distributions visually. The Lincoln Index ProblemIn [an excellent blog post](http://www.johndcook.com/blog/2010/07/13/lincoln-index/), John D. Cook wrote about the Lincoln index, which is a way to estimate the number of errors in a document (or program) by comparing results from two independent testers.Here's his presentation of the problem:> "Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There's no way to know with one tester. But if you have two testers, you can get a good idea, even if you don't know how skilled the testers are."Suppose the first tester finds 20 bugs, the second finds 15, and they find 3 in common; how can we estimate the number of bugs?This problem is similar to the Grizzly Bear problem, so I'll represent the data in the same way.
###Code
k10 = 20 - 3
k01 = 15 - 3
k11 = 3
###Output
_____no_output_____
###Markdown
But in this case it is probably not reasonable to assume that the testers have the same probability of finding a bug.So I'll define two parameters, `p0` for the probability that the first tester finds a bug, and `p1` for the probability that the second tester finds a bug.I will continue to assume that the probabilities are independent, which is like assuming that all bugs are equally easy to find. That might not be a good assumption, but let's stick with it for now.As an example, suppose we know that the probabilities are 0.2 and 0.15.
###Code
p0, p1 = 0.2, 0.15
###Output
_____no_output_____
###Markdown
We can compute the array of probabilities, `y`, like this:
###Code
def compute_probs(p0, p1):
"""Computes the probability for each of 4 categories."""
q0 = 1-p0
q1 = 1-p1
return [q0*q1, q0*p1, p0*q1, p0*p1]
y = compute_probs(p0, p1)
y
###Output
_____no_output_____
###Markdown
With these probabilities, there is a 68% chance that neither tester finds the bug and a 3% chance that both do. Pretending that these probabilities are known, we can compute the posterior distribution for `N`.Here's a prior distribution that's uniform from 32 to 350 bugs.
###Code
qs = np.arange(32, 350, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
###Output
_____no_output_____
###Markdown
I'll put the data in an array, with 0 as a place-keeper for the unknown value `k00`.
###Code
data = np.array([0, k01, k10, k11])
###Output
_____no_output_____
###Markdown
And here are the likelihoods for each value of `N`, with `ps` as a constant.
###Code
likelihood = prior_N.copy()
observed = data.sum()
x = data.copy()
for N in prior_N.qs:
x[0] = N - observed
likelihood[N] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PMF',
title='Posterior marginal distribution of N with known p0, p1')
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
With the assumption that `p0` and `p1` are known to be `0.2` and `0.15`, the posterior mean is 102 with 90% credible interval (77, 127).But this result is based on the assumption that we know the probabilities, and we don't. Three-Parameter ModelWhat we need is a model with three parameters: `N`, `p0`, and `p1`.We'll use `prior_N` again for the prior distribution of `N`, and here are the priors for `p0` and `p1`:
###Code
qs = np.linspace(0, 1, num=51)
prior_p0 = make_uniform(qs, name='p0')
prior_p1 = make_uniform(qs, name='p1')
###Output
_____no_output_____
###Markdown
Now we have to assemble them into a joint prior with three dimensions.I'll start by putting the first two into a `DataFrame`.
###Code
joint2 = make_joint(prior_p0, prior_N)
joint2.shape
###Output
_____no_output_____
###Markdown
Now I'll stack them, as in the previous example, and put the result in a `Pmf`.
###Code
joint2_pmf = Pmf(joint2.stack())
joint2_pmf.head(3)
###Output
_____no_output_____
###Markdown
We can use `make_joint` again to add in the third parameter.
###Code
joint3 = make_joint(prior_p1, joint2_pmf)
joint3.shape
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with values of `N` and `p0` in a `MultiIndex` that goes down the rows and values of `p1` in an index that goes across the columns.
###Code
joint3.head(3)
###Output
_____no_output_____
###Markdown
Now I'll apply `stack` again:
###Code
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head(3)
###Output
_____no_output_____
###Markdown
The result is a `Pmf` with a three-column `MultiIndex` containing all possible triplets of parameters.The number of rows is the product of the number of values in all three priors, which is almost 170,000.
###Code
joint3_pmf.shape
###Output
_____no_output_____
###Markdown
That's still small enough to be practical, but it will take longer to compute the likelihoods than in the previous examples.Here's the loop that computes the likelihoods; it's similar to the one in the previous section:
###Code
likelihood = joint3_pmf.copy()
observed = data.sum()
x = data.copy()
for N, p0, p1 in joint3_pmf.index:
x[0] = N - observed
y = compute_probs(p0, p1)
likelihood[N, p0, p1] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_pmf = joint3_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
Now, to extract the marginal distributions, we could unstack the joint posterior as we did in the previous section.But `Pmf` provides a version of `marginal` that works with a `Pmf` rather than a `DataFrame`.Here's how we use it to get the posterior distribution for `N`.
###Code
posterior_N = posterior_pmf.marginal(0)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PDF',
title='Posterior marginal distributions of N')
posterior_N.mean()
###Output
_____no_output_____
###Markdown
The posterior mean is 105 bugs, which suggests that there are still many bugs the testers have not found.Here are the posteriors for `p0` and `p1`.
###Code
posterior_p1 = posterior_pmf.marginal(1)
posterior_p2 = posterior_pmf.marginal(2)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
posterior_p2.mean(), posterior_p2.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
Comparing the posterior distributions, the tester who found more bugs probably has a higher probability of finding bugs. The posterior means are about 23% and 18%. But the distributions overlap, so we should not be too sure. This is the first example we've seen with three parameters.As the number of parameters increases, the number of combinations increases quickly.The method we've been using so far, enumerating all possible combinations, becomes impractical if the number of parameters is more than 3 or 4.However there are other methods that can handle models with many more parameters, as we'll see in >. SummaryThe problems in this chapter are examples of [mark and recapture](https://en.wikipedia.org/wiki/Mark_and_recapture) experiments, which are used in ecology to estimate animal populations. They also have applications in engineering, as in the Lincoln index problem. And in the exercises you'll see that they are used in epidemiology, too.This chapter introduces two new probability distributions:* The hypergeometric distribution is a variation of the binomial distribution in which samples are drawn from the population without replacement. * The multinomial distribution is a generalization of the binomial distribution where there are more than two possible outcomes.Also in this chapter, we saw the first example of a model with three parameters. We'll see more in subsequent chapters. Exercises **Exercise:** [In an excellent paper](http://chao.stat.nthu.edu.tw/wordpress/paper/110.pdf), Anne Chao explains how mark and recapture experiments are used in epidemiology to estimate the prevalence of a disease in a human population based on multiple incomplete lists of cases.One of the examples in that paper is a study "to estimate the number of people who were infected by hepatitis in an outbreak that occurred in and around a college in northern Taiwan from April to July 1995."Three lists of cases were available:1. 135 cases identified using a serum test. 2. 122 cases reported by local hospitals. 3. 126 cases reported on questionnaires collected by epidemiologists.In this exercise, we'll use only the first two lists; in the next exercise we'll bring in the third list.Make a joint prior and update it using this data, then compute the posterior mean of `N` and a 90% credible interval. The following array contains 0 as a place-holder for the unknown value of `k00`, followed by known values of `k01`, `k10`, and `k11`.
###Code
data2 = np.array([0, 73, 86, 49])
###Output
_____no_output_____
###Markdown
These data indicate that there are 73 cases on the second list that are not on the first, 86 cases on the first list that are not on the second, and 49 cases on both lists.To keep things simple, we'll assume that each case has the same probability of appearing on each list. So we'll use a two-parameter model where `N` is the total number of cases and `p` is the probability that any case appears on any list.Here are priors you can start with (but feel free to modify them).
###Code
qs = np.arange(200, 500, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
qs = np.linspace(0, 0.98, num=50)
prior_p = make_uniform(qs, name='p')
prior_p.head(3)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Now let's do the version of the problem with all three lists. Here's the data from Chao's paper:

```
Hepatitis A virus list
P Q E   Data
1 1 1   k111 = 28
1 1 0   k110 = 21
1 0 1   k101 = 17
1 0 0   k100 = 69
0 1 1   k011 = 18
0 1 0   k010 = 55
0 0 1   k001 = 63
0 0 0   k000 = ??
```

Write a loop that computes the likelihood of the data for each pair of parameters, then update the prior and compute the posterior mean of `N`. How does it compare to the results using only the first two lists? Here's the data in a NumPy array (in reverse order).
###Code
data3 = np.array([0, 63, 55, 18, 69, 17, 21, 28])
###Output
_____no_output_____
###Markdown
Again, the first value is a place-keeper for the unknown `k000`. The second value is `k001`, which means there are 63 cases that appear on the third list but not the first two. And the last value is `k111`, which means there are 28 cases that appear on all three lists.In the two-list version of the problem we computed `ps` by enumerating the combinations of `p` and `q`.
###Code
q = 1-p
ps = [q*q, q*p, p*q, p*p]
###Output
_____no_output_____
###Markdown
We could do the same thing for the three-list version, computing the probability for each of the eight categories. But we can generalize it by recognizing that we are computing the cartesian product of `p` and `q`, repeated once for each list.And we can use the following function (based on [this StackOverflow answer](https://stackoverflow.com/questions/58242078/cartesian-product-of-arbitrary-lists-in-pandas/58242079)) to compute Cartesian products:
###Code
def cartesian_product(*args, **options):
"""Cartesian product of sequences.
args: any number of sequences
options: passed to `MultiIndex.from_product`
returns: DataFrame with one column per sequence
"""
index = pd.MultiIndex.from_product(args, **options)
return pd.DataFrame(index=index).reset_index()
###Output
_____no_output_____
###Markdown
Here's an example with `p=0.2`:
###Code
p = 0.2
t = (1-p, p)
df = cartesian_product(t, t, t)
df
###Output
_____no_output_____
###Markdown
To compute the probability for each category, we take the product across the columns:
###Code
y = df.prod(axis=1)
y
###Output
_____no_output_____
###Markdown
Now you finish it off from there.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 15. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problemI'll use a `State` object to store the initial temperature.
###Code
init = State(T=90)
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
###Output
_____no_output_____
###Markdown
The update function implements Newton's law of cooling.
###Code
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update_func(init, 0, coffee)
###Output
_____no_output_____
###Markdown
Here's a version of `run_simulation` that uses `linrange` to make an array of time steps.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
results = run_simulation(coffee, update_func)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
And here's the final temperature:
###Code
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
EncapsulationBefore we go on, let's define a function to initialize `System` objects with relevant parameters:
###Code
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.By trial and error, find a value for `r` that makes the final temperature close to 20 C.
###Code
# Solution goes here
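# One possible sketch (an assumption, not the official solution): simulate
# the milk with make_system; r near 0.133 brings the final temperature
# close to 20 C (found by trial and error in the earlier solved version).
milk = make_system(T_init=5, r=0.133, volume=50, t_end=15)
results = run_simulation(milk, update_func)
T_final = get_last_value(results.T)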
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
Mark and Recapture. Think Bayes, Second Edition. Copyright 2020 Allen B. Downey. License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "mark and recapture" experiments, in which we sample individuals from a population, mark them somehow, and then take a second sample from the same population. Seeing how many individuals in the second sample are marked, we can estimate the size of the population.Experiments like this were originally used in ecology, but turn out to be useful in many other fields. Examples in this chapter include software engineering and epidemiology.Also, in this chapter we'll work with models that have three parameters, so we'll extend the joint distributions we've been using to three dimensions.But first, grizzly bears. The Grizzly Bear ProblemIn 1996 and 1997 researchers deployed bear traps in locations in British Columbia and Alberta, Canada, in an effort to estimate the population of grizzly bears. They describe the experiment in [this article](https://www.researchgate.net/publication/229195465_Estimating_Population_Size_of_Grizzly_Bears_Using_Hair_Capture_DNA_Profiling_and_Mark-Recapture_Analysis).The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.During the first session, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each session. With these assumptions we can compute the probability of the data for a range of possible populations.As an example, let's suppose that the actual population of bears is 100.After the first session, 23 of the 100 bears have been identified.During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified? I'll define* $N$: actual population size, 100.* $K$: number of bears identified in the first session, 23.* $n$: number of bears observed in the second session, 19 in the example.* $k$: number of bears in the second session that were previously identified, 4.For given values of $N$, $K$, and $n$, the probability of finding $k$ previously-identified bears is given by the [hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution):$$\binom{K}{k} \binom{N-K}{n-k}/ \binom{N}{n}$$where the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), $\binom{K}{k}$, is the number of subsets of size $k$ we can choose from a population of size $K$. To understand why, consider: * The denominator, $\binom{N}{n}$, is the number of subsets of $n$ we could choose from a population of $N$ bears.* The numerator is the number of subsets that contain $k$ bears from the previously identified $K$ and $n-k$ from the previously unseen $N-K$.SciPy provides `hypergeom`, which we can use to compute this probability for a range of values of $k$.
###Code
import numpy as np
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
###Output
_____no_output_____
###Markdown
The result is the distribution of $k$ with given parameters $N$, $K$, and $n$.Here's what it looks like.
###Code
import matplotlib.pyplot as plt
from utils import decorate
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 100)')
###Output
_____no_output_____
###Markdown
The most likely value of $k$ is 4, which is the value actually observed in the experiment. That suggests that $N=100$ is a reasonable estimate of the population, given this data.We've computed the distribution of $k$ given $N$, $K$, and $n$.Now let's go the other way: given $K$, $n$, and $k$, how can we estimate the total population, $N$? The UpdateAs a starting place, let's suppose that, prior to this study, an expert estimates that the local bear population is between 50 and 500, and equally likely to be any value in that range.I'll use `make_uniform` to make a uniform distribution of integers in this range.
###Code
import numpy as np
from utils import make_uniform
qs = np.arange(50, 501)
prior_N = make_uniform(qs, name='N')
prior_N.shape
###Output
_____no_output_____
###Markdown
So that's our prior.To compute the likelihood of the data, we can use `hypergeom` with constants `K` and `n`, and a range of values of `N`.
###Code
Ns = prior_N.qs
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
###Output
_____no_output_____
###Markdown
The most likely value is 109.
###Code
posterior_N.max_prob()
###Output
_____no_output_____
###Markdown
But the distribution is skewed to the right, so the posterior mean is substantially higher.
###Code
posterior_N.mean()
###Output
_____no_output_____
###Markdown
And the credible interval is quite wide.
###Code
posterior_N.credible_interval(0.9)
###Output
_____no_output_____
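###Markdown
For comparison, here is a quick sketch of the classic Lincoln-Petersen point estimate, $\hat{N} = Kn/k$, which rests on the same equal-catchability assumption; it lands close to the MAP value we found above.
###Code
# Lincoln-Petersen point estimate of the population size
K * n / k
###Output
_____no_output_____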
###Markdown
This solution is relatively simple, but it turns out we can do a little better if we model the unknown probability of observing a bear explicitly. Two-Parameter ModelNext we'll try a model with two parameters: the number of bears, `N`, and the probability of observing a bear, `p`.We'll assume that the probability is the same in both rounds, which is probably reasonable in this case because it is the same kind of trap in the same place.We'll also assume that the probabilities are independent; that is, the probability a bear is observed in the second round does not depend on whether it was observed in the first round. This assumption might be less reasonable, but for now it is a necessary simplification.Here are the counts again:
###Code
K = 23
n = 19
k = 4
###Output
_____no_output_____
###Markdown
For this model, I'll express the data in a notation that will make it easier to generalize to more than two rounds: * `k10` is the number of bears observed in the first round but not the second,* `k01` is the number of bears observed in the second round but not the first, and* `k11` is the number of bears observed in both rounds.Here are their values.
###Code
k10 = 23 - 4
k01 = 19 - 4
k11 = 4
###Output
_____no_output_____
###Markdown
Suppose we know the actual values of `N` and `p`. We can use them to compute the likelihood of this data.For example, suppose we know that `N=100` and `p=0.2`.We can use `N` to compute `k00`, which is the number of unobserved bears.
###Code
N = 100
observed = k01 + k10 + k11
k00 = N - observed
k00
###Output
_____no_output_____
###Markdown
For the update, it will be convenient to store the data as a list that represents the number of bears in each category.
###Code
x = [k00, k01, k10, k11]
x
###Output
_____no_output_____
###Markdown
Now, if we know `p=0.2`, we can compute the probability a bear falls in each category. For example, the probability of being observed in both rounds is `p*p`, and the probability of being unobserved in both rounds is `q*q` (where `q=1-p`).
###Code
p = 0.2
q = 1-p
y = [q*q, q*p, p*q, p*p]
y
###Output
_____no_output_____
###Markdown
Now the probability of the data is given by the [multinomial distribution](https://en.wikipedia.org/wiki/Multinomial_distribution):$$\frac{N!}{\prod x_i!} \prod y_i^{x_i}$$where $N$ is actual population, $x$ is a sequence with the counts in each category, and $y$ is a sequence of probabilities for each category.SciPy provides `multinomial`, which provides `pmf`, which computes this probability.Here is the probability of the data for these values of `N` and `p`.
###Code
from scipy.stats import multinomial
likelihood = multinomial.pmf(x, N, y)
likelihood
###Output
_____no_output_____
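###Markdown
As a sanity check on the multinomial formula above (a small sketch using only `x`, `y`, and `N` as already defined), we can evaluate it directly with factorials and compare it with the SciPy result.
###Code
from math import factorial, prod
# multinomial coefficient times the product of category probabilities
coef = factorial(N) / prod(factorial(xi) for xi in x)
manual = coef * prod(yi**xi for xi, yi in zip(x, y))
manual, likelihood
###Output
_____no_output_____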
###Markdown
That's the likelihood if we know `N` and `p`, but of course we don't. So we'll choose prior distributions for `N` and `p`, and use the likelihoods to update it. The PriorWe'll use `prior_N` again for the prior distribution of `N`, and a uniform prior for the probability of observing a bear, `p`:
###Code
qs = np.linspace(0, 0.99, num=100)
prior_p = make_uniform(qs, name='p')
###Output
_____no_output_____
###Markdown
We can make a joint distribution in the usual way.
###Code
from utils import make_joint
joint_prior = make_joint(prior_p, prior_N)
joint_prior.shape
###Output
_____no_output_____
###Markdown
The result is a Pandas `DataFrame` with values of `N` down the rows and values of `p` across the columns.However, for this problem it will be convenient to represent the prior distribution as a 1-D `Series` rather than a 2-D `DataFrame`.We can convert from one format to the other using `stack`.
###Code
from empiricaldist import Pmf
joint_pmf = Pmf(joint_prior.stack())
joint_pmf.head(3)
type(joint_pmf)
type(joint_pmf.index)
joint_pmf.shape
###Output
_____no_output_____
###Markdown
The result is a `Pmf` whose index is a `MultiIndex`.A `MultiIndex` can have more than one column; in this example, the first column contains values of `N` and the second column contains values of `p`.The `Pmf` has one row (and one prior probability) for each possible pair of parameters `N` and `p`.So the total number of rows is the product of the lengths of `prior_N` and `prior_p`.Now we have to compute the likelihood of the data for each pair of parameters. The UpdateTo allocate space for the likelihoods, it is convenient to make a copy of `joint_pmf`:
###Code
likelihood = joint_pmf.copy()
###Output
_____no_output_____
###Markdown
As we loop through the pairs of parameters, we compute the likelihood of the data as in the previous section, and then store the result as an element of `likelihood`.
###Code
observed = k01 + k10 + k11
for N, p in joint_pmf.index:
k00 = N - observed
x = [k00, k01, k10, k11]
q = 1-p
y = [q*q, q*p, p*q, p*p]
likelihood[N, p] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
Now we can compute the posterior in the usual way.
###Code
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
We'll use `plot_contour` again to visualize the joint posterior distribution.But remember that the posterior distribution we just computed is represented as a `Pmf`, which is a `Series`, and `plot_contour` expects a `DataFrame`.Since we used `stack` to convert from a `DataFrame` to a `Series`, we can use `unstack` to go the other way.
###Code
joint_posterior = posterior_pmf.unstack()
###Output
_____no_output_____
###Markdown
And here's what the result looks like.
###Code
from utils import plot_contour
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
###Output
_____no_output_____
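###Markdown
To put a number on the relationship visible in the contour plot, here is a quick sketch that computes the correlation between `N` and `p` under the joint posterior, using only the `DataFrame` we just plotted.
###Code
Ns = joint_posterior.index.to_numpy(dtype=float)
ps_vals = joint_posterior.columns.to_numpy(dtype=float)
W = joint_posterior.to_numpy()
p_N = W.sum(axis=1)                     # marginal over p
p_p = W.sum(axis=0)                     # marginal over N
mean_N = np.sum(p_N * Ns)
mean_p = np.sum(p_p * ps_vals)
cov = np.sum(W * np.outer(Ns - mean_N, ps_vals - mean_p))
corr = cov / np.sqrt(np.sum(p_N * (Ns - mean_N)**2) * np.sum(p_p * (ps_vals - mean_p)**2))
corr
###Output
_____no_output_____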
###Markdown
The most likely values of `N` are near 100, as in the previous model. The most likely values of `p` are near 0.2.The shape of this contour indicates that these parameters are correlated. If `p` is near the low end of the range, the most likely values of `N` are higher; if `p` is near the high end of the range, `N` is lower. Now that we have a posterior `DataFrame`, we can extract the marginal distributions in the usual way.
###Code
from utils import marginal
posterior2_p = marginal(joint_posterior, 0)
posterior2_N = marginal(joint_posterior, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior distribution for `p`:
###Code
posterior2_p.plot(color='C1')
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
###Output
_____no_output_____
###Markdown
The most likely values are near 0.2. Here's the posterior distribution for `N` based on the two-parameter model, along with the posterior we got using the one-parameter (hypergeometric) model.
###Code
posterior_N.plot(label='one-parameter model', color='C4')
posterior2_N.plot(label='two-parameter model', color='C1')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
###Output
_____no_output_____
###Markdown
With the two-parameter model, the mean is a little lower and the 90% credible interval is a little narrower.
###Code
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
print(posterior2_N.mean(),
posterior2_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
The two-parameter model yields a narrower posterior distribution for `N`, compared to the one-parameter model, because it takes advantage of an additional source of information: the consistency of the two observations.To see how this helps, consider a scenario where `N` is relatively low, like 138 (the posterior mean of the two-parameter model).
###Code
N1 = 138
###Output
_____no_output_____
###Markdown
Given that we saw 23 bears during the first trial and 19 during the second, we can estimate the corresponding value of `p`.
###Code
mean = (23 + 19) / 2
p = mean/N1
p
###Output
_____no_output_____
###Markdown
With these parameters, how much variability do you expect in the number of bears from one trial to the next? We can quantify that by computing the standard deviation of the binomial distribution with these parameters.
###Code
from scipy.stats import binom
binom(N1, p).std()
###Output
_____no_output_____
###Markdown
Now let's consider a second scenario where `N` is 173, the posterior mean of the one-parameter model. The corresponding value of `p` is lower.
###Code
N2 = 173
p = mean/N2
p
###Output
_____no_output_____
###Markdown
In this scenario, the variation we expect to see from one trial to the next is higher.
###Code
binom(N2, p).std()
###Output
_____no_output_____
###Markdown
So if the number of bears we observe is the same in both trials, that would be evidence for lower values of `N`, where we expect more consistency.If the number of bears is substantially different between the two trials, that would be evidence for higher values of `N`.In the actual data, the difference between the two trials is low, which is why the posterior mean of the two-parameter model is lower.The two-parameter model takes advantage of additional information, which is why the credible interval is narrower. Joint and Marginal DistributionsMarginal distributions are called "marginal" because in a common visualization they appear in the margins of the plot.Seaborn provides a class called `JointGrid` that creates this visualization.The following function uses it to show the joint and marginal distributions in a single plot.
###Code
import pandas as pd
from seaborn import JointGrid
def joint_plot(joint, **options):
"""Show joint and marginal distributions.
joint: DataFrame that represents a joint distribution
options: passed to JointGrid
"""
# get the names of the parameters
x = joint.columns.name
x = 'x' if x is None else x
y = joint.index.name
y = 'y' if y is None else y
# make a JointGrid with minimal data
data = pd.DataFrame({x:[0], y:[0]})
g = JointGrid(x=x, y=y, data=data, **options)
# replace the contour plot
g.ax_joint.contour(joint.columns,
joint.index,
joint,
cmap='viridis')
# replace the marginals
marginal_x = marginal(joint, 0)
g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
marginal_y = marginal(joint, 1)
g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)
joint_plot(joint_posterior)
###Output
_____no_output_____
###Markdown
A `JointGrid` is a concise way to represent the joint and marginal distributions visually. The Lincoln Index ProblemIn [an excellent blog post](http://www.johndcook.com/blog/2010/07/13/lincoln-index/), John D. Cook wrote about the Lincoln index, which is a way to estimate thenumber of errors in a document (or program) by comparing results fromtwo independent testers.Here's his presentation of the problem:> "Suppose you have a tester who finds 20 bugs in your program. You> want to estimate how many bugs are really in the program. You know> there are at least 20 bugs, and if you have supreme confidence in your> tester, you may suppose there are around 20 bugs. But maybe your> tester isn't very good. Maybe there are hundreds of bugs. How can you> have any idea how many bugs there are? There's no way to know with one> tester. But if you have two testers, you can get a good idea, even if> you don't know how skilled the testers are."Suppose the first tester finds 20 bugs, the second finds 15, and theyfind 3 in common; how can we estimate the number of bugs?This problem is similar to the Grizzly Bear problem, so I'll represent the data in the same way.
###Code
k10 = 20 - 3
k01 = 15 - 3
k11 = 3
###Output
_____no_output_____
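###Markdown
For reference, a quick sketch of the simple Lincoln index point estimate (the same formula as the Lincoln-Petersen estimate used for the bears): the product of the two testers' bug counts divided by the number of bugs they found in common.
###Code
# simple Lincoln index estimate of the total number of bugs
20 * 15 / 3
###Output
_____no_output_____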
###Markdown
But in this case it is probably not reasonable to assume that the testers have the same probability of finding a bug.So I'll define two parameters, `p0` for the probability that the first tester finds a bug, and `p1` for the probability that the second tester finds a bug.I will continue to assume that the probabilities are independent, which is like assuming that all bugs are equally easy to find. That might not be a good assumption, but let's stick with it for now.As an example, suppose we know that the probabilities are 0.2 and 0.15.
###Code
p0, p1 = 0.2, 0.15
###Output
_____no_output_____
###Markdown
We can compute the array of probabilities, `y`, like this:
###Code
def compute_probs(p0, p1):
"""Computes the probability for each of 4 categories."""
q0 = 1-p0
q1 = 1-p1
return [q0*q1, q0*p1, p0*q1, p0*p1]
y = compute_probs(p0, p1)
y
###Output
_____no_output_____
###Markdown
With these probabilities, there is a 68% chance that neither tester finds the bug and a 3% chance that both do. Pretending that these probabilities are known, we can compute the posterior distribution for `N`. Here's a prior distribution that's uniform from 32 to 350 bugs.
###Code
qs = np.arange(32, 350, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
###Output
_____no_output_____
###Markdown
I'll put the data in an array, with 0 as a place-keeper for the unknown value `k00`.
###Code
data = np.array([0, k01, k10, k11])
###Output
_____no_output_____
###Markdown
And here are the likelihoods for each value of `N`, with the category probabilities `y` held constant.
###Code
likelihood = prior_N.copy()
observed = data.sum()
x = data.copy()
for N in prior_N.qs:
x[0] = N - observed
likelihood[N] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_N = prior_N * likelihood
posterior_N.normalize()
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PMF',
title='Posterior marginal distribution of n with known p1, p2')
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
With the assumption that `p0` and `p1` are known to be `0.2` and `0.15`, the posterior mean is 102 with 90% credible interval (77, 127).But this result is based on the assumption that we know the probabilities, and we don't. Three-Parameter ModelWhat we need is a model with three parameters: `N`, `p0`, and `p1`.We'll use `prior_N` again for the prior distribution of `N`, and here are the priors for `p0` and `p1`:
###Code
qs = np.linspace(0, 1, num=51)
prior_p0 = make_uniform(qs, name='p0')
prior_p1 = make_uniform(qs, name='p1')
###Output
_____no_output_____
###Markdown
Now we have to assemble them into a joint prior with three dimensions.I'll start by putting the first two into a `DataFrame`.
###Code
joint2 = make_joint(prior_p0, prior_N)
joint2.shape
###Output
_____no_output_____
###Markdown
Now I'll stack them, as in the previous example, and put the result in a `Pmf`.
###Code
joint2_pmf = Pmf(joint2.stack())
joint2_pmf.head(3)
###Output
_____no_output_____
###Markdown
We can use `make_joint` again to add in the third parameter.
###Code
joint3 = make_joint(prior_p1, joint2_pmf)
joint3.shape
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with values of `N` and `p0` in a `MultiIndex` that goes down the rows and values of `p1` in an index that goes across the columns.
###Code
joint3.head(3)
###Output
_____no_output_____
###Markdown
Now I'll apply `stack` again:
###Code
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head(3)
###Output
_____no_output_____
###Markdown
The result is a `Pmf` with a three-column `MultiIndex` containing all possible triplets of parameters.The number of rows is the product of the number of values in all three priors, which is almost 170,000.
###Code
joint3_pmf.shape
###Output
_____no_output_____
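###Markdown
As a quick check, the number of rows should equal the product of the sizes of the three priors.
###Code
len(prior_N) * len(prior_p0) * len(prior_p1)
###Output
_____no_output_____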
###Markdown
That's still small enough to be practical, but it will take longer to compute the likelihoods than in the previous examples.Here's the loop that computes the likelihoods; it's similar to the one in the previous section:
###Code
likelihood = joint3_pmf.copy()
observed = data.sum()
x = data.copy()
for N, p0, p1 in joint3_pmf.index:
x[0] = N - observed
y = compute_probs(p0, p1)
likelihood[N, p0, p1] = multinomial.pmf(x, N, y)
###Output
_____no_output_____
###Markdown
We can compute the posterior in the usual way.
###Code
posterior_pmf = joint3_pmf * likelihood
posterior_pmf.normalize()
###Output
_____no_output_____
###Markdown
Now, to extract the marginal distributions, we could unstack the joint posterior as we did in the previous section.But `Pmf` provides a version of `marginal` that works with a `Pmf` rather than a `DataFrame`.Here's how we use it to get the posterior distribution for `N`.
###Code
posterior_N = posterior_pmf.marginal(0)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
posterior_N.plot(color='C4')
decorate(xlabel='Number of bugs (N)',
ylabel='PDF',
title='Posterior marginal distributions of N')
posterior_N.mean()
###Output
_____no_output_____
###Markdown
The posterior mean is 105 bugs, which suggests that there are still many bugs the testers have not found.Here are the posteriors for `p0` and `p1`.
###Code
posterior_p1 = posterior_pmf.marginal(1)
posterior_p2 = posterior_pmf.marginal(2)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
posterior_p2.mean(), posterior_p2.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
Comparing the posterior distributions, the tester who found more bugs probably has a higher probability of finding bugs. The posterior means are about 23% and 18%. But the distributions overlap, so we should not be too sure. This is the first example we've seen with three parameters.As the number of parameters increases, the number of combinations increases quickly.The method we've been using so far, enumerating all possible combinations, becomes impractical if the number of parameters is more than 3 or 4.However, there are other methods that can handle models with many more parameters, as we'll see in a later chapter. SummaryThe problems in this chapter are examples of [mark and recapture](https://en.wikipedia.org/wiki/Mark_and_recapture) experiments, which are used in ecology to estimate animal populations. They also have applications in engineering, as in the Lincoln index problem. And in the exercises you'll see that they are used in epidemiology, too.This chapter introduces two new probability distributions:* The hypergeometric distribution is a variation of the binomial distribution in which samples are drawn from the population without replacement. * The multinomial distribution is a generalization of the binomial distribution where there are more than two possible outcomes.Also in this chapter, we saw the first example of a model with three parameters. We'll see more in subsequent chapters. Exercises **Exercise:** [In an excellent paper](http://chao.stat.nthu.edu.tw/wordpress/paper/110.pdf), Anne Chao explains how mark and recapture experiments are used in epidemiology to estimate the prevalence of a disease in a human population based on multiple incomplete lists of cases.One of the examples in that paper is a study "to estimate the number of people who were infected by hepatitis in an outbreak that occurred in and around a college in northern Taiwan from April to July 1995."Three lists of cases were available:1. 135 cases identified using a serum test. 2. 122 cases reported by local hospitals. 3. 126 cases reported on questionnaires collected by epidemiologists.In this exercise, we'll use only the first two lists; in the next exercise we'll bring in the third list.Make a joint prior and update it using this data, then compute the posterior mean of `N` and a 90% credible interval. The following array contains 0 as a place-holder for the unknown value of `k00`, followed by known values of `k01`, `k10`, and `k11`.
###Code
data2 = np.array([0, 73, 86, 49])
###Output
_____no_output_____
###Markdown
These data indicate that there are 73 cases on the second list that are not on the first, 86 cases on the first list that are not on the second, and 49 cases on both lists.To keep things simple, we'll assume that each case has the same probability of appearing on each list. So we'll use a two-parameter model where `N` is the total number of cases and `p` is the probability that any case appears on any list.Here are priors you can start with (but feel free to modify them).
###Code
qs = np.arange(200, 500, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
qs = np.linspace(0, 0.98, num=50)
prior_p = make_uniform(qs, name='p')
prior_p.head(3)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Now let's do the version of the problem with all three lists. Here's the data from Chao's paper:
```
Hepatitis A virus list
P  Q  E    Data
1  1  1    k111 = 28
1  1  0    k110 = 21
1  0  1    k101 = 17
1  0  0    k100 = 69
0  1  1    k011 = 18
0  1  0    k010 = 55
0  0  1    k001 = 63
0  0  0    k000 = ??
```
Write a loop that computes the likelihood of the data for each pair of parameters, then update the prior and compute the posterior mean of `N`. How does it compare to the results using only the first two lists? Here's the data in a NumPy array (in reverse order).
###Code
data3 = np.array([0, 63, 55, 18, 69, 17, 21, 28])
###Output
_____no_output_____
###Markdown
Again, the first value is a place-keeper for the unknown `k000`. The second value is `k001`, which means there are 63 cases that appear on the third list but not the first two. And the last value is `k111`, which means there are 28 cases that appear on all three lists.In the two-list version of the problem we computed `ps` by enumerating the combinations of `p` and `q`.
###Code
q = 1-p
ps = [q*q, q*p, p*q, p*p]
###Output
_____no_output_____
###Markdown
We could do the same thing for the three-list version, computing the probability for each of the eight categories. But we can generalize it by recognizing that we are computing the cartesian product of `p` and `q`, repeated once for each list.And we can use the following function (based on [this StackOverflow answer](https://stackoverflow.com/questions/58242078/cartesian-product-of-arbitrary-lists-in-pandas/58242079)) to compute Cartesian products:
###Code
def cartesian_product(*args, **options):
"""Cartesian product of sequences.
args: any number of sequences
    options: passed to `MultiIndex.from_product`
returns: DataFrame with one column per sequence
"""
index = pd.MultiIndex.from_product(args, **options)
return pd.DataFrame(index=index).reset_index()
###Output
_____no_output_____
###Markdown
Here's an example with `p=0.2`:
###Code
p = 0.2
t = (1-p, p)
df = cartesian_product(t, t, t)
df
###Output
_____no_output_____
###Markdown
To compute the probability for each category, we take the product across the columns:
###Code
y = df.prod(axis=1)
y
###Output
_____no_output_____
###Markdown
Now you finish it off from there.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____ |
Document/Example8.ipynb | ###Markdown
Please cite us if you use the software Example-8 (Confidence interval) Environment check Checking whether the notebook is running on Google Colab or not.
###Code
import sys
try:
import google.colab
!{sys.executable} -m pip -q -q install pycm
except:
pass
###Output
_____no_output_____
###Markdown
Install matplotlib
###Code
!{sys.executable} -m pip -q -q install matplotlib;
###Output
_____no_output_____
###Markdown
Plot function
###Code
import numpy as np
import matplotlib.pyplot as plt
import pycm
def plot_ci(cm,param,alpha=0.05,method="normal-approx"):
"""
Plot two-sided confidence interval.
:param cm: ConfusionMatrix
:type cm : pycm.ConfusionMatrix object
:param param: input parameter
:type param: str
:param alpha: type I error
:type alpha: float
:param method: binomial confidence intervals method
:type method: str
:return: None
"""
conf_str = str(round(100*(1-alpha)))
print(conf_str+"%CI :")
if param in cm.class_stat.keys():
mean = []
error = [[],[]]
data = cm.CI(param,alpha=alpha,binom_method=method)
class_names_str = list(map(str,(cm.classes)))
for class_index, class_name in enumerate(cm.classes):
print(str(class_name)+" : "+str(data[class_name][1]))
mean.append(cm.class_stat[param][class_name])
error[0].append(cm.class_stat[param][class_name]-data[class_name][1][0])
error[1].append(data[class_name][1][1]-cm.class_stat[param][class_name])
fig = plt.figure()
plt.errorbar(mean,class_names_str,xerr = error,fmt='o',capsize=5,linestyle="dotted")
plt.ylabel('Class')
fig.suptitle("Param :"+param + ", Alpha:"+str(alpha), fontsize=16)
for index,value in enumerate(mean):
down_point = data[cm.classes[index]][1][0]
up_point = data[cm.classes[index]][1][1]
plt.text(value, class_names_str[index], "%f" %value, ha="center",va="top",color="red")
plt.text(down_point, class_names_str[index], "%f" %down_point, ha="right",va="bottom",color="red")
plt.text(up_point , class_names_str[index], "%f" %up_point, ha="left",va="bottom",color="red")
else:
mean = cm.overall_stat[param]
data = cm.CI(param,alpha=alpha,binom_method=method)
print(data[1])
error = [[],[]]
up_point = data[1][1]
down_point = data[1][0]
error[0] = [cm.overall_stat[param] - down_point]
error[1] = [up_point - cm.overall_stat[param]]
fig = plt.figure()
plt.errorbar(mean,[param],xerr = error,fmt='o',capsize=5,linestyle="dotted")
fig.suptitle("Alpha:"+str(alpha), fontsize=16)
plt.text(mean, param, "%f" %mean, ha="center",va="top",color="red")
plt.text(down_point, param, "%f" %down_point, ha="right",va="bottom",color="red")
plt.text(up_point, param, "%f" %up_point, ha="left",va="bottom",color="red")
plt.show()
cm = pycm.ConfusionMatrix(matrix={0:{0:13,1:2,2:5},1:{0:1,1:10,2:6},2:{0:2,1:0,2:9}})
###Output
_____no_output_____
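###Markdown
The interval values can also be read directly, without plotting; here is a minimal sketch using the same `CI` call that `plot_ci` makes internally (shown for TPR with the normal-approximation method).
###Code
# per-class confidence intervals for TPR, as returned by pycm
cm.CI("TPR", alpha=0.05, binom_method="normal-approx")
###Output
_____no_output_____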
###Markdown
TPR
###Code
plot_ci(cm,param="TPR",method="normal-approx")
plot_ci(cm,param="TPR",method="wilson")
plot_ci(cm,param="TPR",method="agresti-coull")
###Output
95%CI :
0 : (0.4315849969359111, 0.8200759654156089)
1 : (0.3595423027276775, 0.7844005807361641)
2 : (0.5115131538244717, 0.9601341079262828)
###Markdown
FPR
###Code
plot_ci(cm,param="FPR",method="normal-approx")
plot_ci(cm,param="FPR",method="wilson")
plot_ci(cm,param="FPR",method="agresti-coull")
###Output
95%CI :
0 : (0.02898740902933511, 0.28009253670203504)
1 : (0.0075968375750390255, 0.21746745338748863)
2 : (0.1737338065288983, 0.4589936086556194)
###Markdown
AUC
###Code
plot_ci(cm,param="AUC")
###Output
95%CI :
0 : (0.6399211771547619, 0.902935965702381)
1 : (0.6273084086303518, 0.8964107564550372)
2 : (0.6151497954659743, 0.9057347254185467)
###Markdown
PLR
###Code
plot_ci(cm,param="PLR")
###Output
95%CI :
0 : (1.986202987452899, 18.530051901514057)
1 : (2.2523561191462638, 36.90867850896665)
2 : (1.5589394210441203, 4.858346516200761)
###Markdown
Overall ACC
###Code
plot_ci(cm,param="Overall ACC")
###Output
95%CI :
(0.5333055584484714, 0.8000277748848619)
###Markdown
Kappa
###Code
plot_ci(cm,param="Kappa")
###Output
95%CI :
(0.31072820940081924, 0.7046564059837961)
|
notebooks/07_FPR_Candidate_ROI_extraction.ipynb | ###Markdown
Importing Modules
###Code
import os
import cv2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
import SimpleITK as stk
from glob import glob
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Paths for Source & Destination of Data
###Code
root = "../../DATA/"
target_root = "../../FPRProcessedData/"
###Output
_____no_output_____
###Markdown
Getting files names from subset of data
###Code
subset = 4 # Ran for 0...9
file_list = glob(root+f"subset{subset}/*.mhd")
print("Files Count:",len(file_list))
###Output
Files Count: 89
###Markdown
Reading candidates.csv
###Code
candidates_df = pd.read_csv(root+"candidates.csv")
candidates_df.head()
print("Total Candidates:",len(candidates_df))
print("Positives:",candidates_df['class'].sum())
candidates_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 551065 entries, 0 to 551064
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 seriesuid 551065 non-null object
1 coordX 551065 non-null float64
2 coordY 551065 non-null float64
3 coordZ 551065 non-null float64
4 class 551065 non-null int64
dtypes: float64(3), int64(1), object(1)
memory usage: 21.0+ MB
###Markdown
Function to filter ctscan files that are in subset as well as in candidates.csv
###Code
def get_filename(file_list, file):
    # return the full path from file_list whose name contains the given seriesuid
    for f in file_list:
        if file in f:
            return f
###Output
_____no_output_____
###Markdown
Function to load mhd files
###Code
def load_mhd(file):
    # read the .mhd/.raw pair and return the voxel data together with its geometry metadata
    mhdimage = stk.ReadImage(file)
    ct_scan = stk.GetArrayFromImage(mhdimage)       # voxel array, ordered (z, y, x)
    origin = np.array(list(mhdimage.GetOrigin()))   # world coordinates (mm) of the first voxel
    space = np.array(list(mhdimage.GetSpacing()))   # voxel spacing (mm) along x, y, z
    return ct_scan, origin, space
candidates_df["filename"] = candidates_df["seriesuid"].map(lambda file: get_filename(file_list, file))
candidates_df = candidates_df.dropna()
print(len(candidates_df))
candidates_df.head()
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) # CLAHE(Contrast Limited Adaptive Histogram Equalization) filter for enhancing the contrast of an image
for i,file in tqdm(enumerate(np.unique(candidates_df['filename'].values)), total=len(np.unique(candidates_df['filename'].values))):
candidates = candidates_df[candidates_df["filename"]==file]
ct, origin, space = load_mhd(file)
num_z, height, width = ct.shape
ct_norm = cv2.normalize(ct, None, 0, 255, cv2.NORM_MINMAX)
for idx, row in candidates.iterrows():
node_x = int(row["coordX"]) # x-coordinate of the candidate
node_y = int(row["coordY"]) # y-coordinate of the candidate
node_z = int(row["coordZ"]) # z-coordinate of the candidate
c = int(row["class"]) # class of the candidate (1: nodule, 0: non-nodule)
center = np.array([node_x, node_y, node_z]) # nodule center
v_center = np.rint((center-origin)/space) # nodule center in voxel space (still x,y,z ordering)
img_norm = ct_norm[int(v_center[2]),:,:] # a slice of the CT scan containing the candidate
img_norm = cv2.resize(img_norm, (512,512)) # resize the image to 512x512
img_norm_improved = clahe.apply(img_norm.astype(np.uint8)) # apply CLAHE filter to the image
x=abs(int(v_center[0]))
y=abs(int(v_center[1]))
        box = img_norm_improved[max(0,y-25):min(y+25,512),max(0,x-25):min(x+25,512)] # extract a 50x50 patch centered on the candidate (25 pixels on each side)
if box.shape != (50,50):
box = cv2.resize(box, (50,50))
if c: # if the candidate is a nodule
# applying different image transformations to increase the number of nodule candidates
cv2.imwrite(os.path.join(target_root+"nodule/", f"candidate_{subset}_{c}_{idx}.jpg"),box)
cv2.imwrite(os.path.join(target_root+"nodule/", f"candidate_{subset}_{c}_{idx}_1.jpg"),cv2.rotate(box,cv2.ROTATE_90_CLOCKWISE))
cv2.imwrite(os.path.join(target_root+"nodule/", f"candidate_{subset}_{c}_{idx}_2.jpg"),cv2.rotate(box, cv2.ROTATE_90_COUNTERCLOCKWISE))
cv2.imwrite(os.path.join(target_root+"nodule/", f"candidate_{subset}_{c}_{idx}_3.jpg"),cv2.rotate(box, cv2.ROTATE_180))
cv2.imwrite(os.path.join(target_root+"nodule/", f"candidate_{subset}_{c}_{idx}_4.jpg"),cv2.flip(box, 1))
else: # if the candidate is not a nodule
cv2.imwrite(os.path.join(target_root+"non-nodule-initial/", f"candidate_{subset}_{c}_{idx}.jpg"),box)
###Output
100%|██████████████████████████████████████████████████████████████████████████████████| 89/89 [06:02<00:00, 4.07s/it]
|
notebooks/kaggle/pgpg_train.ipynb | ###Markdown
1) Set paths, check data, clone repo, install packages 1.1) Define pathsThese should be defined after successful upload of dataset and assets (e.g. git keys)
###Code
import os
# Define dataset paths
df_icrb_root = '/kaggle/input/deepfashion-icrb'
df_icrb_img_root = f'{df_icrb_root}/Img'
assert os.path.exists(df_icrb_img_root), f'df_icrb_img_root={df_icrb_img_root}: NOT FOUND'
# Define asset paths
git_keys_root = '/kaggle/input/git-keys/github-keys'
assert os.path.exists(git_keys_root), f'git_keys_root={git_keys_root}: NOT FOUND'
client_secrets_path = '/kaggle/input/git-keys/client_secrets.json'
assert os.path.exists(client_secrets_path), f'client_secrets_path={client_secrets_path}: NOT FOUND'
###Output
_____no_output_____
###Markdown
Create the root Google Drive directory. This is where all model checkpoints/metrics exist, as well as Datasets, Fonts, etc. Symlink to the dataset Img folder to avoid code changes and enable interoperability with Google Colab
###Code
# Create root directory if not exists
gdrive_root = '/kaggle/working/GoogleDrive'
!mkdir -p "$gdrive_root"
# Create the Dataset link inside Google Drive
gdrive_icrb_root = f'{gdrive_root}/Datasets/DeepFashion/In-shop Clothes Retrieval Benchmark'
!mkdir -p "$gdrive_root"/Datasets/DeepFashion/In-shop\ Clothes\ Retrieval\ Benchmark
!ln -s /kaggle/input/deepfashion-icrb/Img "$gdrive_icrb_root"
# Copy the Fonts dir inside local Google Drive root
!cp -r /kaggle/input/mplfonts/Fonts "$gdrive_root"
# Link the Inceptionv3 model Checkpoint inside local Google Drive root
!mkdir -p "$gdrive_root"/Models
!cp -r "/kaggle/input/inception-model/model_name=inceptionv3" "$gdrive_root"/Models
!mv "$gdrive_root"/Models/model_name=inceptionv3/Checkpoints/1a9a5a14.pth.bak "$gdrive_root"/Models/model_name=inceptionv3/Checkpoints/1a9a5a14.pth
# Create also an empty Img.zip file to fool GDriveDataset instance into thinking the dataset was downloaded
# and unzipped
!touch "$gdrive_icrb_root"/Img.zip
# FIX: We need client_secrets.json to be writable, so copy to /kaggle/working
!cp "$client_secrets_path" "$gdrive_root"
client_secrets_path = f'{gdrive_root}/client_secrets.json'
###Output
_____no_output_____
###Markdown
1.2) Clone GitHub repoClone the achariso/gans-thesis repo into /kaggle/working/code using git clone. For a similar procedure in Colab, see: https://medium.com/@purba0101/how-to-clone-private-github-repo-in-google-colab-using-ssh-77384cfef18f
###Code
# Clean failed attempts
!rm -rf /root/.ssh
!rm -rf /kaggle/working/code
!mkdir -p /kaggle/working/code
repo_root = '/kaggle/working/code/gans-thesis'
if not os.path.exists(repo_root):
# Check that ssh keys exist
id_rsa_abs_drive = f'{git_keys_root}/id_rsa'
id_rsa_pub_abs_drive = f'{id_rsa_abs_drive}.pub'
assert os.path.exists(id_rsa_abs_drive)
assert os.path.exists(id_rsa_pub_abs_drive)
# On first run: Add ssh key in repo
if not os.path.exists('/root/.ssh'):
# Transfer config file
ssh_config_abs_drive = f'{git_keys_root}/config'
assert os.path.exists(ssh_config_abs_drive)
!mkdir -p ~/.ssh
!cp -f "$ssh_config_abs_drive" ~/.ssh/
# Add github.com to known hosts
!ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
# Test ssh connection
# !ssh -T [email protected]
# Remove any previous attempts
!rm -rf "$repo_root"
!mkdir -p "$repo_root"
# Clone repo
!git clone [email protected]:achariso/gans-thesis.git "$repo_root"
src_root = f'{repo_root}/src'
!rm -rf "$repo_root"/report
###Output
_____no_output_____
###Markdown
1.3) Install pip packagesAll required packages are listed in a requirements.txt file at the repository's root. Use `pip install -r requirements.txt` from inside that directory to install them.
###Code
%cd $repo_root
!pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
1.4) Update path to include src dirThis is necessary for the modules to function correctly
###Code
content_root_abs = f'{repo_root}'
src_root_abs = f'{repo_root}/src'
%env PYTHONPATH="/kaggle/lib/kagglegym:/kaggle/lib:$content_root_abs:$src_root_abs"
###Output
_____no_output_____
###Markdown
2) Train PGPG model on DeepFashionIn this section we run the actual training loop for PGPG network. PGPG consists of a 2-stage generator, where each stage is a UNET-like model, and, in our version, a PatchGAN discriminator. Colab Bug WorkaroundBug: matplotlib cache not rebuilding.Solution: Run the following code and then restart the kernel (now included inside `src/train_pgpg.py`) Actual RunEventually, run the code!
###Code
chkpt_step = None # supported: 'latest', <int>, None
log_level = 'debug' # supported: 'debug', 'info', 'warning', 'error', 'critical', 'fatal'
# Running with -i enables us to get variables defined inside the script (the script runs inline)
%run -i src/train_pgpg.py --log_level $log_level --chkpt_step $chkpt_step
###Output
_____no_output_____
###Markdown
3) Evaluate PGPGIn this section we evaluate the generation performance of our trained network using the SOTA GAN evaluation metrics. 3.1) Get the metrics evolution plotsWe plot how the metrics evolved during training. The GAN is **not** trained to minimize those metrics (they are calculated using `torch.no_grad()`) and thus this evolution merely depends on the network and showcases the correlation between the GAN evaluation metrics and the losses (e.g. adversarial & reconstruction) used to optimize the network.
###Code
# Since the PGPG implements utils.ifaces.Visualizable, we can
# directly call visualize_metrics() on the model instance.
_ = pgpg.visualize_metrics(upload=True, preview=True)
###Output
_____no_output_____
###Markdown
3.2) Evaluate Generated SamplesIn this section we evaluate generated samples in order to compare our model with other GAN architectures trained on the same dataset. For this purpose we re-calculate the evaluation metrics as stated above, but with a much bigger number of samples. In this way, the metrics are more trustworthy and comparable with the corresponding metrics in the original paper.
###Code
# Initialize a new evaluator instance
# (used to run GAN evaluation metrics: FID, IS, PRECISION, RECALL, F1 and SSIM)
evaluator = GanEvaluator(model_fs_folder_or_root=models_groot, gen_dataset=dataset, target_index=1, device=exec_device,
condition_indices=(0, 2), n_samples=10000, batch_size=metrics_batch_size,
f1_k=f1_k)
# Run the evaluator
metrics_dict = evaluator.evaluate(gen=pgpg.gen, metric_name='all', show_progress=True)
# Print results
import json
print(json.dumps(metrics_dict, indent=4))
###Output
_____no_output_____ |
California House Prices.ipynb | ###Markdown
In this notebook we will take a look at the __Housing Price__ Dataset and implement some Machine Learning Algorithms Importing Libraries
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Importing the data
###Code
housing = pd.read_csv('D:\Github\My Repository\Machine-Learning-Projects\Data\housing.csv')
###Output
_____no_output_____
###Markdown
Initial exploration of the data1. info() -> Gives a list of all the columns along with the data type and count of all non-null values2. describe() -> Gives a brief statistical summary of the columns
###Code
housing.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20640 entries, 0 to 20639
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 longitude 20640 non-null float64
1 latitude 20640 non-null float64
2 housing_median_age 20640 non-null float64
3 total_rooms 20640 non-null float64
4 total_bedrooms 20433 non-null float64
5 population 20640 non-null float64
6 households 20640 non-null float64
7 median_income 20640 non-null float64
8 median_house_value 20640 non-null float64
9 ocean_proximity 20640 non-null object
dtypes: float64(9), object(1)
memory usage: 1.6+ MB
###Markdown
Things to note:- All columns except ocean_proximity have float values- The total_bedrooms column has some null values
###Code
housing.describe()
###Output
_____no_output_____
###Markdown
This gives an overall understanding of the values within a column_The analysis ignores any null values_ Visualizing the dataWe will look at the distribution for each column and try to understand a little bit about our data
###Code
housing.hist(figsize=(30, 40), bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
Things to note:- `housing_median_age` seems to cap at 50, i.e. any value above 50 is recorded as 50. We could ask for more detailed data or ignore any row where the age is 50- A similar cap can be seen for `median_house_value`- `median_income`, `population`, `total_bedrooms` and `total_rooms` are right-skewed Now let's visualize the data using the longitude and latitude coordinates
###Code
import matplotlib.image as mpimg
california_image = mpimg.imread('D:\Github\My Repository\Machine-Learning\Resources\california.png')
scatter_plot = housing.plot(
kind='scatter', x='longitude', y='latitude', figsize=(10,7),
c = 'median_house_value', cmap = 'jet', alpha=0.4, colorbar=False,
s=housing['population']/100
)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('Median House Value for California')
graph_limits = list(plt.xlim()) + list(plt.ylim())
plt.imshow(california_image, extent=graph_limits, alpha=0.5,
cmap=plt.get_cmap("jet"))
prices = housing['median_house_value']
tick_values = np.linspace(prices.min(), prices.max(), 11)
cbar = plt.colorbar()
cbar.ax.set_yticklabels([f'${round(v/1000, 1)}k' for v in tick_values], fontsize=14)
cbar.set_label('Median House Value', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Let's look at the correlation among the columns
###Code
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False) # Since we are only interested in the output
###Output
_____no_output_____
###Markdown
Things to note:- `median_income` is very positively correlated with `median_house_value`, which makes sense: as the income of an individual increases, he/she is willing to pay more for a house- Looking at location, `latitude` seems to have a negative correlation whereas `longitude` doesn't have that much impact Let's add some more features and check their correlation
###Code
data_ = housing.copy() # Creating a copy for this particular analysis
data_['household_per_population'] = data_['households']/data_['population']
data_['bedrooms_per_room'] = data_['total_bedrooms']/data_['total_rooms']
data_['rooms_per_household'] = data_['total_rooms']/data_['households']
data_['rooms_per_bedroom'] = data_['total_rooms']/data_['total_bedrooms']
data_['population_per_household'] = data_['population']/data_['households']
data_.corr()['median_house_value'].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Things to note:- Within the 5 new features added, `rooms_per_bedroom`, `household_per_population` and `bedrooms_per_room` seem to have strong correlations- `rooms_per_household` has a slightly positive correlation- `population_per_household` has a very weak negative correlationWe could spend more time and come up with additional features, but for the time being we will work with these features only Splitting the dataWe need to split the data into training and test sets, so that we can train our models on the training set and finally evaluate them on the test set. In order to do so we could:- Use scikit-learn's `train_test_split` method within `sklearn.model_selection`- Or, look at the highest correlated column and make sure that it is divided correctlyFor this notebook, we will go through the second method. From the previous correlation matrix, we know that `median_income` is very strongly correlated with our output. So let's first look at the distribution for `median_income`
###Code
housing['median_income'].hist(bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
Here, we can see it's skewed towards the right side a little. Let's create a new column __median_income_category__ based on `median_income` and cut it into bins so that the resulting distribution is closer to normal.
###Code
housing['median_income_category'] = pd.cut(housing['median_income'],
bins=[0.0, 1.5, 3.0, 4.5, 6.0, np.inf],
labels=[1, 2, 3, 4, 5])
housing['median_income_category'].hist()
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, it's closer to a normal distribution, so we can use this column to stratify our split
###Code
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_ix, test_ix in split.split(housing, housing['median_income_category']):
housing_train = housing.loc[train_ix]
housing_test = housing.loc[test_ix]
# Let's remove the median_income_category column
for _set in (housing_train, housing_test):
_set.drop('median_income_category', axis=1, inplace=True)
###Output
_____no_output_____
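###Markdown
For reference, the first (simpler) approach mentioned earlier can be written with `train_test_split` and its `stratify` argument. This is only a sketch for comparison and is not used in the rest of the notebook; we keep working with the stratified split produced above.
###Code
from sklearn.model_selection import train_test_split
# Sketch only: a stratified random split equivalent in spirit to the
# StratifiedShuffleSplit above (not used further in this notebook)
simple_train, simple_test = train_test_split(
housing, test_size=0.2, random_state=42,
stratify=housing['median_income_category']
)
###Output
_____no_output_____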
###Markdown
Cleaning, Transforming and Preparing our dataNow that the data is divided into train and test sets, we can start cleaning and transforming it. From our previous analysis we know:- There are some null values- We have a categorical column- We could add some more features which may improve our model Let's create a class to add in our additional features: Let's look at the column indexes and features we want to add.- household per population- bedrooms per room- rooms per household- population per householdFrom our previous analysis, we know that __population per household__ is the least correlated feature. Hence we can add this as an optional argument in our class.
###Code
housing.columns
###Output
_____no_output_____
###Markdown
Adding TransformerMixin and BaseEstimator as base classes gives us some methods for free:- TransformerMixin: `fit_transform()`- BaseEstimator: `get_params()` and `set_params()`
###Code
from sklearn.base import TransformerMixin, BaseEstimator
total_rooms_ix, total_bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class addAdditionalAttributes(BaseEstimator, TransformerMixin):
def __init__(self, add_population_per_household=True):
self.add_population_per_household = add_population_per_household
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
household_per_population = X[:, households_ix]/X[:, population_ix]
bedrooms_per_room = X[:, total_bedrooms_ix]/X[:, total_rooms_ix]
rooms_per_bedroom = X[:, total_rooms_ix]/X[:, total_bedrooms_ix]
rooms_per_household = X[:, total_rooms_ix]/X[:, households_ix]
if self.add_population_per_household:
population_per_household = X[:, population_ix]/X[:, households_ix]
return np.c_[X, household_per_population, rooms_per_bedroom, bedrooms_per_room, rooms_per_household, population_per_household]
else:
return np.c_[X, household_per_population, rooms_per_bedroom, bedrooms_per_room, rooms_per_household]
###Output
_____no_output_____
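###Markdown
As a quick sanity check (a sketch, not needed for the pipeline below), the transformer can be applied directly to a numeric array whose column order matches the indexes above; `fit_transform()` comes from `TransformerMixin` and `get_params()` from `BaseEstimator`.
###Code
# Sketch: exercise the custom transformer on the numeric columns in their original order
numeric_cols = ['longitude', 'latitude', 'housing_median_age', 'total_rooms',
'total_bedrooms', 'population', 'households', 'median_income']
attr_adder = addAdditionalAttributes(add_population_per_household=False)
print(attr_adder.get_params()) # exposed by BaseEstimator
extra = attr_adder.fit_transform(housing[numeric_cols].values) # provided by TransformerMixin
print(extra.shape) # 8 original columns + 4 engineered columns
###Output
_____no_output_____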
###Markdown
Now to clean the data, we can create a pipeline with different steps for the numerical and categorical transformers.But before that, let's split our label from the features:
###Code
housing = housing_train.drop('median_house_value', axis=1)
housing_labels = housing_train['median_house_value'].copy()
###Output
_____no_output_____
###Markdown
Now let's get the numerical and categorical features:
###Code
num_attributes = list(housing.columns)
num_attributes.remove('ocean_proximity')
cat_attributes = ['ocean_proximity']
###Output
_____no_output_____
###Markdown
Let's tackle the transformations and create a pipeline for both numerical and categorical features
###Code
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline(steps=[
('impute', SimpleImputer(strategy='median')),
('attr_adder', addAdditionalAttributes()),
('scale', StandardScaler())
])
cat_pipeline = Pipeline(steps=[
('ohe', OneHotEncoder(handle_unknown='ignore'))
])
###Output
_____no_output_____
###Markdown
Now let's combine the two pipelines into one using `ColumnTransformer` method
###Code
from sklearn.compose import ColumnTransformer
full_pipeline = ColumnTransformer(transformers=[
('nums', num_pipeline, num_attributes),
('cats', cat_pipeline, cat_attributes)
])
###Output
_____no_output_____
###Markdown
Now that we are ready with all the transformer steps, let's dive into some __Machine Learning__ algorithms Let's prepare our data so that we can analyze it using various models
###Code
housing_prepared = full_pipeline.fit_transform(housing)
###Output
_____no_output_____
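###Markdown
As an optional check, we can look at the shape of the prepared array: the column count should be the 8 numeric columns plus the 5 engineered features plus one column per `ocean_proximity` category.
###Code
# Optional: confirm the number of rows and the expanded feature count
print(housing_prepared.shape)
###Output
_____no_output_____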
###Markdown
Linear Regression
###Code
from sklearn.linear_model import LinearRegression
housing_prepared = full_pipeline.fit_transform(housing)
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels) # Fitting the model to the training set
###Output
_____no_output_____
###Markdown
Let's look at the features and values for Linear Regression
###Code
print(f'Coefficients: {lin_reg.coef_}')
print(f'Intercept: {lin_reg.intercept_}')
###Output
Coefficients: [-58228.93849321 -60998.18461181 13342.35693418 -4348.59309874
8393.44959445 2462.8858331 -848.10392466 73515.45306702
28917.40680399 1841.60475348 7127.56707549 5517.10600831
-489.61012657 -13940.99943214 -50122.29906285 105640.48260186
-24105.08033644 -17472.10377043]
Intercept: 234040.94026529999
###Markdown
Now in order to get some predictions, let's take a small sample from the training set:
###Code
some_data = housing[:5]
some_data_labels = housing_labels[:5]
some_data_prepared = full_pipeline.transform(some_data)
###Output
_____no_output_____
###Markdown
Now let's predict the output using `predict` method:
###Code
some_predictions = lin_reg.predict(some_data_prepared)
for original, prediction in zip(some_data_labels, some_predictions):
print(f'Original Value: {round(original)}')
print(f'Predicted Value: {round(prediction)}\n')
###Output
Original Value: 286600
Predicted Value: 235953.0
Original Value: 340600
Predicted Value: 323452.0
Original Value: 196900
Predicted Value: 231087.0
Original Value: 46300
Predicted Value: 43875.0
Original Value: 254500
Predicted Value: 191290.0
###Markdown
Evaluating the ModelIn order to evaluate the model, we will first get predictions for the training set and use various evaluation techniques to compare the results- __RMSE__ _Root Mean Squared Error_ - the square root of the mean squared distance between the predictions and the actual values. - Lesser the better- __Cross-Validation__
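Concretely, the RMSE we compute below is $\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}^{(i)} - y^{(i)}\right)^2}$, where $m$ is the number of instances, $\hat{y}^{(i)}$ is the model's prediction and $y^{(i)}$ is the actual median house value.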
###Code
lin_predictions = lin_reg.predict(housing_prepared)
from sklearn.metrics import mean_squared_error
lin_mse = mean_squared_error(housing_labels, lin_predictions)
lin_rmse = np.sqrt(lin_mse)
print(lin_rmse)
###Output
65879.1711167482
###Markdown
Here, we can see that the RMSE for the Linear Regression model is about 65,879. This means that, on average, the prediction is roughly $65,879 off from the actual value. _Also note that this is on the training set, so the error on the test set is likely to be higher than this_ Before tweaking the model, let's try out various other models; once we settle on the best model we can then tune its parameters. Decision Tree Regressor
###Code
from sklearn.tree import DecisionTreeRegressor
dectree_reg = DecisionTreeRegressor()
dectree_reg.fit(housing_prepared, housing_labels)
###Output
_____no_output_____
###Markdown
Getting predictions from the `DecisionTreeRegressor` and calculating the RMSE for it
###Code
dectreee_predictions = dectree_reg.predict(housing_prepared)
dectree_mse = mean_squared_error(housing_labels, dectreee_predictions)
dectree_rmse = np.sqrt(dectree_mse)
print(dectree_rmse)
###Output
0.0
###Markdown
WAIT, WHAT! RMSE for `DecisionTreeRegressor` is 0!!!This is a clear indication that the model is badly overfitting the training set, so this score tells us nothing about how it will perform on unseen data Let's try another evaluation method: __Cross Validation__
###Code
from sklearn.model_selection import cross_val_score
dectree_score = cross_val_score(estimator=dectree_reg, X=housing_prepared, y=housing_labels,
scoring='neg_mean_squared_error', cv=10)
###Output
_____no_output_____
###Markdown
Now let's look at the scores, mean and std. deviation of the scores
###Code
dectree_rmse = np.sqrt(-dectree_score)
print(f'Scores:{dectree_rmse}')
print(f'Mean: {dectree_rmse.mean()}')
print(f'Std Dev: {dectree_rmse.std()}')
###Output
Scores:[67317.22979144 67854.58949683 71589.5293899 68561.01823896
70519.84720424 75181.55447834 69143.49502022 72021.48342787
75320.82018301 69118.971303 ]
Mean: 70662.85385338096
Std Dev: 2702.399767645295
###Markdown
As we can see, the mean cross-validation error of the `DecisionTreeRegressor` is about 70,663. Now let's evaluate `LinearRegression` using Cross Validation
###Code
linreg_score = cross_val_score(estimator=lin_reg, X=housing_prepared, y=housing_labels,
scoring='neg_mean_squared_error', cv=10)
linreg_rmse = np.sqrt(-linreg_score)
print(f'Scores:{linreg_rmse}')
print(f'Mean: {linreg_rmse.mean()}')
print(f'Std Dev: {linreg_rmse.std()}')
###Output
Scores:[64538.13230472 64182.01072581 68569.14820812 68166.42875105
65616.0268226 68796.11506635 64443.77316481 64363.49914171
69259.25163346 64363.75843765]
Mean: 66229.81442562716
Std Dev: 2063.3373707060377
###Markdown
As we can see, `DecisionTreeRegressor` is performing worse than `LinearRegression`. Let's look at another model Random Forest Regressor
###Code
from sklearn.ensemble import RandomForestRegressor
ranforreg = RandomForestRegressor()
ranforreg.fit(housing_prepared, housing_labels)
ranforreg_predictions = ranforreg.predict(housing_prepared)
###Output
_____no_output_____
###Markdown
Now let's look at both the evaluation scores
###Code
ranfor_mse = mean_squared_error(housing_labels, ranforreg_predictions)
ranfor_rmse = np.sqrt(ranfor_mse)
print(f'RMSE for RandomForestRegressor: {round(ranfor_rmse,2)}')
ranfor_score = cross_val_score(estimator=ranforreg, X=housing_prepared, y=housing_labels,
scoring='neg_mean_squared_error', cv=10)
ranfor_rmse = np.sqrt(-ranfor_score)
print(f'Scores:{ranfor_rmse}')
print(f'Mean: {ranfor_rmse.mean()}')
print(f'Std Dev: {ranfor_rmse.std()}')
###Output
Scores:[49733.14463807 47419.39231677 50000.80367177 52373.6116611
49698.09430276 53509.05723568 48569.36542151 47706.97313806
52900.76660287 50300.84145687]
Mean: 50221.20504454482
Std Dev: 2002.2420865560218
###Markdown
Again, we can see `RandomForestRegressor` has the lowest RMSE score out of the 3 models we tested.Now instead of trying new models, let's try to fine tune this one. Fine Tuning the modelThere are a few ways we can fine tune the model:- __Grid Search__: exhaustively evaluates every combination in the given parameter grid- __Randomized Search__: samples a number of random parameter combinations, controlled by the `n_iter` hyperparameter Grid Search:
###Code
from sklearn.model_selection import GridSearchCV
# Define the parameters we need to go through
parameters = [
{'n_estimators': [3, 10, 30, 50], 'max_features': [2, 4, 6, 8, 10, 12]},
{'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2, 3, 4, 5]}
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(estimator=forest_reg,
param_grid=parameters,
scoring='neg_mean_squared_error',
return_train_score=True)
print(grid_search)
###Output
GridSearchCV(cv=None, error_score=nan,
estimator=RandomForestRegressor(bootstrap=True, ccp_alpha=0.0,
criterion='mse', max_depth=None,
max_features='auto',
max_leaf_nodes=None,
max_samples=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=100, n_jobs=None,
oob_score=False, random_state=None,
verbose=0, warm_start=False),
iid='deprecated', n_jobs=None,
param_grid=[{'max_features': [2, 4, 6, 8, 10, 12],
'n_estimators': [3, 10, 30, 50]},
{'bootstrap': [False], 'max_features': [2, 3, 4, 5],
'n_estimators': [3, 10, 30]}],
pre_dispatch='2*n_jobs', refit=True, return_train_score=True,
scoring='neg_mean_squared_error', verbose=0)
###Markdown
_Note:_- _By default, `GridSearchCV` doesn't return training score. That's why we set `return_train_score` to True_
###Code
# Fitting the grid_search to our dataset
grid_search.fit(housing_prepared, housing_labels)
print(grid_search.best_params_)
print(grid_search.best_estimator_)
###Output
{'max_features': 10, 'n_estimators': 50}
RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse',
max_depth=None, max_features=10, max_leaf_nodes=None,
max_samples=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=50, n_jobs=None, oob_score=False,
random_state=None, verbose=0, warm_start=False)
###Markdown
Since the best value for `n_estimators` (50) is the largest value in the list we tried, it may be worth searching over a new set of larger values.One big drawback of this approach is that it is a trial-and-error method and the result depends entirely on the parameter grid we supply. Let's evaluate the model and look at the results
###Code
cv_results = grid_search.cv_results_
for parameter_, test_score_ in zip(cv_results['params'],cv_results['mean_test_score']):
print(f'Parameter: {parameter_} | Test Score: {round(np.sqrt(-test_score_), 2)}')
###Output
Parameter: {'max_features': 2, 'n_estimators': 3} | Test Score: 65717.17
Parameter: {'max_features': 2, 'n_estimators': 10} | Test Score: 56491.9
Parameter: {'max_features': 2, 'n_estimators': 30} | Test Score: 54265.15
Parameter: {'max_features': 2, 'n_estimators': 50} | Test Score: 53557.66
Parameter: {'max_features': 4, 'n_estimators': 3} | Test Score: 61894.08
Parameter: {'max_features': 4, 'n_estimators': 10} | Test Score: 54189.1
Parameter: {'max_features': 4, 'n_estimators': 30} | Test Score: 51678.33
Parameter: {'max_features': 4, 'n_estimators': 50} | Test Score: 51251.21
Parameter: {'max_features': 6, 'n_estimators': 3} | Test Score: 59730.35
Parameter: {'max_features': 6, 'n_estimators': 10} | Test Score: 52667.86
Parameter: {'max_features': 6, 'n_estimators': 30} | Test Score: 51029.2
Parameter: {'max_features': 6, 'n_estimators': 50} | Test Score: 50625.37
Parameter: {'max_features': 8, 'n_estimators': 3} | Test Score: 60427.17
Parameter: {'max_features': 8, 'n_estimators': 10} | Test Score: 53245.48
Parameter: {'max_features': 8, 'n_estimators': 30} | Test Score: 50850.17
Parameter: {'max_features': 8, 'n_estimators': 50} | Test Score: 50606.86
Parameter: {'max_features': 10, 'n_estimators': 3} | Test Score: 59434.42
Parameter: {'max_features': 10, 'n_estimators': 10} | Test Score: 52962.24
Parameter: {'max_features': 10, 'n_estimators': 30} | Test Score: 50976.56
Parameter: {'max_features': 10, 'n_estimators': 50} | Test Score: 50349.24
Parameter: {'max_features': 12, 'n_estimators': 3} | Test Score: 59339.73
Parameter: {'max_features': 12, 'n_estimators': 10} | Test Score: 52934.37
Parameter: {'max_features': 12, 'n_estimators': 30} | Test Score: 50752.97
Parameter: {'max_features': 12, 'n_estimators': 50} | Test Score: 50571.29
Parameter: {'bootstrap': False, 'max_features': 2, 'n_estimators': 3} | Test Score: 63122.41
Parameter: {'bootstrap': False, 'max_features': 2, 'n_estimators': 10} | Test Score: 55910.48
Parameter: {'bootstrap': False, 'max_features': 2, 'n_estimators': 30} | Test Score: 53008.91
Parameter: {'bootstrap': False, 'max_features': 3, 'n_estimators': 3} | Test Score: 60995.28
Parameter: {'bootstrap': False, 'max_features': 3, 'n_estimators': 10} | Test Score: 53905.18
Parameter: {'bootstrap': False, 'max_features': 3, 'n_estimators': 30} | Test Score: 51520.4
Parameter: {'bootstrap': False, 'max_features': 4, 'n_estimators': 3} | Test Score: 59758.1
Parameter: {'bootstrap': False, 'max_features': 4, 'n_estimators': 10} | Test Score: 52877.1
Parameter: {'bootstrap': False, 'max_features': 4, 'n_estimators': 30} | Test Score: 50563.73
Parameter: {'bootstrap': False, 'max_features': 5, 'n_estimators': 3} | Test Score: 59139.74
Parameter: {'bootstrap': False, 'max_features': 5, 'n_estimators': 10} | Test Score: 52527.82
Parameter: {'bootstrap': False, 'max_features': 5, 'n_estimators': 30} | Test Score: 50350.35
###Markdown
Let's look at the RMSE for the best score
###Code
negative_gridsearch_mse = grid_search.best_score_
gridsearch_rmse = np.sqrt(-negative_gridsearch_mse)
gridsearch_rmse
###Output
_____no_output_____
###Markdown
Randomized Search:
###Code
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distribs = {
'n_estimators': randint(low=1, high=200),
'max_features': randint(low=1, high=8)
}
forest_reg = RandomForestRegressor(random_state=42)
rnd_search = RandomizedSearchCV(estimator=forest_reg, param_distributions=param_distribs,
n_iter=10, cv=5, scoring='neg_mean_squared_error', random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
###Output
_____no_output_____
###Markdown
Let's look at the result:
###Code
print(f'Best Parameter: {rnd_search.best_params_}\n')
rnd_results = rnd_search.cv_results_
for parameter_, test_score_ in zip(rnd_results['params'],rnd_results['mean_test_score']):
print(f'Parameter: {parameter_} | Test Score: {round(np.sqrt(-test_score_), 2)}')
###Output
Best Parameter: {'max_features': 7, 'n_estimators': 180}
Parameter: {'max_features': 7, 'n_estimators': 180} | Test Score: 50139.69
Parameter: {'max_features': 5, 'n_estimators': 15} | Test Score: 52147.62
Parameter: {'max_features': 3, 'n_estimators': 72} | Test Score: 51664.1
Parameter: {'max_features': 5, 'n_estimators': 21} | Test Score: 51528.56
Parameter: {'max_features': 7, 'n_estimators': 122} | Test Score: 50251.36
Parameter: {'max_features': 3, 'n_estimators': 75} | Test Score: 51626.89
Parameter: {'max_features': 3, 'n_estimators': 88} | Test Score: 51602.38
Parameter: {'max_features': 5, 'n_estimators': 100} | Test Score: 50440.98
Parameter: {'max_features': 3, 'n_estimators': 150} | Test Score: 51370.41
Parameter: {'max_features': 5, 'n_estimators': 2} | Test Score: 65091.98
###Markdown
Let's look at the RMSE for the best score
###Code
negative_rndsearch_mse = rnd_search.best_score_
rndsearch_rmse = np.sqrt(-negative_rndsearch_mse)
rndsearch_rmse
###Output
_____no_output_____
###Markdown
Analyzing the best model and its errors Now let's look at the feature importances of the best model and see whether we can drop some of the features
###Code
feature_importance = rnd_search.best_estimator_.feature_importances_
categorical_pipe = full_pipeline.named_transformers_['cats']
cat_attributes = list(categorical_pipe.named_steps['ohe'].categories_[0])
extra_attributes = ['household_per_population', 'rooms_per_bedroom', 'bedrooms_per_room', 'rooms_per_household', 'population_per_household']
attributes = num_attributes + extra_attributes + cat_attributes
sorted(zip(feature_importance, attributes), reverse=True)
###Output
_____no_output_____
###Markdown
Here, we can see the importance of each feature and decide which features we might want to exclude. Testing our model on our test data
###Code
final_model = rnd_search.best_estimator_
X_test = housing_test.drop('median_house_value', axis=1)
y_test = housing_test['median_house_value'].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
print(f'RMSE for the Final Model: {round(final_rmse, 2)}')
###Output
RMSE for the Final Model: 48080.72
###Markdown
We can also calculate a 95% confidence interval for the test RMSE:
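The interval below is obtained by applying a t-interval to the squared errors $e_i = (\hat{y}_i - y_i)^2$: the bounds on their mean are $\bar{e} \pm t_{m-1,\,0.975}\,\mathrm{SE}(\bar{e})$, and taking the square root of each bound gives the interval for the RMSE.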
###Code
from scipy import stats
confidence = 0.95
squared_error = (final_predictions - y_test)**2
np.sqrt(stats.t.interval(confidence, len(squared_error) -1,
loc=squared_error.mean(),
scale=stats.sem(squared_error)
))
###Output
_____no_output_____
###Markdown
Additional Topics:- Full Pipeline with both preparation and prediction- Full Pipeline with `GridSearchCV` for selecting the imputer parameter Full Pipeline with preparation and prediction
###Code
full_pipeline_with_predictor = Pipeline([
('preparation', full_pipeline),
('linear', LinearRegression())
])
full_pipeline_with_predictor.fit(housing, housing_labels)
full_pipeline_with_predictor.predict(some_data)
###Output
_____no_output_____
###Markdown
Full Pipeline with `GridSearchCV` for selecting the imputer parameter
###Code
full_pipeline_imputer_selection = Pipeline(steps=[
('preparation', full_pipeline),
('rnd_for_reg', RandomForestRegressor(**rnd_search.best_params_))
])
paramater_grid = [{
'preparation__nums__impute__strategy': ['mean', 'median', 'most_frequent']
}]
grid_search_prep = GridSearchCV(estimator=full_pipeline_imputer_selection, param_grid=paramater_grid,
scoring='neg_mean_squared_error', cv=5, verbose=2)
grid_search_prep.fit(housing, housing_labels)
grid_search_prep.best_params_
###Output
_____no_output_____ |
sagemaker-python-sdk/mxnet_mnist/mxnet_mnist_with_batch_transform.ipynb | ###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.3.0',
hyperparameters={'learning-rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; the index of the maximum element of that list gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
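###Markdown
As a quick illustration with a made-up probability vector (not actual transform output), the helper returns the index of the largest probability:
###Code
# Hypothetical single prediction where class 7 has the highest probability
sample_output = '[[0.0, 0.01, 0.02, 0.0, 0.0, 0.0, 0.0, 0.95, 0.01, 0.01]]'
print(predicted_label(sample_output)) # prints 7
###Output
_____no_output_____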
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from urllib.parse import urlparse
import boto3
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.append(predicted_label(output))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the images:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(sample_data_bucket).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.4.1',
py_version='py3',
hyperparameters={'learning-rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; the index of the maximum element of that list gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from urllib.parse import urlparse
import boto3
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.append(predicted_label(output))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the images:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(sample_data_bucket).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch TransformationThe SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that are needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = "sagemaker-sample-data-{}".format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = "s3://{}/mxnet-mnist-example/code".format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = "s3://{}/mxnet-mnist-example/artifacts".format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script also checkpoints the model at the end of every epoch and saves the model graph, params and optimizer state in the folder `/opt/ml/checkpoints`. If the folder path does not exist then it skips checkpointing. The script we use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!pygmentize mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that are used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that is passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we choose one ``ml.m4.xlarge`` instance for our training job.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(
entry_point="mnist.py",
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type="ml.m4.xlarge",
framework_version="1.6.0",
py_version="py3",
hyperparameters={"learning-rate": 0.1},
)
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our `MXNet` object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = "s3://{}/mxnet/mnist/train".format(sample_data_bucket)
test_data_location = "s3://{}/mxnet/mnist/test".format(sample_data_bucket)
mnist_estimator.fit({"train": train_data_location, "test": test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our `MXNet` estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the ones we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = "batch-transform/mnist-1000-samples"
transformer.transform(
"s3://{}/{}".format(sample_data_bucket, input_file_path), content_type="text/csv"
)
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that blocks until the batch transform job has completed. We call that here to see if the batch transform job is still running; the cell finishes running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; the index of the maximum element of that list gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from sagemaker.s3 import S3Downloader
predictions = []
for i in range(10):
file_key = "{}/data-{}.csv.out".format(transformer.output_path, i)
output = S3Downloader.read_file(file_key)
predictions.append(predicted_label(output))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we also download and display the corresponding original input data so that we can see how the model did with its predictions:
###Code
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["figure.figsize"] = (2, 10)
def show_digit(img, caption="", subplot=None):
if subplot == None:
_, (subplot) = plt.subplots(1, 1)
imgr = img.reshape((28, 28))
subplot.axis("off")
subplot.imshow(imgr, cmap="gray")
plt.title(caption)
for i in range(10):
input_file_name = "data-{}.csv".format(i)
input_file_uri = "s3://{}/{}/{}".format(sample_data_bucket, input_file_path, input_file_name)
input_data = np.fromstring(S3Downloader.read_file(input_file_uri), sep=",")
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.2.1',
hyperparameters={'learning_rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; the index of the maximum element of that list gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from urllib.parse import urlparse
import boto3
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.append(predicted_label(output))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the images:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(sample_data_bucket).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = Session().default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/customcode/mxnet'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
hyperparameters={'learning_rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
import boto3
region = boto3.Session().region_name
train_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/train'.format(region)
test_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/test'.format(region)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
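###Markdown
The call above uses the SDK defaults for everything except the instance settings. `transformer()` also accepts optional arguments for controlling the batch transform job, such as where results are written and how records are batched. The cell below is only a sketch of that usage -- the values (including the S3 output path) are illustrative placeholders, and the default configuration above is what the rest of this notebook actually uses.
###Code
# A more explicit configuration (illustrative values only; not used elsewhere in this notebook)
configured_transformer = mnist_estimator.transformer(instance_count=1,
                                                     instance_type='ml.m4.xlarge',
                                                     strategy='MultiRecord',   # batch several records per request
                                                     assemble_with='Line',     # join output records with newlines
                                                     max_payload=6,            # maximum request size in MB
                                                     accept='text/csv',        # content type of the saved predictions
                                                     output_path='s3://{}/batch-output'.format(bucket))  # placeholder location
###Output
_____no_output_____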
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we will be using an image of a '7' that's already saved in S3:
###Code
transform_data_location = 's3://sagemaker-sample-data-{}/batch-transform/mnist'.format(region)
###Output
_____no_output_____
###Markdown
Just for fun, we can print out what the image looks like. First we'll create a temporary directory:
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the image:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot==None:
_,(subplot)=plt.subplots(1,1)
imgr=img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
input_data_file = '/tmp/data/mnist_data.csv'
s3 = boto3.resource('s3')
s3.Bucket('sagemaker-sample-data-{}'.format(region)).download_file('batch-transform/mnist/data.csv', input_data_file)
input_data = genfromtxt(input_data_file, delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Now we can use the Transformer to classify the handwritten digit:
###Code
transformer.transform(transform_data_location, content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
We use that to download the results from S3:
###Code
import json
from urllib.parse import urlparse
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
file_key = '{}/data.csv.out'.format(parsed_url.path[1:])
s3 = boto3.resource('s3')
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
###Output
_____no_output_____
###Markdown
The output here is a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list:
###Code
import ast
output = ast.literal_eval(output)
probabilities = output[0]
###Output
_____no_output_____
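###Markdown
To make the parsing step concrete, here is a tiny self-contained example using a made-up output string with the same shape (a list containing one list of ten probabilities); it is not read from the actual job output.
###Code
import ast
# A fabricated example string shaped like one line of batch transform output (not the real output)
sample_output = '[[0.01, 0.0, 0.01, 0.0, 0.0, 0.01, 0.0, 0.93, 0.02, 0.02]]'
sample = ast.literal_eval(sample_output)
print(type(sample), len(sample[0]))  # a list containing one list of ten probabilities
###Output
_____no_output_____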
###Markdown
Now that we have the list of probabilities, finding the maximum element of the list gives us the predicted label:
###Code
prediction = probabilities.index(max(probabilities))
print('Prediction is {}'.format(prediction))
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch TransformationThe SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script also checkpoints the model at the end of every epoch and saves the model graph, params and optimizer state in the folder `/opt/ml/checkpoints`. If the folder path does not exist then it skips checkpointing. The script we use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!pygmentize mnist.py
###Output
_____no_output_____
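###Markdown
The checkpointing behaviour described above amounts to a guard on the checkpoint directory followed by a call to the Module API's checkpoint saver. The cell below is only a rough sketch of that idea for illustration, not the actual code in `mnist.py`.
###Code
import os
CHECKPOINTS_DIR = '/opt/ml/checkpoints'
def maybe_checkpoint(module, epoch, prefix='mnist'):
    # If SageMaker did not mount a checkpoint directory, skip checkpointing entirely
    if not os.path.exists(CHECKPOINTS_DIR):
        return
    # Saves the symbol (graph), the parameters for this epoch, and the optimizer state
    module.save_checkpoint(os.path.join(CHECKPOINTS_DIR, prefix), epoch, save_optimizer_states=True)
###Output
_____no_output_____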
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that are used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that is passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we choose one ``ml.m4.xlarge`` instance for our training job.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.6.0',
py_version='py3',
hyperparameters={'learning-rate': 0.1})
###Output
_____no_output_____
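###Markdown
In script mode, the entries of the ``hyperparameters`` dict are passed to the training script as command-line arguments, so a script can read them with `argparse`. The cell below is only a minimal sketch of that pattern for illustration -- the argument names, defaults, and environment variables shown here are assumptions and not the actual contents of `mnist.py`.
###Code
import argparse
import os
def parse_args():
    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line flags, e.g. --learning-rate 0.1
    parser.add_argument('--learning-rate', type=float, default=0.1)
    # The data channels are exposed through environment variables
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))
    parser.add_argument('--test', type=str, default=os.environ.get('SM_CHANNEL_TEST'))
    return parser.parse_args()
###Output
_____no_output_____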
###Markdown
Running a training jobAfter we've constructed our `MXNet` object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our `MXNet` estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that blocks until the batch transform job has completed. We call that here to see if the batch transform job is still running; the cell finishes running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; finding the maximum element of the list then gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from sagemaker.s3 import S3Downloader
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(transformer.output_path, i)
output = S3Downloader.read_file(file_key)
predictions.append(predicted_label(output))
###Output
_____no_output_____
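###Markdown
If you would rather not assume how many output files the job produced, the SDK's `S3Downloader.list()` helper can first enumerate everything under the job's output path. This is a small optional sketch that only prints what it finds.
###Code
from sagemaker.s3 import S3Downloader
# List every object the batch transform job wrote under its output path
output_files = S3Downloader.list(transformer.output_path)
print('{} output files found'.format(len(output_files)))
print(output_files[:3])
###Output
_____no_output_____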
###Markdown
For demonstration purposes, we also download and display the corresponding original input data so that we can see how the model did with its predictions:
###Code
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_uri = 's3://{}/{}/{}'.format(sample_data_bucket, input_file_path, input_file_name)
input_data = np.fromstring(S3Downloader.read_file(input_file_uri), sep=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
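###Markdown
Since the expected labels for these ten images are listed above, a quick sanity check is to count how many of the predictions agree with them. This small sketch uses those labels directly.
###Code
expected_labels = [7, 2, 1, 0, 4, 1, 4, 9, 5, 9]
matches = sum(p == e for p, e in zip(predictions, expected_labels))
print('{}/{} predictions match the expected labels'.format(matches, len(expected_labels)))
###Output
_____no_output_____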
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script also checkpoints the model at the end of every epoch and saves the model graph, params and optimizer state in the folder `/opt/ml/checkpoints`. If the folder path does not exist then it will skip checkpointing. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.4.1',
py_version='py3',
hyperparameters={'learning-rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; finding the maximum element of the list then gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from urllib.parse import urlparse
import boto3
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.append(predicted_label(output))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the images:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(sample_data_bucket).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script also checkpoints the model at the end of every epoch and saves the model graph, params and optimizer state in the folder `/opt/ml/checkpoints`. If the folder path does not exist then it will skip checkpointing. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.6.0',
py_version='py3',
hyperparameters={'learning-rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; finding the maximum element of the list then gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from urllib.parse import urlparse
import boto3
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.append(predicted_label(output))
###Output
_____no_output_____
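###Markdown
If you prefer not to hard-code how many output files to fetch, boto3 can first list everything under the job's output prefix. This optional sketch reuses the `bucket_name` and `prefix` parsed above and only prints what it finds.
###Code
# Enumerate the objects the batch transform job wrote under its output prefix
output_keys = [obj.key for obj in s3.Bucket(bucket_name).objects.filter(Prefix=prefix)]
print('{} output files found'.format(len(output_keys)))
print(output_keys[:3])
###Output
_____no_output_____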
###Markdown
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the images:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(sample_data_bucket).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____
###Markdown
Using the Apache MXNet Module API with SageMaker Training and Batch Transformation The SageMaker Python SDK makes it easy to train MXNet models and use them for batch transformation. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.incubator.apache.org/api/python/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images and subsequently test its classification accuracy on the 10,000 test images. SetupFirst, we define a few variables that will be needed later in the example.
###Code
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_session.region_name
sample_data_bucket = 'sagemaker-sample-data-{}'.format(region)
# S3 bucket for saving files. Feel free to redefine this variable to the bucket of your choice.
bucket = sagemaker_session.default_bucket()
# Bucket location where your custom code will be saved in the tar.gz format.
custom_code_upload_location = 's3://{}/mxnet-mnist-example/code'.format(bucket)
# Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/mxnet-mnist-example/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Training and inference scriptThe `mnist.py` script provides all the code we need for training and inference. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
###Code
!cat mnist.py
###Output
_____no_output_____
###Markdown
SageMaker's MXNet estimator classThe SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.For this example, we will choose one ``ml.m4.xlarge`` instance.
###Code
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.4.0',
hyperparameters={'learning-rate': 0.1})
###Output
_____no_output_____
###Markdown
Running a training jobAfter we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: train and test.During training, SageMaker makes this data stored in S3 available in the local filesystem where the `mnist.py` script is running. The script then simply loads the train and test data from disk.
###Code
%%time
train_data_location = 's3://{}/mxnet/mnist/train'.format(sample_data_bucket)
test_data_location = 's3://{}/mxnet/mnist/test'.format(sample_data_bucket)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
###Output
_____no_output_____
###Markdown
SageMaker's transformer classAfter training, we use our MXNet estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
###Code
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Running a batch transform jobNow we can perform some inference with the model we've trained by running a batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script.For demonstration purposes, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
###Code
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(sample_data_bucket, input_file_path), content_type='text/csv')
###Output
_____no_output_____
###Markdown
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
###Code
transformer.wait()
###Output
_____no_output_____
###Markdown
Downloading the resultsThe batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
###Code
print(transformer.output_path)
###Output
_____no_output_____
###Markdown
The output here will be a list of predictions, where each prediction is a list of probabilities, one for each possible label. Since we read the output as a string, we use `ast.literal_eval()` to turn it into a list; finding the maximum element of the list then gives us the predicted label. Here we define a convenience method to take the output and produce the predicted label.
###Code
import ast
def predicted_label(transform_output):
output = ast.literal_eval(transform_output)
probabilities = output[0]
return probabilities.index(max(probabilities))
###Output
_____no_output_____
###Markdown
Now let's download the first ten results from S3:
###Code
import json
from urllib.parse import urlparse
import boto3
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.append(predicted_label(output))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
###Code
import os
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
###Output
_____no_output_____
###Markdown
And now we'll print out the images:
###Code
from numpy import genfromtxt
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(sample_data_bucket).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
###Output
_____no_output_____
###Markdown
Here, we can see the original labels are:```7, 2, 1, 0, 4, 1, 4, 9, 5, 9```Now let's print out the predictions to compare:
###Code
print(predictions)
###Output
_____no_output_____ |
10-Introducao_ao_TensorFlow/06-Mini-Projeto - Analise4.ipynb | ###Markdown
Exploratory Analysis of a Kaggle Dataset Analysis 4
###Code
# Imports
import os
import subprocess
import stat
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import datetime
sns.set(style="white")
%matplotlib inline
# Dataset
clean_data_path = "dataset/autos.csv"
df = pd.read_csv(clean_data_path,encoding="latin-1")
# Compute the average price per brand and per vehicle type
trial = pd.DataFrame()
for b in list(df["brand"].unique()):
for v in list(df["vehicleType"].unique()):
z = df[(df["brand"] == b) & (df["vehicleType"] == v)]["price"].mean()
trial = trial.append(pd.DataFrame({'brand':b , 'vehicleType':v , 'avgPrice':z}, index=[0]))
trial = trial.reset_index()
del trial["index"]
trial["avgPrice"].fillna(0,inplace=True)
trial["avgPrice"].isnull().value_counts()
trial["avgPrice"] = trial["avgPrice"].astype(int)
trial.head(5)
###Output
_____no_output_____
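###Markdown
The same average-price table can also be built with pandas `groupby`. This is an alternative sketch; note that, unlike the loop above, `groupby` only produces the brand/vehicleType combinations that actually occur in the data and drops missing `vehicleType` values by default, so the result may differ slightly.
###Code
# Alternative: average price per brand and vehicle type using groupby
trial_gb = (df.groupby(['brand', 'vehicleType'])['price']
              .mean()
              .reset_index()
              .rename(columns={'price': 'avgPrice'}))
trial_gb['avgPrice'] = trial_gb['avgPrice'].fillna(0).astype(int)
trial_gb.head(5)
###Output
_____no_output_____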
###Markdown
Average vehicle price by brand and by vehicle type
###Code
# Create a heatmap with the average vehicle price by brand and by vehicle type
tri = trial.pivot("brand","vehicleType", "avgPrice")
fig, ax = plt.subplots(figsize=(15,20))
sns.heatmap(tri,linewidths=1,cmap="YlGnBu",annot=True, ax=ax, fmt="d")
ax.set_title("Heatmap - Preço médio de um veículo por marca e tipo de veículo",fontdict={'size':20})
ax.xaxis.set_label_text("Tipo de Veículo",fontdict= {'size':20})
ax.yaxis.set_label_text("Marca",fontdict= {'size':20})
plt.show()
# Saving the plot
fig.savefig("plots/Analise4/heatmap-price-brand-vehicleType.png")
###Output
_____no_output_____ |
Data Analyst with Python/17_Joining_Data_in_SQL/17_2_Outer joins and cross joins.ipynb | ###Markdown
2. Outer joins and cross joins**In this chapter, you'll come to grips with different kinds of outer joins. You'll learn how to gain further insights into your data through left joins, right joins, and full joins. In addition to outer joins, you'll also work with cross joins.**
###Code
%load_ext sql
%sql sqlite://
###Output
_____no_output_____
###Markdown
LEFT and RIGHT JOINsYou can remember outer joins as reaching OUT to another table while keeping all of the records of the original table. Inner joins keep only the records IN both tables. You'll begin this chapter by exploring (1) `LEFT JOIN`s, (2) `RIGHT JOIN`s, and (3) `FULL JOIN`s, which are the three types of `OUTER JOIN`s. Let's begin by exploring how a `LEFT JOIN` differs from an `INNER JOIN` via a diagram. INNER JOIN diagramRecall the inner join diagram. - **`left_table`**id | val:---|:---1 | L12 | L23 | L34 | L4- **`right_table`**id | val:---|:---1 | R14 | R25 | R36 | R4- **`INNER JOIN`**L_id | L_val | R_val:---|:---|:---1 | L1 | R14 | L4 | R2The only records that were included in the resulting table of the `INNER JOIN` query were those in which the id field had matching values. LEFT JOIN initial diagramIn contrast, a `LEFT JOIN` notes those records in the left table that do not have a match on the key field in the right table. This is denoted in the diagram by the open circles remaining close to the left table for id values of 2 and 3. These values of 2 and 3 do not appear in the id field of the right table. LEFT JOIN diagramYou now see the result of the `LEFT JOIN` query. - **`left_table`**id | val:---|:---1 | L12 | L23 | L34 | L4- **`right_table`**id | val:---|:---1 | R14 | R25 | R36 | R4- **`LEFT JOIN`**L_id | L_val | R_val:---|:---|:---1 | L1 | R12 | L2 | -3 | L3 | -4 | L4 | R2Whereas the `INNER JOIN` kept just the records corresponding to id values of 1 and 4, a `LEFT JOIN` keeps all of the original records in the left table but then marks the values as missing in the right table for those that don't have a match. The missing values are marked with dark gray boxes here for clarity. Note that the values of 5 and 6 for id in the right table are not found in the result of `LEFT JOIN` in any way. Multiple INNER JOIN diagramIt isn't always the case that each key value in the left table corresponds to exactly one record in the key column of the right table. In these examples, we have this layout.- **`left_table`**id | val:---|:---1 | L12 | L23 | L34 | L4- **`right_table`**id | val:---|:---1 | R11 | R24 | R35 | R46 | R5- **`LEFT JOIN`**L_id | L_val | R_val:---|:---|:---1 | L1 | R11 | L1 | R22 | L2 | -3 | L3 | -4 | L4 | R3Missing entries still occur for ids of 2 and 3 and the value of R3 is brought into the join from right2 since it matches on id 4. Duplicate rows are shown in the `LEFT JOIN` for id 1 since it has two matches corresponding to the values of R1 and R2 in the right2 table. The syntax of a LEFT JOINThe syntax of the `LEFT JOIN` is similar to that of the `INNER JOIN`. Let's explore the same code you used before to determine the countries with a prime minister and a president, but let's use a `LEFT JOIN` instead of an `INNER JOIN`. Further, let's remove continent to save space on the screen.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
%%sql
SELECT p1.country, prime_minister, president
FROM prime_ministers AS p1
LEFT JOIN presidents AS p2
ON p1.country = p2.country;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
The first four records in this table are the same as those from the `INNER JOIN`. The last six correspond to the countries that do not have a president and thus their president values are missing. RIGHT JOINThe `RIGHT JOIN` is much less common than the `LEFT JOIN` so we won't spend as much time on it here. The diagram will help you to understand how it works. - **`left_table`**id | val:---|:---1 | L12 | L23 | L34 | L4- **`right_table`**id | val:---|:---1 | R14 | R25 | R36 | R4- **`RIGHT JOIN`**R_id | L_val | R_val:---|:---|:---1 | L1 | R14 | L4 | R25 | - | R36 | - | R4Instead of matching entries in the id column on the left table TO the id column of the right table, a RIGHT JOIN does the reverse. Therefore, we see open circles on the ids of 5 and 6 in the right table since they are not found in the left table. The resulting table from the `RIGHT JOIN` shows these missing entries in the `L_val` field. As you can see in the SQL, the right table appears after `RIGHT JOIN` and the left table appears after `FROM`.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
%%sql
SELECT right_table.id AS R_id, left_table.val AS L_val, right_table.val AS R_val
FROM left_table
RIGHT JOIN right_table
ON left_table.id = right_table.id;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
(sqlite3.OperationalError) RIGHT and FULL OUTER JOINs are not currently supported
[SQL: SELECT right_table.id AS R_id, left_table.val AS L_val, right_table.val AS R_val
FROM left_table
RIGHT JOIN right_table
ON left_table.id = right_table.id;]
(Background on this error at: http://sqlalche.me/e/e3q8)
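###Markdown
This SQLite build does not support `RIGHT JOIN` (or `FULL JOIN`), as the error above shows. Because a right join is simply a left join with the two tables swapped, an equivalent result can be produced with a query along these lines (a sketch of the swapped `LEFT JOIN`):
###Code
%%sql
SELECT right_table.id AS R_id, left_table.val AS L_val, right_table.val AS R_val
FROM right_table
LEFT JOIN left_table
ON left_table.id = right_table.id;
###Output
_____no_output_____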
###Markdown
Left JoinNow you'll explore the differences between performing an inner join and a left join using the `cities` and `countries` tables.You'll begin by performing an inner join with the `cities` table on the left and the `countries` table on the right. Remember to alias the name of the city field as `city` and the name of the country field as `country`.You will then change the query to a left join. Take note of how many records are in each query here. - Fill in the code based on the instructions in the code comments to complete the inner join. Note how many records are in the result of the join in the query result.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
%%sql
-- Select the city name (with alias), the country code,
-- the country name (with alias), the region,
-- and the city proper population
SELECT c1.name AS city, code, c2.country_name AS country, region, city_proper_pop
-- From left table (with alias)
FROM cities AS c1
-- Join to right table (with alias)
INNER JOIN countries AS c2
-- Match on country code
ON c1.country_code = c2.code
-- Order by descending country code
ORDER BY code DESC
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 230 rows``` - Change the code to perform a `LEFT JOIN` instead of an `INNER JOIN`. After executing this query, note how many records the query result contains.
###Code
%%sql
SELECT c1.name AS city, code, c2.country_name AS country,
region, city_proper_pop
FROM cities AS c1
-- Join right table (with alias)
LEFT JOIN countries AS c2
-- Match on country code
ON c1.country_code = c2.code
-- Order by descending country code
ORDER BY code DESC
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 236 rows``` *Notice that the* `INNER JOIN` *version resulted in **230** records. The* `LEFT JOIN` *version returned **236** rows.* Left join (2)Next, you'll try out another example comparing an inner join to its corresponding left join. Before you begin though, take note of how many records are in both the `countries` and `languages` tables below.You will begin with an inner join on the `countries` table on the left with the `languages` table on the right. Then you'll change the code to a left join in the next bullet.Note the use of multi-line comments here using `/*` and `*/`. - Perform an inner join and alias the name of the `country` field as country and the name of the language field as `language`.- Sort based on descending country name.
###Code
%%sql
/*
Select country name AS country, the country's local name,
the language name AS language, and
the percent of the language spoken in the country
*/
SELECT c.country_name AS country, local_name, l.name AS language, percent
-- From left table (alias as c)
FROM countries AS c
-- Join to right table (alias as l)
INNER JOIN languages AS l
-- Match on fields
ON c.code = l.code
-- Order by descending country
ORDER BY country DESC
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 909 rows``` - Perform a left join instead of an inner join. Observe the result, and also note the change in the number of records in the result.- Carefully review which records appear in the left join result, but not in the inner join result.
###Code
%%sql
/*
Select country name AS country, the country's local name,
the language name AS language, and
the percent of the language spoken in the country
*/
SELECT c.country_name AS country, local_name, l.name AS language, percent
-- From left table (alias as c)
FROM countries AS c
-- Join to right table (alias as l)
LEFT JOIN languages AS l
-- Match on fields
ON c.code = l.code
-- Order by descending country
ORDER BY country DESC
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 916 rows``` *Notice that the* `INNER JOIN` *version resulted in **909** records. The* `LEFT JOIN` *version returned **916** rows.* Left join (3)You'll now revisit the use of the `AVG()` function introduced in our introductory SQL course. You will use it in combination with left join to determine the average gross domestic product (GDP) per capita **by region** in 2010. - Begin with a left join with the `countries` table on the left and the `economies` table on the right.- Focus only on records with 2010 as the `year`.
###Code
%%sql
SELECT c.country_name, region, gdp_percapita
FROM countries AS c
LEFT JOIN economies AS e
ON c.code = e.code
WHERE e.year = 2010
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 184 rows``` - Modify your code to calculate the average GDP per capita `AS avg_gdp` for **each region** in 2010.- Select the `region` and `avg_gdp` fields.- Arrange this data on average GDP per capita for each region in 2010 from highest to lowest average GDP per capita.
###Code
%%sql
SELECT c.region, AVG(gdp_percapita) AS avg_gdp
FROM countries AS c
LEFT JOIN economies AS e
ON c.code = e.code
WHERE e.year = 2010
GROUP BY region
ORDER BY avg_gdp DESC
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 23 rows``` Right joinRight joins aren't as common as left joins. One reason why is that you can always write a right join as a left join.- The left join code is commented out here. Your task is to write a new query using rights joins that produces the same result as what the query using left joins produces. Keep this left joins code commented as you write your own query just below it using right joins to solve the problem. Note the order of the joins matters in your conversion to using right joins.```sql-- convert this code to use RIGHT JOINs instead of LEFT JOINs/*SELECT cities.name AS city, urbanarea_pop, countries.name AS country, indep_year, languages.name AS language, percentFROM cities LEFT JOIN countries ON cities.country_code = countries.code LEFT JOIN languages ON countries.code = languages.codeORDER BY city, language;*/``` ```sqlSELECT cities.name AS city, urbanarea_pop, countries.name AS country, indep_year, languages.name AS language, percentFROM languages RIGHT JOIN countries ON languages.code = countries.code RIGHT JOIN cities ON countries.code = cities.country_codeORDER BY city, language;``` --- FULL JOINsThe last of the three types of OUTER JOINs is the `FULL JOIN`. Now, you'll see the differences between a `FULL JOIN` and the other joins you've learned about. In particular, the instruction will focus on comparing them to `INNER JOIN`s and `LEFT JOIN`s and then to `LEFT JOIN`s and `RIGHT JOIN`s. Let's first review how the diagram changes between an `INNER JOIN` and a `LEFT JOIN` for our basic example using the left and right tables. Then we'll delve into the `FULL JOIN` diagram and its SQL code. INNER JOIN vs LEFT JOINRecall that an `INNER JOIN` keeps only the records that have matching key field values in both tables. A `LEFT JOIN` keeps all of the records in the left table while bringing in missing values for those key field values that don't appear in the right table. Let's next review the differences between a `LEFT JOIN` and a `RIGHT JOIN`. LEFT JOIN vs RIGHT JOINNow you can see the differences between a `LEFT JOIN` and a `RIGHT JOIN`. The id values of 2 and 3 in the left table do not match with the id values in the right table, so missing values are brought in for them in the `LEFT JOIN`. Likewise for the `RIGHT JOIN`, missing values are brought in for id values of 5 and 6. FULL JOIN initial diagramA `FULL JOIN` combines a `LEFT JOIN` and a `RIGHT JOIN` as you can see by looking at this diagram. So it will bring in all records from both the left and the right table and keep track of the missing values accordingly.- **`left_table`**id | val:---|:---1 | L12 | L23 | L34 | L4- **`right_table`**id | val:---|:---1 | R14 | R25 | R36 | R4- **`FULL JOIN`**L_id | R_id | L_val | R_val:---|:---|:---|:---1 | ~ | L1 | ~2 | ~ | L2 | ~3 | ~ | L3 | ~4 | 4 | L4 | R2~ | 5 | ~ | R3~ | 6 | ~ | R4 FULL JOIN diagramNote the missing values here and that all six of the values of id are included in the table. You can also see from the SQL code to produce this `FULL JOIN` result that the general format aligns closely with the SQL syntax you've seen for both an `INNER JOIN` and a `LEFT JOIN`. You'll next explore an example from the leaders database.```sqlSELECT left_table.id AS L_id, right_table.id AS R_id, left_table.val AS L_val, right_table.val AS R_valFROM left_tableFULL JOIN right_tableUSING (id);``` FULL JOIN example using leaders databaseLet's revisit the example of looking at countries with prime ministers and/or presidents. 
We'll walk through the code line by line to do this using a `FULL JOIN`. The `SELECT` statement starts us off by including the country field from both of our tables of interest and also the `prime_minister` and `president` fields.Next, the left table is specified as `prime_ministers`. Note that the order matters here and if you switched the two tables you'd get slightly different output.The right table is specified as `presidents` with the alias of `p2`. `prime_ministers` was aliased as `p1` in the previous line.Lastly, the join is done based on the key field of country in both tables. ```sqlSELECT p1.country AS pm_co, p2.country AS pres_co, prime_minister, presidentFROM prime_ministers AS p1FULL JOIN presidents AS p2ON p1.country = p2.country``` Since this SQLite build does not support `FULL JOIN`, the cell below emulates it by combining two `LEFT JOIN`s with `UNION ALL`.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
%%sql
SELECT p1.country AS pm_co, p2.country AS pres_co, prime_minister, president
FROM prime_ministers AS p1
LEFT JOIN presidents AS p2 USING(country)
UNION ALL
SELECT p1.country AS pm_co, p2.country AS pres_co, prime_minister, president
FROM presidents AS p2
LEFT JOIN prime_ministers AS p1 USING(country)
WHERE p1.country IS NULL;
###Output
sqlite://
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
Full joinIn this exercise, you'll examine how your results differ when using a full join versus using a left join versus using an inner join with the `countries` and `currencies` tables.You will focus on the North American `region` and also where the `name` of the country is missing. Dig in to see what we mean!Begin with a full join with `countries` on the left and `currencies` on the right. The fields of interest have been `SELECT`ed for you throughout this exercise.Then complete a similar left join and conclude with an inner join. - Choose records in which `region` corresponds to North America or is `NULL`. ```sqlSELECT name AS country, code, region, basic_unit-- From countriesFROM countries -- Join to currencies FULL JOIN currencies -- Match on code USING (code)-- Where region is North America or nullWHERE region = 'North America' OR region IS NULL-- Order by regionORDER BY region;``` ```country code region basic_unit------------------------------------------------------------Bermuda BMU North America Bermudian dollarUnited States USA North America United States dollarCanada CAN North America Canadian dollarGreenland GRL North America nullnull TMP null United States dollarnull FLK null Falkland Islands poundnull HKG null Hong Kong dollarnull AIA null East Caribbean dollarnull NIU null New Zealand dollarnull ROM null Romanian leunull SHN null Saint Helena poundnull SGS null British poundnull TWN null New Taiwan dollarnull WLF null CFP francnull MSR null East Caribbean dollarnull IOT null United States dollarnull CCK null Australian dollarnull COK null New Zealand dollar``` - Repeat the same query as before, using a `LEFT JOIN` instead of a `FULL JOIN`. Note what has changed compared to the `FULL JOIN` result.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
%%sql
SELECT country_name AS country, code, region, basic_unit
-- From countries
FROM countries
-- Join to currencies
LEFT JOIN currencies
-- Match on code
USING (code)
-- Where region is North America or null
WHERE region = 'North America' OR region IS NULL
-- Order by region
ORDER BY region;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
- Repeat the same query again but use an `INNER JOIN` instead of a `FULL JOIN`. Note what has changed compared to the `FULL JOIN` and `LEFT JOIN` results.
###Code
%%sql
SELECT country_name AS country, code, region, basic_unit
-- From countries
FROM countries
-- Join to currencies
INNER JOIN currencies
-- Match on code
USING (code)
-- Where region is North America or null
WHERE region = 'North America' OR region IS NULL
-- Order by region
ORDER BY region;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
*Have you kept an eye out on the different numbers of records these queries returned?*- *The* `FULL JOIN` *query returned **18** rows.*- *The* `OUTER JOIN` *returned **4** rows.* - *The* `INNER JOIN` *only returned **3** rows.* Full join (2)You'll now investigate a similar exercise to the last one, but this time focused on using a table with more records on the left than the right. You'll work with the `languages` and `countries` tables.Begin with a full join with `languages` on the left and `countries` on the right. Appropriate fields have been selected for you again here. - Choose records in which `countries.name` starts with the capital letter `'V'` or is `NULL`.- Arrange by `countries.name` in ascending order to more clearly see the results. ```sqlSELECT countries.name, code, languages.name AS language-- From languagesFROM languages -- Join to countries FULL JOIN countries -- Match on code USING (code)-- Where countries.name starts with V or is nullWHERE countries.name LIKE 'V%' OR countries.name IS NULL-- Order by ascending countries.nameORDER BY countries.name;``` ```name code language----------------------------------Vanuatu VUT Tribal LanguagesVanuatu VUT EnglishVanuatu VUT FrenchVanuatu VUT OtherVanuatu VUT BislamaVenezuela VEN SpanishVenezuela VEN indigenousVietnam VNM Vietnamese... ... ...null COK Rarotongannull COK Othernull HKG Cantonesenull HKG Englishnull HKG MandarinShowing 13 out of 58 rows``` - Repeat the same query as before, using a `LEFT JOIN` instead of a `FULL JOIN`. Note what has changed compared to the `FULL JOIN` result.
###Code
%%sql
SELECT countries.country_name, code, languages.name AS language
-- From languages
FROM languages
-- Join to countries
LEFT JOIN countries
-- Match on code
USING (code)
-- Where countries.country_name starts with V or is null
WHERE countries.country_name LIKE 'V%' OR countries.country_name IS NULL
-- Order by descending countries.name
ORDER BY countries.country_name DESC
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 56 rows``` - Repeat once more, but use an `INNER JOIN` instead of a `LEFT JOIN`. Note what has changed compared to the `FULL JOIN` and `LEFT JOIN` results.
###Code
%%sql
SELECT countries.country_name, code, languages.name AS language
-- From languages
FROM languages
-- Join to countries
INNER JOIN countries
-- Match using code
USING (code)
-- Where countries.country_name starts with V or is null
WHERE countries.country_name LIKE 'V%' OR countries.country_name IS NULL
-- Order by descending countries.name
ORDER BY countries.country_name DESC;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 10 rows``` *Again, make sure to compare the number of records the different types of joins return and try to verify whether the results make sense.* Full join (3)You'll now explore using two consecutive full joins on the three tables you worked with in the previous two exercises. - Complete a full join with `countries` on the left and `languages` on the right.- Next, full join this result with `currencies` on the right.- Use `LIKE` to choose the Melanesia and Micronesia regions (Hint: `'M%esia'`).- Select the fields corresponding to the country name `AS country`, region, language name `AS language`, and basic and fractional units of currency. ```sql-- Select fields (with aliases)SELECT c1.name AS country, region, l.name AS language, basic_unit, frac_unit-- From countries (alias as c1)FROM countries AS c1 -- Join with languages (alias as l) FULL JOIN languages AS l -- Match on code USING (code) -- Join with currencies (alias as c2) FULL JOIN currencies AS c2 -- Match on code USING (code)-- Where region like Melanesia and MicronesiaWHERE region LIKE 'M%esia';``` ```country region language basic_unit frac_unit---------------------------------------------------------------------------Kiribati Micronesia English Australian dollar CentKiribati Micronesia Kiribati Australian dollar CentMarshall Islands Micronesia Other United States dollar CentMarshall Islands Micronesia Marshallese United States dollar CentNauru Micronesia Other Australian dollar CentNauru Micronesia English Australian dollar CentNew Caledonia Melanesia Other CFP franc Centime... ... ... ... ...Guam Micronesia Chamorro null nullGuam Micronesia Filipino null nullGuam Micronesia English null nullShowing 10 out of 50 row``` CROSSing the rubiconIt's time to check out the `CROSS JOIN`. `CROSS JOIN`s create all possible combinations of two tables. Let's explore the diagram for a `CROSS JOIN` next. CROSS JOIN diagramIn this diagram we have two tables named `table1` and `table2`. Each table only has one field, both with the name of id. The result of the `CROSS JOIN` is all nine combinations of the id values of 1, 2, and 3 in `table1` with the id values of A, B, and C for `table2`. Next you'll explore an example from the leaders database and look over the SQL syntax for a `CROSS JOIN`.- **`table1`** & **`table2`**id |[]| id:---|:---|:---1 | [] | A2 | [] | B3 | [] | C- **`CROSS JOIN`**id1 | id2:---|:---1 | A1 | B1 | C2 | A2 | B2 | C3 | A3 | B3 | C Pairing prime ministers with presidentsSuppose that all prime ministers in North America and Oceania in the prime_ministers table are scheduled for individual meetings with all presidents in the presidents table. You can look at all of these combinations by using a `CROSS JOIN`. The syntax here remains similar to what you've seen earlier in the course. We use a `WHERE` clause to focus on only prime ministers in North America and Oceania in the `prime_ministers` table. The results of the query give us the pairings for the two prime ministers in North America and Oceania from the `prime_ministers` table with the seven presidents in the presidents table.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
%%sql
SELECT prime_minister, president
FROM prime_ministers AS p1
CROSS JOIN presidents AS p2
WHERE p1.continent IN ('North America', 'Oceania');
###Output
sqlite://
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
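For intuition, the all-combinations behaviour of a `CROSS JOIN` is simply the Cartesian product of the two row sets. A minimal Python sketch with placeholder names (2 prime ministers x 7 presidents, matching the result above):
```python
from itertools import product

# placeholder stand-ins for the filtered prime ministers and the presidents table
prime_ministers = ["PM_1", "PM_2"]                 # North America / Oceania only
presidents = ["Pres_%d" % i for i in range(1, 8)]  # the seven presidents

pairs = list(product(prime_ministers, presidents))  # every possible meeting
print(len(pairs))                                   # 2 * 7 = 14 combinations
```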
###Markdown
A table of two citiesThis exercise looks to explore languages potentially *and* most frequently spoken in the cities of Hyderabad, India and Hyderabad, Pakistan. - Create a `CROSS JOIN` with `cities AS c` on the left and `languages AS l` on the right.- Make use of `LIKE` and `Hyder%` to choose Hyderabad in both countries.- Select only the city name `AS city` and language name `AS language`.
###Code
%sql sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
%%sql
SELECT c.name AS city, l.name AS language
FROM cities AS c
CROSS JOIN languages AS l
WHERE c.name LIKE 'Hyder%'
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 1910 rows``` - Use an `INNER JOIN` instead of a `CROSS JOIN`. Think about what the difference will be in the results for this `INNER JOIN` result and the one for the `CROSS JOIN`.
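As a quick back-of-the-envelope check of the `CROSS JOIN` count above (a sketch; the 955 is derived from the 1910 rows reported, not queried separately):
```python
# each of the two Hyderabads is paired with every row of languages
hyderabads = 2
language_rows = 1910 // hyderabads
print(language_rows)               # 955 rows in languages
print(hyderabads * language_rows)  # 1910 CROSS JOIN combinations
```
The `INNER JOIN` below keeps only languages whose code matches each city's country code, which is why its row count drops so sharply.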
###Code
%%sql
SELECT c.name AS city, l.name AS language
FROM cities AS c
INNER JOIN languages AS l
ON c.country_code = l.code
WHERE c.name LIKE 'Hyder%'
LIMIT 10;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
###Markdown
```Showing 10 out of 25 rows``` Outer challengeNow that you're fully equipped to use `OUTER JOIN`s, try a challenge problem to test your knowledge!In terms of life expectancy for 2010, determine the names of the lowest five countries and their regions.- Select country name `AS country`, `region`, and life expectancy `AS life_exp`.- Make sure to use `LEFT JOIN`, `WHERE`, `ORDER BY`, and `LIMIT`.
###Code
%%sql
SELECT c.country_name AS country, region, life_expectancy AS life_exp
FROM countries AS c
LEFT JOIN populations AS p
ON c.code = p.country_code
WHERE year = 2010 AND life_exp IS NOT NULL
ORDER BY life_exp
LIMIT 30;
%%sql
SELECT c.country_name AS country, region, life_expectancy AS life_exp
FROM countries AS c
LEFT JOIN populations AS p
ON c.code = p.country_code
WHERE year = 2010 AND life_exp IS NOT NULL
ORDER BY life_exp DESC
LIMIT 30;
###Output
sqlite://
* sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/countries.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/diagrams.sqlite
sqlite:////Users/sj501/Documents/Jupyter/Jupyter_lab/17_Joining_Data_in_SQL/leaders.sqlite
Done.
|
multidimensional-chart.ipynb | ###Markdown
This sample notebook shows how you can plot three-dimensional data extracted from the Data Lake.It plots the price, quantity and time of AAPL trades on a given day.
###Code
import datetime
import pandas as pd
import matplotlib.pyplot as plt
import maystreet_data as md
import numpy as np
year, month, day = '2022', '01', '19'
def fetch_price_quantity():
"""
Query the Data Lake for the price and quantity of every AAPL trade on the chosen day.
Returns a Pandas DataFrame with timestamp (the exchange timestamp, in nanoseconds), price and quantity.
"""
query = f"""
SELECT
ExchangeTimestamp AS "timestamp",
price,
quantity
FROM
"prod_lake"."p_mst_data_lake".mt_trade
WHERE
y = '{year}'
AND m = '{month}'
AND d = '{day}'
AND product = 'AAPL'
ORDER BY 1
"""
return pd.DataFrame(md.query(md.DataSource.DATA_LAKE, query))
data = fetch_price_quantity()
data
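# (Added, illustrative) On busy trading days this DataFrame can be very large; if the 3D
# scatter below is slow to render, a random subsample keeps it responsive, e.g.:
# data = data.sample(n=min(len(data), 50_000), random_state=0).sort_values("timestamp")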
def time_formatter(x, pos=None):
as_datetime = datetime.datetime.fromtimestamp(x / 1000000000)
return as_datetime.strftime('%H:%M:%S')
plt.rcParams['figure.figsize'] = [10, 10]
fig = plt.figure()
fig.patch.set_facecolor((1, 1, 1))
ax = plt.axes(projection='3d')
ax.set_title('AAPL time/price/quantity, 2022/01/19')
ax.set_xlabel('Time')
ax.set_ylabel('Quantity')
ax.set_zlabel('Price')
ax.xaxis.set_major_formatter(time_formatter)
ax.scatter3D(data['timestamp'], data['quantity'], data['price'])
plt.show()
###Output
_____no_output_____ |
14-Assignment/01-Mnist_using_LeNet_CNN/Image_classification_using_LeNet_CNN.ipynb | ###Markdown
**`Image classification using LeNet CNN`****MNIST Dataset - Handwritten Digits (0-9)**
###Code
## import tensorflow module
import tensorflow as tf
import numpy as np
print(tf.__version__)
###Output
2.2.0
###Markdown
**Load the data**
###Code
print("[INFO downloading MNIST]")
(trainData , trainLabels) , (testData,testLabels) = tf.keras.datasets.mnist.load_data()
## Parameter for mnist data set
image_width = 28
image_height = 28
image_channels = 1 # the images are already grayscale (single channel)
num_classes = 10 # output digits range from 0 to 9, i.e. 10 classes
print(trainData.shape) # (no. of image , width , height)
print(testData.shape)
print(trainLabels.shape)
print(testLabels.shape)
# num_samples x rows x columns x depth (channel)
trainData = trainData.reshape(trainData.shape[0] ,image_height,image_width,image_channels)
testData = testData.reshape(testData.shape[0] ,image_height,image_width,image_channels)
print(trainData.shape) # (no. of image , width , height, channel)
print(testData.shape)
print(trainLabels.shape)
print(testLabels.shape)
# We normalize the image
# we scale them between [0.0,1.0]
trainData = trainData.astype("float32") / 255.0
testData = testData.astype("float32") / 255.0
###Output
_____no_output_____
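As a quick sanity check (a small added sketch), one of the normalized training images can be displayed before padding:
```python
import matplotlib.pyplot as plt

# trainData has shape (60000, 28, 28, 1) and values scaled to [0, 1] at this point
plt.imshow(trainData[0, :, :, 0], cmap="gray")
plt.title("label: %d" % trainLabels[0])
plt.show()
```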
###Markdown
**LeNet architecture**Tanh activation for all layers. Softmax activation for last (output) layer.
###Code
'''
Since the LeNet architecture expects 32x32 inputs, we pad each 28x28 MNIST image
to 32x32 (two pixels of zero padding on every side)
'''
trainData = np.pad(trainData, ((0,0) , (2,2) , (2,2) , (0,0)), 'constant')
testData = np.pad(testData , ((0,0) , (2,2) , (2,2) , (0,0)), 'constant')
print(trainData.shape) # (no. of image , width , height, channel)
print(testData.shape)
print(trainLabels.shape)
print(testLabels.shape)
## Updated parameter for mnist data set
image_width = 32
image_height = 32
image_channels = 1 # the images are already grayscale (single channel)
num_classes = 10 # output digits range from 0 to 9, i.e. 10 classes
###Output
_____no_output_____
###Markdown
**Import package**
###Code
from tensorflow.keras import backend
from tensorflow.keras import models
from tensorflow.keras import layers
# define the model as a class
class LeNet:
'''
In a sequential model, we stack layers sequentially.
So, each layer has unique input and output, and those inputs and outputs
then also come with a unique input shape and output shape.
'''
# 2 Convolutional unit (conv,activation,pooling)
# INPUT => CONV => TANH => AVG-POOL => CONV => TANH => AVG-POOL => FC => TANH => FC => TANH => FC => SOFTMAX
@staticmethod ## static builder method: called as LeNet.init(...) without instantiating the class
def init(numChannels, imgRows, imgCols , numClasses, weightsPath=None):
# if the backend uses channels-first ordering, we have to put the channel dimension first in the input shape
if backend.image_data_format() == "channels_first":
inputShape = (numChannels , imgRows , imgCols)
else:
inputShape = (imgRows , imgCols , numChannels)
# initialize the model
model = models.Sequential()
# Define the first set of CONV => ACTIVATION => POOL LAYERS
'''
Padding: 'valid' means no zero padding, so the spatial size shrinks with each convolution
'''
model.add(layers.Conv2D( filters=6,kernel_size=(5,5),strides=(1,1),
padding="valid",activation=tf.nn.tanh,input_shape=inputShape))
model.add(layers.AveragePooling2D(pool_size=(2,2),strides=(2,2)))
# Define the second set of CONV => ACTIVATION => POOL LAYERS
model.add(layers.Conv2D( filters=16,kernel_size=(5,5),strides=(1,1),
padding="valid",activation=tf.nn.tanh,input_shape=inputShape))
model.add(layers.AveragePooling2D(pool_size=(2,2),strides=(2,2)))
# Flatten the convolution volume to fully connected layers (convert them into single vector)
model.add(layers.Flatten())
# Define the first FC layer + Activation
model.add(layers.Dense(units=120, activation=tf.nn.tanh))
# Define the second FC layer + Activation
model.add(layers.Dense(units=84, activation=tf.nn.tanh))
# lastly , define the softmax classifier
model.add(layers.Dense(units=numClasses,activation=tf.nn.softmax))
# if a weights path is supplied (indicating that the model was pre-trained)
# then add weights
if weightsPath is not None:
model.load_weights(weightsPath)
# return the constructed network architecture
return model
'''
NOTE: Instead of adding each layer step by step we can also pass the layers as a list:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(...),
    tf.keras.layers.AveragePooling2D(...),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(...),
])
'''
###Output
_____no_output_____
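As a worked check of the shapes this stack produces on the padded 32x32 inputs: a 'valid' 5x5 convolution shrinks each side by 4 and each 2x2 average pooling halves it, so 32 -> 28 -> 14 -> 10 -> 5, and the flattened vector has 5 * 5 * 16 = 400 features.
```python
# tiny sketch reproducing the arithmetic; compare with model.summary() further below
size = 32
for kernel in (5, 5):
    size = (size - kernel + 1) // 2   # 'valid' conv followed by 2x2 average pooling
print(size, 5 * 5 * 16)               # 5 400
```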
###Markdown
**Compile model**
###Code
print("[INFO] Compiling model ... ")
model = LeNet.init(numChannels=image_channels,
imgRows=image_width,
imgCols=image_height,
numClasses=num_classes,
weightsPath=None
)
# Compile the model
'''
As our labels are integers we use "sparse categorical cross entropy";
if our labels were one-hot encoded vectors we would use "categorical cross entropy"
The only difference between sparse categorical cross entropy and categorical cross entropy is the format of true labels
'''
# Specify the training configuration (optimizer, loss, metrics)
model.compile(
optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.01),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy']
)
# Print model summery
model.summary()
###Output
[INFO] Compiling model ...
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_18 (Conv2D) (None, 28, 28, 6) 156
_________________________________________________________________
average_pooling2d_18 (Averag (None, 14, 14, 6) 0
_________________________________________________________________
conv2d_19 (Conv2D) (None, 10, 10, 16) 2416
_________________________________________________________________
average_pooling2d_19 (Averag (None, 5, 5, 16) 0
_________________________________________________________________
flatten_9 (Flatten) (None, 400) 0
_________________________________________________________________
dense_27 (Dense) (None, 120) 48120
_________________________________________________________________
dense_28 (Dense) (None, 84) 10164
_________________________________________________________________
dense_29 (Dense) (None, 10) 850
=================================================================
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0
_________________________________________________________________
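A small illustration of the sparse vs. one-hot distinction discussed in the comment above (a sketch; the notebook itself keeps the integer labels and `SparseCategoricalCrossentropy`):
```python
import tensorflow as tf

sparse_labels = trainLabels[:3]                                    # integer class ids
onehot_labels = tf.keras.utils.to_categorical(sparse_labels, 10)   # shape (3, 10)
print(sparse_labels, onehot_labels.shape)
# with one-hot labels the matching loss would be tf.keras.losses.CategoricalCrossentropy()
```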
###Markdown
**Train Model**
###Code
'''
Define a callback function for training termination criteria
accuracy cutoff = 0.99 (once training accuracy exceeds 0.99, training is stopped and no further weight updates happen)
'''
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self,epoch,logs=None):
if(logs.get('accuracy')>0.99):
print("\n Reached 99% accuracy to cancelling training")
self.model.stop_training = True
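# (Added note) The callback above is only defined; to make the early stop take effect it
# would have to be passed to fit(), e.g. model.fit(..., callbacks=[myCallback()]).
# The training call below runs the full number of epochs without it.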
# initialize training config
batch_size = 128
epochs = 100
# Run training
print("[INFO] Training ...")
history = model.fit(x=trainData , y=trainLabels , validation_data=(testData,testLabels),
batch_size=batch_size,epochs=epochs,
verbose=1)
print('\nhistory dict:', history.history)
###Output
[INFO] Training ...
Epoch 1/100
469/469 [==============================] - 30s 63ms/step - loss: 1.3432 - accuracy: 0.7237 - val_loss: 0.9848 - val_accuracy: 0.7874
Epoch 2/100
469/469 [==============================] - 30s 63ms/step - loss: 0.8428 - accuracy: 0.8021 - val_loss: 0.7021 - val_accuracy: 0.8308
Epoch 3/100
469/469 [==============================] - 30s 64ms/step - loss: 0.6573 - accuracy: 0.8345 - val_loss: 0.5833 - val_accuracy: 0.8555
Epoch 4/100
469/469 [==============================] - 30s 64ms/step - loss: 0.5656 - accuracy: 0.8543 - val_loss: 0.5138 - val_accuracy: 0.8708
Epoch 5/100
469/469 [==============================] - 30s 64ms/step - loss: 0.5076 - accuracy: 0.8669 - val_loss: 0.4672 - val_accuracy: 0.8802
Epoch 6/100
469/469 [==============================] - 30s 64ms/step - loss: 0.4668 - accuracy: 0.8751 - val_loss: 0.4334 - val_accuracy: 0.8851
Epoch 7/100
469/469 [==============================] - 30s 64ms/step - loss: 0.4362 - accuracy: 0.8817 - val_loss: 0.4071 - val_accuracy: 0.8907
Epoch 8/100
469/469 [==============================] - 30s 63ms/step - loss: 0.4122 - accuracy: 0.8866 - val_loss: 0.3861 - val_accuracy: 0.8954
Epoch 9/100
469/469 [==============================] - 30s 64ms/step - loss: 0.3927 - accuracy: 0.8911 - val_loss: 0.3688 - val_accuracy: 0.8986
Epoch 10/100
469/469 [==============================] - 30s 64ms/step - loss: 0.3765 - accuracy: 0.8947 - val_loss: 0.3542 - val_accuracy: 0.9009
Epoch 11/100
469/469 [==============================] - 30s 64ms/step - loss: 0.3626 - accuracy: 0.8976 - val_loss: 0.3415 - val_accuracy: 0.9043
Epoch 12/100
469/469 [==============================] - 30s 63ms/step - loss: 0.3505 - accuracy: 0.9006 - val_loss: 0.3303 - val_accuracy: 0.9067
Epoch 13/100
469/469 [==============================] - 30s 63ms/step - loss: 0.3396 - accuracy: 0.9030 - val_loss: 0.3204 - val_accuracy: 0.9086
Epoch 14/100
469/469 [==============================] - 30s 63ms/step - loss: 0.3299 - accuracy: 0.9056 - val_loss: 0.3112 - val_accuracy: 0.9114
Epoch 15/100
469/469 [==============================] - 30s 63ms/step - loss: 0.3209 - accuracy: 0.9073 - val_loss: 0.3034 - val_accuracy: 0.9132
Epoch 16/100
469/469 [==============================] - 30s 63ms/step - loss: 0.3128 - accuracy: 0.9092 - val_loss: 0.2956 - val_accuracy: 0.9150
Epoch 17/100
469/469 [==============================] - 30s 64ms/step - loss: 0.3051 - accuracy: 0.9113 - val_loss: 0.2885 - val_accuracy: 0.9175
Epoch 18/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2980 - accuracy: 0.9131 - val_loss: 0.2819 - val_accuracy: 0.9191
Epoch 19/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2913 - accuracy: 0.9150 - val_loss: 0.2757 - val_accuracy: 0.9194
Epoch 20/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2849 - accuracy: 0.9164 - val_loss: 0.2698 - val_accuracy: 0.9218
Epoch 21/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2789 - accuracy: 0.9187 - val_loss: 0.2639 - val_accuracy: 0.9225
Epoch 22/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2731 - accuracy: 0.9196 - val_loss: 0.2585 - val_accuracy: 0.9237
Epoch 23/100
469/469 [==============================] - 31s 65ms/step - loss: 0.2675 - accuracy: 0.9216 - val_loss: 0.2536 - val_accuracy: 0.9261
Epoch 24/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2623 - accuracy: 0.9227 - val_loss: 0.2487 - val_accuracy: 0.9268
Epoch 25/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2572 - accuracy: 0.9247 - val_loss: 0.2441 - val_accuracy: 0.9278
Epoch 26/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2524 - accuracy: 0.9262 - val_loss: 0.2394 - val_accuracy: 0.9296
Epoch 27/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2476 - accuracy: 0.9276 - val_loss: 0.2350 - val_accuracy: 0.9314
Epoch 28/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2431 - accuracy: 0.9286 - val_loss: 0.2307 - val_accuracy: 0.9326
Epoch 29/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2386 - accuracy: 0.9300 - val_loss: 0.2265 - val_accuracy: 0.9341
Epoch 30/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2344 - accuracy: 0.9310 - val_loss: 0.2226 - val_accuracy: 0.9350
Epoch 31/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2303 - accuracy: 0.9321 - val_loss: 0.2190 - val_accuracy: 0.9354
Epoch 32/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2263 - accuracy: 0.9332 - val_loss: 0.2152 - val_accuracy: 0.9366
Epoch 33/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2224 - accuracy: 0.9345 - val_loss: 0.2117 - val_accuracy: 0.9377
Epoch 34/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2187 - accuracy: 0.9359 - val_loss: 0.2085 - val_accuracy: 0.9382
Epoch 35/100
469/469 [==============================] - 30s 63ms/step - loss: 0.2151 - accuracy: 0.9367 - val_loss: 0.2047 - val_accuracy: 0.9390
Epoch 36/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2116 - accuracy: 0.9377 - val_loss: 0.2016 - val_accuracy: 0.9402
Epoch 37/100
469/469 [==============================] - 29s 62ms/step - loss: 0.2081 - accuracy: 0.9386 - val_loss: 0.1984 - val_accuracy: 0.9406
Epoch 38/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2048 - accuracy: 0.9395 - val_loss: 0.1949 - val_accuracy: 0.9419
Epoch 39/100
469/469 [==============================] - 30s 64ms/step - loss: 0.2016 - accuracy: 0.9404 - val_loss: 0.1925 - val_accuracy: 0.9429
Epoch 40/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1984 - accuracy: 0.9412 - val_loss: 0.1896 - val_accuracy: 0.9436
Epoch 41/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1954 - accuracy: 0.9423 - val_loss: 0.1865 - val_accuracy: 0.9444
Epoch 42/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1924 - accuracy: 0.9429 - val_loss: 0.1837 - val_accuracy: 0.9454
Epoch 43/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1895 - accuracy: 0.9438 - val_loss: 0.1811 - val_accuracy: 0.9462
Epoch 44/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1865 - accuracy: 0.9447 - val_loss: 0.1791 - val_accuracy: 0.9470
Epoch 45/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1839 - accuracy: 0.9454 - val_loss: 0.1760 - val_accuracy: 0.9487
Epoch 46/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1812 - accuracy: 0.9463 - val_loss: 0.1735 - val_accuracy: 0.9496
Epoch 47/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1786 - accuracy: 0.9468 - val_loss: 0.1711 - val_accuracy: 0.9499
Epoch 48/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1760 - accuracy: 0.9478 - val_loss: 0.1688 - val_accuracy: 0.9504
Epoch 49/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1735 - accuracy: 0.9487 - val_loss: 0.1666 - val_accuracy: 0.9515
Epoch 50/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1710 - accuracy: 0.9493 - val_loss: 0.1641 - val_accuracy: 0.9523
Epoch 51/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1686 - accuracy: 0.9502 - val_loss: 0.1620 - val_accuracy: 0.9529
Epoch 52/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1663 - accuracy: 0.9510 - val_loss: 0.1597 - val_accuracy: 0.9530
Epoch 53/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1640 - accuracy: 0.9517 - val_loss: 0.1575 - val_accuracy: 0.9539
Epoch 54/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1618 - accuracy: 0.9526 - val_loss: 0.1554 - val_accuracy: 0.9553
Epoch 55/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1595 - accuracy: 0.9531 - val_loss: 0.1534 - val_accuracy: 0.9553
Epoch 56/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1574 - accuracy: 0.9538 - val_loss: 0.1514 - val_accuracy: 0.9564
Epoch 57/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1553 - accuracy: 0.9542 - val_loss: 0.1494 - val_accuracy: 0.9561
Epoch 58/100
469/469 [==============================] - 29s 63ms/step - loss: 0.1532 - accuracy: 0.9550 - val_loss: 0.1475 - val_accuracy: 0.9564
Epoch 59/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1512 - accuracy: 0.9558 - val_loss: 0.1457 - val_accuracy: 0.9580
Epoch 60/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1492 - accuracy: 0.9560 - val_loss: 0.1437 - val_accuracy: 0.9587
Epoch 61/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1473 - accuracy: 0.9567 - val_loss: 0.1420 - val_accuracy: 0.9584
Epoch 62/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1454 - accuracy: 0.9571 - val_loss: 0.1402 - val_accuracy: 0.9596
Epoch 63/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1435 - accuracy: 0.9580 - val_loss: 0.1386 - val_accuracy: 0.9591
Epoch 64/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1417 - accuracy: 0.9586 - val_loss: 0.1369 - val_accuracy: 0.9602
Epoch 65/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1399 - accuracy: 0.9586 - val_loss: 0.1352 - val_accuracy: 0.9605
Epoch 66/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1382 - accuracy: 0.9596 - val_loss: 0.1337 - val_accuracy: 0.9607
Epoch 67/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1364 - accuracy: 0.9599 - val_loss: 0.1320 - val_accuracy: 0.9612
Epoch 68/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1348 - accuracy: 0.9604 - val_loss: 0.1305 - val_accuracy: 0.9620
Epoch 69/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1331 - accuracy: 0.9609 - val_loss: 0.1291 - val_accuracy: 0.9621
Epoch 70/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1315 - accuracy: 0.9612 - val_loss: 0.1276 - val_accuracy: 0.9627
Epoch 71/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1299 - accuracy: 0.9616 - val_loss: 0.1264 - val_accuracy: 0.9627
Epoch 72/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1284 - accuracy: 0.9624 - val_loss: 0.1248 - val_accuracy: 0.9633
Epoch 73/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1269 - accuracy: 0.9627 - val_loss: 0.1235 - val_accuracy: 0.9635
Epoch 74/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1253 - accuracy: 0.9631 - val_loss: 0.1219 - val_accuracy: 0.9644
Epoch 75/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1239 - accuracy: 0.9638 - val_loss: 0.1208 - val_accuracy: 0.9649
Epoch 76/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1224 - accuracy: 0.9640 - val_loss: 0.1196 - val_accuracy: 0.9655
Epoch 77/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1210 - accuracy: 0.9645 - val_loss: 0.1185 - val_accuracy: 0.9657
Epoch 78/100
469/469 [==============================] - 30s 63ms/step - loss: 0.1196 - accuracy: 0.9651 - val_loss: 0.1169 - val_accuracy: 0.9669
Epoch 79/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1183 - accuracy: 0.9654 - val_loss: 0.1159 - val_accuracy: 0.9669
Epoch 80/100
469/469 [==============================] - 29s 62ms/step - loss: 0.1169 - accuracy: 0.9659 - val_loss: 0.1144 - val_accuracy: 0.9674
Epoch 81/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1156 - accuracy: 0.9662 - val_loss: 0.1132 - val_accuracy: 0.9675
Epoch 82/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1143 - accuracy: 0.9665 - val_loss: 0.1118 - val_accuracy: 0.9685
Epoch 83/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1131 - accuracy: 0.9668 - val_loss: 0.1109 - val_accuracy: 0.9682
Epoch 84/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1118 - accuracy: 0.9675 - val_loss: 0.1098 - val_accuracy: 0.9686
Epoch 85/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1106 - accuracy: 0.9678 - val_loss: 0.1088 - val_accuracy: 0.9694
Epoch 86/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1094 - accuracy: 0.9682 - val_loss: 0.1077 - val_accuracy: 0.9699
Epoch 87/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1082 - accuracy: 0.9686 - val_loss: 0.1067 - val_accuracy: 0.9696
Epoch 88/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1070 - accuracy: 0.9691 - val_loss: 0.1053 - val_accuracy: 0.9705
Epoch 89/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1059 - accuracy: 0.9693 - val_loss: 0.1047 - val_accuracy: 0.9703
Epoch 90/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1047 - accuracy: 0.9695 - val_loss: 0.1036 - val_accuracy: 0.9703
Epoch 91/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1037 - accuracy: 0.9697 - val_loss: 0.1026 - val_accuracy: 0.9711
Epoch 92/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1026 - accuracy: 0.9699 - val_loss: 0.1016 - val_accuracy: 0.9712
Epoch 93/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1015 - accuracy: 0.9703 - val_loss: 0.1008 - val_accuracy: 0.9712
Epoch 94/100
469/469 [==============================] - 30s 64ms/step - loss: 0.1006 - accuracy: 0.9707 - val_loss: 0.0997 - val_accuracy: 0.9719
Epoch 95/100
469/469 [==============================] - 30s 64ms/step - loss: 0.0995 - accuracy: 0.9710 - val_loss: 0.0991 - val_accuracy: 0.9720
Epoch 96/100
469/469 [==============================] - 30s 64ms/step - loss: 0.0985 - accuracy: 0.9714 - val_loss: 0.0980 - val_accuracy: 0.9718
Epoch 97/100
469/469 [==============================] - 30s 64ms/step - loss: 0.0976 - accuracy: 0.9717 - val_loss: 0.0968 - val_accuracy: 0.9717
Epoch 98/100
469/469 [==============================] - 30s 64ms/step - loss: 0.0966 - accuracy: 0.9720 - val_loss: 0.0960 - val_accuracy: 0.9722
Epoch 99/100
469/469 [==============================] - 30s 64ms/step - loss: 0.0956 - accuracy: 0.9724 - val_loss: 0.0952 - val_accuracy: 0.9727
Epoch 100/100
469/469 [==============================] - 30s 64ms/step - loss: 0.0947 - accuracy: 0.9723 - val_loss: 0.0948 - val_accuracy: 0.9727
history dict: {'loss': [1.3431609869003296, 0.8428141474723816, 0.6572690606117249, 0.5655957460403442, 0.5075865387916565, 0.46676570177078247, 0.4361523389816284, 0.41217565536499023, 0.3927132189273834, 0.3765268921852112, 0.3625921308994293, 0.35045212507247925, 0.33962979912757874, 0.3298572599887848, 0.32094621658325195, 0.3127550482749939, 0.3051263988018036, 0.29802677035331726, 0.2912783920764923, 0.2849498391151428, 0.2788880169391632, 0.2730884850025177, 0.2675323784351349, 0.26230400800704956, 0.25722235441207886, 0.252378910779953, 0.2476033717393875, 0.2430839240550995, 0.23864908516407013, 0.2343992292881012, 0.23028649389743805, 0.22631999850273132, 0.22244878113269806, 0.21870596706867218, 0.2150687873363495, 0.2115875780582428, 0.20812083780765533, 0.20479533076286316, 0.20155219733715057, 0.19839149713516235, 0.1953805834054947, 0.19240935146808624, 0.1894644796848297, 0.18654827773571014, 0.18389412760734558, 0.18118631839752197, 0.17857582867145538, 0.17600952088832855, 0.17345890402793884, 0.17099148035049438, 0.16864044964313507, 0.16626934707164764, 0.16399584710597992, 0.16175241768360138, 0.1595277637243271, 0.15739311277866364, 0.15530318021774292, 0.15322962403297424, 0.15119560062885284, 0.14924518764019012, 0.14730022847652435, 0.14538563787937164, 0.14346522092819214, 0.14172561466693878, 0.1399255096912384, 0.13819916546344757, 0.13644221425056458, 0.1347852349281311, 0.13314761221408844, 0.13151931762695312, 0.12992143630981445, 0.12839488685131073, 0.12686461210250854, 0.1253497153520584, 0.12385062873363495, 0.12243879586458206, 0.12100295722484589, 0.11963048577308655, 0.11826080828905106, 0.1169232502579689, 0.11561912298202515, 0.11434328556060791, 0.11305766552686691, 0.11181148886680603, 0.11060771346092224, 0.10940266400575638, 0.10822492092847824, 0.10703414678573608, 0.1059085950255394, 0.10474557429552078, 0.1037183329463005, 0.10263289511203766, 0.1015438586473465, 0.10056009888648987, 0.09954310208559036, 0.09849246591329575, 0.09759869426488876, 0.09659652411937714, 0.09561500698328018, 0.09466367214918137], 'accuracy': [0.7236999869346619, 0.8021166920661926, 0.8344833254814148, 0.8543333411216736, 0.8668500185012817, 0.8751000165939331, 0.8817333579063416, 0.8865666389465332, 0.8910833597183228, 0.8947166800498962, 0.8975666761398315, 0.9005666375160217, 0.9029833078384399, 0.9055833220481873, 0.9072666764259338, 0.9091833233833313, 0.9113166928291321, 0.913100004196167, 0.9150333404541016, 0.916366696357727, 0.918666660785675, 0.9196166396141052, 0.9215666651725769, 0.9226999878883362, 0.9246666431427002, 0.9262333512306213, 0.9275833368301392, 0.9286166429519653, 0.9300333261489868, 0.930983304977417, 0.9321333169937134, 0.9332333207130432, 0.9345333576202393, 0.9358999729156494, 0.9366999864578247, 0.937666654586792, 0.9385666847229004, 0.9394833445549011, 0.9404333233833313, 0.9411666393280029, 0.942300021648407, 0.942883312702179, 0.9437666535377502, 0.9447166919708252, 0.9454333186149597, 0.9462666511535645, 0.9467833042144775, 0.9477999806404114, 0.9486833214759827, 0.9493499994277954, 0.950166642665863, 0.951033353805542, 0.9517499804496765, 0.9526000022888184, 0.9531333446502686, 0.9538499712944031, 0.9541500210762024, 0.9549999833106995, 0.9557666778564453, 0.9560166597366333, 0.9567000269889832, 0.9571499824523926, 0.957966685295105, 0.9585833549499512, 0.9585833549499512, 0.9595833420753479, 0.9599166512489319, 0.9604499936103821, 0.9608833193778992, 0.9612166881561279, 0.9616166949272156, 0.9623500108718872, 0.9626500010490417, 
0.9631166458129883, 0.9637500047683716, 0.964033305644989, 0.9645166397094727, 0.9650833606719971, 0.9654333591461182, 0.9658666849136353, 0.9661833047866821, 0.966533362865448, 0.9667666554450989, 0.9675333499908447, 0.9677666425704956, 0.9682166576385498, 0.9685500264167786, 0.9690666794776917, 0.9692833423614502, 0.9695166945457458, 0.9696666598320007, 0.9699000120162964, 0.9703166484832764, 0.9707333445549011, 0.9710166454315186, 0.9714166522026062, 0.9716833233833313, 0.9719833135604858, 0.9723666906356812, 0.9722999930381775], 'val_loss': [0.9848088622093201, 0.7020638585090637, 0.5833069682121277, 0.513763964176178, 0.46720609068870544, 0.4333729147911072, 0.40710821747779846, 0.3861260414123535, 0.3687882721424103, 0.35418567061424255, 0.34151995182037354, 0.3303300440311432, 0.32039931416511536, 0.3112075924873352, 0.30339550971984863, 0.2956223785877228, 0.28849077224731445, 0.2818671464920044, 0.27567535638809204, 0.2698317766189575, 0.26393231749534607, 0.25853270292282104, 0.2535744905471802, 0.2487049251794815, 0.24410712718963623, 0.23942597210407257, 0.23499774932861328, 0.23073318600654602, 0.22647090256214142, 0.22261066734790802, 0.21903124451637268, 0.2151721715927124, 0.21166643500328064, 0.20852810144424438, 0.2047457993030548, 0.20156925916671753, 0.1983940601348877, 0.19494758546352386, 0.19254286587238312, 0.18962937593460083, 0.18650975823402405, 0.18369761109352112, 0.181098073720932, 0.1790771186351776, 0.17598357796669006, 0.17350110411643982, 0.17113567888736725, 0.16882529854774475, 0.16662165522575378, 0.16405393183231354, 0.16198749840259552, 0.15970391035079956, 0.15747496485710144, 0.15540222823619843, 0.15344293415546417, 0.15143360197544098, 0.14936241507530212, 0.14746876060962677, 0.14565761387348175, 0.14368031919002533, 0.14199528098106384, 0.14023804664611816, 0.13863761723041534, 0.13685019314289093, 0.13517050445079803, 0.13368192315101624, 0.13200300931930542, 0.13054963946342468, 0.12908431887626648, 0.12761244177818298, 0.1263933926820755, 0.12475363910198212, 0.12347876280546188, 0.12189362943172455, 0.1207994893193245, 0.11961602419614792, 0.11850181967020035, 0.11694902181625366, 0.1158779188990593, 0.11436095088720322, 0.11318954080343246, 0.11183725297451019, 0.11086208373308182, 0.10977950692176819, 0.1087634488940239, 0.10769123584032059, 0.10668773204088211, 0.10526978969573975, 0.1047046035528183, 0.1036263033747673, 0.10261313617229462, 0.10160790383815765, 0.10079753398895264, 0.09974884986877441, 0.09906212985515594, 0.09802382439374924, 0.09678712487220764, 0.09601907432079315, 0.09518987685441971, 0.09481088072061539], 'val_accuracy': [0.7874000072479248, 0.8307999968528748, 0.8554999828338623, 0.8708000183105469, 0.8802000284194946, 0.8851000070571899, 0.8906999826431274, 0.8953999876976013, 0.8985999822616577, 0.9009000062942505, 0.9042999744415283, 0.9067000150680542, 0.9085999727249146, 0.9114000201225281, 0.9132000207901001, 0.9150000214576721, 0.9175000190734863, 0.9190999865531921, 0.9193999767303467, 0.9218000173568726, 0.9225000143051147, 0.9236999750137329, 0.9261000156402588, 0.926800012588501, 0.9277999997138977, 0.9296000003814697, 0.9314000010490417, 0.9326000213623047, 0.9340999722480774, 0.9350000023841858, 0.9354000091552734, 0.9366000294685364, 0.9376999735832214, 0.9381999969482422, 0.9390000104904175, 0.9401999711990356, 0.9405999779701233, 0.9419000148773193, 0.9429000020027161, 0.9435999989509583, 0.9444000124931335, 0.9453999996185303, 0.9462000131607056, 0.9470000267028809, 0.9487000107765198, 
0.9495999813079834, 0.9498999714851379, 0.9503999948501587, 0.9514999985694885, 0.9523000121116638, 0.9528999924659729, 0.953000009059906, 0.9538999795913696, 0.955299973487854, 0.955299973487854, 0.9563999772071838, 0.9560999870300293, 0.9563999772071838, 0.9580000042915344, 0.9587000012397766, 0.9584000110626221, 0.9595999717712402, 0.9591000080108643, 0.9602000117301941, 0.9605000019073486, 0.9606999754905701, 0.9611999988555908, 0.9620000123977661, 0.9621000289916992, 0.9627000093460083, 0.9627000093460083, 0.9632999897003174, 0.9635000228881836, 0.9643999934196472, 0.964900016784668, 0.965499997138977, 0.9656999707221985, 0.9668999910354614, 0.9668999910354614, 0.9674000144004822, 0.9674999713897705, 0.968500018119812, 0.9682000279426575, 0.9685999751091003, 0.9693999886512756, 0.9699000120162964, 0.9696000218391418, 0.9704999923706055, 0.970300018787384, 0.970300018787384, 0.9710999727249146, 0.9711999893188477, 0.9711999893188477, 0.9718999862670898, 0.972000002861023, 0.9718000292778015, 0.9717000126838684, 0.9721999764442444, 0.9726999998092651, 0.9726999998092651]}
###Markdown
**Visualization**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
# retrieve a list of list results on training and test data sets for each training epoch
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc)) ## get number of epochs
###Output
_____no_output_____
###Markdown
**Plot training and validation accuracy per epoch**
###Code
plt.figure(figsize=(7,5))
plt.plot(epochs, acc,label='train_accuracy')
plt.plot(epochs, val_acc,label='val_accuracy')
plt.xlabel('epochs')
plt.ylabel('Accuracy')
plt.legend(loc="lower right")
plt.title('Plot training and validation accuracy per epoch')
plt.show()
###Output
_____no_output_____
###Markdown
**Plot training and validation loss per epoch**
###Code
plt.figure(figsize=(7,5))
plt.plot(epochs, loss,label='train loss')
plt.plot(epochs, val_loss,label='validation loss')
plt.xlabel('epochs')
plt.ylabel('Loss')
plt.legend(loc="upper right")
plt.title('Plot training and validation loss per epoch')
plt.show()
###Output
_____no_output_____
###Markdown
**Show accuracy on the testing data**
###Code
print("[INFO] Evaluating ... ")
(loss,accuracy) = model.evaluate(testData,testLabels,batch_size=batch_size,verbose=1)
print("[INFO] accuracy : {:.2f}%".format(accuracy*100))
## Save the weight
'''
Instead of saving only the weights we can also save the whole model, i.e. model.save(filename),
and whenever you want you can restore it with tf.keras.models.load_model(filename)
'''
model.save_weights("weights/LeNetMNIST.temp.hd5",overwrite=True)
###Output
_____no_output_____
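A minimal sketch of the whole-model alternative mentioned in the comment above (the file name is illustrative):
```python
model.save("weights/LeNetMNIST_full.h5")                          # architecture + weights + optimizer state
restored = tf.keras.models.load_model("weights/LeNetMNIST_full.h5")
print(restored.count_params())                                    # 61,706 parameters, matching model.summary()
```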
###Markdown
**Evaluate pre-trained model**
###Code
print("[INFO] Compiling model ... ")
'''
Use the saved weights here so that we don't have
to retrain the model again. A simpler alternative is to work with the whole model:
1. model.save(filename)
2. tf.keras.models.load_model(filename)
instead of saving only the weights.
'''
model = LeNet.init(numChannels=image_channels,
imgRows=image_width,
imgCols=image_height,
numClasses=num_classes,
weightsPath="weights/LeNetMNIST.temp.hd5"
)
model.compile(
optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.01),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy']
)
print("[INFO] Evaluating ... ")
(loss,accuracy) = model.evaluate(testData,testLabels,batch_size=batch_size,verbose=1)
print("[INFO] accuracy : {:.2f}%".format(accuracy*100))
###Output
[INFO] Evaluating ...
79/79 [==============================] - 2s 26ms/step - loss: 0.0948 - accuracy: 0.9727
[INFO] accuracy : 97.27%
###Markdown
**Model predictions**
###Code
# import package -> OpenCV
import cv2 # cv2: OpenCV computer vision library
# set up matplotlib fig and size it to fit 3 rows and 4 col
nrows = 3
ncols = 4
'''
matplotlib.pyplot.gcf() is primarily used to get the current figure.
If no current figure is available then one is created with the help of the figure() function
'''
fig = plt.gcf()
fig.set_size_inches(ncols*6 , nrows*6)
# Randomly select a few testing image
plt.figure(figsize=(10,10))
num_prediction = 12
'''
Generate a uniform random sample from np.arange(5) of size 3:
>>> np.random.choice(5, 3)
array([0, 3, 4])
>>> #This is equivalent to np.random.randint(0,5,3)
'''
# index of selected random image from test dataset
test_indices = np.random.choice(np.arange(0, len(testLabels)),size=(num_prediction,))
# Get the testing image using list comprehension
test_images = np.stack(([ testData[i] for i in test_indices ]))
# Get the testing image labels using list comprehension
test_labels = np.stack(([ testLabels[i] for i in test_indices ]))
# Compute_prediction
predictions = model.predict(test_images)
for i in range(num_prediction):
'''
The output is a probability distribution because we use a softmax function
at the output layer, so we choose the class with the highest probability
'''
prediction = np.argmax(predictions[i])
# rescale the test image
# as it was normalized to the range [0, 1]
image = (test_images[i]*255).astype("uint8")
# resize the image from 32x32 (after padding) to 96x96 so that we can see it clearly
image = cv2.resize(image, (96,96), interpolation=cv2.INTER_CUBIC)
# replicate the grayscale channel 3 times to get an RGB image
image = cv2.merge([image]*3)
# if prediction == ground truth label then mark with green else with red
if prediction == test_labels[i]:
rgb_color = (0,255,0) ## true prediction
else:
rgb_color = (255,0,0) ## False prediction
# put text on the image
cv2.putText(image, str(prediction),(0,18), cv2.FONT_HERSHEY_SIMPLEX,0.75,rgb_color,1)
# set up subplot ; subplot indices starts from 1
sp = plt.subplot(nrows,ncols,i+1,title="True label:%s"% test_labels[i])
sp.axis('Off') # Don't show axis
plt.imshow(image)
plt.show()
###Output
_____no_output_____ |
docs/Multitier doc.ipynb | ###Markdown
Multitiers
###Code
from pathlib import Path
import os
from pprint import pprint
import multitiers
from IPython.display import display
import graphviz
###Output
_____no_output_____
###Markdown
Multitiers are a novel way of representing linguistic data for purposes of historical investigation and language comparison, mainly in terms of regular correspondences. They can be conceived as an extension to alignments, incorporating information other than segments or sound classes, in ways that are suitable for directly or easily applying most methods of machine learning currently in vogue, particularly those for white-box results such as decision trees.As mentioned, they initially stem from aligned data. Consider the words "house" /haʊs/ in English, "huis" /ɦœy̯s/ in Dutch, and "hus" /huːs/ in Icelandic, all cognate stemming from a Proto-Germanic "\*hūsą". Their alignment is straightforward, and can be done either manually or with recommended tools such as LingPy: | Language | 1 | 2 | 3 | 4 | |-----------|---|----|---|---| | English | h | a | ʊ | s | | Dutch | ɦ | œ | y | s | | Icelandic | h | uː | - | s | The information under the language names can be considered an essential and the most important tier, and to be clearer we should specify what information they carry (the segments). The indexes, on the other hand, constitute another kind of information, a positional tier, in this case counting left-to-right. We can extend this alignment to a more evident multitier system by clearly marking what are segments and adding positional tiers both left-to-right and right-to-left. | Tier Name | | | | | |-----------|---|----|---|---| | Index | 1 | 2 | 3 | 4 | | RIndex | 4 | 3 | 2 | 1 | | Segments_English | h | a | ʊ | s | | Segments_Dutch | ɦ | œ | y | s | | Segments_Icelandic | h | uː | - | s | Each tier is in fact a variable that a given observation (that is, an alignment site in a cognate set) can assume. It is easier to note this if we transpose the table following common conventions of relational databases, also allowing us to give a unique ID to each position (here, "P" and a number, to distinguish from the index) | ID | Index | RIndex | Segment_ENG | Segment_DUT | Segment_ICE | |----|-------|--------|-------------|-------------|-------------| | P0 | 1 | 4 | h | ɦ | h | | P1 | 2 | 3 | a | œ | uː | | P2 | 3 | 2 | ʊ | y | - | | P3 | 4 | 1 | s | s | s | This allows us to easily expand with more tiers. We can, for example, incorporate information on the sound class of each site, for each language: | ID | Index | RIndex | Segment_ENG | SC_ENG |Segment_DUT | SC_DUT | Segment_ICE | SC_ICE | |----|-------|--------|-------------|--------|-------------|--------|-------------|--------| | P0 | 1 | 4 | h | H | ɦ | H | h | H | | P1 | 2 | 3 | a | A | œ | U | uː | U | | P2 | 3 | 2 | ʊ | U | y | Y | - | - | | P3 | 4 | 1 | s | S | s | S | s | S | This can be extended with essentially any information at hand. For example, we can add for each language an information on the sound class one position before (to the left, L1) for each alignment site. In the table below we do that removing Icelandic, for typesetting reasons. | ID | Index | RIndex | Segment_ENG | SC_ENG | SC_ENG_L1 |Segment_DUT | SC_DUT | SC_DUT_L1 | |----|-------|--------|-------------|--------|-----------|-------------|--------|-----------| | P0 | 1 | 4 | h | H | ∅ | ɦ | H | ∅ | | P1 | 2 | 3 | a | A | H | œ | U | H | | P2 | 3 | 2 | ʊ | U | A | y | Y | U | | P3 | 4 | 1 | s | S | U | s | S | Y | The information offered in the tiers can include pretty much anything, despite focusing on sounds and sound classes. 
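For concreteness, here is a minimal pandas sketch (not part of the library) of the last table above, with one row per alignment site and one column per tier:
```python
import pandas as pd

tiers = pd.DataFrame({
    "Index":       [1, 2, 3, 4],
    "RIndex":      [4, 3, 2, 1],
    "Segment_ENG": ["h", "a", "ʊ", "s"],
    "SC_ENG":      ["H", "A", "U", "S"],
    "SC_ENG_L1":   [None, "H", "A", "U"],   # None stands for the empty (∅) boundary value
    "Segment_DUT": ["ɦ", "œ", "y", "s"],
    "SC_DUT":      ["H", "U", "Y", "S"],
    "SC_DUT_L1":   [None, "H", "U", "Y"],
}, index=["P0", "P1", "P2", "P3"])
print(tiers)
```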
While this can technically be done by hand, and in fact it is what many people do in their heads when doing historical linguistics, the idea is to use wordlists and employ the tools for assisting the management. Loading dataDifferent kinds of input can be used, including CLDF datasets, but for development purposes we are starting with simpler wordlists as those used by LingPy. Wordlists are plain textual tabular files which must have (1) a unique ID per entry, (2) a COGID id that allows to group different lexemes, (3) a DOCULECT id (or equivalent) that allows to distinguish the different expressions (=lexemes) of each cogid, (4) an ALIGNMENT field with the expression of the lexeme. For the time being, only one entry is allowed for each DOCULECT/COGID pair, but of course it is not necessary for all to be present. The Germanic dataset in the resources, for example, includes this kind of information: | ID | DOCULECT | PARAMETER | VALUE | IPA | TOKENS | ALIGNMENT | COGID | |--------|----------------|-----------|----------|---------|-------------|------------------|-------| | 1 | Proto-Germanic | \*wīban | \*wīban | wiːban | w iː b a n | w iː b ( a n ) | 538 | | 2 | German | \*wīban | Weib | vaip | v ai p | v ai b ( - - ) | 538 | | 3 | English | \*wīban | wife | ʋaɪf | ʋ aɪ f | ʋ aɪ f ( - - ) | 538 | | 4 | Dutch | \*wīban | wijf | ʋɛɪf | ʋ ɛɪ f | ʋ ɛɪ f ( - - ) | 538 | | 5 | Proto-Germanic | \*kurnan | \*kurnan | kurnan | k u r n a n | k u r n ( a n ) | 533 | | 6 | German | \*kurnan | Korn | kɔrn | k ɔ r n | k ɔ r n ( - - ) | 533 | | 7 | English | \*kurnan | corn | kɔːn | k ɔː n | k ɔː - n ( - - ) | 533 | | 8 | Dutch | \*kurnan | koren | koːrə | k oː r ə | k oː r - ( ə - ) | 533 | | 9 | Proto-Germanic | \*xaimaz | \*xaimaz | xaimaz | x ai m a z | x ai m ( a z ) | 532 |We can read the data with the auxiliary function and inspect them:
###Code
datafile = Path(os.getcwd()).parent / "resources" / "germanic.tsv"
data_germanic = multitiers.read_wordlist_data(datafile, comma=False)
pprint(data_germanic[:2])
###Output
[OrderedDict([('ID', '1'),
('DOCULECT', 'Proto-Germanic'),
('PARAMETER', '*wīban'),
('VALUE', '*wīban'),
('IPA', 'wiːban'),
('TOKENS', 'w iː b a n'),
('ALIGNMENT', 'w iː b ( a n )'),
('COGID', '538'),
('NOTE', '')]),
OrderedDict([('ID', '2'),
('DOCULECT', 'German'),
('PARAMETER', '*wīban'),
('VALUE', 'Weib'),
('IPA', 'vaip'),
('TOKENS', 'v ai p'),
('ALIGNMENT', 'v ai b ( - - )'),
('COGID', '538'),
('NOTE', '')])]
###Markdown
A `MultiTiers` object can be created from such lists of dictionaries, already specifying the sound class model(s) to be used and the left and right context lenghts, if any. If more tiers are needed along the way, they will be added automatically.
###Code
mt_germanic = multitiers.MultiTiers(data_germanic, models=['cv'], left=1, right=1)
print(mt_germanic)
###Output
<multitiers.multitiers.MultiTiers object at 0x7f0bd5b50250>
###Markdown
A correspondence study can be run by listing all the known and unknown tiers, which can be limited in terms of the values they must include and/or exclude. Suppose we want a count of the sound classes (consonant or vowel) of the second sound in Proto-Germanic lexemes when the initial is /s/. We can prepare the following study:
###Code
known1 = {
'index' : {'include': [1]}, # First position in the lexeme...
'segment_Proto-Germanic' : {'include': ['s']}, # ...when PG has /s/
}
unknown1 = {'cv_Proto-Germanic_R1': {}}
result = mt_germanic.correspondence_study(known1, unknown1)
result
###Output
_____no_output_____
###Markdown
This shows that, in this dataset, there are 31 cases in which the initial Proto-Germanic /s/ is followed by a vowel and 78 cases in which it is followed by a consonant. Results are easier to interpret with the `print_study` auxiliary function:
###Code
multitiers.print_study(result, known1, unknown1)
###Output
index segment_Proto-Germanic cv_Proto-Germanic_R1 Count % Solved
------- ------------------------ ---------------------- ------- ---- --------
1 s C 78 0.72
1 s V 31 0.28
###Markdown
No entry is marked as solved, as no combination of known tiers results in a fully predictable combination of unknown tiers.From the literature we know that, in German, initial Proto-Germanic /s/ developed as /z/ when followed by a vowel and as /ʃ/ otherwise, when regular. Let's confirm this:
###Code
known2 = {
'index' : {'include': [1]}, # First position in the lexeme...
'segment_Proto-Germanic' : {'include': ['s']}, # ...when PG has /s/...
'segment_German' : {}
}
unknown2 = {'cv_Proto-Germanic_R1': {}}
multitiers.print_study( mt_germanic.correspondence_study(known2, unknown2), known2, unknown2)
###Output
index segment_Proto-Germanic segment_German cv_Proto-Germanic_R1 Count % Solved
------- ------------------------ ---------------- ---------------------- ------- --- --------
1 s z V 30 1 *
1 s ʃ C 78 1 *
###Markdown
Studies can also be provided as a small programming language, parsed with the `parse_study` function. Here, we illustrate this by investigating another dataset, with Latin and Spanish, to check how palatalization might have worked. The study is "solved", meaning that we obtain a single set of free tiers for the observed bound tiers: all Latin segments /t/ reflected as Spanish /tʃ/ have a consonant one position to the left and a vowel one position to the right.
###Code
# TODO: initialize multitier object with filename
datafile = Path(os.getcwd()).parent / "resources" / "latin2spanish.tsv"
data_spanish = multitiers.read_wordlist_data(datafile, comma=False)
mt_spanish = multitiers.MultiTiers(data_spanish, models=['cv'], left=1, right=1)
study = """
KNOWN segment_Latin INCLUDE t
KNOWN segment_Spanish INCLUDE tʃ
UNKNOWN cv_Latin_L1
UNKNOWN cv_Latin_R1
"""
known3, unknown3 = multitiers.utils.parse_study(study)
multitiers.print_study(mt_spanish.correspondence_study(known3, unknown3), known3, unknown3)
###Output
segment_Latin segment_Spanish cv_Latin_L1 cv_Latin_R1 Count % Solved
--------------- ----------------- ------------- ------------- ------- --- --------
t tʃ C V 41 1 *
###Markdown
Decision treesOne out-of-the-box method for classification offered by the library are decision trees. Let's go back to our Germanic example and check how it can be used:
###Code
clf1 = multitiers.Classifier(data_germanic, models=['cv'], left=1, right=1)
clf1.train(known2, unknown2)
display(graphviz.Source(clf1.to_dot()))
###Output
_____no_output_____
###Markdown
We can build a more complex example where we use SCA sound classes and try to predict a tuple consisting of the German and English reflexes together. Note that, for simplicity and demonstration, we exclude German /r/ reflexes that are found in the data and we limit to a level three decision tree.
###Code
clf2 = multitiers.Classifier(data_germanic, models=['sca'], left=1, right=1)
study = """
X_tier index INCLUDE 1
X_tier segment_Proto-Germanic INCLUDE s
X_tier sca_Proto-Germanic_R1
y_tier segment_German EXCLUDE r
y_tier segment_English
"""
X_tiers2, y_tiers2 = multitiers.utils.parse_study(study)
clf2.train(X_tiers2, y_tiers2, max_depth=3)
display(graphviz.Source(clf2.to_dot()))
###Output
_____no_output_____
###Markdown
We obtain that the most common pair (topmost cell) involves a /ʃ/ reflex in German and a /s/ reflex in English. The first decision found by the algorithm is related to the following sound (R1, one position to the right) being of the SCA `K` class (velar sounds in general), for which in all cases both German and English show a /ʃ/ reflex. If the sound to the right is not a velar consonant, the most informative decision is whether it is of class `T` (dentals and alveolars in general). If true, it means that in all cases we observe a /ʃ/, /s/ pair. The third most informative decision is whether the following sound is of class `P` (bilabial consonants), in which case the pair is likewise /ʃ/, /s/. Note that the observed reflexes for P and T are the same, but as expected from decision trees the algorithm is not able to group them -- either we use different methods, post-process the output, or create a class that involves both P and T. No further decision is taken by the tree, as we limited it to three levels. The tree is not able to fully solve the problem, as we end up (in the bottom left corner) with 34 samples. In all cases the English reflex is /s/, but in 22 cases German shows /z/ and in the remaining 12 it shows /ʃ/. Different tiers might have been able to solve the problem with these constraints, but we demonstrate that, using only the SCA sound class of the second sound for s- initials, we cannot fully solve the English/German pair. An interesting possibility of these trees is to study correspondences of non-reconstructed languages. In the example below we try to predict Dutch vowels by using only the corresponding alignment sites in German and English. Instead of limiting by tree depth, we limit by minimum impurity decrease.
###Code
clf3 = multitiers.Classifier(data_germanic, models=['cv'])
study3 = """
X_tier segment_German
X_tier segment_English
X_tier cv_Dutch INCLUDE V
y_tier segment_Dutch
"""
X_tiers3, y_tiers3 = multitiers.utils.parse_study(study3)
clf3.train(X_tiers3, y_tiers3, min_impurity_decrease=0.03333)
display(graphviz.Source(clf3.to_dot()))
###Output
_____no_output_____
###Markdown
We first note that only German tiers were picked, indicating that they have more predictive power than the English ones for this small study. The correspondences are rather clear: German /ə/ always predicts a corresponding Dutch /ə/, German /a/ always predicts a Dutch /ɑ/, German /aː/ always predicts a Dutch /aː/, and German /eː/ always predicts a Dutch /ɪ/. The problem is however far from solved, with 232 samples left and a high impurity (0.899). The fact that no further decision could be taken, due to the limit we set, indicates that the tiers we are using are in general insufficient for the prediction task: that is, using *exclusively* the German and English vowels it is impossible to predict most of the Dutch ones. We can make a more advanced experiment by trying to predict Dutch consonantal segments by using the corresponding German and English ones and the sound classes to the left or right. We limit our tree to a 15-level depth.
###Code
clf4 = multitiers.Classifier(data_germanic, models=['cv', 'sca'], left=1, right=1)
study4 = """
X_tier segment_German
X_tier segment_English
X_tier sca_German
X_tier sca_German_L1
X_tier sca_German_R1
X_tier sca_English
X_tier sca_English_L1
X_tier sca_English_R1
X_tier cv_Dutch INCLUDE C
y_tier segment_Dutch
"""
X_tiers4, y_tiers4 = multitiers.utils.parse_study(study4)
clf4.train(X_tiers4, y_tiers4, max_depth=15)
###Output
_____no_output_____
###Markdown
Visualizing this tree will not be very helpful, but we check how well it is performing in terms of predictions. Note that we are running on the same dataset we trained it, which makes sense in this case. There is a `show_pred()` method, but here we get the full prediction, including the probability for each one.
###Code
clf4.show_pred_prob(max_lines=10)
###Output
#0: v/(v|0.973,f|0.027)
#1: r/(r|1.000)
#2: s/(s|1.000)
#3: p/(p|1.000)
#4: r/(r|1.000)
#5: l/(l|1.000)
#6: x/(x|0.519,ɣ|0.160,ʋ|0.086)
#7: h/(h|1.000)
#8: n/(n|1.000)
#9: ŋ/(x|0.519,ɣ|0.160,ʋ|0.086)
#10: b/(b|1.000)
###Markdown
In only one case, #9, did this method fail the prediction, and in a bad way, as the observed state is not even among the top-three choices. In a number of cases, the observed state was predicted with full certainty. The library also offers methods for feature extraction. Let's consider the Latin-Spanish example above, but now also with feature extraction.
###Code
clf5 = multitiers.Classifier(data_spanish, models=['sca'], left=1, right=1)
study = """
X_tier segment_Latin INCLUDE t
X_tier sca_Latin_L1
X_tier sca_Latin_R1
y_tier segment_Spanish
"""
X_tiers5, y_tiers5 = multitiers.utils.parse_study(study)
clf5.train(X_tiers5, y_tiers5)
display(graphviz.Source(clf5.to_dot()))
clf5.feature_extraction("tree", num_feats=5)
clf5.feature_extraction("lsvc")
###Output
_____no_output_____ |
python-requests-library.ipynb | ###Markdown
Get requests: indicates that you are trying to retrieve data from a specified resource.
###Code
import requests

url = 'https://api.github.com'
response = requests.get(url)
response
###Output
_____no_output_____
###Markdown
Status Codes: status codes are issued by a server in response to a client request. 1XX: Informational; 2XX: Success (the request was received, understood and accepted); 3XX: Redirection; 4XX: Client Errors; 5XX: Server Errors.
###Code
def getCountryInfo(country):
base_url = 'https://restcountries.eu/rest/v2/name/'
country_url = base_url + country
response = requests.get(country_url)
return response.json()
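# A small illustrative helper (just a sketch added here; `statusClass` is not part of
# the requests library): map a response's status code to the 1XX-5XX classes listed above.
def statusClass(response):
    classes = {1: 'Informational', 2: 'Success', 3: 'Redirection',
               4: 'Client Error', 5: 'Server Error'}
    return classes.get(response.status_code // 100, 'Unknown')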
def parseCountryJson(jsonload):
capital = jsonload[0]['capital']
region = jsonload[0]['region']
subregion = jsonload[0]['subregion']
latlng = jsonload[0]['latlng']
return [capital, region, subregion, latlng]
import pandas as pd

report = pd.read_csv('archive/world-happiness-report-2021.csv')
report.head()
# Use reverse geocoding API
import os
import googlemaps

gmaps = googlemaps.Client(key=os.environ['GOOGLE_API_KEY'])
def reverseGeocode(latlng_pair_list):
location = gmaps.reverse_geocode(latlng_pair_list)[0]
address=location['formatted_address']
return address
reverseGeocode([64.0, 26.0])
country_list = report['Name']
address = []
capital = []
region = []
subregion = []
for country in country_list:
jsonload = getCountryInfo(country)
response_list = parseCountryJson(jsonload)
capital.append(response_list[0])
region.append(response_list[1])
subregion.append(response_list[2])
address.append(reverseGeocode(response_list[3]))
#report['Address'] = address
report['Capital City'] = capital
report['Region'] = region
report['Subregion'] = subregion
report['Address'] = address
report.head(20)
report.to_csv('archive/whr.csv')
###Output
_____no_output_____ |
book-code/GraphNeuralNetwork/chapter5/GCN_Cora.ipynb | ###Markdown
Table of Contents: 1 GCN node classification on the Cora dataset; 1.1 SetUp; 1.2 Data preparation; 1.3 Graph convolution layer definition; 1.4 Model definition; 1.5 Model training. GCN node classification on the Cora dataset. Run in Google Colab. When running in Colab you can select `GPU` via `Runtime -> Change runtime type`. SetUp
###Code
import itertools
import os
import os.path as osp
import pickle
import urllib
from collections import namedtuple
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data preparation
###Code
Data = namedtuple('Data', ['x', 'y', 'adjacency',
'train_mask', 'val_mask', 'test_mask'])
def tensor_from_numpy(x, device):
return torch.from_numpy(x).to(device)
class CoraData(object):
filenames = ["ind.cora.{}".format(name) for name in
['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']]
def __init__(self, data_root="../data/cora", rebuild=False):
"""Cora数据,包括数据下载,处理,加载等功能
当数据的缓存文件存在时,将使用缓存文件,否则将下载、进行处理,并缓存到磁盘
处理之后的数据可以通过属性 .data 获得,它将返回一个数据对象,包括如下几部分:
* x: 节点的特征,维度为 2708 * 1433,类型为 np.ndarray
* y: 节点的标签,总共包括7个类别,类型为 np.ndarray
* adjacency: 邻接矩阵,维度为 2708 * 2708,类型为 scipy.sparse.coo.coo_matrix
* train_mask: 训练集掩码向量,维度为 2708,当节点属于训练集时,相应位置为True,否则False
* val_mask: 验证集掩码向量,维度为 2708,当节点属于验证集时,相应位置为True,否则False
* test_mask: 测试集掩码向量,维度为 2708,当节点属于测试集时,相应位置为True,否则False
Args:
-------
data_root: string, optional
存放数据的目录,原始数据路径: ../data/cora
缓存数据路径: {data_root}/ch5_cached.pkl
rebuild: boolean, optional
是否需要重新构建数据集,当设为True时,如果存在缓存数据也会重建数据
"""
self.data_root = data_root
save_file = osp.join(self.data_root, "ch5_cached.pkl")
if osp.exists(save_file) and not rebuild:
print("Using Cached file: {}".format(save_file))
self._data = pickle.load(open(save_file, "rb"))
else:
self._data = self.process_data()
with open(save_file, "wb") as f:
pickle.dump(self.data, f)
print("Cached file: {}".format(save_file))
@property
def data(self):
"""返回Data数据对象,包括x, y, adjacency, train_mask, val_mask, test_mask"""
return self._data
def process_data(self):
"""
处理数据,得到节点特征和标签,邻接矩阵,训练集、验证集以及测试集
引用自:https://github.com/rusty1s/pytorch_geometric
"""
print("Process data ...")
_, tx, allx, y, ty, ally, graph, test_index = [self.read_data(
osp.join(self.data_root, name)) for name in self.filenames]
train_index = np.arange(y.shape[0])
val_index = np.arange(y.shape[0], y.shape[0] + 500)
sorted_test_index = sorted(test_index)
x = np.concatenate((allx, tx), axis=0)
y = np.concatenate((ally, ty), axis=0).argmax(axis=1)
x[test_index] = x[sorted_test_index]
y[test_index] = y[sorted_test_index]
num_nodes = x.shape[0]
train_mask = np.zeros(num_nodes, dtype=np.bool)
val_mask = np.zeros(num_nodes, dtype=np.bool)
test_mask = np.zeros(num_nodes, dtype=np.bool)
train_mask[train_index] = True
val_mask[val_index] = True
test_mask[test_index] = True
adjacency = self.build_adjacency(graph)
print("Node's feature shape: ", x.shape)
print("Node's label shape: ", y.shape)
print("Adjacency's shape: ", adjacency.shape)
print("Number of training nodes: ", train_mask.sum())
print("Number of validation nodes: ", val_mask.sum())
print("Number of test nodes: ", test_mask.sum())
return Data(x=x, y=y, adjacency=adjacency,
train_mask=train_mask, val_mask=val_mask, test_mask=test_mask)
@staticmethod
def build_adjacency(adj_dict):
"""根据邻接表创建邻接矩阵"""
edge_index = []
num_nodes = len(adj_dict)
for src, dst in adj_dict.items():
edge_index.extend([src, v] for v in dst)
edge_index.extend([v, src] for v in dst)
        # remove duplicate edges
edge_index = list(k for k, _ in itertools.groupby(sorted(edge_index)))
edge_index = np.asarray(edge_index)
adjacency = sp.coo_matrix((np.ones(len(edge_index)),
(edge_index[:, 0], edge_index[:, 1])),
shape=(num_nodes, num_nodes), dtype="float32")
return adjacency
@staticmethod
def read_data(path):
"""使用不同的方式读取原始数据以进一步处理"""
name = osp.basename(path)
if name == "ind.cora.test.index":
out = np.genfromtxt(path, dtype="int64")
return out
else:
out = pickle.load(open(path, "rb"), encoding="latin1")
out = out.toarray() if hasattr(out, "toarray") else out
return out
@staticmethod
def normalization(adjacency):
"""计算 L=D^-0.5 * (A+I) * D^-0.5"""
        adjacency += sp.eye(adjacency.shape[0])  # add self-loops
degree = np.array(adjacency.sum(1))
d_hat = sp.diags(np.power(degree, -0.5).flatten())
return d_hat.dot(adjacency).dot(d_hat).tocoo()
###Output
_____no_output_____
###Markdown
Graph convolution layer definition
###Code
class GraphConvolution(nn.Module):
def __init__(self, input_dim, output_dim, use_bias=True):
"""图卷积:L*X*\theta
Args:
----------
input_dim: int
节点输入特征的维度
output_dim: int
输出特征维度
use_bias : bool, optional
是否使用偏置
"""
super(GraphConvolution, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.use_bias = use_bias
self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
if self.use_bias:
self.bias = nn.Parameter(torch.Tensor(output_dim))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight)
if self.use_bias:
init.zeros_(self.bias)
def forward(self, adjacency, input_feature):
"""邻接矩阵是稀疏矩阵,因此在计算时使用稀疏矩阵乘法
Args:
-------
adjacency: torch.sparse.FloatTensor
邻接矩阵
input_feature: torch.Tensor
输入特征
"""
support = torch.mm(input_feature, self.weight)
output = torch.sparse.mm(adjacency, support)
if self.use_bias:
output += self.bias
return output
def __repr__(self):
return self.__class__.__name__ + ' (' \
+ str(self.input_dim) + ' -> ' \
+ str(self.output_dim) + ')'
###Output
_____no_output_____
###Markdown
Model definition. Readers can modify the GCN model structure and experiment with it on their own.
###Code
class GcnNet(nn.Module):
"""
    Define a model with two GraphConvolution layers
"""
def __init__(self, input_dim=1433):
super(GcnNet, self).__init__()
self.gcn1 = GraphConvolution(input_dim, 16)
self.gcn2 = GraphConvolution(16, 7)
def forward(self, adjacency, feature):
h = F.relu(self.gcn1(adjacency, feature))
logits = self.gcn2(adjacency, h)
return logits
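# A sketch of one possible modification (the depth, hidden size and dropout rate below
# are arbitrary choices for illustration, not from the chapter): a deeper, three-layer
# GCN with dropout between the graph convolution layers.
class GcnNet3(nn.Module):
    def __init__(self, input_dim=1433, hidden_dim=16, num_classes=7, dropout=0.5):
        super(GcnNet3, self).__init__()
        self.gcn1 = GraphConvolution(input_dim, hidden_dim)
        self.gcn2 = GraphConvolution(hidden_dim, hidden_dim)
        self.gcn3 = GraphConvolution(hidden_dim, num_classes)
        self.dropout = nn.Dropout(dropout)
    def forward(self, adjacency, feature):
        h = F.relu(self.gcn1(adjacency, feature))
        h = self.dropout(h)
        h = F.relu(self.gcn2(adjacency, h))
        h = self.dropout(h)
        return self.gcn3(adjacency, h)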
###Output
_____no_output_____
###Markdown
Model training
###Code
# Hyperparameter definitions
LEARNING_RATE = 0.1
WEIGHT_DACAY = 5e-4
EPOCHS = 200
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load the data and convert it to torch.Tensor
dataset = CoraData().data
node_feature = dataset.x / dataset.x.sum(1, keepdims=True)  # normalize the data so that each row sums to 1
tensor_x = tensor_from_numpy(node_feature, DEVICE)
tensor_y = tensor_from_numpy(dataset.y, DEVICE)
tensor_train_mask = tensor_from_numpy(dataset.train_mask, DEVICE)
tensor_val_mask = tensor_from_numpy(dataset.val_mask, DEVICE)
tensor_test_mask = tensor_from_numpy(dataset.test_mask, DEVICE)
normalize_adjacency = CoraData.normalization(dataset.adjacency)  # normalize the adjacency matrix
num_nodes, input_dim = node_feature.shape
indices = torch.from_numpy(np.asarray([normalize_adjacency.row,
normalize_adjacency.col]).astype('int64')).long()
values = torch.from_numpy(normalize_adjacency.data.astype(np.float32))
tensor_adjacency = torch.sparse.FloatTensor(indices, values,
(num_nodes, num_nodes)).to(DEVICE)
# Model definition: Model, Loss, Optimizer
model = GcnNet(input_dim).to(DEVICE)
criterion = nn.CrossEntropyLoss().to(DEVICE)
optimizer = optim.Adam(model.parameters(),
lr=LEARNING_RATE,
weight_decay=WEIGHT_DACAY)
# Main training function
def train():
loss_history = []
val_acc_history = []
model.train()
train_y = tensor_y[tensor_train_mask]
for epoch in range(EPOCHS):
        logits = model(tensor_adjacency, tensor_x)  # forward pass
        train_mask_logits = logits[tensor_train_mask]  # supervise only the training nodes
        loss = criterion(train_mask_logits, train_y)  # compute the loss
        optimizer.zero_grad()
        loss.backward()  # back-propagate to compute the parameter gradients
        optimizer.step()  # update the parameters with the optimizer
        train_acc, _, _ = test(tensor_train_mask)  # accuracy of the current model on the training set
        val_acc, _, _ = test(tensor_val_mask)  # accuracy of the current model on the validation set
        # record the loss and accuracy during training, used for plotting
        loss_history.append(loss.item())
        val_acc_history.append(val_acc.item())
        print("Epoch {:03d}: Loss {:.4f}, TrainAcc {:.4}, ValAcc {:.4f}".format(
            epoch, loss.item(), train_acc.item(), val_acc.item()))
return loss_history, val_acc_history
# Test function
def test(mask):
model.eval()
with torch.no_grad():
logits = model(tensor_adjacency, tensor_x)
test_mask_logits = logits[mask]
predict_y = test_mask_logits.max(1)[1]
accuarcy = torch.eq(predict_y, tensor_y[mask]).float().mean()
return accuarcy, test_mask_logits.cpu().numpy(), tensor_y[mask].cpu().numpy()
def plot_loss_with_acc(loss_history, val_acc_history):
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(range(len(loss_history)), loss_history,
c=np.array([255, 71, 90]) / 255.)
plt.ylabel('Loss')
ax2 = fig.add_subplot(111, sharex=ax1, frameon=False)
ax2.plot(range(len(val_acc_history)), val_acc_history,
c=np.array([79, 179, 255]) / 255.)
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
plt.ylabel('ValAcc')
plt.xlabel('Epoch')
plt.title('Training Loss & Validation Accuracy')
plt.show()
loss, val_acc = train()
test_acc, test_logits, test_label = test(tensor_test_mask)
print("Test accuarcy: ", test_acc.item())
plot_loss_with_acc(loss, val_acc)
# Plot a t-SNE projection of the test data
from sklearn.manifold import TSNE
tsne = TSNE()
out = tsne.fit_transform(test_logits)
fig = plt.figure()
for i in range(7):
indices = test_label == i
x, y = out[indices].T
plt.scatter(x, y, label=str(i))
plt.legend()
###Output
_____no_output_____ |
MCAA_version2.ipynb | ###Markdown
Imports
###Code
import numpy as np
import pandas as pd
import math
import sys
from numpy import random
from numpy import linalg
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Input
###Code
n_row = 4000
n_col = 1000
possibilities = [-1.0, 1.0]
W = np.random.normal(size = (n_row, n_col))
X = np.random.choice(possibilities, n_col)
Y = np.maximum((np.dot(W, X)/math.sqrt(n_col)), 0)
T = 0.1
B = 1/T
n_iter = 10000
N = 100
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def energy(vector):
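    # squared L2 distance between the observed Y and the output that `vector` would generate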
dot = np.dot(W, vector)
diff = Y - np.maximum(0, dot) / math.sqrt(n_col)
return diff.T.dot(diff)
def error(vector):
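    # fraction of mismatched entries: each mismatch contributes (+-2)^2 = 4 to diff.T.dot(diff),
    # so dividing by 4*n_col gives the share of wrong positions, in [0, 1]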
diff = vector - X
return diff.T.dot(diff)/(4*n_col)
def get_loss_category(loss):
if loss >= 0.5:
return '>0.5'
elif loss < 0.5 and loss >= 0.4:
return '0.5-0.4'
elif loss < 0.4 and loss >= 0.3:
return '0.4-0.3'
elif loss < 0.3 and loss >= 0.2:
return '0.3-0.2'
elif loss < 0.2 and loss >= 0.1:
return '0.2-0.1'
elif loss < 0.1 and loss >= 0.05:
return '0.1-0.05'
elif loss < 0.05 and loss >= 0.01:
return '0.05-0.01'
elif loss < 0.01:
return '<0.01'
else:
print("Y a comme une couille: loss = {}".format(loss))
def Metropolis_chain(dim, n_iter, B, threshold, rate):
print("Metropolis with initial B: {} , threshold : {} , rate: {}".format(B, threshold, rate))
loss_dict = {'>0.5':0, '0.5-0.4':0, '0.4-0.3':0, '0.3-0.2':0, '0.2-0.1':0, '0.1-0.05':0, '0.05-0.01':0, '<0.01':0}
test = np.random.choice(possibilities, dim)
errors = np.zeros(n_iter)
iter_increase = n_iter * threshold
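    # simple annealing schedule: once `threshold` fraction of the iterations has passed,
    # the inverse temperature B is multiplied by `rate` at every subsequent iteration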
for _iter in range(n_iter):
if _iter > iter_increase:
B = B*rate
to_flip = np.random.randint(0, dim)
to_test = test.copy()
to_test[to_flip] = -to_test[to_flip]
proba = np.minimum(1, np.exp(-B*(energy(to_test)-energy(test))))
if random.random() < proba:
test = to_test
err = error(test)
errors[_iter] = err
loss_categ = get_loss_category(err)
loss_dict[loss_categ] += 1
print(loss_dict)
return np.array(errors), loss_dict, B, threshold, rate
_error, loss_dict, B, threshold, rate = Metropolis_chain(n_col, 5000, 2, 0.43333, 1.40)
def Glauber(vector, dim, n_iter, B, threshold, rate):
print("Glauber with initial B: {} , threshold : {} , rate: {}".format(B, threshold, rate))
loss_dict = {'>0.5':0, '0.5-0.4':0, '0.4-0.3':0, '0.3-0.2':0, '0.2-0.1':0, '0.1-0.05':0, '0.05-0.01':0, '<0.01':0}
test = vector.copy()#np.random.choice(possibilities, dim)
errors = []
iter_increase = n_iter * threshold
for _iter in range(n_iter):
if _iter > iter_increase:
B = B*rate
to_flip = np.random.randint(0, dim)
flipped = test.copy()
flipped[to_flip] = test[to_flip]*-1
proba = (1 + test[to_flip] * math.tanh(B*(energy(flipped) - energy(test))))/2.0
if random.random() < proba:
test[to_flip] = 1
else:
test[to_flip] = -1
err = error(test)
errors.append(err)
loss_categ = get_loss_category(err)
loss_dict[loss_categ] += 1
print(loss_dict)
return errors, loss_dict, B, threshold, rate
def Glauber2(vector, dim, n_iter, B, threshold, rate):
print("Glauber with initial B: {} , threshold : {} , rate: {}".format(B, threshold, rate))
loss_dict = {'>0.5':0, '0.5-0.4':0, '0.4-0.3':0, '0.3-0.2':0, '0.2-0.1':0, '0.1-0.05':0, '0.05-0.01':0, '<0.01':0}
test = vector.copy()#np.random.choice(possibilities, dim)
errors = []
iter_increase = n_iter * threshold
for _iter in range(n_iter):
if _iter % int(n_iter * threshold) == int(n_iter * threshold) - 1:
B = B * rate
to_flip = np.random.randint(0, dim)
flipped = test.copy()
flipped[to_flip] = test[to_flip]*-1
proba = (1 + test[to_flip] * math.tanh(B*(energy(flipped) - energy(test))))/2.0
if random.random() < proba:
test[to_flip] = 1
else:
test[to_flip] = -1
err = error(test)
errors.append(err)
loss_categ = get_loss_category(err)
loss_dict[loss_categ] += 1
print(loss_dict)
return errors, loss_dict, B, threshold, rate
_errors, loss_dict, B, threshold, rate = Glauber2(np.random.choice(possibilities, n_col), n_col, 10000, 2, 0.1, 1.1)
i = 40
errs_reduced = _errors[::i]
x = np.array(range(len(_errors)))[::i]
print(len(errs_reduced), len(x))
plt.scatter(x, errs_reduced, color = 'blue')
plt.show()
###Output
250 250
###Markdown
Optimization
###Code
vector = np.random.choice(possibilities, n_col)
print(error(vector))
t_tresholds = np.linspace(0.33, 0.66, num=4)
t_increasing = np.linspace(1.1, 2, num=4)
betas = [1,2,3,4]
results = []
for treshold in t_tresholds:
for increasing in t_increasing:
for beta in betas:
errors = np.zeros(n_iter)
_dict = {'>0.5':0, '0.5-0.4':0, '0.4-0.3':0, '0.3-0.2':0, '0.2-0.1':0, '0.1-0.05':0, '0.05-0.01':0, '<0.01':0}
repeat = 3
for test in range(repeat):
_error, loss_dict, B, threshold, rate = Glauber(vector, n_col, n_iter, beta, treshold, increasing)
errors += _error
for key in _dict.keys():
_dict[key] += loss_dict[key]
mean = errors / repeat
for key in _dict.keys():
_dict[key] = _dict[key]/repeat
results.append((mean, _dict, beta, threshold, rate))
np.save('Glauber_sameRandom_result_4000', results)
###Output
Glauber with initial B: 1 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 416, '0.4-0.3': 565, '0.3-0.2': 693, '0.2-0.1': 915, '0.1-0.05': 826, '0.05-0.01': 1341, '<0.01': 5244}
Glauber with initial B: 1 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 348, '0.4-0.3': 569, '0.3-0.2': 688, '0.2-0.1': 1039, '0.1-0.05': 842, '0.05-0.01': 1654, '<0.01': 4860}
Glauber with initial B: 1 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 301, '0.4-0.3': 517, '0.3-0.2': 680, '0.2-0.1': 1002, '0.1-0.05': 813, '0.05-0.01': 1231, '<0.01': 5456}
Glauber with initial B: 2 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 372, '0.4-0.3': 520, '0.3-0.2': 626, '0.2-0.1': 965, '0.1-0.05': 526, '0.05-0.01': 1858, '<0.01': 5133}
Glauber with initial B: 2 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 364, '0.4-0.3': 517, '0.3-0.2': 793, '0.2-0.1': 812, '0.1-0.05': 1004, '0.05-0.01': 1377, '<0.01': 5133}
Glauber with initial B: 2 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 399, '0.4-0.3': 534, '0.3-0.2': 791, '0.2-0.1': 878, '0.1-0.05': 694, '0.05-0.01': 1657, '<0.01': 5047}
Glauber with initial B: 3 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 389, '0.4-0.3': 565, '0.3-0.2': 659, '0.2-0.1': 969, '0.1-0.05': 653, '0.05-0.01': 2120, '<0.01': 4645}
Glauber with initial B: 3 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 359, '0.4-0.3': 491, '0.3-0.2': 657, '0.2-0.1': 1181, '0.1-0.05': 865, '0.05-0.01': 1296, '<0.01': 5151}
Glauber with initial B: 3 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 324, '0.4-0.3': 557, '0.3-0.2': 687, '0.2-0.1': 1033, '0.1-0.05': 829, '0.05-0.01': 1205, '<0.01': 5365}
Glauber with initial B: 4 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 332, '0.4-0.3': 485, '0.3-0.2': 785, '0.2-0.1': 844, '0.1-0.05': 844, '0.05-0.01': 1649, '<0.01': 5061}
Glauber with initial B: 4 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 330, '0.4-0.3': 601, '0.3-0.2': 742, '0.2-0.1': 1209, '0.1-0.05': 719, '0.05-0.01': 1314, '<0.01': 5085}
Glauber with initial B: 4 , threshold : 0.33 , rate: 1.1
{'>0.5': 0, '0.5-0.4': 313, '0.4-0.3': 417, '0.3-0.2': 658, '0.2-0.1': 1119, '0.1-0.05': 654, '0.05-0.01': 1693, '<0.01': 5146}
Glauber with initial B: 1 , threshold : 0.33 , rate: 1.4000000000000001
###Markdown
Glauber with initial B: 2 , threshold : 0.43333333333333335 , rate: 1.4000000000000001
###Code
def get_best_params(file):
results = np.load(file)
max_count = 0
best_params = None
for res in results:
loss_dict = res[1]
if loss_dict['<0.01'] > max_count:
max_count = loss_dict['<0.01']
best_params = res
return best_params
best_params_glauber = get_best_params('Glauber_sameRandom_result_4000.npy')
best_params_metro = get_best_params('Metro_results.npy')
print('Best results Metropolis:')
print(best_params_metro)
print('Best results Glauber:')
print(best_params_glauber)
for i in range(6):
base_errs_glauber,_,_,_,_ = Glauber(vector, n_col, 10000, 4.0, 0.44, 1.1)
best_errs_glauber = best_params_glauber[0]
best_beta_glauber, best_thresh_glauber, best_rate_glauber = best_params_glauber[2], best_params_glauber[3], best_params_glauber[4]
best_errs_metro = best_params_metro[0]
best_beta_metro, best_thresh_metro, best_rate_metro = best_params_metro[2], best_params_metro[3], best_params_metro[4]
"""baseline_errs_glauber = np.zeros(n_iter)
baseline_errs_metro = np.zeros(n_iter)
for i in range(4):
base_errs_glauber,_,_,_,_ = Glauber(vector, n_col, n_iter, best_beta_glauber, 1.0, 1.0)
baseline_errs_glauber += np.array(base_errs_glauber)
base_errs_metro,_,_,_,_ = Metropolis_chain(n_col, n_iter, best_beta_metro, 1.0, 1.0)
baseline_errs_metro += np.array(base_errs_metro)
baseline_errs_glauber = baseline_errs_glauber / 4.0
baseline_errs_metro = baseline_errs_metro / 4.0"""
x = np.array(range(len(best_errs_glauber)))
plt.figure(figsize=(15,7))
plt.title("Evolution of error in function of the number of iterations", size = 15)
plt.plot(x, base_errs_glauber, color = 'cyan', label = 'baseline glauber')
#plt.plot(x, baseline_errs_metro, color = 'blue', label = 'baseline metropolis')
plt.plot(x, best_errs_glauber, color = 'orange', label = 'best glauber')
plt.plot(x, best_errs_metro, color = 'red', label = 'best metropolis')
plt.xlabel('Iteration number', size = 15)
plt.ylabel('Error', size = 15)
plt.legend(prop={'size': 15})
plt.show()
###Output
Glauber with initial B: 4 , threshold : 1.0 , rate: 1.0
{'>0.5': 57, '0.5-0.4': 407, '0.4-0.3': 509, '0.3-0.2': 705, '0.2-0.1': 1084, '0.1-0.05': 684, '0.05-0.01': 1677, '<0.01': 4877}
Metropolis with initial B: 2 , threshold : 1.0 , rate: 1.0
{'>0.5': 0, '0.5-0.4': 403, '0.4-0.3': 530, '0.3-0.2': 692, '0.2-0.1': 1063, '0.1-0.05': 606, '0.05-0.01': 1365, '<0.01': 5341}
Glauber with initial B: 4 , threshold : 1.0 , rate: 1.0
{'>0.5': 66, '0.5-0.4': 322, '0.4-0.3': 467, '0.3-0.2': 739, '0.2-0.1': 872, '0.1-0.05': 715, '0.05-0.01': 1819, '<0.01': 5000}
Metropolis with initial B: 2 , threshold : 1.0 , rate: 1.0
{'>0.5': 0, '0.5-0.4': 436, '0.4-0.3': 500, '0.3-0.2': 768, '0.2-0.1': 938, '0.1-0.05': 714, '0.05-0.01': 1812, '<0.01': 4832}
Glauber with initial B: 4 , threshold : 1.0 , rate: 1.0
{'>0.5': 49, '0.5-0.4': 403, '0.4-0.3': 551, '0.3-0.2': 762, '0.2-0.1': 813, '0.1-0.05': 659, '0.05-0.01': 1791, '<0.01': 4972}
Metropolis with initial B: 2 , threshold : 1.0 , rate: 1.0
{'>0.5': 0, '0.5-0.4': 337, '0.4-0.3': 452, '0.3-0.2': 708, '0.2-0.1': 989, '0.1-0.05': 889, '0.05-0.01': 1644, '<0.01': 4981}
Glauber with initial B: 4 , threshold : 1.0 , rate: 1.0
{'>0.5': 33, '0.5-0.4': 332, '0.4-0.3': 537, '0.3-0.2': 726, '0.2-0.1': 1043, '0.1-0.05': 832, '0.05-0.01': 1510, '<0.01': 4987}
Metropolis with initial B: 2 , threshold : 1.0 , rate: 1.0
{'>0.5': 0, '0.5-0.4': 381, '0.4-0.3': 521, '0.3-0.2': 560, '0.2-0.1': 970, '0.1-0.05': 689, '0.05-0.01': 1289, '<0.01': 5590}
###Markdown
Results
###Code
Metropolis_chain(n_col, 15000, 2, 0.6, 1.5)
errs, loss_dict, _, _, _ = Glauber(np.random.choice(possibilities, n_col), n_col, 1000, 2, 0.6, 1.5)
values = []
B_values = np.linspace(1, 10, num = 5)
for b in B_values:
for i in range(3):
        values.append((b, Glauber(np.random.choice(possibilities, n_col), n_col, 50, b, 0.75, 1.75)))
t_tresholds
errors = []
for i in range(N):
    # Glauber or Metropolis_chain, depending on the best set-up found above
    run_errors, _, _, _, _ = Glauber(vector, n_col, n_iter, best_beta_glauber, best_thresh_glauber, best_rate_glauber)
    errors.append(run_errors[-1])
mean = np.mean(errors)
variance = np.var(errors)
print('Mean error : ', mean)
print('Variance of error : ', variance)
best_set_up
t_increasing = np.linspace(1.5, 5, num=8)
t_increasing
BEST_SO_FAR = 0.01411756195508118 #Metropolis_chain(n_col, 100, B) 0.8
###Output
_____no_output_____ |
week4_EDA_np_pd_json_apis_regex/day4_gen_annotation_eda/theory/eda_example/EDA.ipynb | ###Markdown
Exploratory data analysis in Python. Let us understand how to explore the data in python.  Image Credits: Morioh Introduction **What is Exploratory Data Analysis ?**Exploratory Data Analysis or (EDA) is understanding the data sets by summarizing their main characteristics often plotting them visually. This step is very important especially when we arrive at modeling the data in order to apply Machine learning. Plotting in EDA consists of Histograms, Box plot, Scatter plot and many more. It often takes much time to explore the data. Through the process of EDA, we can ask to define the problem statement or definition on our data set which is very important. **How to perform Exploratory Data Analysis ?**This is one such question that everyone is keen on knowing the answer. Well, the answer is it depends on the data set that you are working. There is no one method or common methods in order to perform EDA, whereas in this tutorial you can understand some common methods and plots that would be used in the EDA process. **What data are we exploring today ?**Since I am a huge fan of cars, I got a very beautiful data-set of cars from Kaggle. The data-set can be downloaded from [here](https://www.kaggle.com/CooperUnion/cardataset). To give a piece of brief information about the data set this data contains more of 10, 000 rows and more than 10 columns which contains features of the car such as Engine Fuel Type, Engine HP, Transmission Type, highway MPG, city MPG and many more. So in this tutorial, we will explore the data and make it ready for modeling. --- 1. Importing the required libraries for EDA Below are the libraries that are used in order to perform EDA (Exploratory data analysis) in this tutorial.
###Code
import pandas as pd
import numpy as np
import seaborn as sns #visualisation
import matplotlib.pyplot as plt #visualisation
%matplotlib inline
sns.set(color_codes=True)
###Output
_____no_output_____
###Markdown
--- 2. Loading the data into the data frame. Loading the data into the pandas data frame is certainly one of the most important steps in EDA, as we can see that the value from the data set is comma-separated. So all we have to do is to just read the CSV into a data frame and pandas data frame does the job for us. To get or load the dataset into the notebook, all I did was one trivial step. In Google Colab at the left-hand side of the notebook, you will find a > (greater than symbol). When you click that you will find a tab with three options, you just have to select Files. Then you can easily upload your file with the help of the Upload option. No need to mount to the google drive or use any specific libraries just upload the data set and your job is done. One thing to remember in this step is that uploaded files will get deleted when this runtime is recycled. This is how I got the data set into the notebook.
###Code
df = pd.read_csv("data.csv")
# To display the top 5 rows
df.head(5)
df.tail(5) # To display the bottom 5 rows
###Output
_____no_output_____
###Markdown
--- 3. Checking the types of data Here we check for the datatypes because sometimes the MSRP or the price of the car would be stored as a string, if in that case, we have to convert that string to the integer data only then we can plot the data via a graph. Here, in this case, the data is already in integer format so nothing to worry.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
--- 4. Dropping irrelevant columns This step is certainly needed in every EDA because sometimes there would be many columns that we never use in such cases dropping is the only solution. In this case, the columns such as Engine Fuel Type, Market Category, Vehicle style, Popularity, Number of doors, Vehicle Size doesn't make any sense to me so I just dropped for this instance.
###Code
df = df.drop(['Engine Fuel Type', 'Market Category', 'Vehicle Style', 'Popularity', 'Number of Doors', 'Vehicle Size'], axis=1)
df.head(5)
###Output
_____no_output_____
###Markdown
--- 5. Renaming the columns In this instance, most of the column names are very confusing to read, so I just tweaked their column names. This is a good approach it improves the readability of the data set.
###Code
df = df.rename(columns={"Engine HP": "HP", "Engine Cylinders": "Cylinders", "Transmission Type": "Transmission", "Driven_Wheels": "Drive Mode","highway MPG": "MPG-H", "city mpg": "MPG-C", "MSRP": "Price" })
df.head(5)
###Output
_____no_output_____
###Markdown
--- 6. Dropping the duplicate rows This is often a handy thing to do because a huge data set as in this case contains more than 10, 000 rows often have some duplicate data which might be disturbing, so here I remove all the duplicate value from the data-set. For example prior to removing I had 11914 rows of data but after removing the duplicates 10925 data meaning that I had 989 of duplicate data.
###Code
df.shape
duplicate_rows_df = df[df.duplicated()]
print("number of duplicate rows: ", duplicate_rows_df.shape)
###Output
number of duplicate rows: (989, 10)
###Markdown
Now let us remove the duplicate data because it's ok to remove them.
###Code
df.count() # Used to count the number of rows
###Output
_____no_output_____
###Markdown
So seen above there are 11914 rows and we are removing 989 rows of duplicate data.
###Code
df = df.drop_duplicates()
df.head(5)
df.count()
###Output
_____no_output_____
###Markdown
--- 7. Dropping the missing or null values. This is mostly similar to the previous step but in here all the missing values are detected and are dropped later. Now, this is not a good approach to do so, because many people just replace the missing values with the mean or the average of that column, but in this case, I just dropped that missing values. This is because there is nearly 100 missing value compared to 10, 000 values this is a small number and this is negligible so I just dropped those values.
###Code
print(df.isnull().sum())
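# One common alternative (shown only as a commented sketch, not used in this notebook):
# impute the numeric columns with their mean instead of dropping the rows.
# df['HP'] = df['HP'].fillna(df['HP'].mean())
# df['Cylinders'] = df['Cylinders'].fillna(df['Cylinders'].mean())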
###Output
Make 0
Model 0
Year 0
HP 69
Cylinders 30
Transmission 0
Drive Mode 0
MPG-H 0
MPG-C 0
Price 0
dtype: int64
###Markdown
This is the reason in the above step while counting both Cylinders and Horsepower (HP) had 10856 and 10895 over 10925 rows.
###Code
df = df.dropna() # Dropping the missing values.
df.count()
###Output
_____no_output_____
###Markdown
Now we have removed all the rows which contain the Null or N/A values (Cylinders and Horsepower (HP)).
###Code
print(df.isnull().sum()) # After dropping the values
###Output
Make 0
Model 0
Year 0
HP 0
Cylinders 0
Transmission 0
Drive Mode 0
MPG-H 0
MPG-C 0
Price 0
dtype: int64
###Markdown
--- 8. Detecting Outliers An outlier is a point or set of points that are different from other points. Sometimes they can be very high or very low. It's often a good idea to detect and remove the outliers, because outliers are one of the primary reasons for a less accurate model. The outlier detection and removal I am going to perform uses the IQR score technique. Often outliers can be seen with visualizations using a box plot. Shown below are the box plots of Price, Horsepower (HP) and Cylinders. Here, in all the plots, you can find some points outside the box; they are none other than outliers. The technique for finding and removing outliers used in this assignment follows a tutorial from [Towards Data Science](https://towardsdatascience.com/ways-to-detect-and-remove-the-outliers-404d16608dba).
###Code
sns.boxplot(x=df['Price'])
sns.boxplot(x=df['HP'])
sns.boxplot(x=df['Cylinders'])
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
print(IQR)
###Output
Year 9.0
HP 130.0
Cylinders 2.0
MPG-H 8.0
MPG-C 6.0
Price 21327.5
dtype: float64
###Markdown
Don't worry about the above values because it's not important to know each and every one of them because it's just important to know how to use this technique in order to remove the outliers.
###Code
df = df[~((df < (Q1 - 1.5 * IQR)) |(df > (Q3 + 1.5 * IQR))).any(axis=1)]
df.shape
###Output
_____no_output_____
###Markdown
As seen above, around 1600 rows were outliers. But you cannot completely remove the outliers, because even after you use the above technique there may be 1–2 outliers left unremoved, but that's ok because there were more than 100 outliers. Something is better than nothing. --- 9. Plot different features against one another (scatter), against frequency (histogram) Histogram: A histogram refers to the frequency of occurrence of variables in an interval. In this case, there are mainly 10 different types of car manufacturing companies, but it is often important to know who has the most number of cars. To do this, a histogram is one of the trivial solutions, which lets us know the total number of cars manufactured by each company.
###Code
df.Make.value_counts().nlargest(40).plot(kind='bar', figsize=(10,5))
plt.title("Number of cars by make")
plt.ylabel('Number of cars')
plt.xlabel('Make');
###Output
_____no_output_____
###Markdown
Heat MapsHeat Maps is a type of plot which is necessary when we need to find the dependent variables. One of the best way to find the relationship between the features can be done using heat maps. In the below heat map we know that the price feature depends mainly on the Engine Size, Horsepower, and Cylinders.
###Code
plt.figure(figsize=(10,5))
c= df.corr()
sns.heatmap(c,cmap="BrBG",annot=True)
c
###Output
_____no_output_____
###Markdown
ScatterplotWe generally use scatter plots to find the correlation between two variables. Here the scatter plots are plotted between Horsepower and Price and we can see the plot below. With the plot given below, we can easily draw a trend line. These features provide a good scattering of points.
###Code
fig, ax = plt.subplots(figsize=(10,6))
ax.scatter(df['HP'], df['Price'])
ax.set_xlabel('HP')
ax.set_ylabel('Price')
plt.show()
###Output
_____no_output_____ |
Chapter12/Exercise12.01/.ipynb_checkpoints/production-checkpoint.ipynb | ###Markdown
In production environment: import, load, and execute the model
###Code
!pip install flask
from flask import Flask, jsonify, request
import pickle
# load the model from pickle file
file = open('model.pkl', 'rb') # read bytes
model = pickle.load(file)
file.close()
# get predictions from the model
print(model.predict([[3,0,22.0,1,0,7.25]])) # male
print(model.predict([[3,1,22.0,1,0,7.25]])) # female
# create an API with Flask
app = Flask('Titanic')
# call this: curl -X GET http://127.0.0.1:5000/hi
@app.route('/hi', methods=['GET'])
def bar():
result = 'hello!'
return result
# call this: curl -X POST -H "Content-Type: application/json" -d '{"Pclass": 3, "Sex": 0, "Age": 72, "SibSb": 2, "Parch": 0, "Fare": 8.35}' http://127.0.0.1:5000/survived
@app.route('/survived', methods=['POST'])
def survived():
payload = request.get_json()
person = [payload['Pclass'], payload['Sex'], payload['Age'], payload['SibSb'], payload['Parch'], payload['Fare']]
result = model.predict([person])
print(f'{person} -> {str(result)}')
return f'I predict that person {person} has {"_not_ " if result == [0] else ""}survived the Titanic\n'
app.run()
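# A client-side sketch (illustration only; it assumes the app above is running at
# 127.0.0.1:5000 and must be executed from a separate process or notebook, since
# app.run() blocks this one):
# import requests
# payload = {"Pclass": 3, "Sex": 0, "Age": 72, "SibSb": 2, "Parch": 0, "Fare": 8.35}
# print(requests.post('http://127.0.0.1:5000/survived', json=payload).text)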
###Output
_____no_output_____ |
FinMath/Models and Pricing of Financial Derivatives/Review.ipynb | ###Markdown
**Review****程远星**$\DeclareMathOperator*{\argmin}{argmin}\DeclareMathOperator*{\argmax}{argmax}\DeclareMathOperator*{\plim}{plim}\newcommand{\using}[1]{\stackrel{\mathrm{1}}{=}}\newcommand{\ffrac}{\displaystyle \frac}\newcommand{\asim}{\overset{\text{a}}{\sim}}\newcommand{\space}{\text{ }}\newcommand{\bspace}{\;\;\;\;}\newcommand{\QQQ}{\boxed{?\:}}\newcommand{\void}{\left.\right.}\newcommand{\Tran}[1]{{1}^{\mathrm{T}}}\newcommand{\d}[1]{\displaystyle{1}}\newcommand{\CB}[1]{\left\{ 1 \right\}}\newcommand{\SB}[1]{\left[ 1 \right]}\newcommand{\P}[1]{\left( 1 \right)}\newcommand{\abs}[1]{\left| 1 \right|}\newcommand{\norm}[1]{\left\| 1 \right\|}\newcommand{\dd}{\mathrm{d}}\newcommand{\Exp}{\mathrm{E}}\newcommand{\RR}{\mathbb{R}}\newcommand{\EE}{\mathbb{E}}\newcommand{\II}{\mathbb{I}}\newcommand{\NN}{\mathbb{N}}\newcommand{\ZZ}{\mathbb{Z}}\newcommand{\QQ}{\mathbb{Q}}\newcommand{\PP}{\mathbb{P}}\newcommand{\AcA}{\mathcal{A}}\newcommand{\FcF}{\mathcal{F}}\newcommand{\AsA}{\mathscr{A}}\newcommand{\FsF}{\mathscr{F}}\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{1}\left[2\right]}\newcommand{\Avar}[2][\,\!]{\mathrm{Avar}_{1}\left[2\right]}\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{1}\left(2\right)}\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{1}\left(2\right)}\newcommand{\I}[1]{\mathrm{I}\left( 1 \right)}\newcommand{\N}[1]{\mathcal{N} \left( 1 \right)}\newcommand{\ow}{\text{otherwise}}\newcommand{\FSD}{\text{FSD}}\void^\dagger$ Question 1Let $S_t$ denote the stock price at time $t$. Under the physical (objective) measure $P$, the stock price $S_t$ follows the Black-Schole model given by:$$\dd S_t = \mu S_t \dd t + \sigma S_t \dd W_t$$A financial institution plans to offer a security that pays off a dollar amount equal to $S^2_T$ at time $T$.$\P{1}$ Use the risk-neutral valuation method to derive the pricing formula of the security at time $t$ (in terms of the expectation).>Under risk neutral world this process is a martingale, which could help us find the risk neutral measure (this is a time-continuous world). 
Under $\PP$ measure we have:>>$$\dd \tilde S_t = \P{\mu - r} \tilde S_t \dd t + \sigma \tilde S_t \dd W_t$$>>Then transfer to $\QQ$ measure we find the standard brownian motion $\widetilde W_t$ under $\QQ$, so that the drift is $0$, which leads to $\dd \tilde S_t = \sigma \tilde S_t \dd \widetilde W_t$.>>Balence the two equations we have>>$$ \dd \widetilde W_t = \dd W_t + \ffrac{\mu - r}{\sigma}\dd t $$>>Then using the Girsanov's Theorem, we have>>$$\left.\ffrac{\dd \QQ}{\dd \PP}\right|_{\FcF_t} = \exp\CB{-\ffrac{\mu - r}{\sigma}W_t - \ffrac{1}{2} \P{\ffrac{\mu - r}{\sigma}}^2t }$$>>With $\QQ$, we now can write>>$$V_t = \Exp^\QQ\SB{e^{-r\P{T-t}} S_T^2\mid \FcF_t} = e^{-r\P{T-t}}\Exp^{\QQ}_t\SB{S_T^2}$$$\P{2}$ Under the risk-neutral measure, according to the corresponding stochastic differential equation for $S_t$, solve $S_T$ in terms of $S_t$.>To solve the differential equation under risk-neutral measure, we first write the equation and separate the variables by taking the $\log$ function:>>$$ \dd \tilde S_t = \sigma \tilde S_t \dd \widetilde W_t \Longrightarrow \dd S_t = rS_t\dd t + \sigma S_t \dd \widetilde W_t \Longrightarrow\dd\log S_t \using{ \text{Itô's formula} } \P{r-\ffrac{\sigma^2}{2}}\dd t + \sigma \dd \widetilde W_t$$>>Take the integral from $t$ to $T$, we have>>$$S_T = S_t \cdot\exp\CB{\P{r-\ffrac{\sigma^2}{2}}\P{T-t} +\sigma \P{\widetilde W_T-\widetilde W_t}}$$$\P{3}$ Assume that the formula $\Exp^\PP \SB{\exp\CB{\alpha W_t}} = \exp\CB{\ffrac{\alpha^2}{2}t}$ is known and a similar result holds under the risk-neutral measure. According to the derived pricing formula in $\P{1}$ by making use of $\P 2$, calculate the price of the security at time $t$.> $$\begin{align}V_t &= e^{-r\P{T-t}}\Exp^{\QQ}_t\SB{S_T^2} \\&= e^{-r\P{T-t}}\Exp^{\QQ}_t\SB{\P{S_t \cdot\exp\CB{\P{r-\ffrac{\sigma^2}{2}}\P{T-t} +\sigma \P{\widetilde W_T-\widetilde W_t}}}^2}\\&= e^{-r\P{T-t}}\Exp^{\QQ}_t\SB{S_t^2} \cdot \exp\CB{\P{2r - \sigma^2}\P{T-t}} \cdot \Exp_t^\QQ \SB{\exp\CB{2\sigma \P{\widetilde W_T-\widetilde W_t}}}\\&= \exp\CB{\P{-r + 2r - \sigma^2 - \ffrac{4\sigma^2}{2}}\P{T-t}}\cdot S_t^2\\&= \exp\CB{\P{r+\sigma^2}\P{T-t}}\cdot S_t^2\end{align}$$$\P{4}$ Confirm that your price satisfies the differential equation.$$\ffrac{\partial f}{\partial t} + rS_t\ffrac{\partial f}{\partial S_t} + \ffrac{1}{2}\sigma^2S_t^2\ffrac{\partial^2 f}{\partial S_t^2} = rf$$>It's easy to see the result by taking the derivatives term by term, or using the Feynman-Kac formula. First we let $f\P{t} = V_t$>>The second equation is satisfied since the payoff is $\Phi\P{X} = S_T^2 = f\P{T}$. Then the first equation, since $\mu_t = rS_t$, $\sigma_t = \sigma S_t$, it can't be more obvious. Question 2see HW_04_Question_5 for complete solution. However here we present another method to solve for $h_t$. Still we have the BS equation $\ffrac{\partial c}{\partial t} + \ffrac{1}{2}\sigma^2S_t^2\ffrac{\partial^2 c}{\partial S_t^2} + rS_t\ffrac{\partial c}{\partial S_t} - rc = 0$, then substituting $V = h\P{t,T}S^n$, we have the differential equation$$\ffrac{\partial h\P{t,T}}{\partial t} + \ffrac{h\P{t,T}}{2} \sigma^2 \cdot n\P{n-1} + rh\P{t,T} \cdot n-rh\P{t,T} = 0$$Or, like this: $\ffrac{h_t}{h} = -r\P{n-1} - \ffrac{1}{2}\sigma^2 n\P{n-1}$. 
The solution to this is$$\ln h = \P{-r\P{n-1} - \ffrac{1}{2}\sigma^2 n\P{n-1}} t + k$$since when $t = T$, $\ln h = 0$ it follows that $k = \P{r\P{n-1} + \ffrac{1}{2}\sigma^2 n\P{n-1}}T$ thus$$\ln h = \P{r\P{n-1} + \ffrac{1}{2}\sigma^2 n\P{n-1}}\P{T-t}$$ Question 4Consider the three-step binomial tree and set $T = 3$, $S_0 = 80$, $u = 1.5$, $d=0.5$, $p_u = 0.6$, $p_d = 0.4$, and for computational simplicity, $r=0$. We consider a European call option on the underlying stock. The date of expiration of the option is $T=3$, and the strike price is $K=80$. Suppose that at time $t = 1$ the stock price has gone up to $120$, and that the market price of the option turns out to be $50.0$. Show explicitly how you can make an arbitrage profit.>Ignore $p_u$ and $p_d$ and things will be easy. Question 5Consider a two-period Cox-Ross-Rubinstein model with $S_0 = 100$, $u = 1.01$, $d = 1/u$, and $r = 0.005$. Find a replicating portfolio in this model for the put option $g\P{s} = \P{100-s}^+$>There's no authentic answer for this. So the formulas are presented here>>$$q = \ffrac{e^{rT}-1/u}{u-1/u} = \ffrac{ue^{rT}-1}{u^2-1}$$ Question 6Prove directly from the definition of Itô integrals, that$$\int_0^t s\;\dd W_s = tW_t - \int_0^t W_s\;\dd s$$>Split the interval $\SB{0,t}$ into any $n$ subintervals $[t_j,t_{j+1})$, then we obtain>>$$\begin{align}\sum_{j=0}^{n-1} t_j\P{W_{t_{\void_{j+1}}} - W_{t_{\void_j}}}&= \sum_{j=0}^{n-1} t_j W_{t_{\void_{j+1}}} - \sum_{j=0}^{n-1} t_j W_{t_{\void_{j}}}\\&= \sum_{j=0}^{n-1} \P{t_j-t_{j+1}+t_{j+1}} W_{t_{\void_{j+1}}} - \sum_{j=0}^{n-1} t_j W_{t_{\void_{j}}}\\&= \sum_{j=0}^{n-1} \P{t_j-t_{j+1}}W_{t_{\void_{j+1}}} + \sum_{j=0}^{n-1} \P{t_{j+1} W_{t_{\void_{j+1}}} - t_j W_{t_{\void_{j}}}}\\&= -\sum_{j=0}^{n-1} \P{t_{j+1}-t_j}W_{t_{\void_{j+1}}} + tW_t - 0W_0\end{align}$$>>Let the *mesh* of this partition tend to zero, we have>>$$\int_0^t s\;\dd W_s = tW_t - \int_0^t W_s\;\dd s$$ Question 7Use Itô's Formula to show that $\d{\int_0^t s\;\dd W_s = tW_t - \int_0^t W_s\;\dd s}$>An application of Itô's Formula to $tW_t$ gives>>$$\dd \P{tW_t}= \P{W_t + 0 + 0}\dd t + t \dd W_t\Rightarrow tW_t = \int_0^t s\;\dd W_s + \int_0^t W_s\;\dd s$$ Question 8Suppose that $X$ has the stochastic differential $\dd X\P t = \alpha X\P t\dd t + \sigma\dd W\P t$ where $X\P0 = x$ and $\alpha$ is a real number, compute $\Exp\SB{X\P t}$>Denote $Y_t = X_t e^{-\alpha t}$ trying to eliminate the alpha in the SDE:>>$$\begin{align}\dd Y\P t &= \P{-\alpha\cdot e^{-\alpha t}X\P t + e^{-\alpha t}\cdot \alpha X\P t+0}\dd t + \sigma \cdot e^{-\alpha t}\dd W_t\\&= \sigma \cdot e^{-\alpha t}\dd W_t\\\end{align}$$>>Integral both sides of the equation above, with $Y_0 = x\cdot e^0 = x$, we have>>$$Y\P t=x + \sigma \int_0^t e^{-\alpha s}\;\dd W_s \Rightarrow X_t = xe^{\alpha t} + e^{\alpha t}\sigma \int_0^t e^{-\alpha s}\;\dd W_s$$>>Thus, $\Exp\SB{X\P t} = xe^{\alpha t}$ Question 8.extendedNow what if $\dd X\P t = \alpha X\P t\dd t + \sigma X\P t\dd W\P t$ with same conditions?>Easy to find that using the same method we have this is still $xe^{\alpha t}$, and here's another method, by directly solve the SDE: given the equation we have>>$$\dd \log X\P t = \P{\alpha - \ffrac{\sigma^2}{2}}\dd t + \sigma \dd W_t$$>>Take the integral, we have>>$$\log\ffrac{X\P t}{X\P0} = \P{\alpha - \ffrac{\sigma^2}{2}}t + \sigma W_t\Rightarrow X\P t = x\cdot \exp\CB{\P{\alpha - \ffrac{\sigma^2}{2}}t + \sigma W_t}$$>>$$\Exp\SB{X\P t} = x \cdot \exp\CB{\P{\alpha - \ffrac{\sigma^2}{2}}t}\cdot\exp\CB{\ffrac{\sigma^2}{2}t} = xe^{\alpha t}$$ 
Question 9 Let $X$ and $Y$ be given as the solutions to the following system of stochastic differential equations$$\begin{align}\dd X\P{t} &= \alpha X\P{t} \dd t- Y\P{t} \dd W\P{t}, X\P{0} = x_0\\\dd Y\P{t} &= \alpha Y\P{t} \dd t+ X\P{t} \dd W\P{t}, Y\P{0} = y_0\end{align}$$Prove that the process $R$ defined by $R\P{t} = X^2 \P{t} + Y^2 \P{t}$ is deterministic.>$$\begin{align}\dd X^2\P{t} &= \P{0+\alpha X\P{t} \cdot 2X\P{t} + \ffrac{1}{2}Y^2\P{t}\cdot 2}\dd t-Y\P{t}\cdot 2X\P{t}\dd W_t\\&= 2\alpha X^2\P{t} \dd t + Y^2\P{t} \dd t - 2X\P{t}Y\P{t} \dd W_t\\\dd Y^2\P{t} &= \P{0+\alpha Y\P{t} \cdot 2Y\P{t} + \ffrac{1}{2}X^2\P{t}\cdot 2}\dd t+X\P{t}\cdot 2Y\P{t}\dd W_t\\&= 2\alpha Y^2\P{t} \dd t + X^2\P{t} \dd t + 2X\P{t}Y\P{t} \dd W_t\\\dd R &= \P{2\alpha + 1 }\P{X^2 \P{t} + Y^2 \P{t}}\dd t = \P{2\alpha + 1}R\dd t\end{align}$$>The $\dd W_t$ terms cancel when the two differentials are summed, so $R$ satisfies a deterministic ODE with solution $R\P t = \P{x_0^2+y_0^2}e^{\P{2\alpha+1}t}$. Question 10 Suppose that the stock price today is $S_t = 2.00$, interest rate $r = 0$, and the time to maturity is $3$ months. Consider a contingent claim whose (Black-Scholes) price is given by the function$$V\P{t,S_t} = S_t^2 e^{2\P{T-t}}$$where the time is in annual terms.$\P{\text a}$ What is the claim price today?>$$V\P{t,S_t} = S_t^2 e^{2\P{T-t}} = 2^2 e^{2\times3/12} = 4e^{0.5} \approx 6.5949$$$\P{\text b}$ Find the volatility $\sigma$>We can derive the Black-Scholes-Merton equation in the usual way and plug $V$ in:>>$$\begin{align}0 &= \ffrac{\partial V}{\partial t} + \ffrac{1}{2}\sigma^2S_t^2\ffrac{\partial^2 V}{\partial S_t^2} + rS_t\ffrac{\partial V}{\partial S_t} - rV\\&= -2S_t^2 e^{2\P{T-t}} + \ffrac{1}{2}\sigma^2 S_t^2 \times 2e^{2\P{T-t}} + 0 - 0\\\sigma^2 &= 2 \Rightarrow \sigma = \sqrt 2\end{align}$$>>Alternatively, using risk-neutral valuation with $r=0$, since the terminal payoff is $V\P{T,S_T} = S_T^2$, we have>>$$\begin{align}V\P{t} &= \Exp_t^\QQ\SB{S_T^2 e^{-r\P{T-t}}}\\&= \Exp_t^\QQ\SB{S_T^2}\\&= \Exp_t^\QQ\SB{ S_t^2 \exp\CB{2\P{r-\ffrac{\sigma^2}{2}}\P{T-t} + 2\sigma\P{W_T-W_t}} }\\&= S_t^2 e^{-\sigma^2\P{T-t}}\Exp^\QQ_t\SB{e^{2\sigma\P{W_T-W_t}}}\\&= S_t^2 e^{-\sigma^2\P{T-t}}\exp\CB{\ffrac{\P{2\sigma}^2}{2}\P{T-t}}\\&= S_t^2 e^{\sigma^2\P{T-t}}\end{align}$$>>Matching the exponent with $e^{2\P{T-t}}$ gives $\sigma^2 = 2$ and $\sigma = \sqrt 2$. $\P{\text c}$ If the stock at maturity is $S(T) = 2.00$, what is the payoff of the claim at maturity?>$$V\P{T,S_T} = 2^2 e^0 = 4$$ ***
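Before evaluating the closed form in the next cell, here is a quick Monte Carlo cross-check of the price $V_t = S_t^2 e^{(r+\sigma^2)(T-t)}$ from Questions 1 and 10 (a sketch; the sample size and random seed are illustrative assumptions, not part of the exercise).
###Code
import numpy as np

S_t, r, sigma, tau = 2.0, 0.0, np.sqrt(2), 0.25   # Question 10 inputs (3 months)

# simulate S_T under the risk-neutral measure and average the discounted payoff
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
S_T = S_t * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)
mc_price = np.exp(-r * tau) * (S_T**2).mean()

closed_form = S_t**2 * np.exp((r + sigma**2) * tau)
print(mc_price, closed_form)   # the two values should agree to within Monte Carlo error
###Output
_____no_output_____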
###Code
import math
4*math.exp(0.5)
###Output
_____no_output_____ |
lesson07/.ipynb_checkpoints/_Part1_programming_moreOnWidgets_lesson07-checkpoint.ipynb | ###Markdown
Day 7, More on widgets and dashboarding* We'll get into some more of the nitty-gritty details of the different things we can use with widgets now
###Code
# lets import our usual stuff
import pandas as pd
import numpy as np
%matplotlib inline
#import widgets
import ipywidgets
###Output
_____no_output_____
###Markdown
So, this is how we've used widgets before for the most part:
###Code
@ipywidgets.interact(name = ['Linda', 'Tina', 'Louise'])
def print_name(name):
print(name) # just a simple print out
###Output
_____no_output_____
###Markdown
But now, lets go into ipywidgets in a bit more detail and look at specific functions. For example, we can create a little display that increments integer numbers:
###Code
itext = ipywidgets.IntText()
###Output
_____no_output_____
###Markdown
To be able to display our widget we need to use the "display" function. Note: depending on the version of jupyter you are using, you may or may not have to use "display" to display things. We'll do it both ways, starting with display:
###Code
from IPython.display import display
# lets show the thing!
display(itext)
# you can see there are little numbers at the
# end that we can toggle up and down
###Output
_____no_output_____
###Markdown
Note that if I make another display of itext, this one and the previous one have values that are tied together:
###Code
display(itext)
###Output
_____no_output_____
###Markdown
Also, the value of itext is then stored - so we could in theory generate a toggle and then do stuff with its value:
###Code
itext.value
###Output
_____no_output_____
###Markdown
Note that if I go up and change the toggle value, I have to re-run this cell to print out the newly stored value. I can also set the value "by hand":
###Code
itext.value = 10
###Output
_____no_output_____
###Markdown
Once I run this cell, the toggle values are updated above. We can also do fun things like create progress bars:
###Code
ip = ipywidgets.IntProgress()
display(ip)
###Output
_____no_output_____
###Markdown
So, it's probably hard to see, but there is indeed a bar there. I can show this better by setting the progress to 90% by hand:
###Code
ip.value = 90
###Output
_____no_output_____
###Markdown
We can also check out more of the integer widgets, like this nifty range slider:
###Code
irange = ipywidgets.IntRangeSlider(min = -10, max = 10, step = 1)
display(irange)
###Output
_____no_output_____
###Markdown
Again, the value of the slider is stored if we want to use it:
###Code
irange.value
###Output
_____no_output_____
###Markdown
We can use ipywidgets to make clickers:
###Code
button1 = ipywidgets.Button(description = "I am a Clicker")
display(button1)
###Output
_____no_output_____
###Markdown
We note that nothing happens when we press this button. Now, lets make a function that can act when we press this button:
###Code
def say_click(event):
print("I have clicked. Click.")
###Output
_____no_output_____
###Markdown
Now we can tell our button that we've created to say click when we click.
###Code
button1.on_click(say_click)
###Output
_____no_output_____
###Markdown
We'll note that this is "back reactive" - so now if we go up to our displayed button and press it, it says it has clicked, where before it did nothing. Now that we've played with some toy examples, let's see how we can combine them to make interactive things and how we can layer widgets to create more complex interactives.
###Code
# lets start by making a progress bar again:
ip = ipywidgets.IntProgress()
# now, lets add in a button that will add 10
button_plus = ipywidgets.Button(description = "+10")
# and one that will subtract 10
button_minus = ipywidgets.Button(description = "-10")
###Output
_____no_output_____
###Markdown
Now we'll use one of the layout features of widgets to put these in a horizontal order with ```HBox```:
###Code
ipywidgets.HBox([button_minus, ip, button_plus])
###Output
_____no_output_____
###Markdown
We note that if we click these, nothing happens; this is because we haven't associated actions with our clicks as before. So, no matter what we press, we don't see anything happening:
###Code
ip.value
###Output
_____no_output_____
###Markdown
Let's now associate a change in the value of our progress bar when we click the down button:
###Code
def click_down(event):
ip.value -= 10
###Output
_____no_output_____
###Markdown
And lets tie this change in value to the click with the "on_click" function of our down button:
###Code
button_minus.on_click(click_down)
###Output
_____no_output_____
###Markdown
Same type of thing, but for our up button:
###Code
def click_up(event):
ip.value += 10
button_plus.on_click(click_up)
###Output
_____no_output_____
###Markdown
Let's try this again:
###Code
ipywidgets.HBox([button_minus, ip, button_plus])
###Output
_____no_output_____
###Markdown
Note again that the associations with the up and down click functions are in a sense "back reactive" - so now our previous instance of this clicker and progress bar updates as well! Let's try one more example - an integer slider:
###Code
islider = ipywidgets.IntSlider(min = 0,
max = 10,
step = 1,
orientation = 'vertical')
###Output
_____no_output_____
###Markdown
Let's give this slider a base color that is sort of purple-y, using a hex code:
###Code
islider.style.handle_color = "#750075"
islider
###Output
_____no_output_____
###Markdown
Note here (and above with our boxes) I'm not using "display". Note this slider slides up and down, nothing too exciting. Let's create a new widget object called a color picker:
###Code
cp = ipywidgets.ColorPicker()
cp
###Output
_____no_output_____
###Markdown
When we show this we can click on the little box and it pops up a color picker we can mess around with. Now, lets link the slider and the color picker:
###Code
ipywidgets.link( (cp, 'value'), (islider.style, 'handle_color'))
ipywidgets.VBox([cp, islider])
###Output
_____no_output_____
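###Markdown
Besides `link`, another common pattern is to `observe` a widget's value and run a callback whenever it changes; the sketch below (the handler name is just an illustrative choice) prints the slider's new value into an `Output` widget.
###Code
# react to slider changes with an observer callback
out = ipywidgets.Output()

def on_slider_change(change):
    # 'change' carries the old and new values; change['new'] is the updated one
    with out:
        print("slider moved to", change['new'])

islider.observe(on_slider_change, names='value')
display(islider, out)
###Output
_____no_output_____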
###Markdown
Note how previous instances of these objects are tied together; also note how the .link function sort of intuitively knows how to link these two interactive widgets together. A more practical example: Plotting analytical orbits Let's now change over from toy examples to something that we can use. We'll make use of "interact" in ipywidgets to do this.
###Code
import matplotlib.pyplot as plt
theta = np.arange(0, 2*np.pi, 0.001)
a = 5
def plot_ellipse(ecc):
#r = b/np.sqrt(1-ecc*(np.cos(theta))**2) ## double check formula, wrong!
r = a*(1-ecc**2)/(1-ecc*np.cos(theta))
x = r*np.cos(theta)
y = r*np.sin(theta)
plt.plot(x, y)
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
# note here I'm doing this differently then using it as
# a "decorator" function => this is just a different way to call things
ipywidgets.interact(plot_ellipse, ecc = (0.0,0.99, 0.01))
###Output
_____no_output_____ |
1-foundations/python/fsnd01_06_classes_turtles.ipynb | ###Markdown
Lesson 06. Classes: Turtle graphics**Udacity Full Stack Web Developer Nanodegree program**Part 01. Programming fundamentals and the web[Programming foundations with Python](https://www.udacity.com/course/programming-foundations-with-python--ud036)Brendon Smithbr3ndonland 01. Course Map 02. Drawing Turtles (Story)Drawing shapes like squares, circles, and fractals is one of the easiest ways to learn about classes. 03. Drawing Turtles (Output)The output will be a series of squares that rotate around to form a circle. 04. Quiz: How to Draw a Square*What instructions will you provide if Kunal is walking around on a carpet trying to draw a square with his steps?*1. walk east 5 paces2. walk south 5 paces3. walk west 5 paces4. walk north 5 paces 05. Drawing a Square*Why is the Python function called turtle?* -> from the [Python documentation](https://docs.python.org/3/library/turtle.html?highlight=turtles):> Turtle graphics is a popular way for introducing programming to kids. It was part of the original Logo programming language developed by Wally Feurzig and Seymour Papert in 1966.>> Imagine a robotic turtle starting at (0, 0) in the x-y plane. After an `import turtle`, give it the command `turtle.forward(15)`, and it moves (on-screen!) 15 pixels in the direction it is facing, drawing a line as it moves. Give it the command `turtle.right(25)`, and it rotates in-place 25 degrees clockwise.Resources in the instructor notes:* Changing Turtle's Shape* Changing Turtle's Color* Changing Turtle's Speed* Use this discussion thread to post your response: Drawing Square TurtlesOther resources I found:* [Turtle.py | The Pragmatic Procrastinator](https://ianwitham.wordpress.com/2010/04/30/turtle-py/)* [Stack Overflow Turtle colors](https://stackoverflow.com/questions/18621285/how-do-you-add-100-colors-using-a-loop-into-a-turtle-graphics-design-code)* [matplotlib pastel colors](https://matplotlib.org/examples/pylab_examples/colours.html) 06. Change Turtle Shape, Color, and SpeedNote that in Jupyter notebook I seem to be getting a turtle `Terminator` error every second time. It may still be running in the notebook's Python kernel. Running the code in the shell also works.```pythonimport turtledef draw_square(): """Draw a square with turtle graphics.""" window = turtle.Screen() window.bgcolor('cyan') brad = turtle.Turtle() brad.shape('turtle') brad.color('pink') brad.speed(3) brad.forward(100) brad.right(90) brad.forward(100) brad.right(90) brad.forward(100) brad.right(90) brad.forward(100) brad.right(90) turtle.done()draw_square()``` 07. Where Does Turtle Come From?turtle (lowercase) is a module in the Python standard library. Turtle (uppercase) is a class within the turtle module.A class is "a neatly packaged box, that puts things together really well. Within `turtle.Turtle()` are various functions, like `def __init__`. 08. Reading Turtle DocumentationAlready found [it](https://docs.python.org/3/library/turtle.html). 09. Two TurtlesAdding a second turtle that starts where the first stops:```pythonimport turtledef draw_square_and_circle(): """Draw a square and circle with turtle graphics.""" window = turtle.Screen() window.bgcolor('cyan') brad = turtle.Turtle() brad.shape('turtle') brad.color('pink') brad.speed(3) brad.forward(100) brad.right(90) brad.forward(100) brad.right(90) brad.forward(100) brad.right(90) brad.forward(100) brad.right(90) angie = turtle.Turtle() angie.shape('arrow') angie.color('blue') angie.circle(100) turtle.done()draw_square_and_circle()``` 10. 
What's Wrong With This Code?My answer: *Not efficient - lots of repeats. Should include loops.*Correct. Repetitive lines of code, and the name of the function needs to be changed now that we also draw a circle. 11. Quiz: Improving Code QualityThere is no triangle function, but you can draw two lines and then a diagonal one connecting them by returning to the home position.```pythonimport turtledef draw_shapes(): """Draw shapes with turtle graphics.""" window = turtle.Screen() window.bgcolor('cyan') Draw square brad = turtle.Turtle() brad.shape('turtle') brad.color('pink') brad.speed(3) for i in range(0, 4): brad.forward(100) brad.right(90) Draw circle angie = turtle.Turtle() angie.shape('arrow') angie.color('blue') angie.circle(100) Draw extra credit triangle beavis = turtle.Turtle() beavis.color('brown') beavis.backward(300) beavis.left(90) beavis.backward(300) beavis.home() turtle.done()draw_shapes()```Posted response in "Making Turtle Code Better" Udacity forum. 12. Quiz: What is a Class?A class is like a blueprint. The objects or instances are like examples of the blueprint. 13 Quiz: Making a Circle out of Squares```pythonimport turtledef draw_square(some_turtle): """Draw a square with turtle graphics.""" for i in range(0, 4): some_turtle.forward(100) some_turtle.right(90)def many_squares(): """Use turtle graphics to form a circle out of many squares.""" window = turtle.Screen() window.bgcolor('cyan') brad = turtle.Turtle() brad.shape('turtle') brad.color('pink') brad.speed(3) for i in range(0, 36): draw_square(brad) brad.right(10) turtle.done()many_squares()``` 14. Quiz: Turtle Mini-ProjectI created a colorful spiral, transporting the viewer into the Tron game grid. The background image is from [Wikipedia](https://en.wikipedia.org/wiki/File:Tron_Lightcycles.jpg/media/File:Tron_Lightcycles.jpg).
###Code
import turtle
import colorsys
def spiral_into_the_grid():
"""Use turtle graphics to create a colorful spiral."""
turtle.setup(width=1600, height=900)
turtle.speed(0)
turtle.hideturtle()
window = turtle.Screen()
window.bgpic('img/TRON.gif')
for i in range(1250):
colors = colorsys.hsv_to_rgb(i / 1250, 1.0, 1.0)
turtle.color(colors)
turtle.forward(i)
turtle.left(115)
turtle.done()
spiral_into_the_grid()
###Output
_____no_output_____ |
notebooks/analysis_4class.ipynb | ###Markdown
Since XG Boost is a boosted decision tree and performed better than logistic regression, I chose to run the grid search only on the XG Boost model to save computing time.
###Code
# define parameters to search
params_xgb = {
'booster':['gbtree', 'gblinear'],
    'max_delta_step': [10000, 1000],
'max_depth':[8]
}
# run grid search
xgb_gs = mylib.run_grid_search(params_xgb, xgb, X_train, y_train)
# Evaluate model
xgb_gs_scores = mylib.model_scores(xgb_gs, 'xgb_gridsearch',
X_train, X_test,
y_train, y_test)
# Pickle model for later use
pickle.dump(xgb_gs, open('/content/gdrive/MyDrive/Github/capstone/data/model_xgb_gs_4class-2.pkl', 'wb'))
xgb_gs_scores
###Output
_____no_output_____
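###Markdown
`mylib.run_grid_search` is a project-specific helper; a rough stand-in using plain scikit-learn (a sketch only -- it assumes `xgb` is the estimator defined earlier and that accuracy is an acceptable scoring metric) might look like this.
###Code
from sklearn.model_selection import GridSearchCV

def run_grid_search_sketch(param_grid, estimator, X, y, cv=5):
    """Cross-validated grid search; returns the refit best estimator."""
    gs = GridSearchCV(estimator, param_grid, cv=cv, scoring='accuracy', n_jobs=-1)
    gs.fit(X, y)
    print('best params:', gs.best_params_)
    print('best CV accuracy:', gs.best_score_)
    return gs.best_estimator_

# e.g. best_xgb = run_grid_search_sketch(params_xgb, xgb, X_train, y_train)
###Output
_____no_output_____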
###Markdown
Train and evaluate model on entire data set for deployment
###Code
X = pd.concat([X_train, X_test])
y = pd.concat([y_train, y_test])
final_model = pickle.load(open('data/model_xgb_gs_4class.pkl', 'rb'))
# Train and evaluate model on entire data set for deployment
final_model.fit(X, y)
y_pred = final_model.predict(X)
# Pickle model for deployment
pickle.dump(final_model, open('/content/gdrive/MyDrive/Github/capstone/data/final_model.pkl', 'wb'))
###Output
_____no_output_____ |
The GitHub History of the Scala Language/notebook.ipynb | ###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_two.append(pulls_one)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pulls.merge(pull_files, on='pid')
data.head()
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
# # Create a column that will store the month
# data['month'] = data['date'].dt.month
# # Create a column that will store the year
# data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby([data['date'].dt.year, data['date'].dt.month])['pid'].count()
# Plot the results with ticks every 10 entries
loc = plticker.MultipleLocator(base=10.0) # this locator puts ticks at regular intervals
counts.plot(kind='bar', figsize = (12,4)).xaxis.set_major_locator(loc)
###Output
_____no_output_____
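###Markdown
An equivalent (and arguably simpler) way to get calendar-month counts -- a sketch, not part of the original solution -- is to group on a monthly period directly:
###Code
# group pull requests by calendar month using a Period column
monthly_counts = data.groupby(data['date'].dt.to_period('M'))['pid'].count()
monthly_counts.plot(kind='bar', figsize=(12, 4))
###Output
_____no_output_____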
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# Group by the submitter
by_user = data.groupby('user')['pid'].count()
# Plot the histogram and hide the x ticks due to confusion
by_user.plot(kind='bar', figsize = (12,4))
ax1 = plt.axes()
x_axis = ax1.axes.get_xaxis()
x_axis.set_visible(False)
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.sort_values(by = 'date').tail(10)
last_10
# Join the two data sets
joined_pr = pull_files.merge(last_10, on='pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the pull requests that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
author_counts.nlargest(3, 'file')
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = pulls.merge(file_pr, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the projects, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest have the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recent those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Loading the NIPS papersThe NIPS conference (Neural Information Processing Systems) is one of the most prestigious yearly events in the machine learning community. At each NIPS conference, a large number of research papers are published. Over 50,000 PDF files were automatically downloaded and processed to obtain a dataset on various machine learning techniques. These NIPS papers are stored in datasets/papers.csv. The CSV file contains information on the different NIPS papers that were published from 1987 until 2017 (30 years!). These papers discuss a wide variety of topics in machine learning, from neural networks to optimization methods and many more.First, we will explore the CSV file to determine what type of data we can use for the analysis and how it is structured. A research paper typically consists of a title, an abstract and the main text. Other data such as figures and tables were not extracted from the PDF files. Each paper discusses a novel technique or improvement. In this analysis, we will focus on analyzing these papers with natural language processing methods.
###Code
# Importing modules
# -- YOUR CODE HERE --
import pandas as pd
# Read datasets/papers.csv into papers
papers = pd.read_csv('datasets/papers.csv')
# Print out the first rows of papers
# -- YOUR CODE HERE --
print(papers.head())
###Output
id year title event_type \
0 1 1987 Self-Organization of Associative Database and ... NaN
1 10 1987 A Mean Field Theory of Layer IV of Visual Cort... NaN
2 100 1988 Storing Covariance by the Associative Long-Ter... NaN
3 1000 1994 Bayesian Query Construction for Neural Network... NaN
4 1001 1994 Neural Network Ensembles, Cross Validation, an... NaN
pdf_name abstract \
0 1-self-organization-of-associative-database-an... Abstract Missing
1 10-a-mean-field-theory-of-layer-iv-of-visual-c... Abstract Missing
2 100-storing-covariance-by-the-associative-long... Abstract Missing
3 1000-bayesian-query-construction-for-neural-ne... Abstract Missing
4 1001-neural-network-ensembles-cross-validation... Abstract Missing
paper_text
0 767\n\nSELF-ORGANIZATION OF ASSOCIATIVE DATABA...
1 683\n\nA MEAN FIELD THEORY OF LAYER IV OF VISU...
2 394\n\nSTORING COVARIANCE BY THE ASSOCIATIVE\n...
3 Bayesian Query Construction for Neural\nNetwor...
4 Neural Network Ensembles, Cross\nValidation, a...
###Markdown
2. Preparing the data for analysisFor the analysis of the papers, we are only interested in the text data associated with the paper as well as the year the paper was published in.We will analyze this text data using natural language processing. Since the file contains some metadata such as id's and filenames, it is necessary to remove all the columns that do not contain useful text information.
###Code
# Remove the columns
# -- YOUR CODE HERE --
papers.drop(['id','event_type','pdf_name'],axis=1,inplace=True)
# Print out the first rows of papers
# -- YOUR CODE HERE --
print(papers.head())
###Output
year title abstract \
0 1987 Self-Organization of Associative Database and ... Abstract Missing
1 1987 A Mean Field Theory of Layer IV of Visual Cort... Abstract Missing
2 1988 Storing Covariance by the Associative Long-Ter... Abstract Missing
3 1994 Bayesian Query Construction for Neural Network... Abstract Missing
4 1994 Neural Network Ensembles, Cross Validation, an... Abstract Missing
paper_text
0 767\n\nSELF-ORGANIZATION OF ASSOCIATIVE DATABA...
1 683\n\nA MEAN FIELD THEORY OF LAYER IV OF VISU...
2 394\n\nSTORING COVARIANCE BY THE ASSOCIATIVE\n...
3 Bayesian Query Construction for Neural\nNetwor...
4 Neural Network Ensembles, Cross\nValidation, a...
###Markdown
3. Plotting how machine learning has evolved over timeIn order to understand how the machine learning field has recently exploded in popularity, we will begin by visualizing the number of publications per year. By looking at the number of published papers per year, we can understand the extent of the machine learning 'revolution'! Typically, this significant increase in popularity is attributed to the large amounts of compute power, data and improvements in algorithms.
###Code
# Group the papers by year
groups = papers.groupby('year')
# Determine the size of each group
counts = groups.size()
# Visualise the counts as a bar plot
import matplotlib.pyplot
%matplotlib inline
# -- YOUR CODE HERE --
counts.plot(kind='bar')
###Output
_____no_output_____
###Markdown
4. Preprocessing the text dataLet's now analyze the titles of the different papers to identify machine learning trends. First, we will perform some simple preprocessing on the titles in order to make them more amenable for analysis. We will use a regular expression to remove any punctuation in the title. Then we will perform lowercasing. We'll then print the titles of the first rows before and after applying the modification.
###Code
# Load the regular expression library
# -- YOUR CODE HERE --
import re
# Print the titles of the first rows
print(papers['title'].head())
# Remove punctuation
papers['title_processed'] = papers['title'].map(lambda x: re.sub('[,\.!?]', '', x))
# Convert the titles to lowercase
papers['title_processed'] = papers['title_processed'].str.lower()
# Print the processed titles of the first rows
# -- YOUR CODE HERE --
print(papers.head())
###Output
0 Self-Organization of Associative Database and ...
1 A Mean Field Theory of Layer IV of Visual Cort...
2 Storing Covariance by the Associative Long-Ter...
3 Bayesian Query Construction for Neural Network...
4 Neural Network Ensembles, Cross Validation, an...
Name: title, dtype: object
year title abstract \
0 1987 Self-Organization of Associative Database and ... Abstract Missing
1 1987 A Mean Field Theory of Layer IV of Visual Cort... Abstract Missing
2 1988 Storing Covariance by the Associative Long-Ter... Abstract Missing
3 1994 Bayesian Query Construction for Neural Network... Abstract Missing
4 1994 Neural Network Ensembles, Cross Validation, an... Abstract Missing
paper_text \
0 767\n\nSELF-ORGANIZATION OF ASSOCIATIVE DATABA...
1 683\n\nA MEAN FIELD THEORY OF LAYER IV OF VISU...
2 394\n\nSTORING COVARIANCE BY THE ASSOCIATIVE\n...
3 Bayesian Query Construction for Neural\nNetwor...
4 Neural Network Ensembles, Cross\nValidation, a...
title_processed
0 self-organization of associative database and ...
1 a mean field theory of layer iv of visual cort...
2 storing covariance by the associative long-ter...
3 bayesian query construction for neural network...
4 neural network ensembles cross validation and ...
###Markdown
5. A word cloud to visualize the preprocessed text dataIn order to verify whether the preprocessing happened correctly, we can make a word cloud of the titles of the research papers. This will give us a visual representation of the most common words. Visualisation is key to understanding whether we are still on the right track! In addition, it allows us to verify whether we need additional preprocessing before further analyzing the text data.Python has a massive number of open libraries! Instead of trying to develop a method to create word clouds ourselves, we'll use Andreas Mueller's wordcloud library.
###Code
# Import the wordcloud library
# -- YOUR CODE HERE --
import wordcloud
# Join the different processed titles together.
s = " "
long_string = s.join(papers['title_processed'])
# Create a WordCloud object
wordcloud = wordcloud.WordCloud()
# Generate a word cloud
# -- YOUR CODE HERE --
wordcloud.generate(long_string)
# Visualize the word cloud
wordcloud.to_image()
###Output
_____no_output_____
###Markdown
6. Prepare the text for LDA analysisThe main text analysis method that we will use is latent Dirichlet allocation (LDA). LDA is able to perform topic detection on large document sets, determining what the main 'topics' are in a large unlabeled set of texts. A 'topic' is a collection of words that tend to co-occur often. The hypothesis is that LDA might be able to clarify what the different topics in the research titles are. These topics can then be used as a starting point for further analysis.LDA does not work directly on text data. First, it is necessary to convert the documents into a simple vector representation. This representation will then be used by LDA to determine the topics. Each entry of a 'document vector' will correspond with the number of times a word occurred in the document. In conclusion, we will convert a list of titles into a list of vectors, all with length equal to the vocabulary. For example, 'Analyzing machine learning trends with neural networks.' would be transformed into [1, 0, 1, ..., 1, 0].We'll then plot the 10 most common words based on the outcome of this operation (the list of document vectors). As a check, these words should also occur in the word cloud.
###Code
# Load the library with the CountVectorizer method
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
# Helper function
def plot_10_most_common_words(count_data, count_vectorizer):
import matplotlib.pyplot as plt
words = count_vectorizer.get_feature_names()
total_counts = np.zeros(len(words))
for t in count_data:
total_counts+=t.toarray()[0]
count_dict = (zip(words, total_counts))
count_dict = sorted(count_dict, key=lambda x:x[1], reverse=True)[0:10]
words = [w[0] for w in count_dict]
counts = [w[1] for w in count_dict]
x_pos = np.arange(len(words))
plt.bar(x_pos, counts,align='center')
plt.xticks(x_pos, words, rotation=90)
plt.xlabel('words')
plt.ylabel('counts')
plt.title('10 most common words')
plt.show()
# Initialise the count vectorizer with the English stop words
count_vectorizer = CountVectorizer(stop_words = 'english')
# Fit and transform the processed titles
count_data = count_vectorizer.fit_transform(papers['title_processed'])
# Visualise the 10 most common words
plot_10_most_common_words(count_data, count_vectorizer)
###Output
_____no_output_____
###Markdown
7. Analysing trends with LDAFinally, the research titles will be analyzed using LDA. Note that in order to process a new set of documents (e.g. news articles), a similar set of steps will be required to preprocess the data. The flow that was constructed here can thus easily be exported for a new text dataset.The only parameter we will tweak is the number of topics in the LDA algorithm. Typically, one would calculate the 'perplexity' metric to determine which number of topics is best and iterate over different amounts of topics until the lowest 'perplexity' is found. For now, let's play around with a different number of topics. From there, we can distinguish what each topic is about ('neural networks', 'reinforcement learning', 'kernel methods', 'gaussian processes', etc.).
###Code
import warnings
warnings.simplefilter("ignore", DeprecationWarning)
# Load the LDA model from sk-learn
from sklearn.decomposition import LatentDirichletAllocation as LDA
# Helper function
def print_topics(model, count_vectorizer, n_top_words):
words = count_vectorizer.get_feature_names()
for topic_idx, topic in enumerate(model.components_):
print("\nTopic #%d:" % topic_idx)
print(" ".join([words[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
# Tweak the two parameters below (use int values below 15)
number_topics = 20
number_words = 20
# Create and fit the LDA model
lda = LDA(n_components=number_topics)
lda.fit(count_data)
# Print the topics found by the LDA model
print("Topics found via LDA:")
print_topics(lda, count_vectorizer, number_words)
###Output
Topics found via LDA:
Topic #0:
efficient learning robust supervised systems speech semi matching nonlinear data dependent recognition free ranking using domain coordinate manifold architecture signals
Topic #1:
models multi large learning scale graphical dynamics selection brain inference task games inverse discrete general global completion constrained planning factor
Topic #2:
clustering linear information gradient learning unsupervised minimization descent natural maximum images margin regularized embedding algorithms exponential properties line entropy likelihood
Topic #3:
analysis framework component value neuronal independent learning rules means principal tensor relational effects decoding data projection components structures sum correlated
Topic #4:
model detection structure human using reduction causal real learning infinite comparison near backpropagation objects auditory variables forward empirical dimensionality measures
Topic #5:
classification propagation applications metric weighted maps expectation pairwise max level consistent message passing use nonconvex protein imaging strategies budget construction
Topic #6:
stochastic optimal inference dynamic local algorithms neurons problems approximate programming spiking pca power linear language discovery noisy differential patterns family
Topic #7:
based estimation kernel matrix method distributed statistical graph learning memory maximization attention invariant pattern joint distance nets making predicting recognition
Topic #8:
learning gaussian adaptive regression reinforcement non processes process using active control model vector machines support theory generalized sequential distributions decomposition
Topic #9:
spike connectionist factorization covariance activity matrices partial input synaptic accelerated social network short timing hard term metrics long noise role
Topic #10:
visual learning multiple recognition object spectral model 3d computational cortex priors view end using tasks instance evaluation pursuit smooth hashing
Topic #11:
sparse learning convex decision convergence machine complexity sample mixture coding boosting trees regret label model rate boltzmann output unified exact
Topic #12:
bayesian methods latent approximation order learning binary monte carlo variable density scalable bandit improved dirichlet sparsity distribution using functional pac
Topic #13:
data approach variational bounds online kernels inference nonparametric learning risk single estimating bayes loss predictive prior structural case understanding collaborative
Topic #14:
probabilistic hierarchical generative point discriminative fields classifiers sequence parameter population exploration learning embeddings codes complex spaces simple plasticity cortical communication
Topic #15:
markov feature high search dimensional temporal regularization policy hidden function parallel networks features processing filtering dual performance using identification dimension
Topic #16:
optimization graphs generalization dynamical constraints feedback context energy norm perception weight size combinatorial direct flow pruning symmetric sensory model optical
Topic #17:
deep fast sampling bandits motion adversarial belief problem neuron learning greedy driven armed solving poisson interaction like posterior aggregation importance
Topic #18:
algorithm random rank low submodular conditional segmentation learning action minimax implementation separation subspace new em blind localization squares pose factorial
Topic #19:
neural networks network learning time using prediction training image recurrent structured modeling functions online representations continuous state analog convolutional error
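###Markdown
The perplexity-based selection of the number of topics mentioned above is not carried out in this notebook; a minimal sketch (reusing `count_data` from the earlier cell, with illustrative candidate values) could look like this -- ideally the perplexity would be computed on held-out documents rather than in-sample.
###Code
# sketch: compare candidate topic counts by perplexity (lower is better)
for n in [5, 10, 15, 20]:
    lda_n = LDA(n_components=n, random_state=0)
    lda_n.fit(count_data)
    print(n, 'topics -> perplexity:', lda_n.perplexity(count_data))
###Output
_____no_output_____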
###Markdown
8. The future of machine learningMachine learning has become increasingly popular over the past years. The number of NIPS conference papers has risen exponentially, and people are continuously looking for ways on how they can incorporate machine learning into their products and services.Although this analysis focused on analyzing machine learning trends in research, a lot of these techniques are rapidly being adopted in industry. Following the latest machine learning trends is a critical skill for a data scientist, and it is recommended to continuously keep learning by going through blogs, tutorials, and courses.
###Code
# The historical data indicates that:
more_papers_published_in_2018 = True
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_one.append(pulls_two)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pd.merge(pulls, pull_files, on='pid')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.
###Code
%matplotlib inline
# Create a column that will store the month and the year, as a string
data['month_year'] = pd.to_datetime(data['date']).dt.to_period('M')
# Group by month_year and count the pull requests
counts = data.groupby('month_year').count()
# Plot the results
counts.plot(kind='bar')
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby('user').count()
# Plot the histogram
by_user.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.sort_values('date').iloc[-10:]#nlargest(10,'date',keep='last')
# Join the two data sets
joined_pr = pd.merge(last_10, pull_files, on ='pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data.loc[:,pd.IndexSlice['file']]==file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
author_counts.sort_values('pid', ascending=False).iloc[:3]
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files.loc[:,pd.IndexSlice['file']]==file]
# Merge the obtained results with the pulls DataFrame
joined_pr = pd.merge(file_pr, pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10,'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level picture of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and by how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file']==file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_two.append(pulls_one, ignore_index=True)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pd.merge(pulls, pull_files, on='pid')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.
###Code
%matplotlib inline
# Create a column that will store the month and the year, as a string
data['month_year'] = data.apply(lambda x: str(x['date'].year) + '-' + str(x['date'].month), axis=1)
# Group by month_year and count the pull requests
counts = data.groupby('month_year').agg({'date':'count'})
# Plot the results
counts.plot.bar()
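# A complementary sketch (my own addition, not part of the assigned task): grouping on a
# zero-padded 'YYYY-MM' string keeps the bars in chronological order ('2012-02' sorts
# before '2012-10'), and nunique() counts each pull request once even though `data`
# has one row per (pull request, file) pair.
monthly_prs = data.groupby(data['date'].dt.strftime('%Y-%m'))['pid'].nunique()
monthly_prs.plot(kind='bar', figsize=(12, 4))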
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," a code base that is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby('user').agg({'date':'count'})
# Plot the histogram
by_user.plot.hist(bins=20)
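# A small follow-up sketch (my own addition): the share of users with exactly one pull
# request summarizes how heavy-tailed the histogram above is; a very high share would
# mean most contributors never come back for a second pull request.
prs_per_user = pulls.groupby('user')['pid'].count()
one_time_share = (prs_per_user == 1).mean()
print('Share of one-time contributors: {:.1%}'.format(one_time_share))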
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing elsewhere might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = pd.merge(last_10, pull_files, on='pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
print(author_counts.nlargest(3, 'date'))
###Output
pid date file month_year
user
xeno-by 11 11 11 11
retronym 5 5 5 5
soc 4 4 4 4
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = data[data['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = pd.merge(file_pr, pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(file_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level picture of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
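# An equivalent reshaping sketch (my own addition, assuming the same `counts` frame as
# above): because each (user, year) pair appears only once after the groupby, unstack()
# produces the same wide table as pivot_table().
alt_wide = counts.set_index(['user', 'date'])['pid'].unstack('user').fillna(0)
alt_wide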
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and by how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted directly from GitHub, is comprised of two files:pulls.csv contains the basic information about the pull requests.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls = pd.read_csv('datasets/pulls.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Cleaning the dataThe raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pulls.merge(pull_files, on='pid')
data
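# A quick sanity-check sketch (my own addition): with an inner join on 'pid', and one row
# per pull request in `pulls`, `data` should have exactly one row per (pull request, file)
# pair whose pid appears in both frames.
print(len(pulls), len(pull_files), len(data))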
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.
###Code
%matplotlib inline
# Create a column that will store the month and the year, as a string
pulls['month_year'] = pulls.apply(lambda x: str(x['date'].year) + '-' + str(x['date'].month), axis = 1)
# Group by month_year and count the pull requests
counts = pulls.groupby('month_year').count()
# Plot the results
counts.plot.bar()
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," a code base that is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = pulls.groupby('user').agg({
'pid':'count'
})
# Plot the histogram
by_user.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing elsewhere might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = pull_files.merge(last_10, on='pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
print(author_counts.nlargest(3, 'pid'))
###Output
pid date file
user
xeno-by 11 11 11
retronym 5 5 5
soc 4 4 4
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level picture of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', pulls['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and by how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index = 'date',columns = 'user',values = 'pid', fill_value = 0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted directly from GitHub, is comprised of two files:pulls.csv contains the basic information about the pull requests.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls = pd.read_csv('datasets/pulls.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Cleaning the dataThe raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pulls.merge(pull_files, on='pid')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.
###Code
%matplotlib inline
# Create a column that will store the month and the year, as a string
pulls['month_year'] = pulls.apply(
lambda x: str(x['date'].year) + '-'+ str(x['date'].month),
axis=1)
# Group by month_year and count the pull requests
counts = pulls.groupby('month_year')['pid'].count()
# Plot the results
counts.plot(kind='bar')
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," a code base that is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = pulls.groupby('user')['pid'].count()
# Plot the histogram
by_user.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing elsewhere might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = last_10.merge(pull_files, on='pid')
# Identify the unique files
files = set(joined_pr['file'].unique())
# Print the results
files
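# A follow-up sketch (my own addition): grouping the recently touched files by their
# top-level directory gives a rough picture of where the current activity concentrates.
joined_pr['file'].str.split('/').str[0].value_counts()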
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file']==file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
print(author_counts.nlargest(3, 'pid'))
###Output
pid date file
user
xeno-by 11 11 11
retronym 5 5 5
soc 4 4 4
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file']==file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level picture of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', pulls['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
data.head()
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and by how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file']==file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
pulls_one.head() , pulls_two.head() , pull_files.head()
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pd.concat([pulls_one, pulls_two], axis=0)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
pulls.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 6200 entries, 0 to 2903
Data columns (total 3 columns):
pid 6200 non-null int64
user 6200 non-null object
date 6200 non-null datetime64[ns, UTC]
dtypes: datetime64[ns, UTC](1), int64(1), object(1)
memory usage: 193.8+ KB
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
pulls.head(), pull_files.head()
# Merge the two DataFrames
data = pd.merge(pulls, pull_files, on='pid', how='inner')
data.shape
data.head()
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
# Alternative way
data['year_month'] = data['date'].astype('str').apply(lambda x: x.strip()[0:7])
data.head()
counts = data.groupby(by='year_month').agg({'pid':'count'})
counts.head()
%matplotlib inline
# Create a column that will store the month
data['month'] = pd.DatetimeIndex(data['date']).month
# Create a column that will store the year
data['year'] = pd.DatetimeIndex(data['date']).year
# Group by the month and year and count the pull requests
counts = data.groupby(by=['year','month']).agg({'pid':'count'})
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," a code base that is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby(by='user').agg({'pid':'count'})
by_user
# Plot the histogram
by_user.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing elsewhere might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.sort_values('date', ascending=False)[:10]
last_10
# Join the two data sets
joined_pr = pd.merge(last_10, pull_files, on='pid')  # equivalently: pull_files.merge(last_10, on='pid')
joined_pr.head()
# Identify the unique files
files = pd.DataFrame(joined_pr['file'].unique(), columns=['file'])
# Alternative: files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# pull_files [pull_files['file'] == file]
# data [data['file'] == file].groupby(by='user').agg({'pid':'count'}).sort_values('pid', ascending=False)
# # Identify the commits that changed the file
file_pr = data[data['file'] == file]
file_pr
# # Count the number of changes made by each developer
author_counts = file_pr.groupby('user').agg({'pid':'count'})
author_counts.head()
# # Print the top 3 developers
print(author_counts.sort_values('pid', ascending=False)[:3])
###Output
pid
user
xeno-by 11
retronym 5
soc 4
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# # Select the pull requests that changed the target file
file_pr =pull_files[pull_files['file'] == file]
file_pr
# # Merge the obtained results with the pulls DataFrame
joined_pr = pd.merge(file_pr, pulls, on='pid', how='inner')
joined_pr
# # Find the users of the last 10 most recent pull requests
users_last_10 = joined_pr.sort_values('date', ascending=False)[:10]
users_last_10 = set(users_last_10['user'])
# # Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level picture of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
by_author
# by_author['date'].astype(str)[0:4]
# # Count the number of pull requests submitted each year
# by_author['year'] = pd.DatetimeIndex(by_author['date']).year
counts = by_author.groupby(['user', by_author['date'].dt.year]).agg({'pid': 'count'})
counts
# # Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
counts_wide
# # Plot the results
counts_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and by how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
by_author
# # Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
by_file
# # Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
grouped
# # Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0.0)
by_file_wide
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_one.append(pulls_two)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc = True)
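# A quick sanity-check sketch (my own addition): after the conversion the column should be
# a timezone-aware datetime64 in UTC, which makes comparisons and sorting reliable.
print(pulls['date'].dtype)  # expected: datetime64[ns, UTC]
print(pulls['date'].min(), pulls['date'].max())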
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pulls.merge(pull_files, on = 'pid')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
# Create a column that will store the month
data['month'] = data['date'].dt.month
# Create a column that will store the year
data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby(['month', 'year'])['pid'].count()
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project?The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," a code base that is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start.In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
data.head()
# Required for matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# Group by the submitter
by_user = data.groupby('user')['file'].count()
# Plot the histogram
plt.hist(by_user)
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests?Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing elsewhere might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.sort_values('date').tail(10)
# Join the two data sets
joined_pr = pull_files.merge(last_10, on = 'pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file?When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history.We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
data.head()
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file']==file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
author_counts.head()
# Print the top 3 developers
author_counts.nlargest(3, 'file')
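# A one-liner sketch of the same idea (my own alternative, not the assigned solution):
# value_counts() on the file's rows is already sorted in descending order.
file_pr['user'].value_counts().head(3)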
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file']==file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on = 'pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
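# A small extension sketch (my own addition): keeping the dates alongside the users shows
# how recent each of these contributors' last activity on the file was.
joined_pr.nlargest(10, 'date')[['user', 'date']]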
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level picture of their contribution trend to the project.
###Code
pulls.head()
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby([by_author['user'], by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and by how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
grouped
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
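# A sketch of one way to fold in recency (my own heuristic, not part of the assigned task):
# weight each pull request by an exponential decay on its age in years, then sum the
# weights per user, so recent work counts for more than old work.
latest = by_file['date'].max()
age_years = (latest - by_file['date']).dt.days / 365.25
recency_score = by_file.assign(weight=0.5 ** age_years).groupby('user')['weight'].sum()
recency_score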
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
import matplotlib.pyplot as plt
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_one.append(pulls_two , ignore_index= True)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'] , utc= True)
###Output
_____no_output_____
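###Markdown
To illustrate what the utc=True flag does, here is a minimal sketch with made-up timestamps (not project data): strings carrying different UTC offsets are all normalized to the same tz-aware UTC instant, which makes them directly comparable and sortable.
###Code
# Illustrative sketch only: the timestamps below are hypothetical examples.
import pandas as pd
sample = pd.Series(['2013-05-01T10:00:00+02:00',  # local time with a +02:00 offset
                    '2013-05-01T08:00:00Z'])      # the same instant expressed in UTC
as_utc = pd.to_datetime(sample, utc=True)
print(as_utc)          # both rows become 2013-05-01 08:00:00+00:00
print(as_utc.dt.year)  # the .dt accessor now works on the tz-aware column
###Output
_____no_output_____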
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pd.merge(pulls,pull_files , on = 'pid')
data.head()
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
# Create a column that will store the month
data['month'] = data['date'].dt.month
# Create a column that will store the year
data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby(by=['year', 'month'])['pid'].count()
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
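###Markdown
As an optional sketch (assuming the `data` DataFrame built above): grouping on a zero-padded 'YYYY-MM' string keeps the bars in chronological order, which makes the trend over time easier to read.
###Code
# Sketch: monthly PR counts keyed by a zero-padded 'YYYY-MM' string,
# so the groupby keys sort chronologically. Assumes `data` from above.
monthly_counts = data.groupby(data['date'].dt.strftime('%Y-%m'))['pid'].count()
monthly_counts.plot(kind='bar', figsize=(12, 4))
###Output
_____no_output_____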
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby('user').agg({'pid':'count'})
# Plot the histogram
by_user.hist(bins = 10)
###Output
_____no_output_____
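###Markdown
A possible follow-up sketch, assuming the `by_user` counts from the cell above: more bins plus a log-scaled y-axis make the long tail of one-off contributors easier to see next to the handful of heavy contributors.
###Code
# Sketch: finer bins and a log-scaled y-axis for the contribution histogram.
# Assumes `by_user` (PR counts per user) from the previous cell.
by_user['pid'].plot(kind='hist', bins=50, logy=True)
print((by_user['pid'] == 1).sum(), 'users submitted exactly one pull request')
###Output
_____no_output_____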
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10,'date')
# Join the two data sets
joined_pr = pd.merge(pull_files,last_10, on = 'pid')
# Identify the unique files
files = set(joined_pr['file'].unique())
# Print the results
files
###Output
_____no_output_____
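###Markdown
As an optional sketch, assuming `joined_pr` from the cell above: instead of only listing the touched files, we can rank them by how many of the last ten pull requests modified each one.
###Code
# Sketch: rank files by how many of the last 10 pull requests touched them.
# Assumes `joined_pr` from the previous cell.
file_hotness = joined_pr.groupby('file')['pid'].nunique().sort_values(ascending=False)
file_hotness.head(10)
###Output
_____no_output_____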
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
print(author_counts.nlargest(3,'file'))
###Output
pid date file month year
user
xeno-by 11 11 11 11 11
retronym 5 5 5 5 5
soc 4 4 4 4 4
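###Markdown
For what it's worth, a one-line sketch that should give the same ranking, assuming `file_pr` from the cell above: value_counts() counts and sorts in a single step.
###Code
# Sketch: value_counts() yields the per-user PR counts already sorted.
# Assumes `file_pr` from the previous cell.
file_pr['user'].value_counts().head(3)
###Output
_____no_output_____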
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = pulls.merge(file_pr,on = 'pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10,'date')['user'])
# Printing the results
users_last_10
pull_files
###Output
_____no_output_____
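###Markdown
A small, hedged sketch building on the two previous tasks (assuming `joined_pr` and `users_last_10` from the cell above): intersecting the historical top contributors with the recently active users points at people who are both knowledgeable and still involved.
###Code
# Sketch: which of the all-time top contributors to this file are also
# among the authors of the last 10 pull requests?
# Assumes `joined_pr` and `users_last_10` from the previous cell.
top_ever = set(joined_pr['user'].value_counts().head(3).index)
print(top_ever & users_last_10)
###Output
_____no_output_____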
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the projects, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors) ]
# Count the number of pull requests submitted each year
counts = by_author.groupby([by_author['user'],by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar')
###Output
_____no_output_____
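###Markdown
An equivalent sketch, assuming `by_author` from the cell above: counting on a (user, year) groupby and unstacking the user level produces the same wide table as pivot_table.
###Code
# Sketch: groupby + unstack as an alternative to pivot_table.
# Assumes `by_author` from the previous cell.
yearly = (by_author
          .groupby(['user', by_author['date'].dt.year])['pid']
          .count()
          .unstack('user', fill_value=0))
yearly.plot(kind='bar')
###Output
_____no_output_____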
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest have the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recent those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file']== file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index = 'date' , columns = 'user' , values = 'pid' ,fill_value = 0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository data. With almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists. Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts. The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files: pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014. pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018. pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_two.append(pulls_one, ignore_index=True)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls["date"], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pulls.merge(pull_files, on="pid")
data
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.
###Code
%matplotlib inline
# Create a column that will store the month and the year, as a string
data['month_year'] = data['date'].dt.strftime('%Y-%m')
# Group by month_year and count the pull requests
counts = data.groupby('month_year')['pid'].count()
# Plot the results
counts.plot.bar()
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby("user").agg({"pid": "count"})
print(by_user.head())
# Plot the histogram
by_user.plot.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
#Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = pull_files.merge(last_10, on='pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
# ... YOUR CODE FOR TASK 7 ...
print(author_counts.nlargest(3, 'pid'))
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the projects, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls["user"].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(["user", pulls['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot.bar()
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest have the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recent those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index = 'date',columns = 'user',values = 'pid', fill_value = 0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pd.concat([pulls_one,pulls_two])
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pd.merge(pulls, pull_files, on='pid')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
# Create a column that will store the month
data['month'] = data['date'].dt.month
# Create a column that will store the year
data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby(['year', 'month']).agg({'pid':'count'})
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby('user').agg({'pid':'count'})
# Plot the histogram
by_user.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = pd.merge(last_10, pull_files, on='pid', how='left')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user')['file'].agg('count')
# Print the top 3 developers
print(author_counts.nlargest(3))
###Output
user
xeno-by 11
retronym 5
soc 4
Name: file, dtype: int64
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = data[data['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date_x')['user_x'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the projects, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
print(by_author.columns)
# Count the number of pull requests submitted each year
# Group on the year of each PR directly, which avoids mutating the by_author slice in place
counts = by_author.groupby(['user', by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
print(counts_wide)
# Plot the results
counts_wide.plot(kind='bar')
###Output
Index(['pid', 'user', 'date'], dtype='object')
user soc xeno-by
date
2011 12 20
2012 44 271
2013 117 123
2014 20 60
2015 24 3
2016 21 0
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest have the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recent those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv("datasets/pulls_2011-2013.csv")
pulls_two = pd.read_csv("datasets/pulls_2014-2018.csv")
pull_files = pd.read_csv("datasets/pull_files.csv")
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_two.append(pulls_one)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
pulls.head()
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pd.merge(pulls, pull_files, on="pid")
data.head(2)
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
# Create a column that will store the month
data['month'] = data['date'].dt.month
# Create a column that will store the year
data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby(['year', 'month'])['pid'].count()
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby('user')['pid'].count()
# Plot the histogram
by_user.plot(kind='hist', figsize=(12, 6))
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, "date")
# Join the two data sets
joined_pr = pd.merge(last_10, pull_files, on='pid')
print(joined_pr.head(2))
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
pid user date \
0 163314316 hrhino 2018-01-16 23:29:16+00:00
1 163314316 hrhino 2018-01-16 23:29:16+00:00
file
0 test/files/pos/t5638/Among.java
1 test/files/pos/t5638/Usage.scala
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').count()
# Print the top 3 developers
author_counts.nlargest(3, 'file')
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = data[data['file'] == file]
# file_pr.head(2)
# Merge the obtained results with the pulls DataFrame
joined_pr = pd.merge(file_pr, pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date_x')['user_x'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the projects, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
year = by_author['date'].dt.year
counts = by_author.groupby(['user', year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind='bar', figsize=(12,6))
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest have the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recent those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar', figsize=(12,6))
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
print(pulls_one.head(), pulls_one.info())
print(pulls_two.head(), pulls_two.info())
print(pull_files.head(), pull_files.info())
!dir
# Append pulls_one to pulls_two
pulls = pulls_two.append(pulls_one)
pulls['date'] = pd.to_datetime(pulls['date'], utc = True)
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
pulls.head(6)
# Merge the two DataFrames
data = pulls.merge(pull_files, on = 'pid')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
# Create a column that will store the month
data['month'] = data['date'].dt.month
# Create a column that will store the year
data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby(by=['year', 'month'])['pid'].count()
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = data.groupby(by='user')['pid'].count()
# Plot the histogram
by_user.plot(kind= 'hist')
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = last_10.merge(pull_files, on = 'pid')
# Identify the unique files
files = joined_pr['file'].unique()
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data.loc[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby(by = 'user').count()
# Print the top 3 developers
author_counts.nlargest(3, 'pid')
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files.loc[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on = 'pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date').user)
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developersNow that we have identified two potential contacts in the projects, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls.loc[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', by_author.date.dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
counts_wide.plot(kind = 'bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developerAs mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.) In our case, we want to see which of our two developers of interest have the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recent those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data.loc[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author.loc[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository dataWith almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists.Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who are the experts.The dataset we will use, which has been previously mined and extracted from GitHub, is comprised of three files:pulls_2011-2013.csv contains the basic information about the pull requests, and spans from the end of 2011 up to (but not including) 2014.pulls_2014-2018.csv contains identical information, and spans from 2014 up to 2018.pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Loading in the data
pulls_one = pd.read_csv('datasets/pulls_2011-2013.csv')
pulls_two = pd.read_csv('datasets/pulls_2014-2018.csv')
pull_files = pd.read_csv('datasets/pull_files.csv')
pull_files.head()
###Output
_____no_output_____
###Markdown
2. Preparing and cleaning the dataFirst, we will need to combine the data from the two separate pull DataFrames. Next, the raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted.The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Append pulls_one to pulls_two
pulls = pulls_one.append(pulls_two)
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'], utc=True)
pulls.head()
###Output
_____no_output_____
###Markdown
3. Merging the DataFramesThe data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pulls.merge(pull_files, on='pid')
# Check the first few rows
data.head()
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.A helpful reminder of how to access various components of a date can be found in this exercise of Data Manipulation with pandasAdditionally, recall that you can group by multiple variables by passing a list to groupby(). This video from Data Manipulation with pandas should help!
###Code
%matplotlib inline
# Create a column that will store the month
data['month'] = data['date'].dt.month
# Create a column that will store the year
data['year'] = data['date'].dt.year
# Group by the month and year and count the pull requests
counts = data.groupby(['year', 'month']).agg({'pid': 'count'})
# Plot the results
counts.plot(kind='bar', figsize = (12,4))
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = pulls.groupby('user').agg({'pid': 'count'})
# Plot the histogram
# ... YOUR CODE FOR TASK 5 ...
by_user.hist()
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = last_10.merge(pull_files, on='pid')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Identify the commits that changed the file
file_pr = data[data['file'] == file]
# Count the number of changes made by each developer
author_counts = file_pr.groupby('user').agg({'pid':'count'})
# Print the top 3 developers
# ... YOUR CODE FOR TASK 7 ...
print(*list(author_counts.nlargest(3, 'pid').index))
###Output
xeno-by retronym soc
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests that changed the target file
file_pr = pull_files[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developers. Now that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', by_author['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
# ... YOUR CODE FOR TASK 9 ...
counts_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developer. As mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.). In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recently those pull requests were submitted.
###Code
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file'] == file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____
###Markdown
1. Scala's real-world project repository data. With almost 30k commits and a history spanning over ten years, Scala is a mature programming language. It is a general-purpose programming language that has recently become another prominent language for data scientists. Scala is also an open source project. Open source projects have the advantage that their entire development histories -- who made changes, what was changed, code reviews, etc. -- are publicly available. We're going to read in, clean up, and visualize the real-world project repository of Scala that spans data from a version control system (Git) as well as a project hosting site (GitHub). We will find out who has had the most influence on its development and who the experts are. The dataset we will use, which has been previously mined and extracted directly from GitHub, is comprised of two files: pulls.csv contains the basic information about the pull requests; pull_files.csv contains the files that were modified by each pull request.
###Code
# Importing pandas
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Loading in the data
pulls = pd.read_csv("datasets/pulls.csv")
pull_files = pd.read_csv('datasets/pull_files.csv')
###Output
_____no_output_____
###Markdown
2. Cleaning the data. The raw data extracted from GitHub contains dates in the ISO8601 format. However, pandas imports them as regular strings. To make our analysis easier, we need to convert the strings into Python's DateTime objects. DateTime objects have the important property that they can be compared and sorted. The pull request times are all in UTC (also known as Coordinated Universal Time). The commit times, however, are in the local time of the author with time zone information (number of hours difference from UTC). To make comparisons easy, we should convert all times to UTC.
###Code
# Convert the date for the pulls object
pulls['date'] = pd.to_datetime(pulls['date'],utc=True)
# ... YOUR CODE FOR TASK 2 ...
###Output
_____no_output_____
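###Markdown
A minimal illustration (toy timestamps assumed, not taken from the dataset) of why `utc=True` matters: strings carrying different UTC offsets are normalized onto a single UTC timeline, so they can be compared and sorted directly.
###Code
# Hypothetical example: two local times that are actually the same instant
sample_times = pd.Series(['2016-01-01T10:00:00+02:00', '2016-01-01T08:00:00Z'])
print(pd.to_datetime(sample_times, utc=True))
###Output
_____no_output_____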
###Markdown
3. Merging the DataFrames. The data extracted comes in two separate files. Merging the two DataFrames will make it easier for us to analyze the data in the future tasks.
###Code
# Merge the two DataFrames
data = pd.merge(pulls,pull_files,on=['pid'],how='inner')
###Output
_____no_output_____
###Markdown
4. Is the project still actively maintained?The activity in an open source project is not very consistent. Some projects might be active for many years after the initial release, while others can slowly taper out into oblivion. Before committing to contributing to a project, it is important to understand the state of the project. Is development going steadily, or is there a drop? Has the project been abandoned altogether?The data used in this project was collected in January of 2018. We are interested in the evolution of the number of contributions up to that date.For Scala, we will do this by plotting a chart of the project's activity. We will calculate the number of pull requests submitted each (calendar) month during the project's lifetime. We will then plot these numbers to see the trend of contributions.
###Code
pulls.head()
%matplotlib inline
# Create a column that will store the month and the year, as a string
pulls['month_year'] = pulls["date"].dt.month.astype(str) +'-'+pulls["date"].dt.year.astype(str)
# Group by month_year and count the pull requests
counts = pulls.groupby(['month_year'])['pid'].count()
counts.plot()
# Plot the results
# ... YOUR CODE FOR TASK 4 ...
###Output
_____no_output_____
###Markdown
5. Is there camaraderie in the project? The organizational structure varies from one project to another, and it can influence your success as a contributor. A project that has a very small community might not be the best one to start working on. The small community might indicate a high barrier of entry. This can be caused by several factors, including a community that is reluctant to accept pull requests from "outsiders," that the code base is hard to work with, etc. However, a large community can serve as an indicator that the project is regularly accepting pull requests from new contributors. Such a project would be a good place to start. In order to evaluate the dynamics of the community, we will plot a histogram of the number of pull requests submitted by each user. A distribution that shows that there are few people that only contribute a small number of pull requests can be used as an indicator that the project is not welcoming of new contributors.
###Code
# Required for matplotlib
%matplotlib inline
# Group by the submitter
by_user = pulls.groupby(['user'])['pid'].count()
by_user.plot(kind='hist')
# Plot the histogram
# ... YOUR CODE FOR TASK 5 ...
###Output
_____no_output_____
###Markdown
6. What files were changed in the last ten pull requests? Choosing the right place to make a contribution is as important as choosing the project to contribute to. Some parts of the code might be stable, some might be dead. Contributing there might not have the most impact. Therefore it is important to understand the parts of the system that have been recently changed. This allows us to pinpoint the "hot" areas of the code where most of the activity is happening. Focusing on those parts might not be the most effective use of our time.
###Code
pull_files.head()
# Identify the last 10 pull requests
last_10 = pulls.nlargest(10, 'date')
# Join the two data sets
joined_pr = pd.merge(last_10,pull_files,on=['pid'],how='inner')
# Identify the unique files
files = set(joined_pr['file'])
# Print the results
files
###Output
_____no_output_____
###Markdown
7. Who made the most pull requests to a given file? When contributing to a project, we might need some guidance. We might find ourselves needing some information regarding the codebase. It is important to direct any questions to the right person. Contributors to open source projects generally have other day jobs, so their time is limited. It is important to address our questions to the right people. One way to identify the right target for our inquiries is by using their contribution history. We identified src/compiler/scala/reflect/reify/phases/Calculate.scala as being recently changed. We are interested in the top 3 developers who changed that file. Those developers are the ones most likely to have the best understanding of the code.
###Code
# This is the file we are interested in:
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
file_pr = pull_files[pull_files['file'] == file]
joined_pr = file_pr.merge(pulls, on='pid')
author_counts = joined_pr.groupby(['user'],as_index=False)['file'].count().sort_values(by=['file'],ascending=[False])
author_counts.head(3)
# Printing the results
# Print the top 3 developers
# ... YOUR CODE FOR TASK 7 ...
###Output
_____no_output_____
###Markdown
8. Who made the last ten pull requests on a given file?Open source projects suffer from fluctuating membership. This makes the problem of finding the right person more challenging: the person has to be knowledgeable and still be involved in the project. A person that contributed a lot in the past might no longer be available (or willing) to help. To get a better understanding, we need to investigate the more recent history of that particular part of the system. Like in the previous task, we will look at the history of src/compiler/scala/reflect/reify/phases/Calculate.scala.
###Code
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
file_pr = pull_files[pull_files['file'] == file]
# Merge the obtained results with the pulls DataFrame
joined_pr = file_pr.merge(pulls, on='pid')
# Find the users of the last 10 most recent pull requests
users_last_10 = set(joined_pr.nlargest(10, 'date')['user'])
# Printing the results
users_last_10
###Output
_____no_output_____
###Markdown
9. The pull requests of two special developers. Now that we have identified two potential contacts in the project, we need to find the person who was most involved in the project in recent times. That person is most likely to answer our questions. For each calendar year, we are interested in understanding the number of pull requests the authors submitted. This will give us a high-level image of their contribution trend to the project.
###Code
%matplotlib inline
# The developers we are interested in
authors = ['xeno-by', 'soc']
# Get all the developers' pull requests
by_author = pulls[pulls['user'].isin(authors)]
# Count the number of pull requests submitted each year
counts = by_author.groupby(['user', pulls['date'].dt.year]).agg({'pid': 'count'}).reset_index()
# Convert the table to a wide format
counts_wide = counts.pivot_table(index='date', columns='user', values='pid', fill_value=0)
counts_wide.plot(kind='bar')
# Plot the results
# ... YOUR CODE FOR TASK 9 ...
###Output
_____no_output_____
###Markdown
10. Visualizing the contributions of each developer. As mentioned before, it is important to make a distinction between the global expertise and contribution levels and the contribution levels at a more granular level (file, submodule, etc.). In our case, we want to see which of our two developers of interest has the most experience with the code in a given file. We will measure experience by the number of pull requests submitted that affect that file and how recently those pull requests were submitted.
###Code
data.head()
authors = ['xeno-by', 'soc']
file = 'src/compiler/scala/reflect/reify/phases/Calculate.scala'
# Select the pull requests submitted by the authors, from the `data` DataFrame
by_author = data[data['user'].isin(authors)]
# Select the pull requests that affect the file
by_file = by_author[by_author['file']==file]
# Group and count the number of PRs done by each user each year
grouped = by_file.groupby(['user', by_file['date'].dt.year]).count()['pid'].reset_index()
# Transform the data into a wide format
by_file_wide = grouped.pivot_table(index='date', columns='user', values='pid', fill_value=0)
# Plot the results
by_file_wide.plot(kind='bar')
###Output
_____no_output_____ |
docs/notebooks/ChestXRay_Translate.ipynb | ###Markdown
Translating Images: Make the directory '/data/datasets/chestxray/' and run the following cells.
###Code
for data_name in ['nih', 'chex', 'kaggle']:
rng = default_rng()
indices = rng.choice(data[data_name]['size'], size=data[data_name]['size'], replace=False)
print(indices.shape)
count=0
for case in ['train', 'val', 'test']:
size= data[data_name][case+'_size']
ids=[]
count_l0=0
count_l1=0
count_lim=int(size/2)
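        # Balance the split: keep drawing shuffled indices until each Pneumonia
        # class (label 0 and label 1) has size/2 examples; NaN labels are skipped.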
while count_l0 < count_lim or count_l1 < count_lim:
index= indices[count].item()
task = xrv.datasets.default_pathologies.index('Pneumonia')
label= data[data_name]['obj'][index]['lab'][task]
count+=1
if np.isnan(label):
continue
else:
if label == 0:
if count_l0 < count_lim:
# print('Label 0')
count_l0+= 1
else:
continue
if label ==1:
if count_l1 < count_lim:
# print('Label 1')
count_l1+= 1
else:
continue
ids.append(index)
ids= np.array(ids)
print(count_l0, count_l1)
base_dir='/data/datasets/chestxray/'
np.save(base_dir + data_name + '_' + case + '_' + 'indices.npy', ids)
base_dir='/data/datasets/chestxray/'
for data_name in ['nih', 'chex', 'kaggle']:
indices= np.random.randint(0, data[data_name]['size'], data[data_name]['size'] )
print(indices.shape)
count=0
for case in ['train', 'val', 'test']:
size= data[data_name][case+'_size']
imgs=[]
labels=[]
imgs_org=[]
indices= np.load(base_dir + data_name + '_' + case + '_' + 'indices.npy')
count_l0=0
count_l1=0
for idx in range(indices.shape[0]):
index= indices[idx].item()
task = xrv.datasets.default_pathologies.index('Pneumonia')
img= data[data_name]['obj'][index]['img']
img_org= data[data_name]['obj'][index]['img']
label= data[data_name]['obj'][index]['lab'][task]
count+=1
if np.isnan(data[data_name]['obj'][index]['lab'][task]):
print('Error: Nan in the labels')
if label == 0:
count_l0+=1
if label == 1:
count_l1+=1
label=torch.tensor(label).long()
label= label.view(1)
img= to_tensor( to_augment( translate(img, label, data_name, index, 1) ) )
            img_org= to_tensor( translate(img_org, label, data_name, index, 1) )  # use the untouched copy so img_org is the translated (but not augmented) image
img= img.view(1, img.shape[0], img.shape[1], img.shape[2])
img_org= img_org.view(1, img_org.shape[0], img_org.shape[1], img_org.shape[2])
# print('Data: ', data_name, count, img.shape, label)
imgs.append(img)
imgs_org.append(img_org)
labels.append(label)
if torch.all(torch.eq(img, img_org)):
print('Error:')
imgs=torch.cat(imgs)
imgs_org=torch.cat(imgs_org)
labels=torch.cat(labels)
print(imgs.shape, imgs_org.shape, labels.shape, count_l0, count_l1)
torch.save(imgs, base_dir + data_name + '_trans_' + case + '_' + 'image.pt')
torch.save(imgs_org, base_dir + data_name + '_trans_' + case + '_' + 'image_org.pt')
torch.save(labels, base_dir + data_name + '_trans_' + case + '_' + 'label.pt')
###Output
_____no_output_____ |
2018_material/labs/lab2.ipynb | ###Markdown
Ridge regression and model selection. Modified from the GitHub repo: https://github.com/JWarmenhoven/ISLR-python, which is based on the book by James et al., Introduction to Statistical Learning. Loading data
###Code
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
%matplotlib inline
plt.style.use('ggplot')
datafolder = "../data/"
# In R, I exported the dataset from package 'ISLR' to a csv file.
df = pd.read_csv(datafolder+'Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
###Output
_____no_output_____
###Markdown
Ridge Regression
###Code
alphas = 10**np.linspace(10,-2,100)*0.5
ridge = Ridge()
coefs = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
coefs.append(ridge.coef_)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.axis('tight')
plt.xlabel('lambda')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization');
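# Added sketch (not part of the original lab): choose the regularization strength
# by cross-validation with RidgeCV over the same alpha grid.
ridgecv = RidgeCV(alphas=alphas, scoring='neg_mean_squared_error', cv=10)
ridgecv.fit(scale(X), y)
print('Alpha chosen by RidgeCV:', ridgecv.alpha_)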
###Output
_____no_output_____ |
Notebooks/Barsim_Sequential.ipynb | ###Markdown
Usage of the Event Detector by Barsim et al. Load the required packages
###Code
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
# Import public packages
import sys
import os
from pathlib import Path
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
%matplotlib notebook
import glob
from datetime import datetime, timedelta
import numpy as np
import pandas as pd
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# Add src to the path for import
project_dir = Path(os.getcwd()).resolve().parents[0]
module_path = os.path.abspath(os.path.join(project_dir))
if module_path not in sys.path:
sys.path.append(module_path)
# Import private source code
from Event_Detectors import EventDet_Barsim_Sequential
import BLUED_loader as blued
# Activate Autoreload
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Set all global Parameters for the BLUED Dataset
###Code
# Hardcoded Hyperparameters
DATASET_LOCATION_BLUED = os.path.join("./Test_Data/") #Path to Test Data
DATASET = "blued_events" #name of the dataset (used to load the file later with my Utility class)
CURRENT_COLUMN = "Current B" # Dataset has two phases: phase A and B. They can be treated independently. We load only Phase B.
NETWORK_FREQUENCY_BLUED = 60 # Base electrical network frequency of the region where the dataset was recorded
SAMPLES_PER_SECOND_BLUED = 2 # We compute two features (data points) per second.
SAMPLERATE_BLUED = 12000 # Sampling Rate the raw BLUED Dataset was recorded with
# Hyperparameters Dictionary for the Event Detector
init_dict_BLUED = {"dbscan_eps": 0.03, #epsilon radius parameter for the dbscan algorithm
"dbscan_min_pts": 2, # minimum points parameter for the dbscan algorithm
"window_size_n": 4, # datapoints the algorithm takes one at a time
"future_window_size_n": 6, # datapoints it needs from the future
"loss_thresh": 40, # threshold for model loss
"temp_eps": 0.8, # temporal epsilon parameter of the algorithm
"debugging_mode": False, # debugging, yes or no - if yes detailed information is printed to console
"network_frequency": 60} #base frequency
# Compute some relevant window sizes etc. for the "streaming"
window_size_seconds_BLUED = (init_dict_BLUED["window_size_n"] + init_dict_BLUED["future_window_size_n"]) / SAMPLES_PER_SECOND_BLUED
# Compute how big the window is regarding the raw samples --> this is used for the "streaming"
samples_raw_per_window_BLUED = SAMPLERATE_BLUED * window_size_seconds_BLUED
# Compute the period size of the BLUED dataset: i.e. number of raw data points per period
BLUED_period = int(SAMPLERATE_BLUED / NETWORK_FREQUENCY_BLUED)
###Output
_____no_output_____
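###Markdown
For these settings the derived values work out as follows: each streamed window covers (4 + 6) / 2 = 5 seconds, i.e. 12000 * 5 = 60000 raw samples, and one 60 Hz mains period spans 12000 / 60 = 200 raw samples. A quick check:
###Code
# Sanity check of the derived window/period sizes computed above
print(window_size_seconds_BLUED, samples_raw_per_window_BLUED, BLUED_period)
###Output
_____no_output_____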
###Markdown
Load and display the BLUED Test File
###Code
# Get the Test File
test_file = glob.glob(os.path.join(DATASET_LOCATION_BLUED, "*.txt"))[0] #get the full path of the test file
# Load the Data from the test File
data, file_info = blued.load_file(test_file)
lable_path:str = glob.glob(os.path.join(DATASET_LOCATION_BLUED, "*.csv"))[0]
data_start = file_info['file_start']
data_end = file_info['file_end']
labels = blued.load_labels(lable_path, data_start, data_end)
current = data["Current"]
voltage = data["Voltage"]
# Plot the data from the test File
_, ax = plt.subplots(figsize=(9.9,5))
plt.title("Full Current signal of Test File")
plt.ylabel("Current")
plt.xlabel("Time")
ax.plot(current)
ax.scatter(x=labels.index, y=np.zeros(len(labels.index)), color='r',zorder=100)
ax.vlines(x=labels.index, color='r',ymin=-80, ymax=80, zorder=101)
plt.show()
###Output
_____no_output_____
###Markdown
Run the Event Detection on the Test Data
###Code
found_events=[]
found_events_mean=[]
show_plots = False
samples_remaining = len(current) # number of samples that we have not predicted yet
window_start = 0 # offset of the next window
# Step 1: Initialize the Event Detector with the Hypperparameter dictionary
EventDet_Barsim = EventDet_Barsim_Sequential(**init_dict_BLUED) #i.e. values are unpacked into the parameters
EventDet_Barsim.fit() # Call the fit() method to further initialize the algorithm (required by the sklearn API)
while samples_remaining >= samples_raw_per_window_BLUED: #while we still have samples to "stream" do the following
window_stop = int(window_start + samples_raw_per_window_BLUED) # compute end index of the new window
# Get the voltage and current windows
voltage_window = voltage[window_start:window_stop]
current_window = current[window_start:window_stop]
# Step 2: Use the feature computation function of the algorithm to compute the input features
X = EventDet_Barsim.compute_input_signal(voltage=voltage_window, current=current_window, period_length=BLUED_period)
# Step 3: Run the prediciton on the features
event_interval_indices = EventDet_Barsim.predict(X) #(start_index, end_index) of event if existent is returned
print(">"+ f" ({window_start}-{window_stop})")
if event_interval_indices is not None: # if an event is returned
print("Event Detected at " + str(event_interval_indices)+ f" ({window_start}-{window_stop})")
# Plot the computed features
if show_plots:
#draw range
range_line = np.full_like(X, np.NaN)
range_line[event_interval_indices[0]:event_interval_indices[1]+1] = X[event_interval_indices[0]:event_interval_indices[1]+1]
plt.title("Computed input features for this window"+ f" ({window_start}-{window_stop})")
plt.plot(X)
plt.plot(range_line, color='g', zorder=100, marker='x')
plt.show()
# Instead of an event interval, we might be interested in an exact event point
# Hence, we just take the mean of the interval boundaries
event_time = data.index[int(window_start+EventDet_Barsim._convert_relative_offset(event_interval_indices[0]))]
found_events.append(str(event_time))
mean_event_index = np.mean([event_interval_indices[0], event_interval_indices[1]])
mean_event_time = data.index[int(window_start+EventDet_Barsim._convert_relative_offset(mean_event_index))]
found_events_mean.append(str(mean_event_time))
# We compute a new offset: We start of at the event index
# The event indices returned are with respect to the feature domain
# To "stream" the next window, we need it with respect to the raw input data domain
end_index = EventDet_Barsim._convert_relative_offset(event_interval_indices[1]) # convert it to input domain with utility function
# We need to update the data points that remain for streaming now.
window_start = int(window_start + end_index) # set new starting point
#print("+++++++++++++++++++++++++++++++++++")
# We need to update the data points that remain for streaming now.
samples_remaining -= end_index
else: #no event was detected
# We start at the end of the previous window
window_start = int(window_stop)
# Plot the computed features
if show_plots:
plt.title("Computed input features for this window"+ f" ({window_start}-{window_stop})")
plt.plot(X)
plt.show()
#print("+++++++++++++++++++++++++++++++++++")
# We need to update the data points that remain for streaming now.
samples_remaining -= samples_raw_per_window_BLUED
found_events
labels.index
_, ax = plt.subplots(figsize=(9.5,5))
plt.title("Full Current signal of Test File")
plt.ylabel("Current")
plt.xlabel("Time")
ax.plot(current)
ax.scatter(x=labels.index, y=np.zeros(len(labels.index)), color='r',zorder=100)
ax.vlines(x=found_events, color='red',ymin=-80, ymax=80, zorder=101)
#end of sliding window
ax.vlines(x=data.index[window_start], color='y',ymin=-80, ymax=80, zorder=101)
plt.show()
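# Added sketch (an assumption, not part of the original notebook; assumes labels.index
# holds the labelled event timestamps): count detections within +/- 1 second of a label.
tolerance = pd.Timedelta(seconds=1)
detected_times = pd.to_datetime(found_events)
hits = sum(any(abs(d - l) <= tolerance for l in labels.index) for d in detected_times)
print(f"{hits} of {len(detected_times)} detections lie within {tolerance} of a labelled event")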
###Output
_____no_output_____
###Markdown
Usage of the Event Detector by Barsim et al. Load the required packages
###Code
# Import public packages
import sys
import os
from pathlib import Path
import ipdb
from matplotlib import pyplot as plt
import glob
from datetime import datetime, timedelta
import pandas as pd
import numpy as np
from io import StringIO
# Add src to the path for import
project_dir = Path(os.getcwd()).resolve().parents[0]
module_path = os.path.abspath(os.path.join(project_dir))
if module_path not in sys.path:
sys.path.append(module_path)
# Import private source code
from Event_Detectors import EventDet_Barsim_Sequential
# Activate Autoreload
%load_ext autoreload
%autoreload 2
###Output
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Set all global Parameters for the BLUED Dataset
###Code
# Hardcoded Hyperparameters
DATASET_LOCATION_BLUED = os.path.join("./Test_Data/") #Path to Test Data
DATASET = "blued_events" #name of the dataset (used to load the file later with my Utility class)
CURRENT_COLUMN = "Current B" # Dataset has two phases: phase A and B. They can be treated independently. We load only Phase B.
NETWORK_FREQUENCY_BLUED = 60 # Base electrical network frequency of the region where the dataset was recorded
SAMPLES_PER_SECOND_BLUED = 2 # We compute two features (data points) per second.
SAMPLERATE_BLUED = 12000 # Sampling Rate the raw BLUED Dataset was recorded with
# Hyperparameters Dictionary for the Event Detector
init_dict_BLUED = {"dbscan_eps": 0.03, #epsilon radius parameter for the dbscan algorithm
"dbscan_min_pts": 2, # minimum points parameter for the dbscan algorithm
"window_size_n": 4, # datapoints the algorithm takes one at a time
"future_window_size_n": 6, # datapoints it needs from the future
"loss_thresh": 40, # threshold for model loss
"temp_eps": 0.8, # temporal epsilon parameter of the algorithm
"debugging_mode": False, # debugging, yes or no - if yes detailed information is printed to console
"network_frequency": 60} #base frequency
# Compute some relevant window sizes etc. for the "streaming"
window_size_seconds_BLUED = (init_dict_BLUED["window_size_n"] + init_dict_BLUED["future_window_size_n"]) / SAMPLES_PER_SECOND_BLUED
# Compute how big the window is regarding the raw samples --> this is used for the "streaming"
samples_raw_per_window_BLUED = SAMPLERATE_BLUED * window_size_seconds_BLUED
# Compute the period size of the BLUED dataset: i.e. number of raw data points per period
BLUED_period = int(SAMPLERATE_BLUED / NETWORK_FREQUENCY_BLUED)
###Output
_____no_output_____
###Markdown
Load and display the BLUED Test File
###Code
def load_file_BLUED(file_path, phase="b"):
"""
Function to load the BLUED test data.
Args:
file_path (Path): full path to the test file
phase (string): either "all", "b" or "a". Returns only the requested phase of the dataset.
Returns:
        data_df (pandas DataFrame): original columns if phase=="all" else columns are just "Current" and "Voltage" --> already for the matching phase! (* - 1 done for B)
file_info (dict): dictionary with information about the file that was loaded. Parsed from the filename
and the metadata included in the file.
"""
with open(file_path, 'r') as f:
data_txt = f.read()
lines = data_txt.splitlines()
data_txt = data_txt.split("***End_of_Header***")
reference_time = data_txt[0].split("Date,")[1][:11].replace("\n","") +"-"+ data_txt[0].split("Time,")[1][:15]
reference_time = datetime.strptime(reference_time, '%Y/%m/%d-%H:%M:%S.%f')
data_time_str = data_txt[1].split("Time,")[1]
data_time_str = data_time_str.split(',')
data_day_str = data_txt[1].split("Date,")[1]
data_day_str = data_day_str.split(',')
    day_str = data_day_str[0] # just the first one is enough
time_str = data_time_str[0][:15] # same for time
date = day_str + "-" + time_str
start_date_time = datetime.strptime(date, '%Y/%m/%d-%H:%M:%S.%f')
filename = Path(file_path).name # get the file name
samples = data_txt[1].split("Samples,")[1].split(",")[0:3][0]
samples = int(samples)
values_str = data_txt[-1]
values_str = values_str[values_str.index("X_Value"):]
measurement_steps = data_txt[1].split("Delta_X")[1].split(",")[0:3]
measurement_steps = [float(x) for x in measurement_steps if x != ""]
measurement_steps = measurement_steps[0]
data_df = pd.read_csv(StringIO(values_str), usecols=["X_Value", "Current A", "Current B", "VoltageA"])
data_df.dropna(inplace=True,how="any")
file_duration = data_df.tail(1)["X_Value"].values[0]
file_duration = float(file_duration)
file_duration = timedelta(seconds=file_duration)
end_date_time = reference_time + file_duration
file_duration = end_date_time - start_date_time
    # Convert to timestamps
data_df["TimeStamp"] = data_df["X_Value"].apply(lambda x: timedelta(seconds=x) + reference_time)
data_df.drop(columns=["X_Value"],inplace=True)
data_df.set_index("TimeStamp",inplace=True)
file_info = {"Filepath": file_path, "Filename": filename, "samples": samples,
"file_start": start_date_time, "file_duration": file_duration, "file_end": end_date_time,
"measurement_steps": measurement_steps,"reference_time":reference_time}
if phase.lower() != "all":
if phase.lower() == "a":
data_df["Current"] = data_df["Current A"]
data_df["Voltage"] = data_df["VoltageA"]
elif phase.lower() == "b":
data_df["Current"] = data_df["Current B"]
data_df["Voltage"] = data_df["VoltageA"].values * -1
else:
raise ValueError("The phase provided does not exist")
data_df.drop(columns=['Current A', 'Current B',"VoltageA"],inplace=True)
return data_df, file_info
# Get the Test File
test_file = glob.glob(os.path.join(DATASET_LOCATION_BLUED, "*.txt"))[0] #get the full path of the test file
# Load the Data from the test File
data,file_info = load_file_BLUED(test_file)
current = data["Current"]
voltage = data["Voltage"]
# Plot the data from the test File
plt.title("Full Current signal of Test File")
plt.ylabel("Current")
plt.xlabel("Time")
plt.plot(current)
plt.show()
###Output
/anaconda3/lib/python3.6/site-packages/pandas/plotting/_converter.py:129: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
Run the Event Detection on the Test Data
###Code
samples_remaining = len(current) # number of samples that we have not predicted yet
window_start = 0 # offset of the next window
# Step 1: Initialize the Event Detector with the Hypperparameter dictionary
EventDet_Barsim = EventDet_Barsim_Sequential(**init_dict_BLUED) #i.e. values are unpacked into the parameters
EventDet_Barsim.fit() # Call the fit() method to further initialize the algorithm (required by the sklearn API)
while samples_remaining >= samples_raw_per_window_BLUED: #while we still have samples to "stream" do the following
window_stop = int(window_start + samples_raw_per_window_BLUED) # compute end index of the new window
# Get the voltage and current windows
voltage_window = voltage[window_start:window_stop]
current_window = current[window_start:window_stop]
# Step 2: Use the feature computation function of the algorithm to compute the input features
X = EventDet_Barsim.compute_input_signal(voltage=voltage_window, current=current_window, period_length=BLUED_period)
# Plot the computed features
plt.title("Computed input features for this window")
plt.plot(X)
plt.show()
# Step 3: Run the prediciton on the features
event_interval_indices = EventDet_Barsim.predict(X) #(start_index, end_index) of event if existent is returned
if event_interval_indices is not None: # if an event is returned
print("Event Detected at " + str(event_interval_indices))
# Instead of an event interval, we might be interested in an exact event point
# Hence, we just take the mean of the interval boundaries
mean_event_index = np.mean([event_interval_indices[0], event_interval_indices[1]])
# We compute a new offset: We start of at the event index
# The event indices returned are with respect to the feature domain
# To "stream" the next window, we need it with respect to the raw input data domain
end_index = EventDet_Barsim._convert_relative_offset(event_interval_indices[1]) # convert it to input domain with utility function
# We need to update the data points that remain for streaming now.
window_start = int(window_start + end_index) # set new start point
# We need to update the data points that remain for streaming now.
samples_remaining -= end_index
print("+++++++++++++++++++++++++++++++++++")
else: #no event was detected
# We start at the end of the previous window
window_start = int(window_stop)
print("+++++++++++++++++++++++++++++++++++")
# We need to update the data points that remain for streaming now.
samples_remaining -= samples_raw_per_window_BLUED
###Output
/Users/daniel/Development/MEED-An-Unsupervised-Multi-Environment-EventDetector-for-Non-Intrusive-Load-Monitoring/Event_Detectors/Event_Detectors.py:1947: Warning: You have provided less then 5 times of future_window_samples_n than window_size_n samples to the algorithm. Please make sure that your window_size_n is at least big enough for the event detector to work. We recommend using more future samples
"event detector to work. We recommend using more future samples", Warning)
|
code/Hyperparameter Search.ipynb | ###Markdown
BERT Classifier. Use Yujia's BERT classifier code with all the new data we got. Table of Contents: 1 Import Packages and Load Data (1.1 Load Data, 1.2 Prepare GPU); 2 Data Pre-Processing (2.1 BERT Tokenizer and Padding, 2.2 Attention Masks); 3 Prepare Model (3.1 Train Test Validation Split, 3.2 Multilabel Classifier); 4 Run Model; 5 Save Trained Models
###Code
batch_size=4
gpu_id=1
epochs = 8
# topics = topic_to_id.keys()
topics = ['child product', 'pregnancy', 'dad parenting', 'multiple children', 'mom health', 'non-biological parents', 'child appearances']
# device = torch.device(1)
# if device.type == 'cuda':
# print(torch.cuda.get_device_name(torch.cuda.current_device()))
# print('Memory Usage:')
# print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB')
# print('Cached: ', round(torch.cuda.memory_cached(0)/1024**3,1), 'GB')
###Output
_____no_output_____
###Markdown
Import Packages and Load Data. This can later be changed to pull data from GitHub, but for now we just load it from a local path.
###Code
# Input data files are available in the "../data/" directory.
import os
print(os.listdir("../data"))
# Basics + Viz
import torch
import pickle
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Pre-processing
from transformers import BertTokenizer
from keras.preprocessing.sequence import pad_sequences
from transformers import BertForSequenceClassification, AdamW, BertConfig
from transformers import get_linear_schedule_with_warmup
# Models
from sklearn.model_selection import train_test_split
from utils import flat_accuracy, format_time, single_topic_train, augmented_validationloader
###Output
['20200422-multilabel.h5', '.ipynb_checkpoints', 'facebook', '0527_reddit_1300_parenting_clean.csv', 'extra_data', 'labeled_only-reddit_796_of_1300.h5', '20200405-topic_to_id.pickle', '20200405-topic_per_row.h5']
###Markdown
Load Data
###Code
# load dictionary
with open("../data/20200405-topic_to_id.pickle", "rb") as input_file:
topic_to_id = pickle.load(input_file)
# load data
data_folder = '../data/extra_data/aug/'
df = pd.DataFrame()
file_names = os.listdir(data_folder)
for f in file_names:
temp = pd.read_csv(data_folder + f)
print(temp.shape)
df = pd.concat([df, temp])
print(f"Total: {df.shape}")
###Output
(259, 33)
(665, 33)
(59, 33)
(590, 33)
(796, 33)
(249, 33)
Total: (2618, 33)
###Markdown
Prepare GPU
###Code
# If there's a GPU available...
if torch.cuda.is_available():
# Tell PyTorch to use the GPU.
device = torch.device(gpu_id)
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(torch.cuda.current_device()))
# If not...
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
###Output
There are 2 GPU(s) available.
We will use the GPU: GeForce RTX 2080 Ti
###Markdown
Data Pre-Processing. Use the https://github.com/huggingface/transformers BertTokenizer to change all the words into IDs. BERT Tokenizer and Padding
###Code
# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
# Set the maximum sequence length.
MAX_LEN = 512
sentence_lengths = []
def tokenize_and_count(s, lst, max_len):
# `encode` will:
# (1) Tokenize the sentence.
# (2) Prepend the `[CLS]` token to the start.
# (3) Append the `[SEP]` token to the end.
# (4) Map tokens to their IDs.
answer = tokenizer.encode(s, add_special_tokens=True)
lst.append(len(answer))
return answer
df['bert'] = df.text.apply(lambda s : tokenize_and_count(s, sentence_lengths, MAX_LEN))
df['bert_aug'] = df.aug.apply(lambda s : tokenize_and_count(s, sentence_lengths, MAX_LEN))
###Output
Token indices sequence length is longer than the specified maximum sequence length for this model (601 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (580 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (544 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (835 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (662 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (766 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (546 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (575 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (666 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (566 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (522 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (854 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (578 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (791 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (596 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (1033 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (626 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (962 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (652 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (569 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (698 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (1677 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (890 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (1890 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (898 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (845 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (511 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (551 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (553 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (555 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (887 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (1602 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (586 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (567 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (579 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (513 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (638 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (987 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (682 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (707 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (831 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (970 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (798 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (799 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (596 > 512). Running this sequence through the model will result in indexing errors
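###Markdown
A quick sanity check (toy sentence assumed) of what `encode` produces: the `[CLS]` and `[SEP]` special tokens are added around the word-piece IDs, which is exactly what steps (2) and (3) above describe.
###Code
# Hypothetical illustration: special tokens wrap the encoded sentence
toy_ids = tokenizer.encode("hello world", add_special_tokens=True)
print(toy_ids)
print(tokenizer.decode(toy_ids))
###Output
_____no_output_____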
###Markdown
It's obvious the default MAX_LEN=512 is not enough for some posts, but just how long are these posts? It turns out only 2% of all the sentences are above the length 512. So we'll just proceed as normal and truncate/pad all the sentences to length 512; since most sentences fall in the 100~200 token range, we don't want to add too much padding to most sentences by setting MAX_LEN too high.
###Code
max_len = 512
temp = np.array(sentence_lengths)
temp_count = len(temp[temp > max_len])
temp_len = len(sentence_lengths)
print(f"Out of the\n{temp_len} total sentences,\n{temp_count} are over the length {max_len},\nTotal of: {(temp_count/temp_len * 100):.2f}%")
n, bins, patches = plt.hist(sentence_lengths, bins=[30 * i for i in range(50)], facecolor='red', alpha=0.5)
_ = plt.axvline(512)
# Pad our input tokens with value 0.
# "post" indicates that we want to pad and truncate at the end of the sequence,
# as opposed to the beginning.
df['bert'] = pad_sequences(df['bert'].values, maxlen=MAX_LEN, dtype="long",
value=0, truncating="post", padding="post").tolist()
df['bert_aug'] = pad_sequences(df['bert_aug'].values, maxlen=MAX_LEN, dtype="long",
value=0, truncating="post", padding="post").tolist()
###Output
_____no_output_____
###Markdown
Attention Masks. Source: https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification. Attention masks are used to filter out the padding from each sentence. A simple format of 1 for a real word and 0 for padding.
###Code
# Create attention masks
df['attention'] = df['bert'].apply(lambda arr : [int(token_id > 0) for token_id in arr])
df['attention_aug'] = df['bert_aug'].apply(lambda arr : [int(token_id > 0) for token_id in arr])
###Output
_____no_output_____
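###Markdown
A small illustration (toy ID sequence assumed) of the mask construction used above: padded positions (token ID 0) get 0, every real token gets 1.
###Code
# Hypothetical example of the attention-mask rule applied above
example_ids = [101, 7592, 2088, 102, 0, 0]
print([int(token_id > 0) for token_id in example_ids])  # -> [1, 1, 1, 1, 0, 0]
###Output
_____no_output_____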
###Markdown
Prepare Model. Train Test Validation Split
###Code
test_size = 0.2
validation_size = 0.5
train_df, test_df = train_test_split(df, random_state=42, test_size=test_size)
test_df, validation_df = train_test_split(test_df, random_state=42, test_size=validation_size)
print(f"""{1 - test_size}/{test_size * (1-validation_size)}/{test_size * validation_size} split
{train_df.shape[0]} lines of training data,
{test_df.shape[0]} lines of test data
{validation_df.shape[0]} lines of validation data""")
###Output
0.8/0.1/0.1 split
2094 lines of training data,
262 lines of test data
262 lines of validation data
###Markdown
Multilabel Classifier __[TODO]__ Turns out this is harder to do. Figure this out later. For this classifier we are going to throw in all 30 labels as one big multilabel. This is happening first mostly because it's easier to implement (a rough sketch of one possible multilabel setup is included after the training runs below). Run Model. Run all the training for the topics we want to run. Later, when we are sure of the model we're going to use, we'll run it for all 30 topics.
###Code
lrs = [5e-5, 3e-5, 2e-5]
for lr in lrs:
avg_f1s = np.zeros(epochs, dtype=float)
# Create x, y for each
for topic in topics[::-1]:
train_dataloader, test_dataloader, validation_dataloader = augmented_validationloader(train_df,
test_df,
validation_df,
topic,
batch_size)
# Then load the pretrained BERT model (has linear classification layer on top)
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
num_labels = 2, # The number of output labels--2 for binary classification.
# You can increase this for multi-class tasks.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
model.cuda(device=device)
# load optimizer
optimizer = AdamW(model.parameters(),
lr = lr, # args.learning_rate - default is 5e-5
eps = 1e-8 # args.adam_epsilon - default is 1e-8.
)
# Total number of training steps is [number of batches] x [number of epochs].
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0,
num_training_steps = total_steps)
arg_dict = {
"device" : device,
"optimizer" : optimizer,
"scheduler" : scheduler,
"model" : model,
"epochs" : epochs,
"train_dataloader" : train_dataloader,
"test_dataloader" : validation_dataloader,
"seed_val" : 42,
"get_f1s" : True,
"verbose" : False
}
model, train_losses, test_losses, f1s = single_topic_train(**arg_dict)
avg_f1s += np.array(f1s)
print(f"lr = {lr}: {(avg_f1s / len(topics)).tolist()}")
lrs = [1e-5, 5e-6]
for lr in lrs:
avg_f1s = np.zeros(epochs, dtype=float)
# Create x, y for each
for topic in topics[::-1]:
train_dataloader, test_dataloader, validation_dataloader = augmented_validationloader(train_df,
test_df,
validation_df,
topic,
batch_size)
# Then load the pretrained BERT model (has linear classification layer on top)
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
num_labels = 2, # The number of output labels--2 for binary classification.
# You can increase this for multi-class tasks.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
model.cuda(device=device)
# load optimizer
optimizer = AdamW(model.parameters(),
lr = lr, # args.learning_rate - default is 5e-5
eps = 1e-8 # args.adam_epsilon - default is 1e-8.
)
# Total number of training steps is [number of batches] x [number of epochs].
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0,
num_training_steps = total_steps)
arg_dict = {
"device" : device,
"optimizer" : optimizer,
"scheduler" : scheduler,
"model" : model,
"epochs" : epochs,
"train_dataloader" : train_dataloader,
"test_dataloader" : validation_dataloader,
"seed_val" : 42,
"get_f1s" : True,
"verbose" : False
}
model, train_losses, test_losses, f1s = single_topic_train(**arg_dict)
avg_f1s += np.array(f1s)
print(f"lr = {lr}: {(avg_f1s / len(topics)).tolist()}")
###Output
lr = 1e-05: [0.06516290726817042, 0.24675324675324675, 0.5842891002554867, 0.6294093705858411, 0.6860166288737718, 0.7122598430869107, 0.7346648060933775, 0.7492931353690847]
lr = 5e-06: [0.06896551724137931, 0.15912087912087913, 0.24770044770044766, 0.4942900237017884, 0.5530499848910312, 0.5631834281540739, 0.6011988011988011, 0.6061115355233001]
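###Markdown
A rough sketch (an assumption, not the approach used in this notebook) of what the multilabel variant flagged as TODO above could look like: keep the same BERT encoder, give the classification head one output per topic, and train with a sigmoid/BCE loss so each topic is scored independently.
###Code
# Hypothetical multilabel setup: num_labels = number of topics, BCE-with-logits loss
import torch.nn as nn
num_topics = 30
multilabel_model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels = num_topics,
    output_attentions = False,
    output_hidden_states = False,
)
multilabel_loss = nn.BCEWithLogitsLoss()
# During training (names below are placeholders for batch tensors):
# logits = multilabel_model(b_input_ids, attention_mask=b_input_mask)[0]  # (batch, num_topics)
# loss = multilabel_loss(logits, b_multi_hot_labels.float())
###Output
_____no_output_____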
###Markdown
Save Trained Models
###Code
# # Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()
# output_dir = './{}_model_save/'.format("child_product".replace(' ', '_'))
# print(output_dir)
# # Create output directory if needed
# if not os.path.exists(output_dir):
# os.makedirs(output_dir)
# print("Saving model to %s" % output_dir)
# # Save a trained model, configuration and tokenizer using `save_pretrained()`.
# # They can then be reloaded using `from_pretrained()`
# model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
# model_to_save.save_pretrained(output_dir)
# tokenizer.save_pretrained(output_dir)
# # Good practice: save your training arguments together with the trained model
# # torch.save(args, os.path.join(output_dir, 'training_args.bin'))
###Output
_____no_output_____ |
Keister-dwcEvent.ipynb | ###Markdown
DwC Events. Keister Zooplankton Hood Canal 2012-13 dataUniversity of Washington Pelagic Hypoxia Hood Canal project, Zooplankton dataset.2022-1-12 Parse the data to define and pull out 3 event "types": `cruise`, `stationVisit` and `sample`. The DwC event table will be populated sequentially for each of those event types, in that order, from the most temporally aggregated (cruise) to the most granular (sample). Columns will be populated differently depending on the event type. Since a cruise is a spatial collection of points, a `footprintWKT` polygon and depth ranges are generated and populated.**Notes about fields to use**- mention why I'm not using `id` columns (see Abby's comments)- consider adding `basisOfRecord`- See additional fields used in the LifeWatch dataset: modified, language, rightsHolder, accessRights, institutionCode, datasetName, country (in addition to id, type, eventID, parentEventID, samplingProtocol, eventDate, locationID, waterBody, countryCode, minimumDepthInMeters, maximumDepthInMeters, decimalLatitude, decimalLongitude)**date-time issues**- Are times in UTC (as claimed by the `time` variable) or PDT?? Based on `day_night`, I think they're actually in PDT, not UTC!- Inconsistencies between `time_start` and `day_night`. I've spotted at least one: sample ("20120611UNDm3_200") labelled as "Day" while `time_start` is "23:10". Clearly one of them is wrong. Test for other obvious inconsistencies by setting a time window for Day vs Night and comparing it to `time_start`.- Missing `time_start` values. There are a few such cases, but apparently no corresponding missing `date` values. Still, they lead to `NaT` `time` values. I need to replace the `NaT`'s with a valid datetime, and the only option is to use the `date` and an artificial time value. Start with a fixed value (12:00), then refine it by using 12:00 when `day_night` is `Day` and 23:00 when it's `Night`.- Times capture only the *start* of the sampling event. So, using times as the event end in an interval range would be misleading.**Other comments**- Look up US `countryCode`- Add "Day sampling" / "Night sampling" to `eventRemarks` for sample events (or stationVisit, or both?), based on `day_night` column- Parsing `sample_code` for distinctive information - Example `sample_code`: "20120611UNDm2_200". Dataset description entry for `sample_code`: "PI issued sample ID; sampling date + Station + D (day) or N (night) + Net code (e.g. m1) \_mesh" - The upper case letter character before the "m" is D or N (Day or Night). **In a few cases there's an additional character found before the D/N character, but its meaning is not described in the `sample_code` description**- These were monthly cruises lasting about 4 days, in June-October 2012 and 2013 (ie, 8 cruises).
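A minimal sketch of the day/night refinement described in the notes above (illustration only: the 12:00/23:00 choices are the assumptions stated in the note, and `fallback_time` is a hypothetical helper, not part of the processing below):

```python
# Sketch only: pick a fallback start time from `date` + `day_night`,
# assuming 12:00 for "Day" tows and 23:00 for "Night" tows (values taken from the note above).
from datetime import datetime, timezone

def fallback_time(datestr, day_night):
    hour = 12 if day_night == "Day" else 23
    return datetime(int(datestr[:4]), int(datestr[4:6]), int(datestr[-2:]),
                    hour, 0, tzinfo=timezone.utc)

fallback_time("20120611", "Night")   # -> 2012-06-11 23:00:00+00:00
```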
###Code
from datetime import datetime, timedelta, timezone
import json
from pathlib import Path
import numpy as np
import pandas as pd
import geopandas as gpd
data_pth = Path(".")
###Output
_____no_output_____
###Markdown
Process JSON file containing common mappings and strings
###Code
with open(data_pth / 'common_mappings.json') as f:
common_mappings = json.load(f)
DatasetCode = common_mappings['datasetcode']
cruises = common_mappings['cruises']
stations = common_mappings['stations']
net_tow = common_mappings['net_tow']
iso8601_format = common_mappings['iso8601_format']
CRS = common_mappings['CRS']
###Output
_____no_output_____
###Markdown
Pre-process data from csv for Event table Read the csv file
###Code
sourcecsvdata_pth = data_pth / "sourcedata" / "bcodmo_dataset_682074_data.csv"
###Output
_____no_output_____
###Markdown
`usecols` defines the columns that will be kept and the order in which they'll be organized
###Code
usecols = [
'sample_code', 'mesh_size', 'FWC_DS',
'station', 'latitude', 'longitude',
'date', 'time_start', 'time', 'day_night',
'depth_min', 'depth_max'
]
eventsource_df = pd.read_csv(
sourcecsvdata_pth,
skiprows=[1],
parse_dates=['time'],
usecols=usecols
)[usecols]
len(eventsource_df)
eventsource_df.head()
###Output
_____no_output_____
###Markdown
Some `time` entries are missing (NaT). It looks like it's because the time component is missing, while the date component is available. Remove duplicates and address `time` gaps
###Code
eventsource_df = eventsource_df.drop_duplicates().sort_values(by='sample_code').reset_index(drop=True)
len(eventsource_df)
eventsource_df[eventsource_df.time.isnull()]
###Output
_____no_output_____
###Markdown
Use `time` filtering on `NaT` (null) values to assign `time` based on `date` plus an artificial time value (12:00)
###Code
# eventsource_df['time'].fillna(value='1999-01-01T00:0000Z', inplace=True)
eventsource_df.loc[eventsource_df['time'].isnull(), 'time'] = eventsource_df['date'].astype(str).apply(
lambda datestr: datetime(
int(datestr[:4]), int(datestr[4:6]), int(datestr[-2:]),
12, 0, tzinfo=timezone.utc
)
)
eventsource_df.head(10)
###Output
_____no_output_____
###Markdown
Extract net_code and "extra token" from `sample_code`- Retain only ones that are not already found among the existing dataframe columns.- Example: "20120611UNDm2_200". Dataset description entry for `sample_code`: "PI issued sample ID; sampling date + Station + D (day) or N (night) + Net code (e.g. m1) \_mesh"- The upper case letter character before the "m" is D or N (Day or Night). **In a few cases there's an additional character found before the D/N character, but its meaning is not described in the `sample_code` description**- Ultimately, try to pull out or create a profile code and profile depth interval code, if appropriate?
###Code
len(eventsource_df.sample_code.unique())
###Output
_____no_output_____
###Markdown
Parsing steps:- split the new `sample_code` on the "_" delimiter, create two new columns, `token1` and `mesh_size`- From `token1` extract `token2`, the characters between `station_code` and the "_" delimiter- parse `token2`: split on the letter "m", into `token3` and `net_code`; then create `token4` from `token3` by removing the D/N character. `token4` will be empty in most cases, and will be renamed to `extra_sample_token`
###Code
def split_token2(token2):
token3, net_code = token2.split('m')
token4 = token3[:-1]
return pd.Series({
'net_code': 'm'+net_code,
'extra_sample_token': token4
})
eventsource_df['token2'] = eventsource_df['sample_code'].apply(
lambda cd: cd[10:].split('_')[0]
)
eventsource_df = pd.concat([eventsource_df, eventsource_df['token2'].apply(split_token2)],
axis='columns')
eventsource_df.drop(columns='token2', inplace=True)
eventsource_df.head(20)
###Output
_____no_output_____
###Markdown
Create empty Event dataframeRecords from each event type will be appended to this dataframe, by "type". The type is encoded in the `eventRemarks` column, not in the DwC `type` column, which is not used here explicitly (the type is `Event`).
###Code
event_cols_dtypes = np.dtype(
[
('eventID', str),
('eventRemarks', str),
('parentEventID', str),
('eventDate', str),
('locationID', str),
('locality', str),
('decimalLatitude', float),
('decimalLongitude', float),
('footprintWKT', str),
('geodeticDatum', str),
('waterBody', str),
('countryCode', str),
('minimumDepthInMeters', float),
('maximumDepthInMeters', float),
('samplingProtocol', str)
]
)
event_df = pd.DataFrame(np.empty(0, dtype=event_cols_dtypes))
###Output
_____no_output_____
###Markdown
Create cruise events - Assign cruise code from `cruises` based on `date`- Use `group_by` to generate cruise `time` start and end- Come up with cruise `eventID`. Come up with a project/dataset code first- Populating `event_df` with cruise events Add cruise-related columns to `eventsource_df`. `date_yyyymm` will be used to link cruise events to stationVisit events
###Code
eventsource_df['date_yyyymm'] = eventsource_df['time'].apply(lambda dt: dt.strftime("%Y%m"))
eventsource_df['cruise_code'] = eventsource_df['date_yyyymm'].apply(lambda s: cruises[s])
eventsource_df.head()
###Output
_____no_output_____
###Markdown
Create cruise footprintWKT and centroid points
###Code
cruise_stations_df = eventsource_df[['cruise_code', 'latitude', 'longitude']].drop_duplicates()
cruise_stations_gdf = gpd.GeoDataFrame(
data=cruise_stations_df,
geometry=gpd.points_from_xy(cruise_stations_df.longitude, cruise_stations_df.latitude, crs=CRS)
)
len(cruise_stations_gdf)
cruise_stations_gdf.head()
cruise_stations_gdf.plot();
cruise_stations_gdf.cruise_code.value_counts()
###Output
_____no_output_____
###Markdown
Create one convex hull polygon for each cruise
###Code
cruise_footprints_gdf = gpd.GeoDataFrame(
cruise_stations_gdf.groupby(['cruise_code'])['geometry'].apply(
lambda geom: geom.unary_union.convex_hull)
)
cruise_footprints_gdf['footprintWKT'] = cruise_footprints_gdf.geometry.to_wkt()
# Add footprint centroid latitude and longitude
cruise_footprints_gdf['decimalLongitude'] = cruise_footprints_gdf.centroid.x.round(3)
cruise_footprints_gdf['decimalLatitude'] = cruise_footprints_gdf.centroid.y.round(3)
cruise_footprints_gdf.reset_index(drop=False, inplace=True)
cruise_footprints_gdf
###Output
_____no_output_____
###Markdown
**NOTE:** It would be nice to add the R2R cruise code (eg, "CB1002"; `cruise_code` column) to the event table, ideally as an R2R url (eg, https://www.rvdata.us/search/cruise/CB988). But in what column? `eventRemarks` is already being used for a very specific purpose (event type).
###Code
cruise_df = eventsource_df.groupby(['cruise_code', 'date_yyyymm']).agg({
'time':['min', 'max'],
'depth_min':['min'],
'depth_max':['max'],
})
cruise_df.columns = ["_".join(stat) for stat in cruise_df.columns.ravel()]
cruise_df = (
cruise_df.
rename(columns={'depth_min_min':'minimumDepthInMeters', 'depth_max_max':'maximumDepthInMeters'})
.sort_values(by='date_yyyymm')
.reset_index(drop=False)
)
cruise_df
# This form is for populating eventDate with an iso8601 interval
# lambda row: "{}/{}".format(row['time_min'].strftime(iso8601_format),
# row['time_max'].strftime(iso8601_format)),
cruise_df['eventDate'] = cruise_df.apply(
lambda row: "{}".format(row['time_min'].strftime(iso8601_format)),
axis=1
)
cruise_df = cruise_df.merge(cruise_footprints_gdf, on='cruise_code')
cruise_df['eventID'] = DatasetCode + "_" + cruise_df['cruise_code']
cruise_df['eventRemarks'] = 'cruise'
cruise_df['geodeticDatum'] = CRS
cruise_df['waterBody'] = 'Hood Canal, Puget Sound'
cruise_df['countryCode'] = 'US'
cruise_df.head()
###Output
_____no_output_____
###Markdown
Populate (append to) the `event_df` table with the cruise events
###Code
event_df = event_df.append(
cruise_df[['eventID', 'eventRemarks', 'eventDate',
'decimalLatitude', 'decimalLongitude', 'footprintWKT', 'geodeticDatum',
'waterBody', 'countryCode', 'minimumDepthInMeters', 'maximumDepthInMeters']],
ignore_index=True
)
event_df.head()
###Output
_____no_output_____
###Markdown
Create stationVisit events- Use cruise `eventID` from `eventsource_df` as stationVisit `parentEventID`- Add `stationvisit_code` to `eventsource_df`, for use by the next event type (sample)
###Code
eventsource_df['stationvisit_code'] = (
eventsource_df['date'].astype(str) + eventsource_df['station'] + eventsource_df['day_night'].str.get(0)
)
eventsource_df.head()
stationvisit_df = eventsource_df.groupby(
['date', 'day_night', 'station', 'latitude', 'longitude', 'date_yyyymm', 'stationvisit_code']
).agg({
'time':['min', 'max'],
'depth_min':['min'],
'depth_max':['max'],
})
stationvisit_df.columns = ["_".join(stat) for stat in stationvisit_df.columns.ravel()]
stationvisit_df = (
stationvisit_df
.sort_values(by='time_min')
.reset_index(drop=False)
)
len(stationvisit_df)
stationvisit_df.head(10)
stationvisit_df = stationvisit_df.merge(
cruise_df[['date_yyyymm', 'eventID', 'waterBody', 'countryCode']],
how='left',
on='date_yyyymm'
)
stationvisit_df.head()
stationvisit_df.rename(
columns={
'station':'locationID',
'latitude':'decimalLatitude',
'longitude':'decimalLongitude',
'eventID':'parentEventID',
'depth_min_min':'minimumDepthInMeters',
'depth_max_max':'maximumDepthInMeters',
},
inplace=True
)
# This form is for populating eventDate with an iso8601 interval
# stationvisit_df['eventDate'] = stationvisit_df[['time_min', 'time_max']].apply(
# lambda row: "{}/{}".format(row['time_min'].strftime(iso8601_format),
# row['time_max'].strftime(iso8601_format)),
# axis=1
# )
stationvisit_df['eventDate'] = stationvisit_df.apply(
lambda row: "{}".format(row['time_min'].strftime(iso8601_format)),
axis=1
)
stationvisit_df['eventID'] = stationvisit_df['parentEventID'] + '-' + stationvisit_df['stationvisit_code']
stationvisit_df['eventRemarks'] = 'stationVisit'
stationvisit_df['locality'] = stationvisit_df['locationID'].apply(lambda cd: stations[cd])
stationvisit_df['geodeticDatum'] = CRS
###Output
_____no_output_____
###Markdown
Verify that no duplicate station `eventID` values are created
###Code
len(stationvisit_df.eventID.unique()) == len(stationvisit_df)
stationvisit_df.head(5)
###Output
_____no_output_____
###Markdown
Populate (append to) the `event_df` table with the stationVisit events
###Code
event_df = event_df.append(
stationvisit_df[['eventID', 'eventRemarks', 'parentEventID', 'eventDate',
'decimalLatitude', 'decimalLongitude', 'geodeticDatum',
'locationID', 'locality', 'waterBody', 'countryCode',
'minimumDepthInMeters', 'maximumDepthInMeters']],
ignore_index=True
)
len(event_df)
event_df.head(12)
###Output
_____no_output_____
###Markdown
Create individual "sample" events- Each unique `sample_code` will be an event. `sample_code` will be the eventID, possibly prefixed by the dataset code, `UWPHHCZoop`
###Code
sample_df = eventsource_df.copy()
sample_df.head()
sample_df = sample_df.merge(
stationvisit_df[['stationvisit_code', 'eventID', 'waterBody', 'countryCode',
'locationID', 'locality', 'geodeticDatum']],
how='left',
on='stationvisit_code'
)
sample_df= (
sample_df
.rename(columns={
'sample_code':'eventID',
'eventID':'parentEventID',
'latitude':'decimalLatitude',
'longitude':'decimalLongitude',
'depth_min':'minimumDepthInMeters',
'depth_max':'maximumDepthInMeters',
})
.sort_values(by='time')
.reset_index(drop=False)
)
def samplingProtocol(row):
return (
f"{net_tow[row['FWC_DS']]} net tow using 0.25 m2 HydroBios MultiNet Multiple Plankton Sampler,"
f" net code {row['net_code']}, {row['mesh_size']} micron mesh (qualifier: {row['extra_sample_token']})"
)
sample_df['eventRemarks'] = 'sample'
sample_df['eventDate'] = sample_df['time'].apply(lambda t: t.strftime(iso8601_format))
sample_df['samplingProtocol'] = sample_df.apply(samplingProtocol, axis=1)
sample_df.head()
###Output
_____no_output_____
###Markdown
Populate (append to) the `event_df` table with the sample events
###Code
event_df = event_df.append(
sample_df[['eventID', 'eventRemarks', 'parentEventID', 'eventDate',
'locationID', 'locality', 'waterBody', 'countryCode',
'decimalLatitude', 'decimalLongitude', 'geodeticDatum',
'minimumDepthInMeters', 'maximumDepthInMeters', 'samplingProtocol']],
ignore_index=True
)
len(event_df)
event_df.tail(10)
###Output
_____no_output_____
###Markdown
Export `event_df` to csv
###Code
event_df.eventRemarks.value_counts()
event_df.to_csv(data_pth / 'aligned_csvs' / 'DwC_event.csv', index=False)
###Output
_____no_output_____
###Markdown
Package versions
###Code
print(
f"{datetime.utcnow()} +00:00\n"
f"pandas: {pd.__version__}, geopandas: {gpd.__version__}"
)
###Output
2022-01-13 02:43:18.890177 +00:00
pandas: 1.3.5, geopandas: 0.10.2
|
en/1. The prison Break/The Prison Break.ipynb | ###Markdown
The Prison Break
###Code
from IPython.display import YouTubeVideo  # import needed for the embedded video
YouTubeVideo(id="KFVdHDMcepw",width="560", height="315")
###Output
_____no_output_____
###Markdown
What Hedge can do**Spin the dial**:```pythonlock.spin_dial(some_number_of_times)```**Check the color of the dial.**:```pythoncolor = lock.check_color()```Output:```textred```**Record the position of the dial**:```pythonposition = lock.check_dial()print(position)```Output:```text1```**Open the lock**```pythonlock.open()```Output:```textThe lock stays closed.```**Display values**Anything can be printed; this allows you to see what the program is working with.```pythonprint(lock.check_color())```Output:```textred``` How to loopAny loop consists of two parts:1. The looping condition The looping condition determines how many times or how long the instructions are performed. This line ends with a colon (:).2. The body The body lists the instructions that are to be performed each repetition. These instructions are indented compared to the looping condition.As seen in the video there are two kinds of loops:1. repeat n times In Python this is a _for_ loop2. repeat until condition In Python this is a _while_ loop For loopThe for loop in Python loops over a sequence of items. The condition always looks like this:```pythonfor element in sequence:``` RangeHere we loop over the numbers 1 up to (but not including) 5. ```pythonfor element in range(1, 5): print(element)```Output:```text1234```If you only provide one number in the `range`, the counting starts at `0````pythonfor element in range(5): print(element)```Output:```text01234``` ListYou can also loop over a predetermined set of values:```pythonfor element in ["one", "two", "three", "four"]: print(element)```Output:```textonetwothreefour``` While loop```pythoncount = 0while count < 5: print(count) count = count + 1```Output:```text01234``` How to check values| Operator | Description ||----------|-----------------------|| == | Equals || != | Not equals || < | Less than || > | Greater than || <= | Less than or equal || >= | Greater than or equal |To compare with a value that is a word, surround the word with "". Otherwise Python will think it is a variable. Program Hedge belowBelow you see the lock that Hedge encounters. You can use the tools described above to program Hedge.To let Hedge try to open the lock, you can press the run button in the menu, or press Ctrl+Enter
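Putting the pieces together, one possible shape of a Hedge program is sketched below (purely illustrative: the target colour "green" and spinning one step at a time are guesses, not part of the puzzle description):

```python
lock = Lock()
while lock.check_color() != "green":   # "green" is a made-up target colour, only to show the pattern
    lock.spin_dial(1)                  # spin the dial one position at a time
    print(lock.check_dial())           # keep track of where the dial is
lock.open()                            # try the lock once the loop condition is met
```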
###Code
lock = Lock()
print(lock)
###Output
color: red, position: 1
|
quizzes/quiz3/Q3_Unid.ipynb | ###Markdown
Quiz-3There are three tasks.You'll see the tasks enclosed as follows.---> YOUR TASK n <---...task n ... ---> YOUR TASK n ENDS HERE<---Goals:* Design a DFA that we shall specify in Section 2 (begin XOR end with 01)* Design a DFA we shall specify in Section 3 (for numbers, MSB first, equal to 0 mod 5, similar to Sec 5.2.3 from book)* Practice some Pumping Lemma problems in Section 4. Answer the questions there YOUR TASKSYour tasks will be denoted by "---> YOUR TASK n <---" below
###Code
from jove.DotBashers import *
from jove.Def_md2mc import *
from jove.Def_DFA import *
from jove.LangDef import *
from jove.Def_RE2NFA import *
from jove.Def_NFA import *
###Output
_____no_output_____
###Markdown
Design a DFA for strings over 0,1 that begin XOR end with 01* If it begins with 01, it can't end with 01* If it does not begin with 01, it must end with 01 This is the main design the students will work on!! ---> YOUR TASK 1 is below <---
###Code
Db01XORe01 = md2mc('''
!!
!!- The overall algorithm is to case-analyze on whether we began with a 01 or not.
!!- Please see the state names assigned. Once you understand how the state names were designed,
!!- the transitions should make sense.
!!
DFA !! This DFA chooses meaningful state names and records the last bit seen
!!--- The DFA has to be designed by you
!!--- I'll just tell you the state names I ended up inventing, and my scheme for naming the states
!!--- without giving such state names, I could not have solved this problem!
!!--- In other words, the ENTIRE solution depended on my keeping a clear sense of state names
!!--- and also remembering one bit seen last.
!!--- The state names I chose ---
S_0 !! No acceptance upon seeing a 0; record in state name S0
MNE_1 !! MNE means "must not end in 01." The _1 remembers the last bit seen
NB_1 !! NB means "not beginning with 01." The _1 remembers the last bit seen
NB_0 !! Not beginning with 01. Also 0 is the last bit seen
FNB_1 !! FNB means a final state for the case not beginning with 01. Also '1' seen last
FMNE_0 !! FMNE means a final state and "must not end in 01". Also 0 bit seen last
FMNE_1 !! Since we are seeing a 00, we are not ending in 01, so the F status is kept
''')
dotObj_dfa(Db01XORe01, FuseEdges = True)
Sigma={'0','1'}
for i in range(1,120):
w = nthnumeric(i, Sigma)
if accepts_dfa(Db01XORe01, w):
print("DFA Db01XORe01 accepts ", w)
print("DFA Db01XORe01 rejects all other w in the test set")
###Output
_____no_output_____
###Markdown
---> YOUR TASK 1 ENDS HERE <--- The part below will be retained as such. The TAs will check Presto-1 and Presto-2They expect empty DFA. Then the student design is correct! Else there is a mistake somewhere. Testing out the above machine is not easy; we use REs for thatWe will show the power of regular expressions to test out the above machine. You will simply be doing the tests below and ending up with empty DFAs at "Presto-1" and "Presto-2". The TAs will grade wrt those Prestos.There is no other way to exhaustively test out the DFA! We first complement the above machine and make senseThe complement of the above machine must be a DFA that begins with a 01 exactly when it ends with a 01. See if so.
###Code
# Its complement must be a machine that begins with 01 exactly when it ends with 01 :-)
# This can be read out and confirmed!
Db01XNORe01 = comp_dfa(Db01XORe01)
dotObj_dfa(Db01XNORe01, FuseEdges = True)
###Output
_____no_output_____
###Markdown
Check the complementIf the complement looks like it is doing its job, you can let out a mini Presto. But we will do more tests! Obtain an RE for begins with 01 AND ends with 01
###Code
# This RE "01(''+(0+1)*01)" captures begin with 01 AND ends with 01
Db01ANDe01 = min_dfa(nfa2dfa(re2nfa("01(''+(0+1)*01)")))
dotObj_dfa(Db01ANDe01)
###Output
_____no_output_____
###Markdown
Obtain DB01XNORe01 minus Db01ANDe01 Now the DFA must neither begin with 01 nor end with 01. Check.We can let out a mini Presto if so. It indeed is so!
###Code
# We now need to perform DbXNORe01 - DB01ANDE01 to get a DFA which neither begins nor ends with 01
Dnb01ANDne01 = intersect_dfa(Db01XNORe01, comp_dfa(Db01ANDe01))
dotObj_dfa(Dnb01ANDne01)
###Output
_____no_output_____
###Markdown
This is the RE for "begins with 01". Again fool-proof to obtain.
###Code
# Now Dnb01ANDne01 must neither begin with 01 nor end with 01
# We can intersect with DFAs that begin with 01 and then DFA that ends with 01 and prove they are empty
Db01 = min_dfa(nfa2dfa(re2nfa("01(0+1)*")))
dotObj_dfa(Db01)
###Output
_____no_output_____
###Markdown
This is the RE for "ends with 01". Again fool-proof to obtain.
###Code
De01 = min_dfa(nfa2dfa(re2nfa("(0+1)*01")))
dotObj_dfa(De01)
###Output
_____no_output_____
###Markdown
Presto-1 : If the following DFA is empty, it DOES NOT begin with 01The student is likely right! Check Presto-2 also.
###Code
dotObj_dfa(min_dfa(intersect_dfa(Db01, Dnb01ANDne01)), FuseEdges=True)
###Output
_____no_output_____
###Markdown
Presto-2: If the following DFA is empty, it DOES NOT end with 01If this check also passes, the student is right !!
###Code
dotObj_dfa(min_dfa(intersect_dfa(De01, Dnb01ANDne01)), FuseEdges=True)
###Output
_____no_output_____
###Markdown
Since Presto-1 and Presto-2 worked out, we are done !! Design a DFA for Numbers arriving MSB-first, equal to 0 modulo 5Similar to the machine in Section 5.2.3 but with "5" not "3" ---> YOUR TASK 2 <---
###Code
DmsbMod5 = md2mc('''
DFA
''')
dotObj_dfa(DmsbMod5, FuseEdges=True)
Sigma={'0','1'}
for i in range(1,120):
w = nthnumeric(i, Sigma)
if accepts_dfa(DmsbMod5, w):
print("DFA DmsbMod5 accepts ", w, " having value ", int(w, 2))
# Printout below must be for numbers modulo 5 = 0
###Output
_____no_output_____
###Markdown
Quiz-3There are three tasks.You'll see the tasks enclosed as follows.---> YOUR TASK n <---...task n ... ---> YOUR TASK n ENDS HERE<---Goals:* Design a DFA that we shall specify in Section 2 (begin XOR end with 01)* Design a DFA we shall specify in Section 3 (for numbers, MSB first, equal to 0 mod 5, similar to Sec 5.2.3 from book)* Practice some Pumping Lemma problems in Section 4. Answer the questions there YOUR TASKSYour tasks will be denoted by "---> YOUR TASK n <---" below
###Code
from jove.DotBashers import *
from jove.Def_md2mc import *
from jove.Def_DFA import *
from jove.LangDef import *
from jove.Def_RE2NFA import *
from jove.Def_NFA import *
###Output
You may use any of these help commands:
help(ResetStNum)
help(NxtStateStr)
You may use any of these help commands:
help(md2mc)
.. and if you want to dig more, then ..
help(default_line_attr)
help(length_ok_input_items)
help(union_line_attr_list_fld)
help(extend_rsltdict)
help(form_delta)
help(get_machine_components)
You may use any of these help commands:
help(mkp_dfa)
help(mk_dfa)
help(totalize_dfa)
help(addtosigma_delta)
help(step_dfa)
help(run_dfa)
help(accepts_dfa)
help(comp_dfa)
help(union_dfa)
help(intersect_dfa)
help(pruneUnreach)
help(iso_dfa)
help(langeq_dfa)
help(same_status)
help(h_langeq_dfa)
help(fixptDist)
help(min_dfa)
help(pairFR)
help(state_combos)
help(sepFinNonFin)
help(bash_eql_classes)
help(listminus)
help(bash_1)
help(mk_rep_eqc)
help(F_of)
help(rep_of_s)
help(q0_of)
help(Delta_of)
help(mk_state_eqc_name)
You may use any of these help commands:
help(mk_nfa)
help(totalize_nfa)
help(step_nfa)
help(run_nfa)
help(ec_step_nfa)
help(Eclosure)
help(Echelp)
help(accepts_nfa)
help(nfa2dfa)
help(n2d)
help(inSets)
help(rev_dfa)
help(min_dfa_brz)
You may use any of these help commands:
help(re2nfa)
###Markdown
Design a DFA for strings over 0,1 that begin XOR end with 01* If it begins with 01, it can't end with 01* If it does not begin with 01, it must end with 01 This is the main design the students will work on!! ---> YOUR TASK 1 is below <---
###Code
Db01XORe01 = md2mc('''
!!
!!- The overall algorithm is to case-analyze on whether we began with a 01 or not.
!!- Please see the state names assigned. Once you understand how the state names were designed,
!!- the transitions should make sense.
!!
DFA !! This DFA chooses meaningful state names and records the last bit seen
!!--- The DFA has to be designed by you
!!--- I'll just tell you the state names I ended up inventing, and my scheme for naming the states
!!--- without giving such state names, I could not have solved this problem!
!!--- In other words, the ENTIRE solution depended on my keeping a clear sense of state names
!!--- and also remembering one bit seen last.
!!--- The state names I chose ---
S_0 : 0 -> NB_0 !! No acceptance upon seeing a 0; record in state name S0
S_0 : 1 -> MNE_1
MNE_1 : 0 -> FMNE_0 !! MNE means "must not end in 01." The _1 remembers the last bit seen
MNE_1 : 1 -> FMNE_1
NB_1 : 0 -> NB_0 !! NB means "not beginning with 01." The _1 remembers the last bit seen
NB_1 : 1 -> NB_1
NB_0 : 0 -> NB_0 !! Not beginning with 01. Also 0 is the last bit seen
NB_0 : 1 -> FNB_1
FNB_1 : 0 -> NB_0 !! FNB means a final state for the case not beginning with 01. Also '1' seen last
FNB_1 : 1 -> NB_1
FMNE_0 : 0 -> FMNE_0 !! FMNE means a final state and "must not end in 01". Also 0 bit seen last
FMNE_0 : 1 -> MNE_1
FMNE_1 : 0 -> FMNE_0 !! Since we are seeing a 00, we are not ending in 01, so the F status is kept
FMNE_1 : 1 -> FMNE_1
I : 0 -> S_0
I : 1 -> NB_1
''')
dotObj_dfa(Db01XORe01, FuseEdges = True)
Sigma={'0','1'}
for i in range(1,120):
w = nthnumeric(i, Sigma)
if accepts_dfa(Db01XORe01, w):
print("DFA Db01XORe01 accepts ", w)
print("DFA Db01XORe01 rejects all other w in the test set")
###Output
DFA Db01XORe01 accepts 101
DFA Db01XORe01 accepts 011
DFA Db01XORe01 accepts 010
DFA Db01XORe01 accepts 001
DFA Db01XORe01 accepts 1101
DFA Db01XORe01 accepts 1001
DFA Db01XORe01 accepts 0111
DFA Db01XORe01 accepts 0110
DFA Db01XORe01 accepts 0100
DFA Db01XORe01 accepts 0001
DFA Db01XORe01 accepts 11101
DFA Db01XORe01 accepts 11001
DFA Db01XORe01 accepts 10101
DFA Db01XORe01 accepts 10001
DFA Db01XORe01 accepts 01111
DFA Db01XORe01 accepts 01110
DFA Db01XORe01 accepts 01100
DFA Db01XORe01 accepts 01011
DFA Db01XORe01 accepts 01010
DFA Db01XORe01 accepts 01000
DFA Db01XORe01 accepts 00101
DFA Db01XORe01 accepts 00001
DFA Db01XORe01 accepts 111101
DFA Db01XORe01 accepts 111001
DFA Db01XORe01 accepts 110101
DFA Db01XORe01 accepts 110001
DFA Db01XORe01 accepts 101101
DFA Db01XORe01 accepts 101001
DFA Db01XORe01 accepts 100101
DFA Db01XORe01 accepts 100001
DFA Db01XORe01 accepts 011111
DFA Db01XORe01 accepts 011110
DFA Db01XORe01 accepts 011100
DFA Db01XORe01 accepts 011011
DFA Db01XORe01 accepts 011010
DFA Db01XORe01 accepts 011000
DFA Db01XORe01 accepts 010111
DFA Db01XORe01 accepts 010110
DFA Db01XORe01 accepts 010100
DFA Db01XORe01 accepts 010011
DFA Db01XORe01 accepts 010010
DFA Db01XORe01 accepts 010000
DFA Db01XORe01 accepts 001101
DFA Db01XORe01 accepts 001001
DFA Db01XORe01 rejects all other w in the test set
###Markdown
---> YOUR TASK 1 ENDS HERE <--- The part below will be retained as such. The TAs will check Presto-1 and Presto-2They expect empty DFA. Then the student design is correct! Else there is a mistake somewhere. Testing out the above machine is not easy; we use REs for thatWe will show the power of regular expressions to test out the above machine. You will simply be doing the tests below and ending up with empty DFAs at "Presto-1" and "Presto-2". The TAs will grade wrt those Prestos.There is no other way to exhaustively test out the DFA! We first complement the above machine and make senseThe complement of the above machine must be a DFA that begins with a 01 exactly when it ends with a 01. See if so.
###Code
# Its complement must be a machine that begins with 01 exactly when it ends with 01 :-)
# This can be read out and confirmed!
Db01XNORe01 = comp_dfa(Db01XORe01)
dotObj_dfa(Db01XNORe01, FuseEdges = True)
###Output
_____no_output_____
###Markdown
Check the complementIf the complement looks like it is doing its job, you can let out a mini Presto. But we will do more tests! Obtain an RE for begins with 01 AND ends with 01
###Code
# This RE "01(''+(0+1)*01)" captures begin with 01 AND ends with 01
Db01ANDe01 = min_dfa(nfa2dfa(re2nfa("01(''+(0+1)*01)")))
dotObj_dfa(Db01ANDe01)
###Output
_____no_output_____
###Markdown
Obtain DB01XNORe01 minus Db01ANDe01 Now the DFA must neither begin with 01 nor end with 01. Check.We can let out a mini Presto if so. It indeed is so!
###Code
# We now need to perform DbXNORe01 - DB01ANDE01 to get a DFA which neither begins nor ends with 01
Dnb01ANDne01 = intersect_dfa(Db01XNORe01, comp_dfa(Db01ANDe01))
dotObj_dfa(Dnb01ANDne01)
###Output
_____no_output_____
###Markdown
This is the RE for "begins with 01". Again fool-proof to obtain.
###Code
# Now Dnb01ANDne01 must neither begin with 01 nor end with 01
# We can intersect with DFAs that begin with 01 and then DFA that ends with 01 and prove they are empty
Db01 = min_dfa(nfa2dfa(re2nfa("01(0+1)*")))
dotObj_dfa(Db01)
###Output
_____no_output_____
###Markdown
This is the RE for "ends with 01". Again fool-proof to obtain.
###Code
De01 = min_dfa(nfa2dfa(re2nfa("(0+1)*01")))
dotObj_dfa(De01)
###Output
_____no_output_____
###Markdown
Presto-1 : If the following DFA is empty, it DOES NOT begin with 01The student is likely right! Check Presto-2 also.
###Code
dotObj_dfa(min_dfa(intersect_dfa(Db01, Dnb01ANDne01)), FuseEdges=True)
###Output
_____no_output_____
###Markdown
Presto-2: If the following DFA is empty, it DOES NOT end with 01If this check also passes, the student is right !!
###Code
dotObj_dfa(min_dfa(intersect_dfa(De01, Dnb01ANDne01)), FuseEdges=True)
###Output
_____no_output_____
###Markdown
Since Presto-1 and Presto-2 worked out, we are done !! Design a DFA for Numbers arriving MSB-first, equal to 0 modulo 5Similar to the machine in Section 5.2.3 but with "5" not "3" ---> YOUR TASK 2 <---
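One way to see why five states are enough (a standard argument, not part of the quiz statement): with bits arriving MSB-first, reading a new bit $b$ turns the number seen so far, $n$, into $2n + b$, so only the remainder modulo 5 has to be tracked:

$$ (2n + b) \bmod 5 \;=\; \big(2\,(n \bmod 5) + b\big) \bmod 5 $$

A state per remainder 0 through 4 therefore suffices, with the remainder-0 state marked accepting (the role played by F0 below).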
###Code
DmsbMod5 = md2mc('''
DFA
I : 0 -> F0
I : 1 -> S1
F0 : 0 -> F0
F0 : 1 -> S1
S1 : 0 -> S2
S1 : 1 -> S3
S2 : 0 -> S4
S2 : 1 -> F0
S3 : 0 -> S1
S3 : 1 -> S2
S4 : 0 -> S3
S4 : 1 -> S4
''')
dotObj_dfa(DmsbMod5, FuseEdges=True)
Sigma={'0','1'}
for i in range(1,120):
w = nthnumeric(i, Sigma)
if accepts_dfa(DmsbMod5, w):
print("DFA DmsbMod5 accepts ", w, " having value ", int(w, 2))
# Printout below must be for numbers modulo 5 = 0
###Output
DFA DmsbMod5 accepts 0 having value 0
DFA DmsbMod5 accepts 00 having value 0
DFA DmsbMod5 accepts 101 having value 5
DFA DmsbMod5 accepts 000 having value 0
DFA DmsbMod5 accepts 1111 having value 15
DFA DmsbMod5 accepts 1010 having value 10
DFA DmsbMod5 accepts 0101 having value 5
DFA DmsbMod5 accepts 0000 having value 0
DFA DmsbMod5 accepts 11110 having value 30
DFA DmsbMod5 accepts 11001 having value 25
DFA DmsbMod5 accepts 10100 having value 20
DFA DmsbMod5 accepts 01111 having value 15
DFA DmsbMod5 accepts 01010 having value 10
DFA DmsbMod5 accepts 00101 having value 5
DFA DmsbMod5 accepts 00000 having value 0
DFA DmsbMod5 accepts 111100 having value 60
DFA DmsbMod5 accepts 110111 having value 55
DFA DmsbMod5 accepts 110010 having value 50
DFA DmsbMod5 accepts 101101 having value 45
DFA DmsbMod5 accepts 101000 having value 40
DFA DmsbMod5 accepts 100011 having value 35
DFA DmsbMod5 accepts 011110 having value 30
DFA DmsbMod5 accepts 011001 having value 25
DFA DmsbMod5 accepts 010100 having value 20
DFA DmsbMod5 accepts 001111 having value 15
DFA DmsbMod5 accepts 001010 having value 10
###Markdown
Quiz-3There are three tasks.You'll see the tasks enclosed as follows.---> YOUR TASK n <---...task n ... ---> YOUR TASK n ENDS HERE<---Goals:* Design a DFA that we shall specify in Section 2 (begin XOR end with 01)* Design a DFA we shall specify in Section 3 (for numbers, MSB first, equal to 0 mod 5, similar to Sec 5.2.3 from book)* Practice some Pumping Lemma problems in Section 4. Answer the questions there YOUR TASKSYour tasks will be denoted by "---> YOUR TASK n <---" below
###Code
import sys
sys.path[0:0] = ['../..','../../3rdparty'] # Put these at the head of the search path
from jove.DotBashers import *
from jove.Def_md2mc import *
from jove.Def_DFA import *
from jove.LangDef import *
from jove.Def_RE2NFA import *
from jove.Def_NFA import *
###Output
_____no_output_____
###Markdown
Design a DFA for strings over 0,1 that begin XOR end with 01* If it begins with 01, it can't end with 01* If it does not begin with 01, it must end with 01 This is the main design the students will work on!! ---> YOUR TASK 1 is below <---
###Code
Db01XORe01 = md2mc('''
!!
!!- The overall algorithm is to case-analyze on whether we began with a 01 or not.
!!- Please see the state names assigned. Once you understand how the state names were designed,
!!- the transitions should make sense.
!!
DFA !! This DFA chooses meaningful state names and records the last bit seen
!!--- The DFA has to be designed by you
!!--- I'll just tell you the state names I ended up inventing, and my scheme for naming the states
!!--- without giving such state names, I could not have solved this problem!
!!--- In other words, the ENTIRE solution depended on my keeping a clear sense of state names
!!--- and also remembering one bit seen last.
!!--- The state names I chose ---
S_0 !! No acceptance upon seeing a 0; record in state name S0
MNE_1 !! MNE means "must not end in 01." The _1 remembers the last bit seen
NB_1 !! NB means "not beginning with 01." The _1 remembers the last bit seen
NB_0 !! Not beginning with 01. Also 0 is the last bit seen
FNB_1 !! FNB means a final state for the case not beginning with 01. Also '1' seen last
FMNE_0 !! FMNE means a final state and "must not end in 01". Also 0 bit seen last
FMNE_1 !! Since we are seeing a 00, we are not ending in 01, so the F status is kept
''')
dotObj_dfa(Db01XORe01, FuseEdges = True)
Sigma={'0','1'}
for i in range(1,120):
w = nthnumeric(i, Sigma)
if accepts_dfa(Db01XORe01, w):
print("DFA Db01XORe01 accepts ", w)
print("DFA Db01XORe01 rejects all other w in the test set")
###Output
_____no_output_____
###Markdown
---> YOUR TASK 1 ENDS HERE <--- The part below will be retained as such. The TAs will check Presto-1 and Presto-2They expect empty DFA. Then the student design is correct! Else there is a mistake somewhere. Testing out the above machine is not easy; we use REs for thatWe will show the power of regular expressions to test out the above machine. You will simply be doing the tests below and ending up with empty DFAs at "Presto-1" and "Presto-2". The TAs will grade wrt those Prestos.There is no other way to exhaustively test out the DFA! We first complement the above machine and make senseThe complement of the above machine must be a DFA that begins with a 01 exactly when it ends with a 01. See if so.
###Code
# Its complement must be a machine that begins with 01 exactly when it ends with 01 :-)
# This can be read out and confirmed!
Db01XNORe01 = comp_dfa(Db01XORe01)
dotObj_dfa(Db01XNORe01, FuseEdges = True)
###Output
_____no_output_____
###Markdown
Check the complementIf the complement looks like it is doing its job, you can let out a mini Presto. But we will do more tests! Obtain an RE for begins with 01 AND ends with 01
###Code
# This RE "01(''+(0+1)*01)" captures begin with 01 AND ends with 01
Db01ANDe01 = min_dfa(nfa2dfa(re2nfa("01(''+(0+1)*01)")))
dotObj_dfa(Db01ANDe01)
###Output
_____no_output_____
###Markdown
Obtain DB01XNORe01 minus Db01ANDe01 Now the DFA must neither begin with 01 nor end with 01. Check.We can let out a mini Presto if so. It indeed is so!
###Code
# We now need to perform DbXNORe01 - DB01ANDE01 to get a DFA which neither begins nor ends with 01
Dnb01ANDne01 = intersect_dfa(Db01XNORe01, comp_dfa(Db01ANDe01))
dotObj_dfa(Dnb01ANDne01)
###Output
_____no_output_____
###Markdown
This is the RE for "begins with 01". Again fool-proof to obtain.
###Code
# Now Dnb01ANDne01 must neither begin with 01 nor end with 01
# We can intersect with DFAs that begin with 01 and then DFA that ends with 01 and prove they are empty
Db01 = min_dfa(nfa2dfa(re2nfa("01(0+1)*")))
dotObj_dfa(Db01)
###Output
_____no_output_____
###Markdown
This is the RE for "ends with 01". Again fool-proof to obtain.
###Code
De01 = min_dfa(nfa2dfa(re2nfa("(0+1)*01")))
dotObj_dfa(De01)
###Output
_____no_output_____
###Markdown
Presto-1 : If the following DFA is empty, it DOES NOT begin with 01The student is likely right! Check Presto-2 also.
###Code
dotObj_dfa(min_dfa(intersect_dfa(Db01, Dnb01ANDne01)), FuseEdges=True)
###Output
_____no_output_____
###Markdown
Presto-2: If the following DFA is empty, it DOES NOT end with 01If this check also passes, the student is right !!
###Code
dotObj_dfa(min_dfa(intersect_dfa(De01, Dnb01ANDne01)), FuseEdges=True)
###Output
_____no_output_____
###Markdown
Since Presto-1 and Presto-2 worked out, we are done !! Design a DFA for Numbers arriving MSB-first, equal to 0 modulo 5Similar to the machine in Section 5.2.3 but with "5" not "3" ---> YOUR TASK 2 <---
###Code
DmsbMod5 = md2mc('''
DFA
''')
dotObj_dfa(DmsbMod5, FuseEdges=True)
Sigma={'0','1'}
for i in range(1,120):
w = nthnumeric(i, Sigma)
if accepts_dfa(DmsbMod5, w):
print("DFA DmsbMod5 accepts ", w, " having value ", int(w, 2))
# Printout below must be for numbers modulo 5 = 0
###Output
_____no_output_____ |
notebooks/elasticsearch-spark-recommender.ipynb | ###Markdown
Creating a Scalable Recommender with Apache Spark & ElasticsearchIn this notebook, you will create a recommendation engine using Spark and Elasticsearch. Using some movie rating data, you will train a collaborative filtering model in Spark and export the trained model to Elasticsearch. Once exported, you can test your recommendations by querying Elasticsearch and displaying the results. _Prerequisites_The notebook assumes you have installed Elasticsearch, Apache Spark and the Elasticsearch Spark connector detailed in the [setup steps](https://github.com/IBM/elasticsearch-spark-recommender/tree/mastersteps).> _Optional:_> In order to display the images in the recommendation demo, you will need to access [The Movie Database (TMdb) API](https://www.themoviedb.org/documentation/api). Please follow the [instructions](https://developers.themoviedb.org/3/getting-started) to get an API key. OverviewYou will work through the following steps:1. Prepare the data2. Use the Elasticsearch Spark connector to save it to Elasticsearch3. Load ratings data and train a collaborative filtering recommendation model using Spark MLlib4. Save the model to Elasticsearch5. Show recommendations using Elasticsearch's script score query together with vector functions Step 1: Prepare the data* This notebook uses the "small" version of the latest MovieLens movie rating dataset, containing about 100,000 ratings, 9,000 movies and 700 users* The latest version of the data can be downloaded at https://grouplens.org/datasets/movielens/latest/* Follow the [Code Pattern instructions](https://github.com/IBM/elasticsearch-spark-recommender/tree/master5-download-the-data) to download the `ml-latest-small.zip` file and unzip it to a suitable location on your system.The folder should contain a number of CSV files. We will be using the following files:* `ratings.csv` - movie rating data* `links.csv` - external database ids for each movie* `movies.csv` - movie title and genres
###Code
# first import a few utility methods that we'll use later on
from IPython.display import Image, HTML, display
# check PySpark is running
spark
###Output
_____no_output_____
###Markdown
Load rating and movie data **Ratings**The ratings data consists of around 100,000 ratings given by users to movies. Each row of the `DataFrame` consists of a `userId`, `movieId` and `timestamp` for the event, together with the `rating` given by the user to the movie
###Code
# if you unzipped the data to a different location than that specified in the Code Pattern setup steps
# you can change the path below to point to the correct location
PATH_TO_DATA = "../data/ml-latest-small"
# load ratings data
ratings = spark.read.csv(PATH_TO_DATA + "/ratings.csv", header=True, inferSchema=True)
ratings.cache()
print("Number of ratings: {}".format(ratings.count()))
print("Sample of ratings:")
ratings.show(5)
###Output
_____no_output_____
###Markdown
You will see that the `timestamp` field is a UNIX timestamp in seconds. Elasticsearch takes timestamps in milliseconds, so you will use some `DataFrame` operations to convert the timestamps into milliseconds.
###Code
ratings = ratings.select(
ratings.userId, ratings.movieId, ratings.rating, (ratings.timestamp.cast("long") * 1000).alias("timestamp"))
ratings.show(5)
###Output
_____no_output_____
###Markdown
**Movies**The file `movies.csv` contains the `movieId`, `title` and `genres` for each movie. As you can see, the `genres` field is a bit tricky to use, as the genres are in the form of one string delimited by the `|` character: `Adventure|Animation|Children|Comedy|Fantasy`.
###Code
# load raw data from CSV
raw_movies = spark.read.csv(PATH_TO_DATA + "/movies.csv", header=True, inferSchema=True)
print("Raw movie data:")
raw_movies.show(5, truncate=False)
###Output
_____no_output_____
###Markdown
For indexing into Elasticsearch, we would prefer to represent the genres as a list. Create a `DataFrame` user-defined function (UDF) to extract this delimited string into a list of genres.
###Code
from pyspark.sql.functions import udf
from pyspark.sql.types import *
# define a UDF to convert the raw genres string to an array of genres and lowercase
extract_genres = udf(lambda x: x.lower().split("|"), ArrayType(StringType()))
# test it out
raw_movies.select("movieId", "title", extract_genres("genres").alias("genres")).show(5, False)
###Output
_____no_output_____
###Markdown
Ok, that looks better!You may also notice that the movie titles contain the year of release. It would be useful to have that as a field in your search index for filtering results (say you want to filter our recommendations to include only more recent movies).Create a UDF to extract the release year from the title using a Python regular expression.
###Code
import re
# define a UDF to extract the release year from the title, and return the new title and year in a struct type
def extract_year_fn(title):
result = re.search("\(\d{4}\)", title)
try:
if result:
group = result.group()
year = group[1:-1]
start_pos = result.start()
title = title[:start_pos-1]
return (title, year)
else:
return (title, 1970)
except:
print(title)
extract_year = udf(extract_year_fn,\
StructType([StructField("title", StringType(), True),\
StructField("release_date", StringType(), True)]))
# test out our function
s = "Jumanji (1995)"
extract_year_fn(s)
###Output
_____no_output_____
###Markdown
Ok the function works! Now create a new `DataFrame` with the cleaned-up titles, release dates and genres of the movies.
###Code
movies = raw_movies.select(
"movieId", extract_year("title").title.alias("title"),\
extract_year("title").release_date.alias("release_date"),\
extract_genres("genres").alias("genres"))
print("Cleaned movie data:")
movies.show(5, truncate=False)
###Output
_____no_output_____
###Markdown
Next, join the `links.csv` data to `movies` so that there is an id for _The Movie Database_ corresponding to each movie. You can use this id to retrieve movie poster images when displaying your recommendations later.
###Code
link_data = spark.read.csv(PATH_TO_DATA + "/links.csv", header=True, inferSchema=True)
# join movies with links to get TMDB id
movie_data = movies.join(link_data, movies.movieId == link_data.movieId)\
.select(movies.movieId, movies.title, movies.release_date, movies.genres, link_data.tmdbId)
num_movies = movie_data.count()
print("Cleaned movie data with tmdbId links:")
movie_data.show(5, truncate=False)
###Output
_____no_output_____
###Markdown
> **_Optional_**>> Run the below cell to test your access to TMDb API. You should see the _Toy Story_ movie poster displayed inline.>> To install the Python package run `pip install tmdbsimple` in your console (see the [Code Pattern Steps](https://github.com/IBM/elasticsearch-spark-recommender/tree/master6-launch-the-notebook))
###Code
try:
import tmdbsimple as tmdb
import json
from requests.exceptions import HTTPError
# replace this variable with your actual TMdb API key
tmdb.API_KEY = 'YOUR_API_KEY'
print("Successfully imported tmdbsimple!")
# base URL for TMDB poster images
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
movie_id = movie_data.first().tmdbId
movie_info = tmdb.Movies(movie_id).info()
movie_poster_url = IMAGE_URL + movie_info['poster_path']
display(Image(movie_poster_url, width=200))
except ImportError:
print("Cannot import tmdbsimple as it is not installed, no movie posters will be displayed!")
except HTTPError as e:
if e.response.status_code == 401:
j = json.loads(e.response.text)
print("TMdb API call failed: {}".format(j['status_message']))
###Output
_____no_output_____
###Markdown
Step 2: Load data into ElasticsearchNow that you have your dataset processed and prepared, you will load it into Elasticsearch._Note:_ for the purposes of this demo notebook you have started with an existing example dataset and will load that into Elasticsearch. In practice you may write your event data as well as user and item metadata from your application directly into Elasticsearch.First test that your Elasticsearch instance is running and you can connect to it using the Python Elasticsearch client.
###Code
from elasticsearch import Elasticsearch
# test your ES instance is running
es = Elasticsearch()
es.info(pretty=True)
###Output
_____no_output_____
###Markdown
Create Elasticsearch indices, with mappings for users, movies and rating eventsIn Elasticsearch, an "index" is roughly similar to a "database" or "database table". The schema for an index is called an index mapping.While Elasticsearch supports dynamic mapping, it's advisable to specify the mapping explicitly when creating an index if you know what your data looks like.For the purposes of your recommendation engine, this is also necessary so that you can specify the vector field that will hold the recommendation "model" (that is, the factor vectors). When creating a vector field, you need to provide the dimension of the vector explicitly, so it cannot be a dynamic mapping.> _Note_ This notebook does not go into detail about the underlying scoring mechanism or the relevant Elasticsearch internals. See the talks and slides in the [Code Pattern Links section](https://github.com/IBM/elasticsearch-spark-recommender/blob/master/README.mdlinks) for more detail.__References:__* [Create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html)* [Index mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html)* [Dense vector types](https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html) > **_Optional_**> If you are re-running the notebook and have previously created the `movies`, `users` and `ratings`indices in Elasticsearch, you should first delete them by un-commenting and running the next cell, before running the index creation cell that follows.
###Code
# es.indices.delete(index="ratings,users,movies")
###Output
_____no_output_____
###Markdown
Now you're ready to create your indices.
###Code
# set the factor vector dimension for the recommendation model
VECTOR_DIM = 20
create_ratings = {
# this mapping definition sets up the fields for the rating events
"mappings": {
"properties": {
"timestamp": {
"type": "date"
},
"userId": {
"type": "integer"
},
"movieId": {
"type": "integer"
},
"rating": {
"type": "double"
}
}
}
}
create_users = {
# this mapping definition sets up the metadata fields for the users
"mappings": {
"properties": {
"userId": {
"type": "integer"
},
# the following fields define our model factor vectors and metadata
"model_factor": {
"type": "dense_vector",
"dims" : VECTOR_DIM
},
"model_version": {
"type": "keyword"
},
"model_timestamp": {
"type": "date"
}
}
}
}
create_movies = {
# this mapping definition sets up the metadata fields for the movies
"mappings": {
"properties": {
"movieId": {
"type": "integer"
},
"tmdbId": {
"type": "keyword"
},
"genres": {
"type": "keyword"
},
"release_date": {
"type": "date",
"format": "year"
},
# the following fields define our model factor vectors and metadata
"model_factor": {
"type": "dense_vector",
"dims" : VECTOR_DIM
},
"model_version": {
"type": "keyword"
},
"model_timestamp": {
"type": "date"
}
}
}
}
# create indices with the settings and mappings above
res_ratings = es.indices.create(index="ratings", body=create_ratings)
res_users = es.indices.create(index="users", body=create_users)
res_movies = es.indices.create(index="movies", body=create_movies)
print("Created indices:")
print(res_ratings)
print(res_users)
print(res_movies)
###Output
_____no_output_____
###Markdown
Load Ratings and Movies DataFrames into ElasticsearchFirst you will write the ratings data to Elasticsearch. Notice that you can simply use the Spark Elasticsearch connector to write a `DataFrame` with the native Spark datasource API by specifying `format("es")`
###Code
# write ratings data
ratings.write.format("es").save("ratings")
num_ratings_es = es.count(index="ratings")['count']
num_ratings_df = ratings.count()
# check write went ok
print("Dataframe count: {}".format(num_ratings_df))
print("ES index count: {}".format(num_ratings_es))
# test things out by retrieving a few rating event documents from Elasticsearch
es.search(index="ratings", q="*", size=3)
###Output
_____no_output_____
###Markdown
Since you've indexed the rating event data into Elasticsearch, you can use all the capabilities of a search engine to query the data. For example, you could count the number of ratings events in a given date range using Elasticsearch's date math in a query string:
###Code
es.count(index="ratings", q="timestamp:[2018-01-01 TO 2018-02-01]")
###Output
_____no_output_____
###Markdown
Next write the movie metadata
###Code
# write movie data, specifying the DataFrame column to use as the id mapping
movie_data.write.format("es").option("es.mapping.id", "movieId").save("movies")
num_movies_df = movie_data.count()
num_movies_es = es.count(index="movies")['count']
# check load went ok
print("Movie DF count: {}".format(num_movies_df))
print("ES index count: {}".format(num_movies_es))
###Output
_____no_output_____
###Markdown
Again you can harness the power of search to query the movie metadata:
###Code
# test things out by searching for movies containing "matrix" in the title
es.search(index="movies", q="title:matrix", size=3)
###Output
_____no_output_____
###Markdown
Step 3: Train a recommender model on the ratings dataYour data is now stored in Elasticsearch and you will use the ratings data to build a collaborative filtering recommendation model.[Collaborative filtering](https://en.wikipedia.org/wiki/Collaborative_filtering) is a recommendation approach that is effectively based on the "wisdom of the crowd". It makes the assumption that, if two people share similar preferences, then the things that one of them prefers could be good recommendations to make to the other. In other words, if user A tends to like certain movies, and user B shares some of these preferences with user A, then the movies that user A likes, that user B _has not yet seen_, may well be movies that user B will also like.In a similar manner, we can think about _items_ as being similar if they tend to be rated highly by the same people, on average. Hence these models are based on the combined, collaborative preferences and behavior of all users in aggregate. They tend to be very effective in practice (provided you have enough preference data to train the model). The ratings data you have is a form of _explicit preference data_, perfect for training collaborative filtering models. Alternating Least SquaresAlternating Least Squares (ALS) is a specific algorithm for solving a type of collaborative filtering model known as [matrix factorization (MF)](https://en.wikipedia.org/wiki/Matrix_decomposition). The core idea of MF is to represent the ratings as a _user-item ratings matrix_. In the diagram below you will see this matrix on the left (with users as _rows_ and movies as _columns_). The entries in this matrix are the ratings given by users to movies.You may also notice that the matrix has _missing entries_ because not all users have rated all movies. In this situation we refer to the data as _sparse_.MF methods aim to find two much smaller matrices (one representing the _users_ and the other the _items_) that, when multiplied together, re-construct the original ratings matrix as closely as possible. This is known as _factorizing_ the original matrix, hence the name of the technique.The two smaller matrices are called _factor matrices_ (or _latent features_). The user and movie factor matrices are illustrated on the right in the diagram above. The idea is that each user factor vector is a compressed representation of the user's preferences and behavior. Likewise, each item factor vector is a compressed representation of the item. Once the model is trained, the factor vectors can be used to make recommendations, which is what you will do in the following sections.__Further reading:__* [Spark MLlib Collaborative Filtering](http://spark.apache.org/docs/latest/ml-collaborative-filtering.html)* [Alternating Least Squares and collaborative filtering](https://datasciencemadesimpler.wordpress.com/tag/alternating-least-squares/)* [Quora question on Alternating Least Squares](https://www.quora.com/What-is-the-Alternating-Least-Squares-method-in-recommendation-systems-And-why-does-this-algorithm-work-intuition-behind-this)Fortunately, Spark's MLlib machine learning library has a scalable, efficient implementation of matrix factorization built in, which we can use to train our recommendation model. Next, you will use Spark's ALS to train a model on your ratings data from Elasticsearch.
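In symbols (the standard explicit-feedback formulation that Spark's ALS implements, where $\lambda$ corresponds to `regParam` and the factor dimension to `rank`): each rating $r_{ui}$ is approximated by the dot product of a user factor vector $\mathbf{x}_u$ and an item factor vector $\mathbf{y}_i$, and ALS alternates between solving for the user factors with the item factors held fixed and vice versa, minimizing

$$ \sum_{(u,i)\,\in\,\text{ratings}} \big(r_{ui} - \mathbf{x}_u^\top \mathbf{y}_i\big)^2 \;+\; \lambda\Big(\sum_u \lVert \mathbf{x}_u \rVert^2 + \sum_i \lVert \mathbf{y}_i \rVert^2\Big), \qquad \hat r_{ui} = \mathbf{x}_u^\top \mathbf{y}_i . $$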
###Code
ratings_from_es = spark.read.format("es").load("ratings")
ratings_from_es.show(5)
from pyspark.ml.recommendation import ALS
from pyspark.sql.functions import col
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating", regParam=0.02, rank=VECTOR_DIM, seed=54)
model = als.fit(ratings_from_es)
model.userFactors.show(5)
model.itemFactors.show(5)
###Output
_____no_output_____
###Markdown
Step 4: Export ALS user and item factor vectors to ElasticsearchCongratulations, you've trained a recommendation model! The next step is to export the model factors (shown in the `DataFrames` above) to Elasticsearch.We can export the model factor vector directly to Elasticsearch, since it is an array and the `dense_vector` field expects an array as input.For illustrative purposes, we will also export model metadata (such as the Spark model id and a timestamp). Write the model factor vectors, model version and model timestamp to Elasticsearch
###Code
from pyspark.sql.functions import lit, current_timestamp, unix_timestamp
ver = model.uid
ts = unix_timestamp(current_timestamp())
movie_vectors = model.itemFactors.select("id",\
col("features").alias("model_factor"),\
lit(ver).alias("model_version"),\
ts.alias("model_timestamp"))
movie_vectors.show(5)
user_vectors = model.userFactors.select("id",\
col("features").alias("model_factor"),\
lit(ver).alias("model_version"),\
ts.alias("model_timestamp"))
user_vectors.show(5)
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "update" write mode for ES, since you want to update new fields only
# - "append" write mode for Spark
movie_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("movies", mode="append")
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "index" write mode for ES, since you have not written to the user index previously
# - "append" write mode for Spark
user_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "index") \
.save("users", mode="append")
###Output
_____no_output_____
###Markdown
Check the data was written correctlyYou can search for a movie to see if the model factor vector was written correctly. You should see a `'model_factor': [0..188..., ]` field in the returned movie document, as well as `model_version` and `model_timestamp` fields.
###Code
# search for a particular sci-fi movie
es.search(index="movies", q="force awakens")['hits']['hits'][0]
###Output
_____no_output_____
###Markdown
Step 5: Recommend using Elasticsearch!Now that you have loaded your recommendation model into Elasticsearch, you will generate some recommendations.First, you will need to create a few utility functions for:* Fetching movie posters from TMdb API (optional)* Constructing the Elasticsearch [script score query for vector functions](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-script-score-query.htmlvector-functions) to generate recommendations from your factor model* Given a movie, use this query to find the movies most similar to it* Given a user, use this query to find the movies with the highest predicted rating, to recommend to the user* Display the results as an HTML table in Jupyter
###Code
from IPython.display import Image, HTML, display
def get_poster_url(id):
"""Fetch movie poster image URL from TMDb API given a tmdbId"""
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
try:
import tmdbsimple as tmdb
from tmdbsimple import APIKeyError
try:
movie = tmdb.Movies(id).info()
poster_url = IMAGE_URL + movie['poster_path'] if 'poster_path' in movie and movie['poster_path'] is not None else ""
return poster_url
except APIKeyError as ae:
return "KEY_ERR"
except Exception as me:
return "NA"
def vector_query(query_vec, vector_field, q="*", cosine=False):
"""
Construct an Elasticsearch script score query using `dense_vector` fields
The script score query takes as parameters the query vector (as a Python list)
Parameters
----------
query_vec : list
The query vector
vector_field : str
The field name in the document against which to score `query_vec`
q : str, optional
Query string for the search query (default: '*' to search across all documents)
cosine : bool, optional
Whether to compute cosine similarity. If `False` then the dot product is computed (default: False)
Note: Elasticsearch cannot rank negative scores. Therefore, in the case of the dot product, a sigmoid transform
is applied. In the case of cosine similarity, 1.0 is added to the score. In both cases, documents with no
factor vectors are ignored by applying a 0.0 score.
The query vector passed in will be the user factor vector (if generating recommended movies for a user)
or movie factor vector (if generating similar movies for a given movie)
"""
if cosine:
score_fn = "doc['{v}'].size() == 0 ? 0 : cosineSimilarity(params.vector, '{v}') + 1.0"
else:
score_fn = "doc['{v}'].size() == 0 ? 0 : sigmoid(1, Math.E, -dotProduct(params.vector, '{v}'))"
    score_fn = score_fn.format(v=vector_field)
return {
"query": {
"script_score": {
"query" : {
"query_string": {
"query": q
}
},
"script": {
"source": score_fn,
"params": {
"vector": query_vec
}
}
}
}
}
def get_similar(the_id, q="*", num=10, index="movies", vector_field='model_factor'):
"""
Given a movie id, execute the recommendation script score query to find similar movies,
ranked by cosine similarity. We return the `num` most similar, excluding the movie itself.
"""
response = es.get(index=index, id=the_id)
src = response['_source']
if vector_field in src:
query_vec = src[vector_field]
q = vector_query(query_vec, vector_field, q=q, cosine=True)
results = es.search(index=index, body=q)
hits = results['hits']['hits']
return src, hits[1:num+1]
def get_user_recs(the_id, q="*", num=10, users="users", movies="movies", vector_field='model_factor'):
"""
Given a user id, execute the recommendation script score query to find top movies, ranked by predicted rating
"""
response = es.get(index=users, id=the_id)
src = response['_source']
if vector_field in src:
query_vec = src[vector_field]
q = vector_query(query_vec, vector_field, q=q, cosine=False)
results = es.search(index=movies, body=q)
hits = results['hits']['hits']
return src, hits[:num]
def get_movies_for_user(the_id, num=10, ratings="ratings", movies="movies"):
"""
Given a user id, get the movies rated by that user, from highest- to lowest-rated.
"""
response = es.search(index=ratings, q="userId:{}".format(the_id), size=num, sort=["rating:desc"])
hits = response['hits']['hits']
ids = [h['_source']['movieId'] for h in hits]
movies = es.mget(body={"ids": ids}, index=movies, _source_includes=['tmdbId', 'title'])
movies_hits = movies['docs']
tmdbids = [h['_source'] for h in movies_hits]
return tmdbids
def display_user_recs(the_id, q="*", num=10, num_last=10, users="users", movies="movies", ratings="ratings"):
user, recs = get_user_recs(the_id, q, num, users, movies)
user_movies = get_movies_for_user(the_id, num_last, ratings, movies)
# check that posters can be displayed
first_movie = user_movies[0]
first_im_url = get_poster_url(first_movie['tmdbId'])
if first_im_url == "NA":
display(HTML("<i>Cannot import tmdbsimple. No movie posters will be displayed!</i>"))
if first_im_url == "KEY_ERR":
display(HTML("<i>Key error accessing TMDb API. Check your API key. No movie posters will be displayed!</i>"))
# display the movies that this user has rated highly
display(HTML("<h2>Get recommended movies for user id %s</h2>" % the_id))
display(HTML("<h4>The user has rated the following movies highly:</h4>"))
user_html = "<table border=0>"
i = 0
for movie in user_movies:
movie_im_url = get_poster_url(movie['tmdbId'])
movie_title = movie['title']
user_html += "<td><h5>%s</h5><img src=%s width=150></img></td>" % (movie_title, movie_im_url)
i += 1
if i % 5 == 0:
user_html += "</tr><tr>"
user_html += "</tr></table>"
display(HTML(user_html))
# now display the recommended movies for the user
display(HTML("<br>"))
display(HTML("<h2>Recommended movies:</h2>"))
rec_html = "<table border=0>"
i = 0
for rec in recs:
r_im_url = get_poster_url(rec['_source']['tmdbId'])
r_score = rec['_score']
r_title = rec['_source']['title']
rec_html += "<td><h5>%s</h5><img src=%s width=150></img></td><td><h5>%2.3f</h5></td>" % (r_title, r_im_url, r_score)
i += 1
if i % 5 == 0:
rec_html += "</tr><tr>"
rec_html += "</tr></table>"
display(HTML(rec_html))
def display_similar(the_id, q="*", num=10, movies="movies"):
"""
Display query movie, together with similar movies and similarity scores, in a table
"""
movie, recs = get_similar(the_id, q, num, movies)
q_im_url = get_poster_url(movie['tmdbId'])
if q_im_url == "NA":
display(HTML("<i>Cannot import tmdbsimple. No movie posters will be displayed!</i>"))
if q_im_url == "KEY_ERR":
display(HTML("<i>Key error accessing TMDb API. Check your API key. No movie posters will be displayed!</i>"))
display(HTML("<h2>Get similar movies for:</h2>"))
display(HTML("<h4>%s</h4>" % movie['title']))
if q_im_url != "NA":
display(Image(q_im_url, width=200))
display(HTML("<br>"))
display(HTML("<h2>People who liked this movie also liked these:</h2>"))
sim_html = "<table border=0>"
i = 0
for rec in recs:
r_im_url = get_poster_url(rec['_source']['tmdbId'])
r_score = rec['_score']
r_title = rec['_source']['title']
sim_html += "<td><h5>%s</h5><img src=%s width=150></img></td><td><h5>%2.3f</h5></td>" % (r_title, r_im_url, r_score)
i += 1
if i % 5 == 0:
sim_html += "</tr><tr>"
sim_html += "</tr></table>"
display(HTML(sim_html))
###Output
_____no_output_____
###Markdown
Now, you're ready to generate some recommendations. 5(a) Find similar movies for a given movieTo start, you can find movies that are _similar_ to a given movie. This similarity score is computed from the model factor vectors for each movie. Recall that the ALS model you trained earlier is a collaborative filtering model, so the similarity between movie vectors will be based on the _rating co-occurrence_ of the movies. In other words, two movies that tend to be rated highly by the same users will tend to be more similar. It is common to use the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of the movie factor vectors as a measure of the similarity between two movies.Using this similarity you can show recommendations along the lines of _people who liked this movie also liked these_.
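For reference, the cosine similarity between two factor vectors $\mathbf{a}$ and $\mathbf{b}$ is the normalized dot product $\text{cosine}(\mathbf{a},\mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert\, \lVert \mathbf{b} \rVert}$; the script score query defined above adds $1.0$ to this value so that Elasticsearch never has to rank a negative score.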
###Code
display_similar(858, num=5)
###Output
_____no_output_____
###Markdown
So we see that people who like the original Godfather movie tend to like other crime drama movies (including another Godfather film), as well as some general drama movies.> _Note_ since we are using a relatively small and sparse dataset, results may not be as good as those for the same model trained on a larger dataset.Now you will see the power and flexibility that comes from using a search engine to generate recommendations. Elasticsearch allows you to tweak the results returned by the recommendation query using any standard search query or filter - from free text search through to filters based on time and geo-location (or any other piece of metadata you can think of). Filter recommendations based on titleFor example, perhaps you want to remove any movies with "godfather" in the title from the recommendations. You can do this by simply passing a valid Elasticsearch query string to the recommendation function.
###Code
display_similar(858, num=5, q="title:(NOT godfather)")
###Output
_____no_output_____
###Markdown
We see that we now only have similar movies that are _not also part of the Godfather trilogy_. Filter recommendations based on genreOr you may want to ensure that only valid children's movies are shown to young viewers.
###Code
display_similar(1, num=5, q="genres:children")
###Output
_____no_output_____
###Markdown
Feel free to check out the documentation for the Elasticsearch [query string query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html) and play around with the various queries you can construct by passing in a query string as `q` in the recommendation function above! 5(b) Find movies to recommend to a userNow, you're ready to generate some movie recommendations, personalized for a specific user.Given a user, you can recommend movies to that user based on the predicted ratings from your model. In the same manner as for the similar movie recommendations, this predicted rating score is computed from the model factor vector for the user and the factor vectors for each movie. Recall that the collaborative filtering model means that, at a high level, we will recommend movies _liked by other users who liked the same movies as the given user_.
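For reference, the predicted rating in a matrix factorization model is simply the dot product of the user and item factor vectors, $\hat{r}_{ui} = \mathbf{u}_u \cdot \mathbf{v}_i$. The script score query defined earlier applies a sigmoid transform to this value only to keep Elasticsearch scores non-negative; since the transform is monotonically increasing in the dot product, the ranking by predicted rating is preserved.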
###Code
display_user_recs(72, num=5, num_last=5)
###Output
_____no_output_____
###Markdown
Again, note that since we are using a relatively small and sparse dataset, the results may not be too good. However, we can see that this user seems to like some sci-fi and some drama films. The recommended movies fall broadly into these categories and seem to be somewhat reasonable. Filter recommendations based on release dateNext, you can again apply the power of Elasticsearch's filtering capabilities to your recommendation engine. Let's say you only want to recommend more recent movies (say, from the past 3 years). This can be done by adding a date math query to the recommendation script score query.
###Code
display_user_recs(72, num=5, num_last=5, q="release_date:[2017 TO *]")
###Output
_____no_output_____
###Markdown
Creating a Scalable Recommender with Apache Spark & ElasticsearchIn this notebook, you will create a recommendation engine using Spark and Elasticsearch. Using some movie rating data, you will train a collaborative filtering model in Spark and export the trained model to Elasticsearch. Once exported, you can test your recommendations by querying Elasticsearch and displaying the results. _Prerequisites_The notebook assumes you have installed Elasticsearch, the Elasticsearch vector-scoring plugin, Apache Spark and the Elasticsearch Spark connector detailed in the [setup steps](https://github.com/MLnick/elasticsearch-spark-recommender-demo/tree/mastersteps).> _Optional:_> In order to display the images in the recommendation demo, you will need to access [The Movie Database (TMdb) API](https://www.themoviedb.org/documentation/api). Please follow the [instructions](https://developers.themoviedb.org/3/getting-started) to get an API key. OverviewYou will work through the following steps1. Prepare the data2. Use the Elasticsearch Spark connector to save it to Elasticsearch3. Load ratings data and train a collaborative filtering recommendation model using Spark MLlib4. Save the model to Elasticsearch5. Show recommendations using Elasticsearch vector scoring plugin Step 1: Prepare the data* This notebook uses the "small" version of the latest MovieLens movie rating dataset, containing about 100,000 ratings, 9,000 movies and 700 users* The latest version of the data can be downloaded at https://grouplens.org/datasets/movielens/latest/* Download the `ml-latest-small.zip` file and unzip it to a suitable location on your system.The folder should contain a number of CSV files. We will be using the following files:* `ratings.csv` - movie rating data* `links.csv` - external database ids for each movie* `movies.csv` - movie title and genres
###Code
# first import a few utility methods that we'll use later on
from IPython.display import Image, HTML, display
# check PySpark is running
spark
###Output
_____no_output_____
###Markdown
Load rating and movie data **Ratings**The ratings data consists of around 100,000 ratings given by users to movies. Each row of the `DataFrame` consists of a `userId`, `movieId` and `timestamp` for the event, together with the `rating` given by the user to the movie
###Code
# if you unzipped the data to a different location than that specified in the Journey setup steps
# you can change the path below to point to the correct location
PATH_TO_DATA = "../data/ml-latest-small"
# load ratings data
ratings = spark.read.csv(PATH_TO_DATA + "/ratings.csv", header=True, inferSchema=True)
ratings.cache()
print("Number of ratings: %i" % ratings.count())
print("Sample of ratings:")
ratings.show(5)
###Output
_____no_output_____
###Markdown
You will see that the `timestamp` field is a UNIX timestamp in seconds. Elasticsearch takes timestamps in milliseconds, so you will use some `DataFrame` operations to convert the timestamps into milliseconds.
###Code
ratings = ratings.select(
ratings.userId, ratings.movieId, ratings.rating, (ratings.timestamp.cast("long") * 1000).alias("timestamp"))
ratings.show(5)
###Output
_____no_output_____
###Markdown
**Movies**The file `movies.csv` contains the `movieId`, `title` and `genres` for each movie. As you can see, the `genres` field is a bit tricky to use, as the genres are in the form of one string delimited by the `|` character: `Adventure|Animation|Children|Comedy|Fantasy`.
###Code
# load raw data from CSV
raw_movies = spark.read.csv(PATH_TO_DATA + "/movies.csv", header=True, inferSchema=True)
print("Raw movie data:")
raw_movies.show(5, truncate=False)
###Output
_____no_output_____
###Markdown
Create a `DataFrame` user-defined function (UDF) to extract this delimited string into a list of genres.
###Code
from pyspark.sql.functions import udf
from pyspark.sql.types import *
# define a UDF to convert the raw genres string to an array of genres and lowercase
extract_genres = udf(lambda x: x.lower().split("|"), ArrayType(StringType()))
# test it out
raw_movies.select("movieId", "title", extract_genres("genres").alias("genres")).show(5, False)
###Output
_____no_output_____
###Markdown
Ok, that looks better!You may also notice that the movie titles contain the year of release. It would be useful to have that as a field in your search index for filtering results (say you want to filter your recommendations to include only more recent movies).Create a UDF to extract the release year from the title using a Python regular expression.
###Code
import re
# define a UDF to extract the release year from the title, and return the new title and year in a struct type
def extract_year_fn(title):
    result = re.search(r"\(\d{4}\)", title)
try:
if result:
group = result.group()
year = group[1:-1]
start_pos = result.start()
title = title[:start_pos-1]
return (title, year)
else:
            return (title, "1970")
except:
print(title)
extract_year = udf(extract_year_fn,\
StructType([StructField("title", StringType(), True),\
StructField("release_date", StringType(), True)]))
# test out our function
s = "Jumanji (1995)"
extract_year_fn(s)
###Output
_____no_output_____
###Markdown
Ok the function works! Now create a new `DataFrame` with the cleaned-up titles, release dates and genres of the movies.
###Code
movies = raw_movies.select(
"movieId", extract_year("title").title.alias("title"),\
extract_year("title").release_date.alias("release_date"),\
extract_genres("genres").alias("genres"))
print("Cleaned movie data:")
movies.show(5, truncate=False)
###Output
_____no_output_____
###Markdown
Next, join the `links.csv` data to `movies` so that there is an id for _The Movie Database_ corresponding to each movie. You can use this id to retrieve movie poster images when displaying your recommendations later.
###Code
link_data = spark.read.csv(PATH_TO_DATA + "/links.csv", header=True, inferSchema=True)
# join movies with links to get TMDB id
movie_data = movies.join(link_data, movies.movieId == link_data.movieId)\
.select(movies.movieId, movies.title, movies.release_date, movies.genres, link_data.tmdbId)
num_movies = movie_data.count()
print("Cleaned movie data with tmdbId links:")
movie_data.show(5, truncate=False)
###Output
_____no_output_____
###Markdown
> **_Optional_**> Run the below cell to test your access to TMDb API. You should see the _Toy Story_ movie poster displayed inline.> To install the Python package run `pip install tmdbsimple`
###Code
try:
import tmdbsimple as tmdb
# replace this variable with your actual TMdb API key
tmdb.API_KEY = 'YOUR_API_KEY'
print("Successfully imported tmdbsimple!")
# base URL for TMDB poster images
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
movie_id = movie_data.first().tmdbId
movie_info = tmdb.Movies(movie_id).info()
movie_poster_url = IMAGE_URL + movie_info['poster_path']
display(Image(movie_poster_url, width=200))
except Exception:
print("Cannot import tmdbsimple, no movie posters will be displayed!")
###Output
_____no_output_____
###Markdown
Step 2: Load data into ElasticsearchNow that you have your dataset processed and prepared, you will load it into Elasticsearch._Note:_ for the purposes of this demo notebook you have started with an existing example dataset and will load that into Elasticsearch. In practice you may write your event data as well as user and item metadata from your application directly into Elasticsearch.First test that your Elasticsearch instance is running and you can connect to it using the Python Elasticsearch client.
###Code
from elasticsearch import Elasticsearch
# test your ES instance is running
es = Elasticsearch()
es.info(pretty=True)
###Output
_____no_output_____
###Markdown
Create an Elasticsearch index with mappings for users, movies and rating eventsIn Elasticsearch, an "index" is roughly similar to a "database", while a "document type" is roughly similar to a "table" in that database. The schema for a document type is called an index mapping.While Elasticsearch supports dynamic mapping, it's advisable to specify the mapping explicitly when creating an index if you know what your data looks like.For the purposes of your recommendation engine, this is also necessary so that you can specify a custom analyzer for the field that will hold the recommendation "model" (that is, the factor vectors). This will ensure the vector-scoring plugin will work correctly.> _Note_ This notebook does not go into detail about the underlying scoring mechanism or the relevant Elasticsearch internals. See the talks and slides in the [Journey Links section](https://github.com/MLnick/elasticsearch-spark-recommender-demo/blob/master/README.mdlinks) for more detail.__References:__* [Create index request](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html)* [Delimited payload filter](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/analysis-delimited-payload-tokenfilter.html)* [Term vectors](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/docs-termvectors.html_term_information)* [Mapping](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/mapping.html) > **_Optional_**> If you are re-running the notebook and have previously created the `demo` index in Elasticsearch, you should first delete it by un-commenting and running the next cell, before running the index creation cell that follows.
###Code
# es.indices.delete(index="demo")
###Output
_____no_output_____
###Markdown
Now you're ready to create your index.
###Code
create_index = {
"settings": {
"analysis": {
"analyzer": {
# this configures the custom analyzer we need to parse vectors such that the scoring
# plugin will work correctly
"payload_analyzer": {
"type": "custom",
"tokenizer":"whitespace",
"filter":"delimited_payload_filter"
}
}
}
},
"mappings": {
"ratings": {
# this mapping definition sets up the fields for the rating events
"properties": {
"timestamp": {
"type": "date"
},
"userId": {
"type": "integer"
},
"movieId": {
"type": "integer"
},
"rating": {
"type": "double"
}
}
},
"users": {
# this mapping definition sets up the metadata fields for the users
"properties": {
"userId": {
"type": "integer"
},
"@model": {
# this mapping definition sets up the fields for user factor vectors of our model
"properties": {
"factor": {
"type": "text",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "keyword"
},
"timestamp": {
"type": "date"
}
}
}
}
},
"movies": {
# this mapping definition sets up the metadata fields for the movies
"properties": {
"movieId": {
"type": "integer"
},
"tmdbId": {
"type": "keyword"
},
"genres": {
"type": "keyword"
},
"release_date": {
"type": "date",
"format": "year"
},
"@model": {
# this mapping definition sets up the fields for movie factor vectors of our model
"properties": {
"factor": {
"type": "text",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "keyword"
},
"timestamp": {
"type": "date"
}
}
}
}
}
}
}
# create index with the settings and mappings above
es.indices.create(index="demo", body=create_index)
###Output
_____no_output_____
###Markdown
Load Ratings and Movies DataFrames into ElasticsearchFirst you will write the ratings data to Elasticsearch. Notice that you can simply use the Spark Elasticsearch connector to write a `DataFrame` with the native Spark datasource API by specifying `format("es")`
###Code
# write ratings data
ratings.write.format("es").save("demo/ratings")
# check write went ok
print("Dataframe count: %d" % ratings.count())
print("ES index count: %d" % es.count(index="demo", doc_type="ratings")['count'])
# test things out by retrieving a few rating event documents from Elasticsearch
es.search(index="demo", doc_type="ratings", q="*", size=3)
###Output
_____no_output_____
###Markdown
Since you've indexed the rating event data into Elasticsearch, you can use all the capabilities of a search engine to query the data. For example, you could count the number of ratings events in a given date range using Elasticsearch's date math in a query string:
###Code
es.count(index="demo", doc_type="ratings", q="timestamp:[2016-01-01 TO 2016-02-01]")
###Output
_____no_output_____
###Markdown
Next write the movie metadata
###Code
# write movie data, specifying the DataFrame column to use as the id mapping
movie_data.write.format("es").option("es.mapping.id", "movieId").save("demo/movies")
# check load went ok
print("Movie DF count: %d" % movie_data.count())
print("ES index count: %d" % es.count(index="demo", doc_type="movies")['count'])
###Output
_____no_output_____
###Markdown
Again you can harness the power of search to query the movie metadata:
###Code
# test things out by searching for movies containing "matrix" in the title
es.search(index="demo", doc_type="movies", q="title:matrix", size=3)
###Output
_____no_output_____
###Markdown
Step 3: Train a recommender model on the ratings dataYour data is now stored in Elasticsearch and you will use the ratings data to build a collaborative filtering recommendation model.[Collaborative filtering](https://en.wikipedia.org/wiki/Collaborative_filtering) is a recommendation approach that is effectively based on the "wisdom of the crowd". It makes the assumption that, if two people share similar preferences, then the things that one of them prefers could be good recommendations to make to the other. In other words, if user A tends to like certain movies, and user B shares some of these preferences with user A, then the movies that user A likes, that user B _has not yet seen_, may well be movies that user B will also like.In a similar manner, we can think about _items_ as being similar if they tend to be rated highly by the same people, on average. Hence these models are based on the combined, collaborative preferences and behavior of all users in aggregate. They tend to be very effective in practice (provided you have enough preference data to train the model). The ratings data you have is a form of _explicit preference data_, perfect for training collaborative filtering models. Alternating Least SquaresAlternating Least Squares (ALS) is a specific algorithm for solving a type of collaborative filtering model known as [matrix factorization (MF)](https://en.wikipedia.org/wiki/Matrix_decomposition). The core idea of MF is to represent the ratings as a _user-item ratings matrix_. In the diagram below you will see this matrix on the left (with users as _rows_ and movies as _columns_). The entries in this matrix are the ratings given by users to movies.You may also notice that the matrix has _missing entries_ because not all users have rated all movies. In this situation we refer to the data as _sparse_.MF methods aim to find two much smaller matrices (one representing the _users_ and the other the _items_) that, when multiplied together, re-construct the original ratings matrix as closely as possible. This is known as _factorizing_ the original matrix, hence the name of the technique.The two smaller matrices are called _factor matrices_ (or _latent features_). The user and movie factor matrices are illustrated on the right in the diagram above. The idea is that each user factor vector is a compressed representation of the user's preferences and behavior. Likewise, each item factor vector is a compressed representation of the item. Once the model is trained, the factor vectors can be used to make recommendations, which is what you will do in the following sections.__Further reading:__* [Spark MLlib Collaborative Filtering](http://spark.apache.org/docs/latest/ml-collaborative-filtering.html)* [Alternating Least Squares and collaborative filtering](https://datasciencemadesimpler.wordpress.com/tag/alternating-least-squares/)* [Quora question on Alternating Least Squares](https://www.quora.com/What-is-the-Alternating-Least-Squares-method-in-recommendation-systems-And-why-does-this-algorithm-work-intuition-behind-this)Fortunately, Spark's MLlib machine learning library has a scalable, efficient implementation of matrix factorization built in, which we can use to train our recommendation model. Next, you will use Spark's ALS to train a model on your ratings data from Elasticsearch.
###Code
ratings_from_es = spark.read.format("es").load("demo/ratings")
ratings_from_es.show(5)
from pyspark.ml.recommendation import ALS
from pyspark.sql.functions import col
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating", regParam=0.01, rank=20, seed=12)
model = als.fit(ratings_from_es)
model.userFactors.show(5)
model.itemFactors.show(5)
###Output
_____no_output_____
###Markdown
Step 4: Export ALS user and item factor vectors to ElasticsearchCongratulations, you've trained a recommendation model! The next step is to export the model factors (shown in the `DataFrames` above) to Elasticsearch.In order to store the model in the correct format for the index mappings set up earlier, you will need to create some utility functions. These functions will allow you to convert the raw vectors (which are equivalent to a Python list in the factor `DataFrames` above) to the correct _delimited string format_. This ensures Elasticsearch will parse the vector field in the model correctly using the delimited token filter custom analyzer you configured earlier.You will also create a function to convert a vector and related metadata (such as the Spark model id and a timestamp) into a `DataFrame` field that matches the `model` field in the Elasticsearch index mapping. Utility functions for converting factor vectors
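To make the target format concrete, here is a small illustrative example (not part of the original notebook) of the delimited string that the conversion produces, where each element of a factor vector becomes a `position|value` token:

```python
# Illustrative only: the same join logic as the convert_vector helper defined in the next cell
example = " ".join(["%s|%s" % (i, v) for i, v in enumerate([0.1, -0.2, 0.3])])
print(example)  # prints: 0|0.1 1|-0.2 2|0.3
```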
###Code
from pyspark.sql.types import *
from pyspark.sql.functions import udf, lit, current_timestamp, unix_timestamp
def convert_vector(x):
'''Convert a list or numpy array to delimited token filter format'''
return " ".join(["%s|%s" % (i, v) for i, v in enumerate(x)])
def reverse_convert(s):
'''Convert a delimited token filter format string back to list format'''
return [float(f.split("|")[1]) for f in s.split(" ")]
def vector_to_struct(x, version, ts):
'''Convert a vector to a SparkSQL Struct with string-format vector and version fields'''
return (convert_vector(x), version, ts)
vector_struct = udf(vector_to_struct, \
StructType([StructField("factor", StringType(), True), \
StructField("version", StringType(), True),\
StructField("timestamp", LongType(), True)]))
# test out the vector conversion function
test_vec = model.userFactors.select("features").first().features
print(test_vec)
print()
print(convert_vector(test_vec))
###Output
_____no_output_____
###Markdown
Convert factor vectors to [factor, version, timestamp] form and write to Elasticsearch
###Code
ver = model.uid
ts = unix_timestamp(current_timestamp())
movie_vectors = model.itemFactors.select("id", vector_struct("features", lit(ver), ts).alias("@model"))
movie_vectors.select("id", "@model.factor", "@model.version", "@model.timestamp").show(5)
user_vectors = model.userFactors.select("id", vector_struct("features", lit(ver), ts).alias("@model"))
user_vectors.select("id", "@model.factor", "@model.version", "@model.timestamp").show(5)
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "update" write mode for ES, since you want to update new fields only
# - "append" write mode for Spark
movie_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/movies", mode="append")
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "index" write mode for ES, since you have not written to the user index previously
# - "append" write mode for Spark
user_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "index") \
.save("demo/users", mode="append")
###Output
_____no_output_____
###Markdown
Check the data was written correctlyYou can search for a movie to see if the model factor vector was written correctly. You should see a `'@model': {'factor': '0|...` field in the returned movie document.
###Code
# search for a particular sci-fi movie
es.search(index="demo", doc_type="movies", q="star wars phantom menace", size=1)['hits']['hits'][0]
###Output
_____no_output_____
###Markdown
Step 5: Recommend using Elasticsearch!Now that you have loaded your recommendation model into Elasticsearch, you will generate some recommendations.First, you will need to create a few utility functions for:* Fetching movie posters from TMdb API (optional)* Constructing the Elasticsearch [function score query](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/query-dsl-function-score-query.html) to generate recommendations from your factor model* Given a movie, use this query to find the movies most similar to it* Given a user, use this query to find the movies with the highest predicted rating, to recommend to the user* Display the results as an HTML table in Jupyter
###Code
from IPython.display import Image, HTML, display
def get_poster_url(id):
"""Fetch movie poster image URL from TMDb API given a tmdbId"""
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
try:
import tmdbsimple as tmdb
from tmdbsimple import APIKeyError
try:
movie = tmdb.Movies(id).info()
poster_url = IMAGE_URL + movie['poster_path'] if 'poster_path' in movie and movie['poster_path'] is not None else ""
return poster_url
except APIKeyError as ae:
return "KEY_ERR"
except Exception as me:
return "NA"
def fn_query(query_vec, q="*", cosine=False):
"""
Construct an Elasticsearch function score query.
The query takes as parameters:
- the field in the candidate document that contains the factor vector
- the query vector
- a flag indicating whether to use dot product or cosine similarity (normalized dot product) for scores
The query vector passed in will be the user factor vector (if generating recommended movies for a user)
or movie factor vector (if generating similar movies for a given movie)
"""
return {
"query": {
"function_score": {
"query" : {
"query_string": {
"query": q
}
},
"script_score": {
"script": {
"inline": "payload_vector_score",
"lang": "native",
"params": {
"field": "@model.factor",
"vector": query_vec,
"cosine" : cosine
}
}
},
"boost_mode": "replace"
}
}
}
def get_similar(the_id, q="*", num=10, index="demo", dt="movies"):
"""
Given a movie id, execute the recommendation function score query to find similar movies, ranked by cosine similarity
"""
response = es.get(index=index, doc_type=dt, id=the_id)
src = response['_source']
if '@model' in src and 'factor' in src['@model']:
raw_vec = src['@model']['factor']
# our script actually uses the list form for the query vector and handles conversion internally
query_vec = reverse_convert(raw_vec)
q = fn_query(query_vec, q=q, cosine=True)
results = es.search(index, dt, body=q)
hits = results['hits']['hits']
return src, hits[1:num+1]
def get_user_recs(the_id, q="*", num=10, index="demo"):
"""
Given a user id, execute the recommendation function score query to find top movies, ranked by predicted rating
"""
response = es.get(index=index, doc_type="users", id=the_id)
src = response['_source']
if '@model' in src and 'factor' in src['@model']:
raw_vec = src['@model']['factor']
# our script actually uses the list form for the query vector and handles conversion internally
query_vec = reverse_convert(raw_vec)
q = fn_query(query_vec, q=q, cosine=False)
results = es.search(index, "movies", body=q)
hits = results['hits']['hits']
return src, hits[:num]
def get_movies_for_user(the_id, num=10, index="demo"):
"""
Given a user id, get the movies rated by that user, from highest- to lowest-rated.
"""
response = es.search(index=index, doc_type="ratings", q="userId:%s" % the_id, size=num, sort=["rating:desc"])
hits = response['hits']['hits']
ids = [h['_source']['movieId'] for h in hits]
movies = es.mget(body={"ids": ids}, index=index, doc_type="movies", _source_include=['tmdbId', 'title'])
movies_hits = movies['docs']
tmdbids = [h['_source'] for h in movies_hits]
return tmdbids
def display_user_recs(the_id, q="*", num=10, num_last=10, index="demo"):
user, recs = get_user_recs(the_id, q, num, index)
user_movies = get_movies_for_user(the_id, num_last, index)
# check that posters can be displayed
first_movie = user_movies[0]
first_im_url = get_poster_url(first_movie['tmdbId'])
if first_im_url == "NA":
display(HTML("<i>Cannot import tmdbsimple. No movie posters will be displayed!</i>"))
if first_im_url == "KEY_ERR":
display(HTML("<i>Key error accessing TMDb API. Check your API key. No movie posters will be displayed!</i>"))
# display the movies that this user has rated highly
display(HTML("<h2>Get recommended movies for user id %s</h2>" % the_id))
display(HTML("<h4>The user has rated the following movies highly:</h4>"))
user_html = "<table border=0>"
i = 0
for movie in user_movies:
movie_im_url = get_poster_url(movie['tmdbId'])
movie_title = movie['title']
user_html += "<td><h5>%s</h5><img src=%s width=150></img></td>" % (movie_title, movie_im_url)
i += 1
if i % 5 == 0:
user_html += "</tr><tr>"
user_html += "</tr></table>"
display(HTML(user_html))
# now display the recommended movies for the user
display(HTML("<br>"))
display(HTML("<h2>Recommended movies:</h2>"))
rec_html = "<table border=0>"
i = 0
for rec in recs:
r_im_url = get_poster_url(rec['_source']['tmdbId'])
r_score = rec['_score']
r_title = rec['_source']['title']
rec_html += "<td><h5>%s</h5><img src=%s width=150></img></td><td><h5>%2.3f</h5></td>" % (r_title, r_im_url, r_score)
i += 1
if i % 5 == 0:
rec_html += "</tr><tr>"
rec_html += "</tr></table>"
display(HTML(rec_html))
def display_similar(the_id, q="*", num=10, index="demo", dt="movies"):
"""
Display query movie, together with similar movies and similarity scores, in a table
"""
movie, recs = get_similar(the_id, q, num, index, dt)
q_im_url = get_poster_url(movie['tmdbId'])
if q_im_url == "NA":
display(HTML("<i>Cannot import tmdbsimple. No movie posters will be displayed!</i>"))
if q_im_url == "KEY_ERR":
display(HTML("<i>Key error accessing TMDb API. Check your API key. No movie posters will be displayed!</i>"))
display(HTML("<h2>Get similar movies for:</h2>"))
display(HTML("<h4>%s</h4>" % movie['title']))
if q_im_url != "NA":
display(Image(q_im_url, width=200))
display(HTML("<br>"))
display(HTML("<h2>People who liked this movie also liked these:</h2>"))
sim_html = "<table border=0>"
i = 0
for rec in recs:
r_im_url = get_poster_url(rec['_source']['tmdbId'])
r_score = rec['_score']
r_title = rec['_source']['title']
sim_html += "<td><h5>%s</h5><img src=%s width=150></img></td><td><h5>%2.3f</h5></td>" % (r_title, r_im_url, r_score)
i += 1
if i % 5 == 0:
sim_html += "</tr><tr>"
sim_html += "</tr></table>"
display(HTML(sim_html))
###Output
_____no_output_____
###Markdown
Now, you're ready to generate some recommendations. 5(a) Find similar movies for a given movieTo start, you can find movies that are _similar_ to a given movie. This similarity score is computed from the model factor vectors for each movie. Recall that the ALS model you trained earlier is a collaborative filtering model, so the similarity between movie vectors will be based on the _rating co-occurrence_ of the movies. In other words, two movies that tend to be rated highly by the same users will tend to be more similar. It is common to use the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of the movie factor vectors as a measure of the similarity between two movies.Using this similarity you can show recommendations along the lines of _people who liked this movie also liked these_.
###Code
display_similar(2628, num=5)
###Output
_____no_output_____
###Markdown
So we see that people who like Star Wars tend to like other sci-fi movies (including other Star Wars films), as well as some action and drama.> _Note_ since we are using a very small dataset, results may not be as good as those for the same model trained on a larger dataset.Now you will see the power and flexibility that comes from using a search engine to generate recommendations. Elasticsearch allows you to tweak the results returned by the recommendation query using any standard search query or filter - from free text search through to filters based on time and geo-location (or any other piece of metadata you can think of).For example, perhaps you want to remove any movies with "matrix" in the title from the recommendations. You can do this by simply passing a valid Elasticsearch query string to the recommendation function.
###Code
display_similar(2628, num=5, q="title:(NOT matrix)")
###Output
_____no_output_____
###Markdown
Or you may want to ensure that only valid children's movies are shown to young viewers.
###Code
display_similar(1, num=5, q="genres:children")
###Output
_____no_output_____
###Markdown
Feel free to check out the documentation for the Elasticsearch [query string query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html) and play around with the various queries you can construct by passing in a query string as `q` in the recommendation function above! 5(b) Find movies to recommend to a userNow, you're ready to generate some movie recommendations, personalized for a specific user.Given a user, you can recommend movies to that user based on the predicted ratings from your model. As with the similar movie recommendations, this predicted rating score is computed from the model factor vector for the user and the factor vectors for each movie. Recall that the collaborative filtering model means that, at a high level, we will recommend movies _liked by other users who liked the same movies as the given user_.
###Code
display_user_recs(12, num=5, num_last=5)
###Output
_____no_output_____
###Markdown
Again, note that since we are using a very small dataset, the results may not be too good. However, we can see that this user seems to like some sci-fi, some horror and some comedy films. The recommended movies fall broadly into these categories and seem to be somewhat reasonable.Next, you can again apply the power of Elasticsearch's filtering capabilities to your recommendation engine. Let's say you only want to recommend more recent movies (say, from the past 5 years). This can be done by adding a date math query to the recommendation function score query.
###Code
display_user_recs(12, num=5, num_last=5, q="release_date:[2012 TO *]")
###Output
_____no_output_____
###Markdown
Creating a Scalable Recommender with Apache Spark & ElasticsearchIn this notebook, you will create a recommendation engine using Spark and Elasticsearch. Using some movie rating data, you will train a collaborative filtering model in Spark and export the trained model to Elasticsearch. Once exported, you can test your recommendations by querying Elasticsearch and displaying the results. _Prerequisites_The notebook assumes you have installed Elasticsearch, Apache Spark and the Elasticsearch Spark connector detailed in the [setup steps](https://github.com/MLnick/elasticsearch-spark-recommender-demo/tree/mastersteps).> _Optional:_> In order to display the images in the recommendation demo, you will need to access [The Movie Database API](https://www.themoviedb.org/documentation/api). Please follow the [instructions](https://developers.themoviedb.org/3/getting-started) to get an API key. OverviewYou will work through the following steps1. Prepare the data by loading it into Elasticsearch2. Load the movie data into Spark DataFrames and use the Elasticsearch Spark connector to save it to the newly created Elasticsearch index3. Load ratings data and run ALS4. Save ALS model factors to Elasticsearch5. Show similar items using Elasticsearch vector scoring plugin RequirementsRequires [Spark `2.1.0`](http://spark.apache.org/downloads.html), [Elasticsearch `5.3.0`](https://www.elastic.co/downloads/past-releases/elasticsearch-5-3-0), [`elasticsearch-hadoop 5.3.0`](https://www.elastic.co/downloads/past-releases/elasticsearch-apache-hadoop-5-3-0) and [`elasticsearch-vector-scoring 5.3.0`](https://github.com/MLnick/elasticsearch-vector-scoring).*Elasticsearch vector scoring plugin* must be [installed first](https://github.com/MLnick/elasticsearch-vector-scoringplugin-installation).For reference, the code to create Elasticsearch mappings is shown here (see `Enrich & Prepare MovieLens Dataset.ipynb` for full details). This notebook assumes these mappings have been pre-created - so we start from Step 2. Step 1: Prepare the data* This notebook uses the "small" version of the latest MovieLens movie rating dataset, containing about 100,000 ratings, 9,000 movies and 700 users* The latest version of the data can be downloaded at https://grouplens.org/datasets/movielens/latest/* Download the `ml-latest-small.zip` file and unzip it to a suitable location on your system.The folder should contain a number of CSV files. We will be using the following files:* `ratings.csv` - movie rating data* `links.csv` - external database ids for each movie* `movies.csv` - movie title and genres
###Code
# first import a few utility methods that we'll use later on
from IPython.display import Image, HTML, display
# check PySpark is running
spark
###Output
_____no_output_____
###Markdown
Load rating and movie data
###Code
# change the path below to point to the correct base folder
PATH_TO_DATA = "/Users/nick/workspace/datasets/ml-latest-small"
# PATH_TO_DATA = "FOLDER/ml-latest-small"
###Output
_____no_output_____
###Markdown
**Ratings**The ratings data consists of around 100,000 ratings given by users to movies. Each row of the `DataFrame` consists of a `userId`, `movieId` and `timestamp` for the event, together with the `rating` given by the user to the movie
###Code
# load ratings data
ratings = spark.read.csv(PATH_TO_DATA + "/ratings.csv", header=True, inferSchema=True)
ratings.cache()
print("Number of ratings: %i" % ratings.count())
print("Sample of ratings:")
ratings.show(5)
###Output
Number of ratings: 100004
Sample of ratings:
+------+-------+------+----------+
|userId|movieId|rating| timestamp|
+------+-------+------+----------+
| 1| 31| 2.5|1260759144|
| 1| 1029| 3.0|1260759179|
| 1| 1061| 3.0|1260759182|
| 1| 1129| 2.0|1260759185|
| 1| 1172| 4.0|1260759205|
+------+-------+------+----------+
only showing top 5 rows
###Markdown
You will see that the `timestamp` field is a UNIX timestamp in seconds. Elasticsearch takes timestamps in milliseconds, so we convert the timestamps to milliseconds.
###Code
ratings = ratings.select(
ratings.userId, ratings.movieId, ratings.rating, (ratings.timestamp.cast("long") * 1000).alias("timestamp"))
ratings.show(5)
###Output
+------+-------+------+-------------+
|userId|movieId|rating| timestamp|
+------+-------+------+-------------+
| 1| 31| 2.5|1260759144000|
| 1| 1029| 3.0|1260759179000|
| 1| 1061| 3.0|1260759182000|
| 1| 1129| 2.0|1260759185000|
| 1| 1172| 4.0|1260759205000|
+------+-------+------+-------------+
only showing top 5 rows
###Markdown
**Users**The dataset only contains anonymous user ids. For the purposes of this demo we will create some random user names using the Python package `names`.
###Code
import names
from pyspark.sql.functions import udf
from pyspark.sql.types import *
# define UDF to create random user names
random_name = udf(lambda x: names.get_full_name(), StringType())
# create a user data set from the unique user ids in the ratings dataset
unique_users = ratings.select("userId").distinct()
users = unique_users.select("userId", random_name("userId").alias("name"))
users.cache()
print("Number of users: %i" % (users.count()))
print("Sample of users:")
users.show(5)
###Output
Number of users: 671
Sample of users:
+------+----------------+
|userId| name|
+------+----------------+
| 12| Maria Moreno|
| 13| Matthew Mendoza|
| 14| Deloris Crouch|
| 18| Janet Gillespie|
| 38|Phyllis Bankston|
+------+----------------+
only showing top 5 rows
###Markdown
**Movies**The file `movies.csv` contains the `movieId`, `title` and `genres` for each movie. As you can see, the `genres` field is a bit tricky to use, as the genres are in the form `Adventure|Animation|Children|Comedy|Fantasy`.
###Code
# load raw data from CSV
raw_movies = spark.read.csv(PATH_TO_DATA + "/movies.csv", header=True, inferSchema=True)
print("Raw movie data:")
raw_movies.show(5, truncate=False)
###Output
Raw movie data:
+-------+----------------------------------+-------------------------------------------+
|movieId|title |genres |
+-------+----------------------------------+-------------------------------------------+
|1 |Toy Story (1995) |Adventure|Animation|Children|Comedy|Fantasy|
|2 |Jumanji (1995) |Adventure|Children|Fantasy |
|3 |Grumpier Old Men (1995) |Comedy|Romance |
|4 |Waiting to Exhale (1995) |Comedy|Drama|Romance |
|5 |Father of the Bride Part II (1995)|Comedy |
+-------+----------------------------------+-------------------------------------------+
only showing top 5 rows
###Markdown
Let's create a SparkSQL function to extract this delimited string into a list of genres.
###Code
# define a UDF to convert the raw genres string to an array of genres and lowercase
extract_genres = udf(lambda x: x.lower().split("|"), ArrayType(StringType()))
# test it out
raw_movies.select("movieId", "title", extract_genres("genres").alias("genres")).show(5, False)
###Output
+-------+----------------------------------+-------------------------------------------------+
|movieId|title |genres |
+-------+----------------------------------+-------------------------------------------------+
|1 |Toy Story (1995) |[adventure, animation, children, comedy, fantasy]|
|2 |Jumanji (1995) |[adventure, children, fantasy] |
|3 |Grumpier Old Men (1995) |[comedy, romance] |
|4 |Waiting to Exhale (1995) |[comedy, drama, romance] |
|5 |Father of the Bride Part II (1995)|[comedy] |
+-------+----------------------------------+-------------------------------------------------+
only showing top 5 rows
###Markdown
Ok, that looks better!You may also notice that the movie titles contain the year of release. It would be useful to have that as a field in our search index for filtering results. Let's create a UDF to extract the release year from the title using a Python regular expression.
###Code
import re
# define a UDF to extract the release year from the title, and return the new title and year in a struct type
def extract_year_fn(title):
    result = re.search(r"\(\d{4}\)", title)
try:
if result:
group = result.group()
year = group[1:-1]
start_pos = result.start()
title = title[:start_pos-1]
return (title, year)
else:
            return (title, "1970")
except:
print(title)
extract_year = udf(extract_year_fn,\
StructType([StructField("title", StringType(), True),\
StructField("release_date", StringType(), True)]))
# test out our function
s = "Jumanji (1995)"
extract_year_fn(s)
###Output
_____no_output_____
###Markdown
Ok the function works! Let's create a new `DataFrame` with the cleaned-up titles, release dates and genres of our movies.
###Code
movies = raw_movies.select(
"movieId", extract_year("title").title.alias("title"),\
extract_year("title").release_date.alias("release_date"),\
extract_genres("genres").alias("genres"))
print("Cleaned movie data:")
movies.show(5, truncate=False)
###Output
Cleaned movie data:
+-------+---------------------------+------------+-------------------------------------------------+
|movieId|title |release_date|genres |
+-------+---------------------------+------------+-------------------------------------------------+
|1 |Toy Story |1995 |[adventure, animation, children, comedy, fantasy]|
|2 |Jumanji |1995 |[adventure, children, fantasy] |
|3 |Grumpier Old Men |1995 |[comedy, romance] |
|4 |Waiting to Exhale |1995 |[comedy, drama, romance] |
|5 |Father of the Bride Part II|1995 |[comedy] |
+-------+---------------------------+------------+-------------------------------------------------+
only showing top 5 rows
###Markdown
Next, we will join the `links.csv` data to `movies` so that we have the `TMDb id` corresponding to each movie. We can use this to look up movie poster images.
###Code
link_data = spark.read.csv(PATH_TO_DATA + "/links.csv", header=True, inferSchema=True)
# join movies with links to get TMDB id
movie_data = movies.join(link_data, movies.movieId == link_data.movieId)\
.select(movies.movieId, movies.title, movies.release_date, movies.genres, link_data.tmdbId)
num_movies = movie_data.count()
print("Cleaned movie data with tmdbId links:")
movie_data.show(5, truncate=False)
###Output
Cleaned movie data with tmdbId links:
+-------+---------------------------+------------+-------------------------------------------------+------+
|movieId|title |release_date|genres |tmdbId|
+-------+---------------------------+------------+-------------------------------------------------+------+
|1 |Toy Story |1995 |[adventure, animation, children, comedy, fantasy]|862 |
|2 |Jumanji |1995 |[adventure, children, fantasy] |8844 |
|3 |Grumpier Old Men |1995 |[comedy, romance] |15602 |
|4 |Waiting to Exhale |1995 |[comedy, drama, romance] |31357 |
|5 |Father of the Bride Part II|1995 |[comedy] |11862 |
+-------+---------------------------+------------+-------------------------------------------------+------+
only showing top 5 rows
###Markdown
> **_Optional_**> Run the below code to test your access to TMDb API. You should see the _Toy Story_ movie poster displayed inline.
###Code
import tmdbsimple as tmdb
# replace this variable with your actual TMdb API key
tmdb.API_KEY = 'YOUR_API_KEY'
# base URL for TMDB poster images
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
# from requests import HTTPError
movie_id = movie_data.first().tmdbId
movie_info = tmdb.Movies(movie_id).info()
movie_poster_url = IMAGE_URL + movie_info['poster_path']
display(Image(movie_poster_url, width=200))
###Output
_____no_output_____
###Markdown
Step 2: Load data into ElasticsearchFirst, test that your Elasticsearch instance is running and that you can connect to it using the Python Elasticsearch client.
###Code
from elasticsearch import Elasticsearch
# test your ES instance is running
es = Elasticsearch()
es.info(pretty=True)
###Output
_____no_output_____
###Markdown
**Create an Elasticsearch index with mappings for users, movies and rating events**References:* [Create index request](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html)* [Delimited payload filter](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/analysis-delimited-payload-tokenfilter.html)* [Term vectors](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/docs-termvectors.html_term_information)* [Mapping](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/mapping.html)
###Code
create_index = {
"settings": {
"analysis": {
"analyzer": {
"payload_analyzer": {
"type": "custom",
"tokenizer":"whitespace",
"filter":"delimited_payload_filter"
}
}
}
},
"mappings": {
"ratings": {
"properties": {
"timestamp": {
"type": "date"
},
"userId": {
"type": "integer"
},
"movieId": {
"type": "integer"
},
"rating": {
"type": "double"
}
}
},
"users": {
"properties": {
"userId": {
"type": "integer"
},
"name": {
"type": "text"
},
"@model": {
"properties": {
"factor": {
"type": "text",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "keyword"
},
"timestamp": {
"type": "date"
}
}
}
}
},
"movies": {
"properties": {
"movieId": {
"type": "integer"
},
"tmdbId": {
"type": "keyword"
},
"genres": {
"type": "keyword"
},
"release_date": {
"type": "date",
"format": "year"
},
"@model": {
"properties": {
"factor": {
"type": "text",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "keyword"
},
"timestamp": {
"type": "date"
}
}
}
}
}
}
}
# create index with the settings & mappings above
es.indices.create(index="demo", body=create_index)
###Output
_____no_output_____
###Markdown
**Load User, Movie and Ratings DataFrames into Elasticsearch**
###Code
# write ratings data
ratings.write.format("es").save("demo/ratings")
# check write went ok
print("Dataframe count: %d" % ratings.count())
print("ES index count: %d" % es.count(index="demo", doc_type="ratings")['count'])
es.search(index="demo", doc_type="ratings", q="*", size=3)
# write user data, specifying the DataFrame column to use as the id mapping
users.write.format("es").option("es.mapping.id", "userId").save("demo/users")
# check write went ok
print("User DF count: %d" % users.count())
print("ES index count: %d" % es.count(index="demo", doc_type="users")['count'])
es.search(index="demo", doc_type="users", q="*", size=3)
# write movie data, specifying the DataFrame column to use as the id mapping
movie_data.write.format("es").option("es.mapping.id", "movieId").save("demo/movies")
# check load went ok
print("Movie DF count: %d" % movie_data.count())
print("ES index count: %d" % es.count(index="demo", doc_type="movies")['count'])
es.search(index="demo", doc_type="movies", q="*", size=3)
###Output
_____no_output_____
###Markdown
Step 3: Train a recommender model on the ratings data[TBC]
###Code
ratings_from_es = spark.read.format("es").load("demo/ratings")
ratings_from_es.printSchema()
ratings_from_es.show(5)
from pyspark.ml.recommendation import ALS
from pyspark.sql.functions import col
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating", regParam=0.1, rank=10, seed=42)
model = als.fit(ratings_from_es)
model.userFactors.show(5)
model.itemFactors.show(5)
###Output
+---+--------------------+
| id| features|
+---+--------------------+
| 10|[-0.65529627, -0....|
| 20|[-0.08578758, -0....|
| 30|[0.14903268, 0.11...|
| 40|[-0.07772923, 0.2...|
| 50|[-0.22381878, -0....|
+---+--------------------+
only showing top 5 rows
+---+--------------------+
| id| features|
+---+--------------------+
| 10|[-0.37270272, 0.0...|
| 20|[-0.52351207, -0....|
| 30|[-0.9530989, -0.0...|
| 40|[-0.14209941, 0.1...|
| 50|[-0.5712305, 0.34...|
+---+--------------------+
only showing top 5 rows
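###Markdown
The notebook does not evaluate the fit, but a rough check is easy to add. The sketch below scores the model on the same ratings it was trained on, which is optimistic by construction; a proper estimate would use a held-out split (e.g. `ratings_from_es.randomSplit([0.8, 0.2])`).
###Code
# Sketch: training-set RMSE of the ALS model
from pyspark.ml.evaluation import RegressionEvaluator

predictions = model.transform(ratings_from_es)
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating", predictionCol="prediction")
print("RMSE: %.4f" % evaluator.evaluate(predictions.dropna()))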
###Markdown
Step 4: Export ALS user and item factor vectors to Elasticsearch[TBC] Utility functions for converting factor vectors
###Code
from pyspark.sql.types import *
from pyspark.sql.functions import udf, lit, current_timestamp, unix_timestamp
def convert_vector(x):
'''Convert a list or numpy array to delimited token filter format'''
return " ".join(["%s|%s" % (i, v) for i, v in enumerate(x)])
def reverse_convert(s):
'''Convert a delimited token filter format string back to list format'''
return [float(f.split("|")[1]) for f in s.split(" ")]
def vector_to_struct(x, version, ts):
'''Convert a vector to a SparkSQL Struct with string-format vector and version fields'''
return (convert_vector(x), version, ts)
vector_struct = udf(vector_to_struct, \
StructType([StructField("factor", StringType(), True), \
StructField("version", StringType(), True),\
StructField("timestamp", LongType(), True)]))
# test out the vector conversion function
test_vec = model.userFactors.select("features").first().features
print(test_vec)
print()
print(convert_vector(test_vec))
###Output
[-0.655296266078949, -0.07208383828401566, 1.184463381767273, -0.8777399063110352, 0.34049999713897705, -0.4350639879703522, 0.15437132120132446, 0.5911734104156494, 0.27180013060569763, 0.9368487000465393]
0|-0.655296266078949 1|-0.07208383828401566 2|1.184463381767273 3|-0.8777399063110352 4|0.34049999713897705 5|-0.4350639879703522 6|0.15437132120132446 7|0.5911734104156494 8|0.27180013060569763 9|0.9368487000465393
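###Markdown
A round trip through the two helpers should reproduce the original factors; the small check below reuses `test_vec` from the cell above.
###Code
# Sketch: convert_vector followed by reverse_convert should give back the same numbers
roundtrip = reverse_convert(convert_vector(test_vec))
print(all(abs(a - b) < 1e-9 for a, b in zip(roundtrip, test_vec)))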
###Markdown
Convert factor vectors to [factor, version] form and write to Elasticsearch
###Code
ver = model.uid
ts = unix_timestamp(current_timestamp())
movie_vectors = model.itemFactors.select("id", vector_struct("features", lit(ver), ts).alias("@model"))
movie_vectors.select("id", "@model.factor", "@model.version", "@model.timestamp").show(5)
user_vectors = model.userFactors.select("id", vector_struct("features", lit(ver), ts).alias("@model"))
user_vectors.select("id", "@model.factor", "@model.version", "@model.timestamp").show(5)
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "update" write mode for ES
# - "append" write mode for Spark
movie_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/movies", mode="append")
user_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/users", mode="append")
###Output
_____no_output_____
###Markdown
Check the data was written correctly
###Code
# search for a particular sci-fi movie
es.search(index="demo", doc_type="movies", q="star wars phantom menace", size=1)['hits']['hits'][0]
###Output
_____no_output_____
###Markdown
Step 5: Recommend using Elasticsearch!
###Code
import tmdbsimple as tmdb
from requests import HTTPError
from IPython.display import Image, HTML, display
tmdb.API_KEY = '6a1cf3f01fd70e17b5393ef1ba162909'
# base URL for TMDB poster images
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
def get_poster_url(id):
movie = tmdb.Movies(id).info()
poster_url = IMAGE_URL + movie['poster_path'] if 'poster_path' in movie and movie['poster_path'] is not None else ""
return poster_url
def fn_query(query_vec, q="*", cosine=False):
return {
"query": {
"function_score": {
"query" : {
"query_string": {
"query": q
}
},
"script_score": {
"script": {
"inline": "payload_vector_score",
"lang": "native",
"params": {
"field": "@model.factor",
"vector": query_vec,
"cosine" : cosine
}
}
},
"boost_mode": "replace"
}
}
}
def get_similar(the_id, q="*", num=10, index="demo", dt="movies"):
response = es.get(index=index, doc_type=dt, id=the_id)
src = response['_source']
if '@model' in src and 'factor' in src['@model']:
raw_vec = src['@model']['factor']
# our script actually uses the list form for the query vector and handles conversion internally
query_vec = reverse_convert(raw_vec)
q = fn_query(query_vec, q=q, cosine=True)
results = es.search(index, dt, body=q)
hits = results['hits']['hits']
return src, hits[1:num+1]
def display_similar(the_id, q="*", num=10, index="demo", dt="movies"):
movie, recs = get_similar(the_id, q, num, index, dt)
# display query
q_im_url = get_poster_url(movie['tmdbId'])
display(HTML("<h2>Get similar movies for:</h2>"))
display(Image(q_im_url, width=200))
display(HTML("<br>"))
display(HTML("<h2>Similar movies:</h2>"))
sim_html = "<table border=0><tr>"
i = 0
for rec in recs:
r_im_url = get_poster_url(rec['_source']['tmdbId'])
r_score = rec['_score']
sim_html += "<td><img src=%s width=200></img></td><td>%2.3f</td>" % (r_im_url, r_score)
i += 1
if i % 5 == 0:
sim_html += "</tr><tr>"
sim_html += "</tr></table>"
display(HTML(sim_html))
display_similar(2628, num=5)
display_similar(2628, num=5, q="title:(NOT trek)")
display_similar(1, num=5, q="genres:children")
###Output
_____no_output_____ |
src/pca_visualisation.ipynb | ###Markdown
Visualising the Classification Power of Data: using PCA to explore how well your data can separate classes
###Code
#imports
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
###Output
_____no_output_____
###Markdown
Dataset
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
data = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])
data['y'] = cancer['target']
print(data.columns)
print(len(data))
data.head()
#Information values
import information_value as iv
features = data.columns
for f in features:
iv_ = iv.calc_iv(data,f,'y')
print("{}: {}".format(f,round(iv_,2)))
#Correlation Matrix
mat = data[features].corr()
test_data = pd.DataFrame(mat,columns=features,index=features)
plt.figure(figsize=(15, 15), dpi= 80, facecolor='w', edgecolor='k')
sns.set(font_scale=0.8)
ax = plt.axes()
sns.heatmap(mat,cmap='coolwarm',ax=ax,annot=True,fmt='.0g')
ax.set_title('Correlation Matrix Heatmap')
plt.show()
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10,10))
plt.scatter('mean symmetry','worst smoothness',c='#ff2121',edgecolors='#000000',data=data[data.y == 1])
plt.scatter('mean symmetry','worst smoothness',c='#2176ff',edgecolors='#000000',data=data[data.y == 0])
plt.ylabel("worst smoothness",size=20)
plt.xlabel('mean symmetry',size=20)
plt.legend(['Malignant','Benign'],loc =2,prop={"size":20})
plt.savefig('../figures/pca1.png',format='png')
###Output
_____no_output_____
###Markdown
PCA - Entire Dataset
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
#Scale the data
scaler = StandardScaler()
scaler.fit(data)
scaled = scaler.transform(data)
#Obtain principal components
pca = PCA().fit(scaled)
pc = pca.transform(scaled)
pc1 = pc[:,0]
pc2 = pc[:,1]
#Plot principal components
plt.figure(figsize=(10,10))
colour = ['#ff2121' if y == 1 else '#2176ff' for y in data['y']]
plt.scatter(pc1,pc2 ,c=colour,edgecolors='#000000')
plt.ylabel("Glucose",size=20)
plt.xlabel('Age',size=20)
plt.yticks(size=12)
plt.xticks(size=12)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.savefig('../figures/pca2.png',format='png')
var = pca.explained_variance_[0:10]  # variance explained by each of the first 10 components (absolute values, not percentages)
labels = ['PC1','PC2','PC3','PC4','PC5','PC6','PC7','PC8','PC9','PC10']
plt.figure(figsize=(15,7))
plt.bar(labels,var,)
plt.xlabel('Pricipal Component', fontsize=18)
plt.ylabel('Variance Explained', fontsize=18)
print(var[0]+ var[1])
plt.savefig('../figures/pca_scree.png',format='png')
###Output
19.678707842971907
###Markdown
PCA - Groups
###Code
group_1 = ['mean smoothness','smoothness error','worst smoothness',
'mean symmetry', 'symmetry error','worst symmetry']
group_2 = ['mean perimeter','perimeter error','worst perimeter',
'mean concavity','concavity error','worst concavity']
fig, ax = plt.subplots(nrows=1, ncols=2,figsize=(20,10))
group = [group_1,group_2]
for i,g in enumerate(group):
#Scale the data
scaler = StandardScaler()
scaler.fit(data[g])
scaled = scaler.transform(data[g])
#Obtain principal components
pca = PCA().fit(scaled)
pc = pca.transform(scaled)
pc1 = pc[:,0]
pc2 = pc[:,1]
#Plot principal components
ax[i].scatter(pc1,pc2 ,c=colour,edgecolors='#000000')
ax[i].set_title('Group {}'.format(i+1), fontsize=25)
ax[i].set_xlabel('PC1', fontsize=18)
ax[i].set_ylabel('PC2', fontsize=18)
plt.savefig('../figures/pca_group.png',format='png')
from sklearn.model_selection import train_test_split
import sklearn.metrics as metric
import statsmodels.api as sm
for i,g in enumerate(group):
x = data[g]
x = sm.add_constant(x)
y = data['y']
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state = 101)
model = sm.Logit(y_train,x_train).fit() #fit logistic regression model
predictions = np.around(model.predict(x_test))
accuracy = metric.accuracy_score(y_test,predictions)
print("Accuracy of Group {}: {}".format(i+1,accuracy))
###Output
Optimization terminated successfully.
Current function value: 0.458884
Iterations 7
Accuracy of Group 1: 0.7368421052631579
Optimization terminated successfully.
Current function value: 0.103458
Iterations 10
Accuracy of Group 2: 0.9707602339181286
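###Markdown
To put a number on what the scatter plots suggest, the same logistic-regression check can be run on just the first two principal components of each group. This is a sketch reusing the objects defined above (`group`, `data`, the same train/test split seed); exact accuracies will vary with the split.
###Code
# Sketch: accuracy of a logistic regression fit on only PC1 and PC2 of each group
for i, g in enumerate(group):
    scaled = StandardScaler().fit_transform(data[g])
    pc = PCA(n_components=2).fit_transform(scaled)
    x = sm.add_constant(pd.DataFrame(pc, columns=['PC1', 'PC2']))
    x_train, x_test, y_train, y_test = train_test_split(x, data['y'], test_size=0.3, random_state=101)
    model = sm.Logit(y_train, x_train).fit(disp=0)
    predictions = np.around(model.predict(x_test))
    print("PC1/PC2 accuracy of Group {}: {}".format(i + 1, metric.accuracy_score(y_test, predictions)))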
|
lessons/Chapter3/03_burgers_1d.ipynb | ###Markdown
1-D Burgers equation$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}$$
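For orientation (a minimal sketch, not the lesson's solver), one explicit finite-difference time step for this equation can be written with a backward difference for the convection term and a central difference for the diffusion term; `nx`, `nu`, `dx` and `dt` below are placeholder values.
###Code
# Sketch: one explicit time step of 1-D Burgers with periodic boundaries
import numpy as np

nx, nu = 101, 0.07                                   # placeholder grid size and viscosity
dx, dt = 2 * np.pi / (nx - 1), 1e-4                  # placeholder spacing and time step
u = np.sin(np.linspace(0, 2 * np.pi, nx))            # some initial profile

def step(u):
    up = np.roll(u, -1)   # u[i+1]
    um = np.roll(u, 1)    # u[i-1]
    return (u
            - u * dt / dx * (u - um)                 # convection: -u * du/dx (backward difference)
            + nu * dt / dx**2 * (up - 2 * u + um))   # diffusion: nu * d2u/dx2 (central difference)

u = step(u)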
###Code
from IPython.core.display import HTML
def css_styling():
styles = open("../../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____ |
PY_02_Intro_parte_2/PY_02_forca_pt2_strings.ipynb | ###Markdown
`find` also accepts a second parameter, which defines the position we would like to start searching from, for example:>>> palavra = "aluracursos">>> palavra.find("a",1) searching for "a" starting from the second position 4
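A runnable version of the example above, just for illustration:
###Code
palavra = "aluracursos"
print(palavra.find("a"))     # 0  (first occurrence)
print(palavra.find("a", 1))  # 4  (searching from the second position)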
###Code
for letra in palavra:
print(letra)
def jogar():
print(40 * "*")
print("Bem vindo ao jogo da FORCA")
print(40 * "*")
palavra_secreta = 'banana'
enforcou = False
acertou = False
while not enforcou and not acertou:
chute = input("Qual letra? ")
for letra in palavra_secreta:
if letra == chute:
print(f'Temos a letra {chute}')
# else:
# print(f"Não tem a letra {chute}")
print('jogando...')
# if __name__ == "main":
jogar()
###Output
_____no_output_____
###Markdown
Python distinguishes uppercase from lowercase letters, so a != A. Let's create an index so that Python returns the position where the letter is.
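A small aside (illustration only, not the approach used below): Python's built-in `enumerate` yields the position and the letter together, which is what the manual `index` counter in the next cell reproduces.
###Code
# Sketch: enumerate gives (position, letter) pairs directly
for index, letra in enumerate('banana'):
    if letra == 'a':
        print(f'Temos a letra a na posição {index}')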
###Code
def jogar():
print(40 * "*")
print("Bem vindo ao jogo da FORCA")
print(40 * "*")
palavra_secreta = 'banana'
enforcou = False
acertou = False
while not enforcou and not acertou:
chute = input("Qual letra? ")
index = 0
for letra in palavra_secreta:
if letra == chute:
print(f'Temos a letra {chute} na posição {index}')
index = index + 1
print('jogando...')
# if __name__ == "main":
jogar()
###Output
****************************************
Bem vindo ao jogo da FORCA
****************************************
Qual letra? a
Temos a letra a na posição 1
Temos a letra a na posição 3
Temos a letra a na posição 5
jogando...
Qual letra? A
jogando...
###Markdown
Some methods used with strings
###Code
nome = 'rafael'
nome.capitalize() # returns the string with the FIRST LETTER CAPITALIZED
nome
# nome.capitalize(inplace = True)
#THIS WILL RAISE AN ERROR
# TypeError: str.capitalize() takes no keyword arguments
#To keep the capitalized version I need to create another variable
nome2 = nome.capitalize()
nome2
nome.endswith('el') # asks whether the word ends with the given letter(s)
#STARTSWITH
nome2.lower() # converts all characters to lowercase
nome2.upper()
fruta = ' abacaxi '
fruta
fruta.strip()
fruta
nova_fruta = fruta.strip()
nova_fruta
def jogar():
print(40 * "*")
print("Bem vindo ao jogo da FORCA")
print(40 * "*")
palavra_secreta = 'banana'
enforcou = False
acertou = False
while not enforcou and not acertou:
chute = input("Qual letra? ")
chute = chute.strip()
index = 0
for letra in palavra_secreta:
if chute.upper() == letra.upper():
print(f'Temos a letra {chute} na posição {index}')
index = index + 1
print('jogando...')
# if __name__ == "main":
jogar()
###Output
****************************************
Bem vindo ao jogo da FORCA
****************************************
Qual letra? s
jogando...
|
ModuleTwoLabActivity (3).ipynb | ###Markdown
Lab 1a Basics Practice: Notebooks, Comments, print(), type(), Addition, Errors and Art Student will be able to- Use Python 3 in Jupyter notebooks- Write working code using `print()` and `` comments - Write working code using `type()` and variables- Combine Strings using string addition (+)- Add numbers in code (+)- Troubleshoot errors- Create character art >**Note:** The **[ ]** indicates student has a task to complete. >**Reminder:** To run code and save changes: student should upload or clone a copy of notebooks. Notebook use- [ ] Insert a **code cell** below - [ ] Enter the following Python code, including the comment: ```Python [ ] print 'Hello!' and remember to save notebook!print('Hello!')```Then run the code - the output should be: `Hello!` Run the cell below - [ ] Use **Ctrl + Enter** - [ ] Use **Shift + Enter**
###Code
print('watch for the cat')
###Output
_____no_output_____
###Markdown
Student's Notebook editing- [ ] Edit **this** notebook Markdown cell replacing the word "Student's" above with your name- [ ] Run the cell to display the formatted text- [ ] Run any 'markdown' cells that are in edit mode, so they are easier to read [ ] Convert \*this\* cell from markdown to a code cell, then run it print('Run as a code cell') CommentsCreate a code comment that identifies this notebook, containing your name and the date. Use print() to - [ ] print [**your_name**]- [ ] print **is using Python!**
###Code
# [ ] print your name
# [ ] print "is using Python"
###Output
_____no_output_____
###Markdown
Output above should be: `Your Name is using Python!` Use variables in print()- [ ] Create a variable **your_name** and assign it a string containing your name- [ ] Print **your_name**
###Code
# [ ] create a variable your_name and assign it a string containing your name
#[ ] print your_name
###Output
_____no_output_____
###Markdown
Create more string variables- **[ ]** Create variables as directed below- **[ ]** Print the variables
###Code
# [ ] create variables and assign values for: favorite_song, shoe_size, lucky_number
# [ ] print the value of each variable favorite_song, shoe_size, and lucky_number
###Output
_____no_output_____
###Markdown
Use string addition- **[ ]** Print the above string variables (favorite_song, shoe_size, lucky_number) combined with a description by using **string addition**>For example, favorite_song displayed as: `favorite song is happy birthday`
###Code
# [ ] print favorite_song with description
# [ ] print shoe_size with description
# [ ] print lucky_number with description
###Output
_____no_output_____
###Markdown
More string addition- **[ ]** Make a single string (sentence) in a variable called favorite_lucky_shoe using **string addition** with favorite_song, shoe_size, lucky_number variables and other strings as needed - **[ ]** Print the value of the favorite_lucky_shoe variable string> Sample output: `For singing happy birthday 8.5 times, you will be fined $25`
###Code
# assign favorite_lucky_shoe using
###Output
_____no_output_____
###Markdown
print() art Use `print()` and the asterisk **\*** to create the following shapes:- [ ] Diagonal line - [ ] Rectangle - [ ] Smiley face
###Code
# [ ] print a diagonal using "*"
# [ ] rectangle using "*"
# [ ] smiley using "*"
###Output
_____no_output_____
###Markdown
Using `type()` Calculate the *type* using `type()`
###Code
# [ ] display the type of 'your name' (use single quotes)
# [ ] display the type of "save your notebook!" (use double quotes)
# [ ] display the type of "25" (use quotes)
# [ ] display the type of "save your notebook " + 'your name'
# [ ] display the type of 25 (no quotes)
# [ ] display the type of 25 + 10
# [ ] display the type of 1.55
# [ ] display the type of 1.55 + 25
###Output
_____no_output_____
###Markdown
Find the type of variables- **[ ]** Run the cell below to make the variables available to be used in other code- **[ ]** Display the data type as directed in the cells that follow
###Code
# assignments ***RUN THIS CELL*** before starting the section
# [ ] display the current type of the variable student_name
# [ ] display the type of student_age
# [ ] display the type of student_grade
# [ ] display the type of student_age + student_grade
# [ ] display the current type of student_id
# assign new value to student_id
# [ ] display the current of student_id
###Output
_____no_output_____
###Markdown
Number integer addition- **[ ]** Create variables (x, y, z) with integer values
###Code
# [ ] create integer variables (x, y, z) and assign them 1-3 digit integers (no decimals - no quotes)
###Output
_____no_output_____
###Markdown
- **[ ]** Insert a **code cell** below- **[ ]** Create an integer variable named **xyz_sum** equal to the sum of x, y, and z- **[ ]** Print the value of **xyz_sum**
###Code
###Output
_____no_output_____
###Markdown
Errors- **[ ]** Troubleshoot and fix the errors below
###Code
# [ ] fix the error
print("Hello World!"")
# [ ] fix the error
print(strings have quotes and variables have names)
# [ ] fix the error
print( "I have $" + 5)
# [ ] fix the error
print('always save the notebook")
###Output
_____no_output_____
###Markdown
ASCII art- **[ ]** Display first name or initials as ASCII Art- **[ ]** Challenge: insert an additional code cell to make an ASCII pictureCheck out the video if you are unsure of ASCII Art (it was the extra credit assignment)[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/d7a3d1b4-8d8d-4e9e-a984-a6920bcd7ca1/Unit1_Section1.5-ASCII_Art.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/d7a3d1b4-8d8d-4e9e-a984-a6920bcd7ca1/Unit1_Section1.5-ASCII_Art.vtt","srclang":"en","kind":"subtitles","label":"english"}])
###Code
# [ ] ASCII ART
# [ ] ASCII ART
###Output
_____no_output_____ |
src/train.ipynb | ###Markdown
Data Directory
###Code
import os
print(os.listdir("../input"))
print(os.listdir("../input/dataset/dataset"))
###Output
_____no_output_____
###Markdown
Install Dependencies
###Code
import torch
print(torch.__version__)
print(torch.cuda.device_count())
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import os
import cv2
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import numpy as np
import pandas as pd
from torch.utils import data
from torch.utils.data import DataLoader
from torch.optim.lr_scheduler import MultiStepLR
###Output
_____no_output_____
###Markdown
Hyper-parameters
###Code
dataroot = "../input/dataset/dataset/"
ckptroot = "./"
lr = 1e-4
weight_decay = 1e-5
batch_size = 32
num_workers = 8
test_size = 0.8
shuffle = True
epochs = 80
start_epoch = 0
resume = False
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def toDevice(datas, device):
"""Enable cuda."""
imgs, angles = datas
return imgs.float().to(device), angles.float().to(device)
def augment(dataroot, imgName, angle):
"""Data augmentation."""
name = dataroot + 'IMG/' + imgName.split('\\')[-1]
current_image = cv2.imread(name)
if current_image is None:
print(name)
current_image = current_image[65:-25, :, :]
if np.random.rand() < 0.5:
current_image = cv2.flip(current_image, 1)
angle = angle * -1.0
return current_image, angle
###Output
_____no_output_____
###Markdown
Load data
###Code
import scipy
from scipy import signal
def load_data(data_dir, test_size):
"""Load training data and train validation split"""
# reads CSV file into a single dataframe variable
data_df = pd.read_csv(os.path.join(data_dir, 'driving_log.csv'),
names=['center', 'left', 'right', 'steering', 'throttle', 'reverse', 'speed'])
# smooth data signal with `savgol_filter`
data_df["steering"] = signal.savgol_filter(data_df["steering"].values.tolist(), 51, 11)
# Divide the data into training set and validation set
train_len = int(test_size * data_df.shape[0])
valid_len = data_df.shape[0] - train_len
trainset, valset = data.random_split(
data_df.values.tolist(), lengths=[train_len, valid_len])
return trainset, valset
trainset, valset = load_data(dataroot, test_size)
###Output
_____no_output_____
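###Markdown
To see what the Savitzky-Golay filter inside `load_data` does to the steering signal, a quick plot helps. This is a sketch added for illustration; it re-reads the CSV with the same column names assumed above.
###Code
# Sketch: raw vs. smoothed steering angle for the first 1000 samples
import matplotlib.pyplot as plt

raw = pd.read_csv(os.path.join(dataroot, 'driving_log.csv'),
                  names=['center', 'left', 'right', 'steering', 'throttle', 'reverse', 'speed'])
smoothed = signal.savgol_filter(raw['steering'].values, 51, 11)
plt.plot(raw['steering'].values[:1000], alpha=0.5, label='raw')
plt.plot(smoothed[:1000], label='savgol_filter(51, 11)')
plt.legend()
plt.show()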
###Markdown
Create dataset
###Code
class TripletDataset(data.Dataset):
def __init__(self, dataroot, samples, transform=None):
self.samples = samples
self.dataroot = dataroot
self.transform = transform
def __getitem__(self, index):
batch_samples = self.samples[index]
steering_angle = float(batch_samples[3])
center_img, steering_angle_center = augment(self.dataroot, batch_samples[0], steering_angle)
        # side cameras get a fixed steering correction of +/-0.4 so the model learns to recover toward the centre
        left_img, steering_angle_left = augment(self.dataroot, batch_samples[1], steering_angle + 0.4)
        right_img, steering_angle_right = augment(self.dataroot, batch_samples[2], steering_angle - 0.4)
center_img = self.transform(center_img)
left_img = self.transform(left_img)
right_img = self.transform(right_img)
return (center_img, steering_angle_center), (left_img, steering_angle_left), (right_img, steering_angle_right)
def __len__(self):
return len(self.samples)
###Output
_____no_output_____
###Markdown
Get data loader
###Code
print("==> Preparing dataset ...")
def data_loader(dataroot, trainset, valset, batch_size, shuffle, num_workers):
"""Self-Driving vehicles simulator dataset Loader.
Args:
trainset: training set
valset: validation set
batch_size: training set input batch size
shuffle: whether shuffle during training process
num_workers: number of workers in DataLoader
Returns:
trainloader (torch.utils.data.DataLoader): DataLoader for training set
testloader (torch.utils.data.DataLoader): DataLoader for validation set
"""
transformations = transforms.Compose(
[transforms.Lambda(lambda x: (x / 127.5) - 1.0)])
# Load training data and validation data
training_set = TripletDataset(dataroot, trainset, transformations)
trainloader = DataLoader(training_set,
batch_size=batch_size,
shuffle=shuffle,
num_workers=num_workers)
validation_set = TripletDataset(dataroot, valset, transformations)
valloader = DataLoader(validation_set,
batch_size=batch_size,
shuffle=shuffle,
num_workers=num_workers)
return trainloader, valloader
trainloader, validationloader = data_loader(dataroot,
trainset, valset,
batch_size,
shuffle,
num_workers)
###Output
==> Preparing dataset ...
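###Markdown
A quick way to confirm the loaders produce what the training loop expects is to pull a single batch; the shapes in the comments are what the default collate behaviour should give for the cropped 70x320x3 images (sketch only).
###Code
# Sketch: fetch one batch and inspect the tensors for the centre camera
centers, lefts, rights = next(iter(trainloader))
imgs, angles = centers
print(imgs.shape)    # expected: torch.Size([32, 70, 320, 3])
print(angles.shape)  # expected: torch.Size([32])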
###Markdown
Define model
###Code
class NetworkNvidia(nn.Module):
"""NVIDIA model used in the paper."""
def __init__(self):
"""Initialize NVIDIA model.
NVIDIA model used
Image normalization to avoid saturation and make gradients work better.
Convolution: 5x5, filter: 24, strides: 2x2, activation: ELU
Convolution: 5x5, filter: 36, strides: 2x2, activation: ELU
Convolution: 5x5, filter: 48, strides: 2x2, activation: ELU
Convolution: 3x3, filter: 64, strides: 1x1, activation: ELU
Convolution: 3x3, filter: 64, strides: 1x1, activation: ELU
Drop out (0.5)
Fully connected: neurons: 100, activation: ELU
Fully connected: neurons: 50, activation: ELU
Fully connected: neurons: 10, activation: ELU
Fully connected: neurons: 1 (output)
the convolution layers are meant to handle feature engineering
the fully connected layer for predicting the steering angle.
"""
super(NetworkNvidia, self).__init__()
self.conv_layers = nn.Sequential(
nn.Conv2d(3, 24, 5, stride=2),
nn.ELU(),
nn.Conv2d(24, 36, 5, stride=2),
nn.ELU(),
nn.Conv2d(36, 48, 5, stride=2),
nn.ELU(),
nn.Conv2d(48, 64, 3),
nn.ELU(),
nn.Conv2d(64, 64, 3),
nn.Dropout(0.5)
)
self.linear_layers = nn.Sequential(
nn.Linear(in_features=64 * 2 * 33, out_features=100),
nn.ELU(),
nn.Linear(in_features=100, out_features=50),
nn.ELU(),
nn.Linear(in_features=50, out_features=10),
nn.Linear(in_features=10, out_features=1)
)
def forward(self, input):
"""Forward pass."""
        # reshape the batch to NCHW; note that permute(0, 3, 1, 2) would be the usual
        # HWC -> CHW transpose, whereas view() only reinterprets the underlying memory
        input = input.view(input.size(0), 3, 70, 320)
output = self.conv_layers(input)
# print(output.shape)
output = output.view(output.size(0), -1)
output = self.linear_layers(output)
return output
# Define model
print("==> Initialize model ...")
model = NetworkNvidia()
print("==> Initialize model done ...")
###Output
_____no_output_____
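###Markdown
The `in_features=64 * 2 * 33` of the first linear layer comes from the convolutional stack; a dummy forward pass confirms it (a small sketch using the `model` created above).
###Code
# Sketch: the conv stack maps a 3x70x320 input to 64 feature maps of size 2x33
dummy = torch.zeros(1, 3, 70, 320)
print(model.conv_layers(dummy).shape)  # expected: torch.Size([1, 64, 2, 33])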
###Markdown
Define optimizer and criterion
###Code
# Define optimizer and criterion
optimizer = optim.Adam(model.parameters(),
lr=lr,
weight_decay=weight_decay)
criterion = nn.MSELoss()
###Output
_____no_output_____
###Markdown
Learning rate scheduler
###Code
# learning rate scheduler
scheduler = MultiStepLR(optimizer, milestones=[30, 50], gamma=0.1)
# transfer to gpu
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
###Output
_____no_output_____
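###Markdown
The milestones mean the learning rate is multiplied by `gamma` once epoch 30 and epoch 50 are passed. The standalone toy example below (a sketch with a throwaway optimizer, not the one used for training) makes that concrete.
###Code
# Sketch: watch MultiStepLR drop the learning rate at the milestones
toy_opt = optim.Adam([torch.zeros(1, requires_grad=True)], lr=lr)
toy_sched = MultiStepLR(toy_opt, milestones=[30, 50], gamma=0.1)
for e in range(60):
    toy_opt.step()
    toy_sched.step()
    if e in (29, 30, 49, 50):
        print("after epoch %d: lr = %g" % (e, toy_opt.param_groups[0]['lr']))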
###Markdown
Resume training
###Code
if resume:
print("==> Loading checkpoint ...")
checkpoint = torch.load("../input/pretrainedmodels/both-nvidia-model-61.h5",
map_location=lambda storage, loc: storage)
start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
scheduler.load_state_dict(checkpoint['scheduler'])
###Output
_____no_output_____
###Markdown
Train
###Code
class Trainer(object):
"""Trainer."""
def __init__(self,
ckptroot,
model,
device,
epochs,
criterion,
optimizer,
scheduler,
start_epoch,
trainloader,
validationloader):
"""Self-Driving car Trainer.
Args:
model:
device:
epochs:
criterion:
optimizer:
start_epoch:
trainloader:
validationloader:
"""
super(Trainer, self).__init__()
self.model = model
self.device = device
self.epochs = epochs
self.ckptroot = ckptroot
self.criterion = criterion
self.optimizer = optimizer
self.scheduler = scheduler
self.start_epoch = start_epoch
self.trainloader = trainloader
self.validationloader = validationloader
def train(self):
"""Training process."""
self.model.to(self.device)
for epoch in range(self.start_epoch, self.epochs + self.start_epoch):
            self.scheduler.step()  # note: PyTorch >= 1.1 expects scheduler.step() after the optimizer updates for the epoch
# Training
train_loss = 0.0
self.model.train()
for local_batch, (centers, lefts, rights) in enumerate(self.trainloader):
# Transfer to GPU
centers, lefts, rights = toDevice(centers, self.device), toDevice(
lefts, self.device), toDevice(rights, self.device)
# Model computations
self.optimizer.zero_grad()
datas = [centers, lefts, rights]
for data in datas:
imgs, angles = data
# print("training image: ", imgs.shape)
outputs = self.model(imgs)
loss = self.criterion(outputs, angles.unsqueeze(1))
loss.backward()
self.optimizer.step()
train_loss += loss.data.item()
if local_batch % 100 == 0:
print("Training Epoch: {} | Loss: {}".format(epoch, train_loss / (local_batch + 1)))
# Validation
self.model.eval()
valid_loss = 0
with torch.set_grad_enabled(False):
for local_batch, (centers, lefts, rights) in enumerate(self.validationloader):
# Transfer to GPU
centers, lefts, rights = toDevice(centers, self.device), toDevice(
lefts, self.device), toDevice(rights, self.device)
# Model computations
self.optimizer.zero_grad()
datas = [centers, lefts, rights]
for data in datas:
imgs, angles = data
outputs = self.model(imgs)
loss = self.criterion(outputs, angles.unsqueeze(1))
valid_loss += loss.data.item()
if local_batch % 100 == 0:
print("Validation Loss: {}".format(valid_loss / (local_batch + 1)))
print()
# Save model
if epoch % 5 == 0 or epoch == self.epochs + self.start_epoch - 1:
state = {
'epoch': epoch + 1,
'state_dict': self.model.state_dict(),
'optimizer': self.optimizer.state_dict(),
'scheduler': self.scheduler.state_dict(),
}
self.save_checkpoint(state)
def save_checkpoint(self, state):
"""Save checkpoint."""
print("==> Save checkpoint ...")
if not os.path.exists(self.ckptroot):
os.makedirs(self.ckptroot)
torch.save(state, self.ckptroot + 'both-nvidia-model-{}.h5'.format(state['epoch']))
print("==> Start training ...")
trainer = Trainer(ckptroot,
model,
device,
epochs,
criterion,
optimizer,
scheduler,
start_epoch,
trainloader,
validationloader)
trainer.train()
###Output
==> Start training ...
Training Epoch: 0 | Loss: 0.5558980479836464
Training Epoch: 0 | Loss: 0.5213123413337635
Training Epoch: 0 | Loss: 0.49653078340438767
Training Epoch: 0 | Loss: 0.47576030551075343
Training Epoch: 0 | Loss: 0.46251533310070747
Training Epoch: 0 | Loss: 0.45327764451912717
Training Epoch: 0 | Loss: 0.4468134562632134
Training Epoch: 0 | Loss: 0.43905620173167187
Training Epoch: 0 | Loss: 0.4328681884645095
Validation Loss: 0.3594933710992336
Validation Loss: 0.3774320130780487
Validation Loss: 0.3786888392688475
==> Save checkpoint ...
Training Epoch: 1 | Loss: 0.29850859567523
Training Epoch: 1 | Loss: 0.3933299669424201
Training Epoch: 1 | Loss: 0.38579951670595364
Training Epoch: 1 | Loss: 0.3791089756445235
Training Epoch: 1 | Loss: 0.37866771645249425
Training Epoch: 1 | Loss: 0.3755420975595296
Training Epoch: 1 | Loss: 0.37281983622509407
Training Epoch: 1 | Loss: 0.37209821650266817
Training Epoch: 1 | Loss: 0.37061068274722564
Validation Loss: 0.4482746571302414
Validation Loss: 0.36231463190426333
Validation Loss: 0.35759038125067505
Training Epoch: 2 | Loss: 0.4340990409255028
Training Epoch: 2 | Loss: 0.3613892498767317
Training Epoch: 2 | Loss: 0.3549202584675444
Training Epoch: 2 | Loss: 0.35272254028738137
Training Epoch: 2 | Loss: 0.35017852030434365
Training Epoch: 2 | Loss: 0.3463931422821895
Training Epoch: 2 | Loss: 0.3460471120270933
Training Epoch: 2 | Loss: 0.34563853538538014
Training Epoch: 2 | Loss: 0.3454248610367712
Validation Loss: 0.3542107194662094
Validation Loss: 0.3406148337676088
Validation Loss: 0.33789137188363727
Training Epoch: 3 | Loss: 0.264209995046258
Training Epoch: 3 | Loss: 0.3261413291410202
Training Epoch: 3 | Loss: 0.3273491282433049
Training Epoch: 3 | Loss: 0.3265615521630031
Training Epoch: 3 | Loss: 0.3258087210045974
Training Epoch: 3 | Loss: 0.3259306916253206
Training Epoch: 3 | Loss: 0.3268608287822908
Training Epoch: 3 | Loss: 0.3264079918000799
Training Epoch: 3 | Loss: 0.3259248299007949
Validation Loss: 0.39915426820516586
Validation Loss: 0.30457275844003895
Validation Loss: 0.3131801426151202
Training Epoch: 4 | Loss: 0.39218131452798843
Training Epoch: 4 | Loss: 0.3071626303533074
Training Epoch: 4 | Loss: 0.31109592095666116
Training Epoch: 4 | Loss: 0.31579079903179524
Training Epoch: 4 | Loss: 0.31195566433242655
Training Epoch: 4 | Loss: 0.30986880071780937
Training Epoch: 4 | Loss: 0.31034917343011165
Training Epoch: 4 | Loss: 0.3110801278920406
Training Epoch: 4 | Loss: 0.3119584239668186
Validation Loss: 0.33516016602516174
Validation Loss: 0.3022027490383918
Validation Loss: 0.30148412708878813
Training Epoch: 5 | Loss: 0.27030542120337486
Training Epoch: 5 | Loss: 0.30442704625501493
Training Epoch: 5 | Loss: 0.29826431486996546
Training Epoch: 5 | Loss: 0.2994403130377943
Training Epoch: 5 | Loss: 0.2985142806540255
Training Epoch: 5 | Loss: 0.2981301402550436
Training Epoch: 5 | Loss: 0.297343739291371
Training Epoch: 5 | Loss: 0.2987090627371882
Training Epoch: 5 | Loss: 0.2987920840790544
Validation Loss: 0.32323939725756645
Validation Loss: 0.29273371717496083
Validation Loss: 0.2871951352879034
==> Save checkpoint ...
Training Epoch: 6 | Loss: 0.32839538902044296
Training Epoch: 6 | Loss: 0.29666612228138906
Training Epoch: 6 | Loss: 0.2934820310569447
Training Epoch: 6 | Loss: 0.29186512748602517
Training Epoch: 6 | Loss: 0.29133500919405586
Training Epoch: 6 | Loss: 0.2883545456402108
Training Epoch: 6 | Loss: 0.2876144118015089
Training Epoch: 6 | Loss: 0.28617462163894686
Training Epoch: 6 | Loss: 0.2859632329801365
Validation Loss: 0.3766295313835144
Validation Loss: 0.2782088677113009
Validation Loss: 0.2822243148155177
Training Epoch: 7 | Loss: 0.2903166115283966
Training Epoch: 7 | Loss: 0.2865490806611753
Training Epoch: 7 | Loss: 0.28307681777213345
Training Epoch: 7 | Loss: 0.28187749710589943
Training Epoch: 7 | Loss: 0.2844150026242929
Training Epoch: 7 | Loss: 0.28225355822511183
Training Epoch: 7 | Loss: 0.2826199128808525
Training Epoch: 7 | Loss: 0.2808552694330244
Training Epoch: 7 | Loss: 0.27950089134754946
Validation Loss: 0.2773801423609257
Validation Loss: 0.2668501916203168
Validation Loss: 0.2679719563471663
Training Epoch: 8 | Loss: 0.22995337285101414
Training Epoch: 8 | Loss: 0.2671015135538165
Training Epoch: 8 | Loss: 0.2693308888012497
Training Epoch: 8 | Loss: 0.2699474113253858
Training Epoch: 8 | Loss: 0.2702809993474932
Training Epoch: 8 | Loss: 0.27097151106494866
Training Epoch: 8 | Loss: 0.269372782569236
Training Epoch: 8 | Loss: 0.2703229691275261
Training Epoch: 8 | Loss: 0.2694994483233782
Validation Loss: 0.4028911516070366
Validation Loss: 0.26184657857854765
Validation Loss: 0.2637266797935636
Training Epoch: 9 | Loss: 0.21294931136071682
Training Epoch: 9 | Loss: 0.2616017740404252
Training Epoch: 9 | Loss: 0.26171125053655153
Training Epoch: 9 | Loss: 0.26092398835599817
Training Epoch: 9 | Loss: 0.2617953882373851
Training Epoch: 9 | Loss: 0.26141142010681345
Training Epoch: 9 | Loss: 0.2625510316208848
Training Epoch: 9 | Loss: 0.2610523397549349
Training Epoch: 9 | Loss: 0.2612313608651732
Validation Loss: 0.20352470874786377
Validation Loss: 0.2572043027157093
Validation Loss: 0.2567003251410168
Training Epoch: 10 | Loss: 0.21185582876205444
Training Epoch: 10 | Loss: 0.2526186974073696
Training Epoch: 10 | Loss: 0.25129839042618646
Training Epoch: 10 | Loss: 0.25301170805500095
Training Epoch: 10 | Loss: 0.2529331938505284
Training Epoch: 10 | Loss: 0.2524757189341559
Training Epoch: 10 | Loss: 0.2522906418982738
Training Epoch: 10 | Loss: 0.2547447567888019
Training Epoch: 10 | Loss: 0.25426639573353244
Validation Loss: 0.17803704738616943
Validation Loss: 0.24933554828720222
Validation Loss: 0.25088364510004646
==> Save checkpoint ...
Training Epoch: 11 | Loss: 0.28222727589309216
Training Epoch: 11 | Loss: 0.24758459486546788
Training Epoch: 11 | Loss: 0.24390571605210282
Training Epoch: 11 | Loss: 0.2457734851531709
Training Epoch: 11 | Loss: 0.24518923855333258
Training Epoch: 11 | Loss: 0.24370857883623975
Training Epoch: 11 | Loss: 0.24585306418814795
Training Epoch: 11 | Loss: 0.24607789625922555
Training Epoch: 11 | Loss: 0.24603650066420232
Validation Loss: 0.29393066465854645
Validation Loss: 0.24517780712040343
Validation Loss: 0.24816482426107522
Training Epoch: 12 | Loss: 0.23766812682151794
Training Epoch: 12 | Loss: 0.24573293279413835
Training Epoch: 12 | Loss: 0.2435140684154348
Training Epoch: 12 | Loss: 0.24455862184769886
Training Epoch: 12 | Loss: 0.24235888323433083
Training Epoch: 12 | Loss: 0.24117271392280887
Training Epoch: 12 | Loss: 0.24186863172985304
Training Epoch: 12 | Loss: 0.24082272260259843
Training Epoch: 12 | Loss: 0.24107306878130125
Validation Loss: 0.24100221879780293
Validation Loss: 0.24739118067934962
Validation Loss: 0.2439304100377346
Training Epoch: 13 | Loss: 0.25456971675157547
Training Epoch: 13 | Loss: 0.23461813835593143
Training Epoch: 13 | Loss: 0.2338196463113771
Training Epoch: 13 | Loss: 0.23472174196086354
Training Epoch: 13 | Loss: 0.2332427940932927
Training Epoch: 13 | Loss: 0.233152847989888
Training Epoch: 13 | Loss: 0.23435691840630551
Training Epoch: 13 | Loss: 0.23435870556163466
Training Epoch: 13 | Loss: 0.2337003994172674
Validation Loss: 0.17068983986973763
Validation Loss: 0.23284707438790858
Validation Loss: 0.23032822598476166
Training Epoch: 14 | Loss: 0.20500125363469124
Training Epoch: 14 | Loss: 0.23083460223999355
Training Epoch: 14 | Loss: 0.22811559161903402
Training Epoch: 14 | Loss: 0.2295534267705986
Training Epoch: 14 | Loss: 0.22893911445666654
Training Epoch: 14 | Loss: 0.2293677601624957
Training Epoch: 14 | Loss: 0.2287822798864541
Training Epoch: 14 | Loss: 0.22885720480297203
Training Epoch: 14 | Loss: 0.22837581821413858
Validation Loss: 0.281342051923275
Validation Loss: 0.22756358607681376
Validation Loss: 0.22592272661366866
Training Epoch: 15 | Loss: 0.18625633418560028
Training Epoch: 15 | Loss: 0.22155093053116068
Training Epoch: 15 | Loss: 0.22841702257537871
Training Epoch: 15 | Loss: 0.22528333675759873
Training Epoch: 15 | Loss: 0.2237345713940902
Training Epoch: 15 | Loss: 0.22327746673972307
Training Epoch: 15 | Loss: 0.22320071807729325
Training Epoch: 15 | Loss: 0.22208592686775577
Training Epoch: 15 | Loss: 0.2224890992984864
Validation Loss: 0.20852447301149368
Validation Loss: 0.22743604202192313
Validation Loss: 0.22756089310889221
==> Save checkpoint ...
Training Epoch: 16 | Loss: 0.278608076274395
Training Epoch: 16 | Loss: 0.2176758324782742
Training Epoch: 16 | Loss: 0.22105828319578918
Training Epoch: 16 | Loss: 0.22038722468967056
Training Epoch: 16 | Loss: 0.21867911999771125
Training Epoch: 16 | Loss: 0.21758490169751726
Training Epoch: 16 | Loss: 0.21735895485981382
Training Epoch: 16 | Loss: 0.21838709287818378
Training Epoch: 16 | Loss: 0.21816295476283845
Validation Loss: 0.21124553307890892
Validation Loss: 0.22580153291288874
Validation Loss: 0.22567729530531672
Training Epoch: 17 | Loss: 0.2221863493323326
Training Epoch: 17 | Loss: 0.20700035888775445
Training Epoch: 17 | Loss: 0.20867448877913886
Training Epoch: 17 | Loss: 0.20947744138848248
Training Epoch: 17 | Loss: 0.21001340445837102
Training Epoch: 17 | Loss: 0.21080476700322714
Training Epoch: 17 | Loss: 0.21220476588485343
Training Epoch: 17 | Loss: 0.2127552761958006
Training Epoch: 17 | Loss: 0.21344786353604625
Validation Loss: 0.2768826875835657
Validation Loss: 0.2207322043209973
Validation Loss: 0.21826668909347768
Training Epoch: 18 | Loss: 0.21707474067807198
Training Epoch: 18 | Loss: 0.2100564966088917
Training Epoch: 18 | Loss: 0.2095325038819319
Training Epoch: 18 | Loss: 0.2067850856481745
Training Epoch: 18 | Loss: 0.20531453758459584
Training Epoch: 18 | Loss: 0.20554040704190196
Training Epoch: 18 | Loss: 0.2064526474301683
Training Epoch: 18 | Loss: 0.20697355608770707
Training Epoch: 18 | Loss: 0.20740818699935776
Validation Loss: 0.15573538094758987
Validation Loss: 0.2093696068271552
Validation Loss: 0.20964083010655138
Training Epoch: 19 | Loss: 0.17430533096194267
Training Epoch: 19 | Loss: 0.2051358608676508
Training Epoch: 19 | Loss: 0.20169026392340586
Training Epoch: 19 | Loss: 0.20207260390496085
Training Epoch: 19 | Loss: 0.20188577775204755
Training Epoch: 19 | Loss: 0.20216206715213592
Training Epoch: 19 | Loss: 0.20159144903736162
Training Epoch: 19 | Loss: 0.2022220491864765
Training Epoch: 19 | Loss: 0.20246625956340916
Validation Loss: 0.2369263619184494
Validation Loss: 0.20266468283526673
Validation Loss: 0.20726833708899384
Training Epoch: 20 | Loss: 0.2222776673734188
Training Epoch: 20 | Loss: 0.21027703304765838
Training Epoch: 20 | Loss: 0.20322537491218515
Training Epoch: 20 | Loss: 0.2017889144548843
Training Epoch: 20 | Loss: 0.19995146813312373
Training Epoch: 20 | Loss: 0.20040283582829488
Training Epoch: 20 | Loss: 0.19980571134490102
Training Epoch: 20 | Loss: 0.1993404284502232
Training Epoch: 20 | Loss: 0.19946590290124647
Validation Loss: 0.16440673358738422
Validation Loss: 0.209560443158492
Validation Loss: 0.20665648085672167
==> Save checkpoint ...
Training Epoch: 21 | Loss: 0.2762938588857651
Training Epoch: 21 | Loss: 0.19057230524128616
Training Epoch: 21 | Loss: 0.19143913665424978
Training Epoch: 21 | Loss: 0.19268284435461328
Training Epoch: 21 | Loss: 0.19391095649423148
Training Epoch: 21 | Loss: 0.19397550788029522
Training Epoch: 21 | Loss: 0.19316910364558168
Training Epoch: 21 | Loss: 0.19361230589651346
Training Epoch: 21 | Loss: 0.19401582251052918
Validation Loss: 0.20578867197036743
Validation Loss: 0.20170451565138478
Validation Loss: 0.2017488433048129
Training Epoch: 22 | Loss: 0.19112133979797363
Training Epoch: 22 | Loss: 0.1864249930246779
Training Epoch: 22 | Loss: 0.18730209411043136
Training Epoch: 22 | Loss: 0.1895174391697659
Training Epoch: 22 | Loss: 0.18932300684811024
Training Epoch: 22 | Loss: 0.19022441885718508
Training Epoch: 22 | Loss: 0.18973341082438158
Training Epoch: 22 | Loss: 0.19124891420828974
Training Epoch: 22 | Loss: 0.19155113258225828
Validation Loss: 0.1396778579801321
Validation Loss: 0.20069910235481686
Validation Loss: 0.19718750514349534
Training Epoch: 23 | Loss: 0.1680156160145998
Training Epoch: 23 | Loss: 0.18342934296199
Training Epoch: 23 | Loss: 0.1855131829880289
Training Epoch: 23 | Loss: 0.18612864660253853
Training Epoch: 23 | Loss: 0.18776062037563532
Training Epoch: 23 | Loss: 0.18745538163669928
Training Epoch: 23 | Loss: 0.18678500525771133
Training Epoch: 23 | Loss: 0.18654204154793658
Training Epoch: 23 | Loss: 0.1869311716661006
Validation Loss: 0.2137889377772808
Validation Loss: 0.1934741106677321
Validation Loss: 0.20233425665276117
Training Epoch: 24 | Loss: 0.17449133843183517
Training Epoch: 24 | Loss: 0.18256820474594537
Training Epoch: 24 | Loss: 0.18363206641313004
Training Epoch: 24 | Loss: 0.18103405783738805
Training Epoch: 24 | Loss: 0.18377421041973807
Training Epoch: 24 | Loss: 0.18324544396206827
Training Epoch: 24 | Loss: 0.18310828917751246
Training Epoch: 24 | Loss: 0.1826808032886618
Training Epoch: 24 | Loss: 0.18361921361025502
Validation Loss: 0.21371613070368767
Validation Loss: 0.19830838014546892
Validation Loss: 0.19559160577346438
Training Epoch: 25 | Loss: 0.1620171144604683
Training Epoch: 25 | Loss: 0.17842964881496265
Training Epoch: 25 | Loss: 0.17904694451121103
Training Epoch: 25 | Loss: 0.17786293076753518
Training Epoch: 25 | Loss: 0.1785723666156505
Training Epoch: 25 | Loss: 0.17930432909471308
Training Epoch: 25 | Loss: 0.17931076332294504
Training Epoch: 25 | Loss: 0.17901938823390534
Training Epoch: 25 | Loss: 0.17933988871376091
Validation Loss: 0.16753017529845238
Validation Loss: 0.19168283893625335
Validation Loss: 0.18912111804241416
==> Save checkpoint ...
Training Epoch: 26 | Loss: 0.19842606037855148
Training Epoch: 26 | Loss: 0.17232090076155002
Training Epoch: 26 | Loss: 0.17404945317970877
Training Epoch: 26 | Loss: 0.17182544406590072
Training Epoch: 26 | Loss: 0.1745193297939928
Training Epoch: 26 | Loss: 0.17556633071754565
Training Epoch: 26 | Loss: 0.1767324035097775
Training Epoch: 26 | Loss: 0.17759696574623932
Training Epoch: 26 | Loss: 0.1773701577543859
Validation Loss: 0.1697067767381668
Validation Loss: 0.19083106877411357
Validation Loss: 0.19283857457895778
Training Epoch: 27 | Loss: 0.20691899210214615
Training Epoch: 27 | Loss: 0.17567943114012774
Training Epoch: 27 | Loss: 0.17492046841971615
Training Epoch: 27 | Loss: 0.17283289590262593
Training Epoch: 27 | Loss: 0.17360142392786848
Training Epoch: 27 | Loss: 0.1753042704358831
Training Epoch: 27 | Loss: 0.17597079508939445
Training Epoch: 27 | Loss: 0.17577971016307933
Training Epoch: 27 | Loss: 0.17524438374123527
Validation Loss: 0.20292261242866516
Validation Loss: 0.18492771926714052
Validation Loss: 0.18554761032782383
Training Epoch: 28 | Loss: 0.20249639078974724
Training Epoch: 28 | Loss: 0.17074983258095414
Training Epoch: 28 | Loss: 0.16849764063027664
Training Epoch: 28 | Loss: 0.1701213106469706
Training Epoch: 28 | Loss: 0.1700671935486526
Training Epoch: 28 | Loss: 0.1699974855237051
Training Epoch: 28 | Loss: 0.1698463109589977
Training Epoch: 28 | Loss: 0.1700528596173148
Training Epoch: 28 | Loss: 0.17064785336629115
Validation Loss: 0.20899496227502823
Validation Loss: 0.18283661828367131
Validation Loss: 0.17996628589427738
Training Epoch: 29 | Loss: 0.16723143681883812
Training Epoch: 29 | Loss: 0.16498726173903389
Training Epoch: 29 | Loss: 0.16189140676803404
Training Epoch: 29 | Loss: 0.16562344011327754
Training Epoch: 29 | Loss: 0.16539964532279908
Training Epoch: 29 | Loss: 0.16627585826006658
Training Epoch: 29 | Loss: 0.16672694578320532
Training Epoch: 29 | Loss: 0.16684596137424
Training Epoch: 29 | Loss: 0.1684172000101164
Validation Loss: 0.22685733437538147
Validation Loss: 0.1872056618443515
Validation Loss: 0.18599905579614995
Training Epoch: 30 | Loss: 0.16529813595116138
Training Epoch: 30 | Loss: 0.15969805158490297
Training Epoch: 30 | Loss: 0.15830484040971124
Training Epoch: 30 | Loss: 0.1567927011632147
Training Epoch: 30 | Loss: 0.15550987988924073
Training Epoch: 30 | Loss: 0.15419294125588234
Training Epoch: 30 | Loss: 0.15396508519164287
Training Epoch: 30 | Loss: 0.15370267864779041
Training Epoch: 30 | Loss: 0.15289312571673172
Validation Loss: 0.1469468828290701
Validation Loss: 0.1664571056492848
Validation Loss: 0.1668232187990155
==> Save checkpoint ...
Training Epoch: 31 | Loss: 0.14909717440605164
Training Epoch: 31 | Loss: 0.14830047251785747
Training Epoch: 31 | Loss: 0.14714835821859426
Training Epoch: 31 | Loss: 0.14696658467038526
Training Epoch: 31 | Loss: 0.1473759038332023
Training Epoch: 31 | Loss: 0.14681233402653904
Training Epoch: 31 | Loss: 0.1472432877691757
Training Epoch: 31 | Loss: 0.14672552769373937
Training Epoch: 31 | Loss: 0.14602454004462775
Validation Loss: 0.13192971237003803
Validation Loss: 0.16626779473211506
Validation Loss: 0.16314254069135556
Training Epoch: 32 | Loss: 0.15958929061889648
Training Epoch: 32 | Loss: 0.14507770151969526
Training Epoch: 32 | Loss: 0.14501602531628527
Training Epoch: 32 | Loss: 0.14574033380841891
Training Epoch: 32 | Loss: 0.14597462175386097
Training Epoch: 32 | Loss: 0.14530503421467905
Training Epoch: 32 | Loss: 0.14497977259418557
Training Epoch: 32 | Loss: 0.14500209721225113
Training Epoch: 32 | Loss: 0.14483511763056287
Validation Loss: 0.17656882293522358
Validation Loss: 0.16422325930567366
Validation Loss: 0.16268255127436337
Training Epoch: 33 | Loss: 0.13364188931882381
Training Epoch: 33 | Loss: 0.14090926241184962
Training Epoch: 33 | Loss: 0.1416635098218436
Training Epoch: 33 | Loss: 0.14276675977264577
Training Epoch: 33 | Loss: 0.14268753822530594
Training Epoch: 33 | Loss: 0.14264681345658625
Training Epoch: 33 | Loss: 0.14288842276718078
Training Epoch: 33 | Loss: 0.143205048892376
Training Epoch: 33 | Loss: 0.14302573168647945
Validation Loss: 0.12797581776976585
Validation Loss: 0.16235577179543156
Validation Loss: 0.16118916960905738
Training Epoch: 34 | Loss: 0.1402034778147936
Training Epoch: 34 | Loss: 0.14356202618357275
Training Epoch: 34 | Loss: 0.1437118392650835
Training Epoch: 34 | Loss: 0.14423270784070918
Training Epoch: 34 | Loss: 0.14376815522503006
Training Epoch: 34 | Loss: 0.14316016882881075
Training Epoch: 34 | Loss: 0.14315142542489506
Training Epoch: 34 | Loss: 0.14254543581145807
Training Epoch: 34 | Loss: 0.14212952008919694
Validation Loss: 0.20082200318574905
Validation Loss: 0.15607534636660378
Validation Loss: 0.16042465147613294
Training Epoch: 35 | Loss: 0.13528640940785408
Training Epoch: 35 | Loss: 0.14417017182095512
Training Epoch: 35 | Loss: 0.1413894189540204
Training Epoch: 35 | Loss: 0.1407534778922598
Training Epoch: 35 | Loss: 0.1397215721808244
Training Epoch: 35 | Loss: 0.13935297816627457
Training Epoch: 35 | Loss: 0.13995435623337396
Training Epoch: 35 | Loss: 0.14017588575138112
Training Epoch: 35 | Loss: 0.14056728861135398
Validation Loss: 0.14866485074162483
Validation Loss: 0.1617887373275981
Validation Loss: 0.16014964026931802
==> Save checkpoint ...
Training Epoch: 36 | Loss: 0.15206346102058887
Training Epoch: 36 | Loss: 0.1370103362726398
Training Epoch: 36 | Loss: 0.13863936688899253
Training Epoch: 36 | Loss: 0.13884895035619554
Training Epoch: 36 | Loss: 0.13928021289213116
Training Epoch: 36 | Loss: 0.13960671930772786
Training Epoch: 36 | Loss: 0.14015002151914002
Training Epoch: 36 | Loss: 0.1397832968091584
Training Epoch: 36 | Loss: 0.13992608518987318
Validation Loss: 0.1887362077832222
Validation Loss: 0.16176510182679585
Validation Loss: 0.1594349093600848
Training Epoch: 37 | Loss: 0.12116347625851631
Training Epoch: 37 | Loss: 0.14185324240254588
Training Epoch: 37 | Loss: 0.13897971340469015
Training Epoch: 37 | Loss: 0.1390219624974205
Training Epoch: 37 | Loss: 0.1385143224830435
Training Epoch: 37 | Loss: 0.13875062918818432
Training Epoch: 37 | Loss: 0.13926097735245022
Training Epoch: 37 | Loss: 0.1390064984876211
Training Epoch: 37 | Loss: 0.13889426892546566
Validation Loss: 0.14551307447254658
Validation Loss: 0.15676237156817524
Validation Loss: 0.1586712980689249
Training Epoch: 38 | Loss: 0.12586339749395847
Training Epoch: 38 | Loss: 0.13451387627179376
Training Epoch: 38 | Loss: 0.13464881319422922
Training Epoch: 38 | Loss: 0.1353996942791333
Training Epoch: 38 | Loss: 0.13649351933409
Training Epoch: 38 | Loss: 0.13651473376297665
Training Epoch: 38 | Loss: 0.1371574652460272
Training Epoch: 38 | Loss: 0.13723955648724048
Training Epoch: 38 | Loss: 0.13719154794359772
Validation Loss: 0.13287467882037163
Validation Loss: 0.15505467533216913
Validation Loss: 0.15752896522426635
Training Epoch: 39 | Loss: 0.13174331560730934
Training Epoch: 39 | Loss: 0.1364056709969398
Training Epoch: 39 | Loss: 0.1348766565591616
Training Epoch: 39 | Loss: 0.13586195670934612
Training Epoch: 39 | Loss: 0.136279559036963
Training Epoch: 39 | Loss: 0.13624709930471673
Training Epoch: 39 | Loss: 0.13635237778408357
Training Epoch: 39 | Loss: 0.13632277147140295
Training Epoch: 39 | Loss: 0.13636877567515465
Validation Loss: 0.174721822142601
Validation Loss: 0.1582106131193514
Validation Loss: 0.15614745968869373
Training Epoch: 40 | Loss: 0.12013669684529305
Training Epoch: 40 | Loss: 0.13209482418871163
Training Epoch: 40 | Loss: 0.1310471903963654
Training Epoch: 40 | Loss: 0.13140457515075704
Training Epoch: 40 | Loss: 0.13338948979606838
Training Epoch: 40 | Loss: 0.13381757922080195
Training Epoch: 40 | Loss: 0.13479780202062053
Training Epoch: 40 | Loss: 0.13562371078466787
Training Epoch: 40 | Loss: 0.13536351270910896
Validation Loss: 0.11838603019714355
Validation Loss: 0.1596005661689704
Validation Loss: 0.15711133056945764
==> Save checkpoint ...
Training Epoch: 41 | Loss: 0.15194160863757133
Training Epoch: 41 | Loss: 0.13543357050027882
Training Epoch: 41 | Loss: 0.1346739257653406
Training Epoch: 41 | Loss: 0.13473209078451526
Training Epoch: 41 | Loss: 0.13497781919366553
Training Epoch: 41 | Loss: 0.13496463281658297
Training Epoch: 41 | Loss: 0.1357002083147053
Training Epoch: 41 | Loss: 0.13586556979490924
Training Epoch: 41 | Loss: 0.135459776118993
Validation Loss: 0.17126787453889847
Validation Loss: 0.15775642532043824
Validation Loss: 0.15600556836681284
Training Epoch: 42 | Loss: 0.14004615880548954
Training Epoch: 42 | Loss: 0.13615811591285584
Training Epoch: 42 | Loss: 0.13748657874837147
Training Epoch: 42 | Loss: 0.13562041451145843
Training Epoch: 42 | Loss: 0.13520506280067926
Training Epoch: 42 | Loss: 0.13553559837189919
Training Epoch: 42 | Loss: 0.13593661877948066
Training Epoch: 42 | Loss: 0.13553828641389645
Training Epoch: 42 | Loss: 0.13559328791302874
Validation Loss: 0.11377754434943199
Validation Loss: 0.15877565130185667
Validation Loss: 0.1551260683611406
Training Epoch: 43 | Loss: 0.1373677644878626
Training Epoch: 43 | Loss: 0.13591830975532826
Training Epoch: 43 | Loss: 0.1341278798442649
Training Epoch: 43 | Loss: 0.13614904164859523
Training Epoch: 43 | Loss: 0.13537751347235313
Training Epoch: 43 | Loss: 0.13490857525046418
Training Epoch: 43 | Loss: 0.13506662309907388
Training Epoch: 43 | Loss: 0.13480673744456148
Training Epoch: 43 | Loss: 0.13440374099648325
Validation Loss: 0.18266897276043892
Validation Loss: 0.15362528070696804
Validation Loss: 0.15545869457873687
Training Epoch: 44 | Loss: 0.1655827946960926
Training Epoch: 44 | Loss: 0.13383885748721291
Training Epoch: 44 | Loss: 0.1354383868651826
Training Epoch: 44 | Loss: 0.135665253581374
Training Epoch: 44 | Loss: 0.1353621544674076
Training Epoch: 44 | Loss: 0.13529378123871402
Training Epoch: 44 | Loss: 0.13447988849170445
Training Epoch: 44 | Loss: 0.13455457718891442
Training Epoch: 44 | Loss: 0.13433099319732814
Validation Loss: 0.1199633814394474
Validation Loss: 0.1522047169654086
Validation Loss: 0.15508234385512212
Training Epoch: 45 | Loss: 0.1016813088208437
Training Epoch: 45 | Loss: 0.13548197761399322
Training Epoch: 45 | Loss: 0.13246452249586582
Training Epoch: 45 | Loss: 0.1331684324593342
Training Epoch: 45 | Loss: 0.13304622235598781
Training Epoch: 45 | Loss: 0.13363701260895905
Training Epoch: 45 | Loss: 0.13372622288224253
Training Epoch: 45 | Loss: 0.13349510378040874
Training Epoch: 45 | Loss: 0.13325081580201079
Validation Loss: 0.18766889162361622
Validation Loss: 0.16213456544066124
Validation Loss: 0.15595552954470637
==> Save checkpoint ...
Training Epoch: 46 | Loss: 0.1001485325396061
Training Epoch: 46 | Loss: 0.1320281652272633
Training Epoch: 46 | Loss: 0.13265821776950537
Training Epoch: 46 | Loss: 0.13163342526758035
Training Epoch: 46 | Loss: 0.1317113782383706
Training Epoch: 46 | Loss: 0.132570140413241
Training Epoch: 46 | Loss: 0.1321004900830285
Training Epoch: 46 | Loss: 0.13238011580359155
Training Epoch: 46 | Loss: 0.1327636790090937
Validation Loss: 0.19173771515488625
Validation Loss: 0.15458102435758797
Validation Loss: 0.153134662981159
Training Epoch: 47 | Loss: 0.15653258934617043
Training Epoch: 47 | Loss: 0.1294304019504107
Training Epoch: 47 | Loss: 0.13065201996719067
Training Epoch: 47 | Loss: 0.13029246379704867
Training Epoch: 47 | Loss: 0.13103944137738902
Training Epoch: 47 | Loss: 0.13147360801756264
Training Epoch: 47 | Loss: 0.13100280642149947
Training Epoch: 47 | Loss: 0.1310941495211229
Training Epoch: 47 | Loss: 0.13101237383990846
Validation Loss: 0.13723058626055717
Validation Loss: 0.15431909103610433
Validation Loss: 0.1541624502255697
Training Epoch: 48 | Loss: 0.15290119219571352
Training Epoch: 48 | Loss: 0.12948833086002287
Training Epoch: 48 | Loss: 0.13194830720289727
Training Epoch: 48 | Loss: 0.13035199688778465
Training Epoch: 48 | Loss: 0.13170468937861093
Training Epoch: 48 | Loss: 0.13136387930275825
Training Epoch: 48 | Loss: 0.13094461931864082
Training Epoch: 48 | Loss: 0.13189542545111965
Training Epoch: 48 | Loss: 0.13213370245743017
Validation Loss: 0.15939001739025116
Validation Loss: 0.15389733819641394
Validation Loss: 0.15335870068056964
Training Epoch: 49 | Loss: 0.13691089488565922
Training Epoch: 49 | Loss: 0.13416821638693904
Training Epoch: 49 | Loss: 0.13261513946235032
Training Epoch: 49 | Loss: 0.13064403909929964
Training Epoch: 49 | Loss: 0.13070844519799785
Training Epoch: 49 | Loss: 0.13096214360752684
Training Epoch: 49 | Loss: 0.13082386001467258
Training Epoch: 49 | Loss: 0.13038697931915522
Training Epoch: 49 | Loss: 0.13071093950932205
Validation Loss: 0.178191052749753
Validation Loss: 0.1522629468766327
Validation Loss: 0.15497970611172085
Training Epoch: 50 | Loss: 0.11077447608113289
Training Epoch: 50 | Loss: 0.13017049287953
Training Epoch: 50 | Loss: 0.13063099772775943
Training Epoch: 50 | Loss: 0.13107738133049585
Training Epoch: 50 | Loss: 0.13135379415813675
Training Epoch: 50 | Loss: 0.13047613693352886
Training Epoch: 50 | Loss: 0.1303818926103599
Training Epoch: 50 | Loss: 0.13002324445882596
Training Epoch: 50 | Loss: 0.12991924072091424
Validation Loss: 0.15091070346534252
Validation Loss: 0.15607136130646462
Validation Loss: 0.15258882127804171
==> Save checkpoint ...
Training Epoch: 51 | Loss: 0.14594712108373642
Training Epoch: 51 | Loss: 0.1248013979091429
Training Epoch: 51 | Loss: 0.12692755241808829
Training Epoch: 51 | Loss: 0.12831652512699612
Training Epoch: 51 | Loss: 0.1287766135300038
Training Epoch: 51 | Loss: 0.1293556450098426
Training Epoch: 51 | Loss: 0.12954872124169808
Training Epoch: 51 | Loss: 0.12916854118925655
Training Epoch: 51 | Loss: 0.12901298613165424
Validation Loss: 0.12869765795767307
Validation Loss: 0.1487093988695357
Validation Loss: 0.15183630250209007
Training Epoch: 52 | Loss: 0.1026082169264555
Training Epoch: 52 | Loss: 0.1277203207356062
Training Epoch: 52 | Loss: 0.12851646512431736
Training Epoch: 52 | Loss: 0.12691421936145456
Training Epoch: 52 | Loss: 0.12800767434821314
Training Epoch: 52 | Loss: 0.12778840859619503
Training Epoch: 52 | Loss: 0.12744722976047118
Training Epoch: 52 | Loss: 0.12811283808807375
Training Epoch: 52 | Loss: 0.12794103360875567
Validation Loss: 0.17501967959105968
Validation Loss: 0.14930496980795765
Validation Loss: 0.1512175345079816
Training Epoch: 53 | Loss: 0.12318732403218746
Training Epoch: 53 | Loss: 0.12571313781755986
Training Epoch: 53 | Loss: 0.12531027791382215
Training Epoch: 53 | Loss: 0.1280118446159749
Training Epoch: 53 | Loss: 0.12885183682827803
Training Epoch: 53 | Loss: 0.12872096125088528
Training Epoch: 53 | Loss: 0.1282168173908369
Training Epoch: 53 | Loss: 0.1281417754562452
Training Epoch: 53 | Loss: 0.12845327450106672
Validation Loss: 0.1680463943630457
Validation Loss: 0.1487993952993414
Validation Loss: 0.15177887790739092
Training Epoch: 54 | Loss: 0.157408706843853
Training Epoch: 54 | Loss: 0.12885089158679885
Training Epoch: 54 | Loss: 0.1292324834758073
Training Epoch: 54 | Loss: 0.1292016566406156
Training Epoch: 54 | Loss: 0.12911464878011894
Training Epoch: 54 | Loss: 0.12846023005483573
Training Epoch: 54 | Loss: 0.1282795159333694
Training Epoch: 54 | Loss: 0.12809466579185
Training Epoch: 54 | Loss: 0.1279968148801992
Validation Loss: 0.1426497958600521
Validation Loss: 0.15228406022680868
Validation Loss: 0.15141499909892012
Training Epoch: 55 | Loss: 0.12593234796077013
Training Epoch: 55 | Loss: 0.12468170839371068
Training Epoch: 55 | Loss: 0.12628654101446493
Training Epoch: 55 | Loss: 0.1270362139954866
Training Epoch: 55 | Loss: 0.1265590923148535
Training Epoch: 55 | Loss: 0.1272185357369319
Training Epoch: 55 | Loss: 0.12739625376666247
Training Epoch: 55 | Loss: 0.12763433552050854
Training Epoch: 55 | Loss: 0.12793645397707654
Validation Loss: 0.13698064535856247
Validation Loss: 0.14954291616962984
Validation Loss: 0.15078720647322746
==> Save checkpoint ...
Training Epoch: 56 | Loss: 0.1395331174135208
Training Epoch: 56 | Loss: 0.12987285797106157
Training Epoch: 56 | Loss: 0.1290082192738814
Training Epoch: 56 | Loss: 0.13017010475753302
Training Epoch: 56 | Loss: 0.12929143957872677
Training Epoch: 56 | Loss: 0.12932097234348663
Training Epoch: 56 | Loss: 0.12840478749598644
Training Epoch: 56 | Loss: 0.12794643921044238
Training Epoch: 56 | Loss: 0.12787054020030128
Validation Loss: 0.10640829429030418
Validation Loss: 0.1502487006606442
Validation Loss: 0.1526404997259749
Training Epoch: 57 | Loss: 0.1267650742083788
Training Epoch: 57 | Loss: 0.12752719440193164
Training Epoch: 57 | Loss: 0.12863848172359874
Training Epoch: 57 | Loss: 0.12965188201275527
Training Epoch: 57 | Loss: 0.1293217530276654
Training Epoch: 57 | Loss: 0.12848410645533287
Training Epoch: 57 | Loss: 0.128566529316682
Training Epoch: 57 | Loss: 0.128588631883338
Training Epoch: 57 | Loss: 0.12833783519890313
Validation Loss: 0.17916010320186615
Validation Loss: 0.14821014402083832
Validation Loss: 0.14964410310045848
Training Epoch: 58 | Loss: 0.15045594237744808
Training Epoch: 58 | Loss: 0.12798642137521268
Training Epoch: 58 | Loss: 0.12707230816850096
Training Epoch: 58 | Loss: 0.1268813316339002
Training Epoch: 58 | Loss: 0.12773585151486155
Training Epoch: 58 | Loss: 0.12731611599193926
Training Epoch: 58 | Loss: 0.12746794341820078
Training Epoch: 58 | Loss: 0.12768448533134244
Training Epoch: 58 | Loss: 0.1281722142726192
Validation Loss: 0.13749243319034576
Validation Loss: 0.1494210528597088
Validation Loss: 0.14962688252904374
Training Epoch: 59 | Loss: 0.11740010604262352
Training Epoch: 59 | Loss: 0.1296359763508386
Training Epoch: 59 | Loss: 0.12842194597463852
Training Epoch: 59 | Loss: 0.12853432792638028
Training Epoch: 59 | Loss: 0.1282140888677831
Training Epoch: 59 | Loss: 0.12872188173963162
Training Epoch: 59 | Loss: 0.12842111922676572
Training Epoch: 59 | Loss: 0.12838443320712206
Training Epoch: 59 | Loss: 0.12833185713987022
Validation Loss: 0.07724031619727612
Validation Loss: 0.15177084505558014
Validation Loss: 0.15008846062249434
Training Epoch: 60 | Loss: 0.12128831539303064
Training Epoch: 60 | Loss: 0.13097195287631586
Training Epoch: 60 | Loss: 0.12661357526088235
Training Epoch: 60 | Loss: 0.12686910008101962
Training Epoch: 60 | Loss: 0.12787191438583884
Training Epoch: 60 | Loss: 0.12746422398582608
Training Epoch: 60 | Loss: 0.12757100009618205
Training Epoch: 60 | Loss: 0.1279042687243151
Training Epoch: 60 | Loss: 0.1278674630552865
Validation Loss: 0.16210730001330376
Validation Loss: 0.14910003696891047
Validation Loss: 0.15146952061751737
==> Save checkpoint ...
Training Epoch: 61 | Loss: 0.1009722352027893
Training Epoch: 61 | Loss: 0.12856135864991067
Training Epoch: 61 | Loss: 0.1271134881506586
Training Epoch: 61 | Loss: 0.12666137749793324
Training Epoch: 61 | Loss: 0.1271813815515349
Training Epoch: 61 | Loss: 0.12672419499182058
Training Epoch: 61 | Loss: 0.12749122952834738
Training Epoch: 61 | Loss: 0.1270619197403175
Training Epoch: 61 | Loss: 0.1270171895410433
Validation Loss: 0.22367330081760883
Validation Loss: 0.1555609964571967
Validation Loss: 0.1511726364845512
Training Epoch: 62 | Loss: 0.0968262804672122
Training Epoch: 62 | Loss: 0.1288273635108282
Training Epoch: 62 | Loss: 0.13015623500950596
Training Epoch: 62 | Loss: 0.12975603003252878
Training Epoch: 62 | Loss: 0.1282932187507203
Training Epoch: 62 | Loss: 0.12777905889643643
Training Epoch: 62 | Loss: 0.12781590865010678
Training Epoch: 62 | Loss: 0.12710252948392306
Training Epoch: 62 | Loss: 0.1273035613962969
Validation Loss: 0.20092477276921272
Validation Loss: 0.14634622573520584
Validation Loss: 0.14958436591133698
Training Epoch: 63 | Loss: 0.1328619448468089
Training Epoch: 63 | Loss: 0.1250622231734566
Training Epoch: 63 | Loss: 0.1278040915153068
Training Epoch: 63 | Loss: 0.12809993361008426
Training Epoch: 63 | Loss: 0.1271036160282995
Training Epoch: 63 | Loss: 0.1269576003023994
Training Epoch: 63 | Loss: 0.12764415520238698
Training Epoch: 63 | Loss: 0.12756464762291664
Training Epoch: 63 | Loss: 0.1277939265014862
Validation Loss: 0.11774739250540733
Validation Loss: 0.14967101865844562
Validation Loss: 0.14908855366265744
Training Epoch: 64 | Loss: 0.13459014892578125
Training Epoch: 64 | Loss: 0.12855691368700845
Training Epoch: 64 | Loss: 0.12647176754489467
Training Epoch: 64 | Loss: 0.1273612398313872
Training Epoch: 64 | Loss: 0.12848778431561597
Training Epoch: 64 | Loss: 0.1275263849976742
Training Epoch: 64 | Loss: 0.12700846751907008
Training Epoch: 64 | Loss: 0.12724268864361987
Training Epoch: 64 | Loss: 0.12745972435640057
Validation Loss: 0.12036133836954832
Validation Loss: 0.15092666026684318
Validation Loss: 0.15020315876851478
Training Epoch: 65 | Loss: 0.12018739990890026
Training Epoch: 65 | Loss: 0.11983188196076172
Training Epoch: 65 | Loss: 0.12330057179851837
Training Epoch: 65 | Loss: 0.12433950053342306
Training Epoch: 65 | Loss: 0.1251247344698822
Training Epoch: 65 | Loss: 0.12581501343378615
Training Epoch: 65 | Loss: 0.1263137504128358
Training Epoch: 65 | Loss: 0.12679954606989205
Training Epoch: 65 | Loss: 0.12687477676288578
Validation Loss: 0.1277560368180275
Validation Loss: 0.1495156640758609
Validation Loss: 0.15101347709843768
==> Save checkpoint ...
Training Epoch: 66 | Loss: 0.12618125975131989
Training Epoch: 66 | Loss: 0.12914780272853257
Training Epoch: 66 | Loss: 0.12789692004113945
Training Epoch: 66 | Loss: 0.12956510222458167
Training Epoch: 66 | Loss: 0.128554311645986
Training Epoch: 66 | Loss: 0.1279715334665692
Training Epoch: 66 | Loss: 0.12754602825575928
Training Epoch: 66 | Loss: 0.12753502789821033
Training Epoch: 66 | Loss: 0.1278280612527766
Validation Loss: 0.13987663388252258
Validation Loss: 0.1486371211744476
Validation Loss: 0.15112651234483393
Training Epoch: 67 | Loss: 0.14122574217617512
Training Epoch: 67 | Loss: 0.12878868594249288
Training Epoch: 67 | Loss: 0.12703893381397968
Training Epoch: 67 | Loss: 0.1269984537135327
Training Epoch: 67 | Loss: 0.12724814419065628
Training Epoch: 67 | Loss: 0.12717476285373172
Training Epoch: 67 | Loss: 0.12725587485872594
Training Epoch: 67 | Loss: 0.1274904737537315
Training Epoch: 67 | Loss: 0.1274024349737322
Validation Loss: 0.15735146217048168
Validation Loss: 0.14830515268120434
Validation Loss: 0.15053790155798197
Training Epoch: 68 | Loss: 0.12716376222670078
Training Epoch: 68 | Loss: 0.12736591559346064
Training Epoch: 68 | Loss: 0.12865772544040313
Training Epoch: 68 | Loss: 0.12763967021581937
Training Epoch: 68 | Loss: 0.12744149503685961
Training Epoch: 68 | Loss: 0.12861720120493406
Training Epoch: 68 | Loss: 0.12832985722774426
Training Epoch: 68 | Loss: 0.12743003510959053
Training Epoch: 68 | Loss: 0.12805094794903912
Validation Loss: 0.10288998484611511
Validation Loss: 0.15054282096979937
Validation Loss: 0.1506179288187208
Training Epoch: 69 | Loss: 0.11506218649446964
Training Epoch: 69 | Loss: 0.1267571442133498
Training Epoch: 69 | Loss: 0.12710285035948923
Training Epoch: 69 | Loss: 0.1268626664072102
Training Epoch: 69 | Loss: 0.12648816769269117
Training Epoch: 69 | Loss: 0.12653037669609288
Training Epoch: 69 | Loss: 0.12622162846913454
Training Epoch: 69 | Loss: 0.12664508748900533
Training Epoch: 69 | Loss: 0.1264177451454354
Validation Loss: 0.19796617329120636
Validation Loss: 0.15189363265244088
Validation Loss: 0.1508083722290041
Training Epoch: 70 | Loss: 0.10773616470396519
Training Epoch: 70 | Loss: 0.128474870770432
Training Epoch: 70 | Loss: 0.12983429949934505
Training Epoch: 70 | Loss: 0.12926607564295892
Training Epoch: 70 | Loss: 0.12761061936222073
Training Epoch: 70 | Loss: 0.12637498842537434
Training Epoch: 70 | Loss: 0.12598427194178402
Training Epoch: 70 | Loss: 0.12655419371517299
Training Epoch: 70 | Loss: 0.12678521286961292
Validation Loss: 0.091615816578269
Validation Loss: 0.1482983855069569
Validation Loss: 0.14903987743961278
==> Save checkpoint ...
Training Epoch: 71 | Loss: 0.119870625436306
Training Epoch: 71 | Loss: 0.1270487995560069
Training Epoch: 71 | Loss: 0.12780435881878607
Training Epoch: 71 | Loss: 0.12779726667416302
Training Epoch: 71 | Loss: 0.1272397001315735
Training Epoch: 71 | Loss: 0.12743296084680153
Training Epoch: 71 | Loss: 0.1277789340149952
Training Epoch: 71 | Loss: 0.12802251811589668
Training Epoch: 71 | Loss: 0.12723320328947924
Validation Loss: 0.13389458786696196
Validation Loss: 0.15103914619119155
Validation Loss: 0.1495876892455923
Training Epoch: 72 | Loss: 0.11554756946861744
Training Epoch: 72 | Loss: 0.12494182554396367
Training Epoch: 72 | Loss: 0.12652485333475752
Training Epoch: 72 | Loss: 0.126107120787434
Training Epoch: 72 | Loss: 0.12503956042304448
Training Epoch: 72 | Loss: 0.1259175871114085
Training Epoch: 72 | Loss: 0.1261279210014371
Training Epoch: 72 | Loss: 0.12645816733228965
Training Epoch: 72 | Loss: 0.1266008523094018
Validation Loss: 0.13717797584831715
Validation Loss: 0.15016679458393908
Validation Loss: 0.15040686880400525
Training Epoch: 73 | Loss: 0.11150839924812317
Training Epoch: 73 | Loss: 0.12622973509132862
Training Epoch: 73 | Loss: 0.12582930520788502
Training Epoch: 73 | Loss: 0.12695011777797113
Training Epoch: 73 | Loss: 0.12597413165053525
Training Epoch: 73 | Loss: 0.12630729651089737
Training Epoch: 73 | Loss: 0.12559368779197855
Training Epoch: 73 | Loss: 0.1260849476968617
Training Epoch: 73 | Loss: 0.12601767399100663
Validation Loss: 0.16582746803760529
Validation Loss: 0.15618922199021176
Validation Loss: 0.15036561577789373
Training Epoch: 74 | Loss: 0.11187710613012314
Training Epoch: 74 | Loss: 0.12561039500270443
Training Epoch: 74 | Loss: 0.12481983703797433
Training Epoch: 74 | Loss: 0.12480250020137063
Training Epoch: 74 | Loss: 0.1253579344990619
Training Epoch: 74 | Loss: 0.12535231386101295
Training Epoch: 74 | Loss: 0.12617940784504827
Training Epoch: 74 | Loss: 0.12600648369289372
Training Epoch: 74 | Loss: 0.12668255545412854
Validation Loss: 0.1768135130405426
Validation Loss: 0.15619683119993988
Validation Loss: 0.15040244555462207
Training Epoch: 75 | Loss: 0.14798284880816936
Training Epoch: 75 | Loss: 0.13024941934180437
Training Epoch: 75 | Loss: 0.12837038023063718
Training Epoch: 75 | Loss: 0.12659150076584513
Training Epoch: 75 | Loss: 0.12679687505959536
Training Epoch: 75 | Loss: 0.1277771553149198
Training Epoch: 75 | Loss: 0.1268029183950492
Training Epoch: 75 | Loss: 0.12761387789818576
Training Epoch: 75 | Loss: 0.12695913468679407
Validation Loss: 0.19182095490396023
Validation Loss: 0.14630295800063575
Validation Loss: 0.15051557115784184
==> Save checkpoint ...
Training Epoch: 76 | Loss: 0.1368694957345724
Training Epoch: 76 | Loss: 0.1234776968330071
Training Epoch: 76 | Loss: 0.12452868155588336
Training Epoch: 76 | Loss: 0.12583596761991267
Training Epoch: 76 | Loss: 0.12565489540787
Training Epoch: 76 | Loss: 0.12615287114128085
Training Epoch: 76 | Loss: 0.1266467268137066
Training Epoch: 76 | Loss: 0.12699277936806438
Training Epoch: 76 | Loss: 0.12717556874134048
Validation Loss: 0.12894094735383987
Validation Loss: 0.1513928957375707
Validation Loss: 0.15112463699941017
Training Epoch: 77 | Loss: 0.17138062790036201
Training Epoch: 77 | Loss: 0.13003266067013586
Training Epoch: 77 | Loss: 0.12741345893220968
Training Epoch: 77 | Loss: 0.12730262023505085
Training Epoch: 77 | Loss: 0.1280169491528387
Training Epoch: 77 | Loss: 0.12799757826392968
Training Epoch: 77 | Loss: 0.12796731542148154
Training Epoch: 77 | Loss: 0.12752940581803252
Training Epoch: 77 | Loss: 0.12767637710606727
Validation Loss: 0.13769418373703957
Validation Loss: 0.15339837622561373
Validation Loss: 0.15109787721057139
Training Epoch: 78 | Loss: 0.10369015857577324
Training Epoch: 78 | Loss: 0.12977484022128846
Training Epoch: 78 | Loss: 0.1274376038383155
Training Epoch: 78 | Loss: 0.12747820224334483
Training Epoch: 78 | Loss: 0.12801938025661724
Training Epoch: 78 | Loss: 0.1273163602363965
Training Epoch: 78 | Loss: 0.12742579778030316
Training Epoch: 78 | Loss: 0.12668280426835304
Training Epoch: 78 | Loss: 0.12663664867678254
Validation Loss: 0.1354389674961567
Validation Loss: 0.1471553150435338
Validation Loss: 0.14979947082216466
Training Epoch: 79 | Loss: 0.1179228201508522
Training Epoch: 79 | Loss: 0.12421518261774932
Training Epoch: 79 | Loss: 0.12445577786336491
Training Epoch: 79 | Loss: 0.1245679868651288
Training Epoch: 79 | Loss: 0.12524385885579367
Training Epoch: 79 | Loss: 0.12624260584569114
Training Epoch: 79 | Loss: 0.126193331149306
Training Epoch: 79 | Loss: 0.12636925383921335
Training Epoch: 79 | Loss: 0.12646136710655118
Validation Loss: 0.12799897138029337
Validation Loss: 0.14879305375795257
Validation Loss: 0.14951159322949414
==> Save checkpoint ...
###Markdown
0822 run: use all lens images, both from the old training set and from the BASS results found by the previous network; train: 90%, validation: 10%. For consistency, we should use the same special hyper-parameters as network model 049:
```
python lens/main.py --lr 0.00001 --weight_decay 0.5 --name lens_049 --epochs 50 --test_batch_size 1024 --batch_size 256
python lens/main.py --lr 0.000001 --weight_decay 0.5 --name lens_049 --epochs 50 --test_batch_size 1024 --batch_size 256 --epoch 50
```
and select the model at the 45th epoch iteration.
lens_100
###Code
%run lens/main.py --lr 0.00001 --weight_decay 0.5 --name lens_100 --epochs 50 --test_batch_size 1024 --batch_size 256
###Output
mkdir: /data/storage1/LensFinder/log/lens_100
mkdir: /data/storage1/LensFinder/model/lens_100
Let's use 4 GPUs!
1 0.6956857227545166
2 0.690484913367959
3 0.6876441475403544
4 0.6842397220321662
5 0.6826491088119477
6 0.6771980814202122
7 0.6716798689572957
8 0.6622002286771101
9 0.649195518930581
10 0.6264870542067068
11 0.6030018420217128
12 0.585555140360896
13 0.5729263655665747
14 0.5634049096894184
15 0.5454976084199074
16 0.5249520684160138
17 0.5013723168136868
18 0.49030732707142427
19 0.46776374734440007
20 0.44576272607824446
21 0.4327242394059022
22 0.4031938747496144
23 0.38362634880087476
24 0.3630278811768995
25 0.36417965978713723
26 0.3855996144081664
27 0.3497840656203224
28 0.2990858446898054
29 0.3056893090951781
30 0.2883673225130354
31 0.2593200626770082
32 0.2627677348383692
33 0.2676120036608213
34 0.2317871159375316
35 0.29563738978385007
36 0.2364082736083669
37 0.24122603298552395
38 0.5019560058051545
39 0.24516494721962662
40 0.2818814788001154
41 0.27143810985465644
42 0.31092622933507025
43 0.22343169465954915
44 0.42545755072073504
45 0.205997361636265
46 0.19483917125175068
47 0.18581390159384684
48 0.17941753804224966
49 0.22625953931685763
50 0.2045812893223453
###Markdown
lens_103 (`%run` does not work well here; use `!python` instead)
###Code
!python lens/main.py --lr 0.00001 --weight_decay 0.5 --name lens_103 --epochs 50 --test_batch_size 1024 --batch_size 256
!python lens/main.py --lr 0.000001 --weight_decay 0.5 --name lens_103 --epochs 50 --test_batch_size 1024 --batch_size 256 --epoch 50
!python lens/main.py --name lens_103 --batch_size 256 --epoch 40 --mode eval
!python lens/main.py --name lens_103 --batch_size 256 --epoch 45 --mode eval
!python lens/main.py --name lens_103 --batch_size 256 --epoch 50 --mode eval
!python lens/main.py --name lens_103 --batch_size 256 --epoch 60 --mode eval
a = 3
###Output
_____no_output_____
###Markdown
path for Dataset
###Code
seed = 42
random_seed(seed,True)
path = Path('./')
path_img = Path('./img(random)_10000_3ch')
path_lbl = Path('./gt(random)_10000_3ch')
fnames = get_image_files(path_img)
lbl_names = get_image_files(path_lbl)
print(f"fnames : {fnames[:3]}, label names : {lbl_names[:3]}")
###Output
_____no_output_____
###Markdown
Checking Data
###Code
img_f = fnames[0]
img = open_image(img_f)
img.show(figsize=(5,5), cmap='gray')
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
mask = open_mask(get_y_fn(img_f))
mask.show(figsize=(5,5), alpha=1)
src_size = np.array(mask.shape[1:])
print(f"image size : {src_size}")
###Output
_____no_output_____
###Markdown
Label Codes
###Code
codes = np.array(['Void', 'Fat', 'Muscle', 'Visceral_fat'], dtype=str); codes
name2id = {v:k for k,v in enumerate(codes)}
void_code = name2id['Void']
###Output
_____no_output_____
###Markdown
Define Noise for fastai
###Code
rn = TfmPixel(r_noise)
tfms = get_transforms(flip_vert=True, max_rotate=180.0, max_zoom=1.5, max_warp = 0.2 )
new_tfms = (tfms[0] + [rn()], tfms[1])
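# assumption: index 7 is the r_noise tfm appended above (the default fastai train tfms occupy indices 0-6);
# apply it to the images only (not to the masks), with probability 0.5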
new_tfms[0][7].use_on_y = False
new_tfms[0][7].p = 0.5
size = src_size
###Output
_____no_output_____
###Markdown
Checking GPU
###Code
free = gpu_mem_get_free_no_cache()
# the max size of bs depends on the available GPU RAM
if free > 8200: bs=4
else: bs=2
print(f"using bs={bs}, have {free}MB of GPU RAM free")
###Output
_____no_output_____
###Markdown
Define DataLoaders
###Code
src = (SegmentationItemList.from_folder(path_img)
.split_by_rand_pct(valid_pct=0.1)
.label_from_func(get_y_fn, classes=codes))
data = (src.transform(new_tfms, size=size, tfm_y=True)
.databunch(bs=bs, num_workers=0)
.normalize(imagenet_stats))
###Output
_____no_output_____
###Markdown
Training Models
###Code
loss_func = CombinedLoss
metrics = [ dice,acc_camvid ]
wd = 1e-2
learn = unet_learner(data, models.resnet34, loss_func = loss_func(), metrics=metrics)
lr_find(learn)
learn.recorder.plot()
lr = 3e-4
learn.summary()
learn.fit_one_cycle(10, lr)
###Output
_____no_output_____
###Markdown
Save Models
###Code
learn.save(f"path - ")
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
import torchvision
from torchvision import transforms, datasets, models
import torch
from PIL import Image
import random
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import os
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
%reload_ext autoreload
%autoreload 2
def generate_box(obj):
xmin = int(obj.find('xmin').text)
ymin = int(obj.find('ymin').text)
xmax = int(obj.find('xmax').text)
ymax = int(obj.find('ymax').text)
return [xmin, ymin, xmax, ymax]
def generate_label(obj):
if obj.find('name').text == "with_mask":
return 1
elif obj.find('name').text == "without_mask":
return 2
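    # any other name in this dataset is the third class, "mask_weared_incorrect"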
return 3
def generate_target(file):
with open(file) as f:
data = f.read()
soup = BeautifulSoup(data, 'xml')
objects = soup.find_all('object')
num_objs = len(objects)
boxes = []
labels = []
for i in objects:
boxes.append(generate_box(i))
labels.append(generate_label(i))
boxes = torch.as_tensor(boxes, dtype=torch.float32)
labels = torch.as_tensor(labels, dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
return target
class MaskDataset(object):
def __init__(self, imgs, labels, transforms, base_path):
self.imgs = imgs
self.labels = labels
self.transforms = transforms
self.base_path = base_path
def __getitem__(self, idx):
img_path = self.base_path + "/images/"+ self.imgs[idx]
label_path = self.base_path + "/annotations/" + self.labels[idx]
img = Image.open(img_path).convert("RGB")
target = generate_target(label_path)
if self.transforms is not None:
img = self.transforms(img)
return img, target
def __len__(self):
return len(self.imgs)
###Output
_____no_output_____
###Markdown
Modify to your own base path:
```
base path
├── images
│   ├── maksssksksss0.png
│   ├── maksssksksss1.png
│   └── ...
└── annotations
    ├── maksssksksss0.xml
    ├── maksssksksss1.xml
    └── ...
```
###Code
base_path = "/your/own/base/path/"
imgs = list(sorted(os.listdir(base_path+"images")))
labels = list(sorted(os.listdir(base_path + "annotations")))
###Output
_____no_output_____
###Markdown
split balanced train/validation set
###Code
from collections import defaultdict
def return_idx(lbl, val_count = 2):
result = defaultdict(list)
for idx,lb in enumerate(lbl):
anp = base_path+"annotations/"+lb
target = generate_target(anp)
label = str(list(set(target["labels"].cpu().numpy())))
if label in result:
if len(result[label]) == val_count and label !='[1]':
pass
else:
result[label].append(idx)
else:
result[label].append(idx)
return result
a = return_idx(labels)
class_1 = a['[1]'][2:]
sampleList = random.sample(class_1, 500)
val_list = []
for aa in a.values():
val_list.extend(aa)
alls = list(range(0,len(imgs)))
train_list = [x for x in alls if x not in val_list and x not in sampleList]
train_transform = transforms.Compose([
transforms.ToTensor()
])
valid_transform = transforms.Compose([
transforms.ToTensor()
])
def collate_fn(batch):
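    # keep images and targets as tuples instead of stacking them: the number of
    # boxes/labels varies per image, so the default collate (tensor stacking) would fail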
return tuple(zip(*batch))
train_dataset = MaskDataset([imgs[i] for i in train_list], [labels[i] for i in train_list],train_transform,base_path)
train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
valid_dataset = MaskDataset([imgs[i] for i in val_list], [labels[i] for i in val_list],valid_transform,base_path)
valid_data_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=4, shuffle=False, collate_fn=collate_fn)
# test_dataset = MaskDataset(imgs[-5:], labels[-5:],valid_transform)
# test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=2, shuffle=False, collate_fn=collate_fn)
len(train_dataset),len(valid_dataset)
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Model Training
###Code
def get_model_instance_segmentation(num_classes):
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
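    # num_classes + 1 because torchvision detection heads reserve class index 0 for the background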
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes+1)
return model
model = get_model_instance_segmentation(3)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
import time
from tqdm.notebook import tqdm
import matplotlib.patches as mpatches
num_epochs = 25
model.to(device)
# parameters
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.02, momentum=0.9)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
model_name = "fastrcnn_res50_epoch25"
resume = 'fastrcnn_res50_epoch25.pth'
# if you resume
if os.path.isfile(resume):
print("=> loading checkpoint '{}'".format(model_name))
checkpoint = torch.load(resume)
start_epoch = checkpoint['epoch']
lr_scheduler.load_state_dict(checkpoint['scheduler'])
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})" .format(model_name, start_epoch))
else:
print("=> no checkpoint found at '{}'".format(model_name))
total_train_loss = []
total_valid_loss = []
start_time = time.time()
for epoch in range(num_epochs):
print(f'Epoch :{epoch + 1}')
train_loss = []
valid_loss = []
for imgs, annotations in tqdm(train_data_loader):
model.train()
imgs = list(img.to(device) for img in imgs)
annotations = [{k: v.to(device) for k, v in t.items()} for t in annotations]
loss_dict = model(imgs, annotations)
losses = sum(loss for loss in loss_dict.values())
optimizer.zero_grad()
losses.backward()
optimizer.step()
train_loss.append(losses.item())
epoch_train_loss = np.mean(train_loss)
total_train_loss.append(epoch_train_loss)
print(f'Epoch train loss is {epoch_train_loss}')
for imgs, annotations in tqdm(valid_data_loader):
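        # note: the model is still in train() mode here, which is why model(imgs, annotations)
        # returns the loss dict (torchvision detection models return predictions instead when
        # in eval mode); torch.no_grad() keeps this validation pass gradient-free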
with torch.no_grad():
imgs = list(img.to(device) for img in imgs)
annotations = [{k: v.to(device) for k, v in t.items()} for t in annotations]
loss_dict = model(imgs, annotations)
losses = sum(loss for loss in loss_dict.values())
valid_loss.append(losses.item())
epoch_valid_loss = np.mean(valid_loss)
total_valid_loss.append(epoch_valid_loss)
print(f'Epoch valid loss is {epoch_valid_loss}')
lr_scheduler.step()
plt.plot(range(len(total_train_loss)), total_train_loss, 'b', range(len(total_valid_loss)), total_valid_loss,'r')
red_patch = mpatches.Patch(color='red', label='Validation')
blue_patch = mpatches.Patch(color='blue', label='Training')
plt.legend(handles=[red_patch, blue_patch])
plt.show()
time_elapsed = time.time() - start_time
print('{:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
model_name = "fastrcnn_res50_epoch25"
filename = f'{model_name}.pth'
state={
'epoch': num_epochs,
'state_dict': model.state_dict(),
'optimizer' : optimizer.state_dict(),
'scheduler': lr_scheduler.state_dict(),
}
torch.save(state, filename)
torch.cuda.empty_cache()
###Output
_____no_output_____
###Markdown
Plot image
###Code
# use this function for plot image with annotation
def plot_image(img_tensor, annotation, mode = "pred"):
fig,ax = plt.subplots(1)
img = img_tensor.cpu().data
if mode=="pred":
mask=annotation["scores"]>0.5
else:
mask=annotation["labels"]>0
# Display the image
ax.imshow(img.permute(1, 2, 0))
for (box,label) in zip(annotation["boxes"][mask],annotation["labels"][mask]):
xmin, ymin, xmax, ymax = box
if label==1:
rect = patches.Rectangle((xmin,ymin),(xmax-xmin),(ymax-ymin),linewidth=1,edgecolor='b',facecolor='none')
print("with_mask")
elif label==2:
rect = patches.Rectangle((xmin,ymin),(xmax-xmin),(ymax-ymin),linewidth=1,edgecolor='g',facecolor='none')
print("without_mask")
else:
rect = patches.Rectangle((xmin,ymin),(xmax-xmin),(ymax-ymin),linewidth=1,edgecolor='r',facecolor='none')
print("mask_weared_incorrect")
ax.add_patch(rect)
ax.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
For test-set inference (note: this uses `test_data_loader`, which is defined in the commented-out cell above; uncomment it first)
###Code
iterations=2
dataloader_iterator = iter(test_data_loader)
for i in range(iterations):
try:
imgs, annotations = next(dataloader_iterator)
except:
dataloader_iterator = iter(test_data_loader)
imgs, annotations = next(dataloader_iterator)
imgs = list(img.to(device) for img in imgs)
annotations = [{k: v.to(device) for k, v in t.items()} for t in annotations]
a = imgs[0].cpu().numpy()
b = np.transpose(a,(1,2,0))
plt.imshow(b)
model.eval()
preds = model(list(imgs[0][None, :, :]))
preds
n = 0
print("Prediction")
plot_image(imgs[n], preds[n])
print("Target")
plot_image(imgs[n], annotations[n], mode="target")
###Output
_____no_output_____
###Markdown
This file has code to train the model and evaluate it. First, read the backed-up data files into numpy arrays.
###Code
import pickle
import numpy as np
x_train, y_train = pickle.load(open('../outputs/train_xy_no_sampling_stdScale.pk', 'rb'))
x_cv, y_cv = pickle.load(open('../outputs/val_xy_no_sampling_stdScale.pk', 'rb'))
x_test, y_test = pickle.load(open('../outputs/test_xy_no_sampling_stdScale.pk', 'rb'))
print('shapes of train, validation, test data', x_train.shape, y_train.shape, x_cv.shape, y_cv.shape, x_test.shape, y_test.shape)
values, counts = np.unique(y_train, return_counts=True)
num_features = x_train.shape[1]
print('Frequency of distance values before sampling', values, counts)
###Output
shapes of train, validation, test data (562772, 128) (562772,) (140693, 128) (140693,) (234489, 128) (234489,)
Frequency of distance values before sampling [1 2 3 4 5 6 7] [ 6030 187348 303509 59309 6048 506 22]
###Markdown
Since the data is massively imbalanced, let's oversample the minority targets and undersample the majority classes. The fraction for over/under-sampling (0.7) was chosen by intuition and gut feeling; feel free to play around. Concretely, the most frequent class (distance 3) is undersampled to 70% of its original count, class 2 is undersampled to 70% of that, and every minority class is then oversampled up to the same level (~149k samples each).
###Code
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import ClusterCentroids, TomekLinks, RandomUnderSampler
from imblearn.over_sampling import KMeansSMOTE, SMOTE
seed_random = 9999
# max_idx = np.argmax(counts)
# max_value = counts[max_idx]
# majority_class = values[max_idx]
x = int(counts[2]*0.7)
y = int(0.7 * x)
undersample_dict = {2:y, 3:x}
under_sampler = RandomUnderSampler(sampling_strategy=undersample_dict, random_state=seed_random) # n_jobs=15,
x_train, y_train = under_sampler.fit_resample(x_train, y_train.astype(np.int))
print('Frequency of distance values after undersampling', np.unique(y_train, return_counts=True))
minority_samples = int(0.7*x)
oversample_dict = {1:minority_samples, 4:minority_samples, 5:minority_samples, 6:minority_samples, 7:minority_samples}
over_sampler = RandomOverSampler(sampling_strategy=oversample_dict, random_state=seed_random) # ,n_jobs=15, k_neighbors= 5
x_train, y_train = over_sampler.fit_resample(x_train, y_train.astype(np.int))
print('Frequency of distance values after oversampling', np.unique(y_train, return_counts=True))
pickle.dump((x_train, y_train), open('../outputs/train_xy_combine_sampling.pk', 'wb'))
x_train.shape, y_train.shape
# from sklearn.preprocessing import MinMaxScaler
# scaler = MinMaxScaler(feature_range=(0, 1))
# y_train = scaler.fit_transform(y_train.reshape(-1, 1)).flatten()
# y_cv = scaler.transform(y_cv.reshape(-1, 1)).flatten()
# y_test = scaler.transform(y_test.reshape(-1, 1)).flatten()
from utils import *
import numpy as np
np.random.seed(999)
x_train, y_train = unison_shuffle_copies(x_train, y_train)
###Output
_____no_output_____
###Markdown
Creating a baseline for this dataset by training a Linear Regression model.
###Code
from sklearn.linear_model import LinearRegression
baseline_model = LinearRegression(fit_intercept=True, normalize=True, n_jobs=-1).fit(x_train, y_train)
from sklearn.metrics import accuracy_score, mean_absolute_error, mean_squared_error
y_pred = baseline_model.predict(x_test)
y_class = np.round(y_pred)
baseline_acc = accuracy_score(y_test, y_class)*100
baseline_mse = mean_squared_error(y_test, y_pred)
baseline_mae = mean_absolute_error(y_test, y_pred)
print("Baseline: Accuracy={}%, MSE={}, MAE={}".format(round(baseline_acc, 2), round(baseline_mse,2), round(baseline_mae,2)))
###Output
Baseline: Accuracy=50.57%, MSE=0.55, MAE=0.59
###Markdown
Linear Regression does surprisingly well! 50% is a good baseline to compare with, considering that chance prediction in this case is ~14% (1/7). This concludes setting the baseline for this problem. We will compare our neural network results with this to see how much we have improved. Feature selection with XGBoost (not used now: it didn't help much with the results; maybe trying out other feature selection algorithms, such as the sketch below, could help.)
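As an illustration only (this was not run for the reported results), a simple univariate filter could be tried in place of the XGBoost-based selection kept below; `k=64` is an arbitrary example value and `x_train`/`y_train`/`x_cv`/`x_test` are the arrays loaded above:
```python
# Hedged sketch of an alternative feature-selection step (not part of the original experiments).
from sklearn.feature_selection import SelectKBest, f_regression

selector = SelectKBest(score_func=f_regression, k=64)   # k=64 chosen arbitrarily for illustration
x_train_sel = selector.fit_transform(x_train, y_train)
x_cv_sel = selector.transform(x_cv)
x_test_sel = selector.transform(x_test)
print(x_train_sel.shape, x_cv_sel.shape, x_test_sel.shape)
```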
###Code
# x_train_org = x_train.copy()
# x_cv_org = x_cv.copy()
# x_test_org = x_test.copy()
# from xgboost import XGBRegressor
# from xgboost import plot_importance
# from sklearn.feature_selection import SelectFromModel
# from matplotlib import pyplot as plt
# from sklearn.feature_selection import SelectPercentile
# xgbReg = XGBRegressor(objective="count:poisson", tree_method='gpu_hist', gpu_id=0)
# xgbReg.fit(x_train, y_train)
# plot_importance(xgbReg, max_num_features=30)
# plt.show()
# from sklearn.metrics import accuracy_score
# y_pred = xgbReg.predict(x_test)
# f'xgboost accuracy is {accuracy_score(y_test, np.round(y_pred))}'
# selectFeature = SelectFromModel(xgbReg, prefit=True, threshold="median")
# selectFeature.transform(x_train_org)
# feature_mask = selectFeature.get_support()
# num_features = np.sum(feature_mask)
# x_train = x_train_org[:, feature_mask]
# x_cv = x_cv_org[:, feature_mask]
# x_test = x_test_org[:, feature_mask]
# print(f'{num_features} number of feature selected, shape of train data is {x_train.shape}')
params = {
    'batch_size': 1000, 'input_size': num_features,
    'hidden_units_1': 200, 'hidden_units_2': 100, 'hidden_units_3': 50,
    'do_1': 0.2, 'do_2': 0.1, 'do_3': 0.05,
    'output_size': 1,
    'lr': 0.001, 'min_lr': 1e-5, 'max_lr': 1e-3,
    'epochs': 500,
    'lr_sched': 'clr', 'lr_sched_mode': 'triangular', 'gamma': 0.95,
}
params
###Output
_____no_output_____
###Markdown
Create pytorch data loaders for train/val/test datasets.
###Code
import torch
from torch.utils import data as torch_data
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device:", device)
trainset = torch_data.TensorDataset(torch.as_tensor(x_train, dtype=torch.float, device=device), torch.as_tensor(y_train, dtype=torch.float, device=device))
train_dl = torch_data.DataLoader(trainset, batch_size=params['batch_size'], drop_last=True)
val_dl = torch_data.DataLoader(torch_data.TensorDataset(torch.as_tensor(x_cv, dtype=torch.float, device=device), torch.as_tensor(y_cv, dtype=torch.float, device=device)), batch_size=params['batch_size'], drop_last=True)
test_dl = torch_data.DataLoader(torch_data.TensorDataset(torch.as_tensor(x_test, dtype=torch.float, device=device), torch.as_tensor(y_test, dtype=torch.float, device=device)), batch_size=params['batch_size'], drop_last=True)
###Output
device: cuda:0
###Markdown
Check for batches where all samples have the same target (same distance value). This issue had caused me a lot of grief.
###Code
print('value counts in whole data', np.unique(y_train, return_counts=True))
count = 0
for i, data in enumerate(train_dl, 0):
input, target = data[0], data[1]
t = torch.unique(target, return_counts=True)[1]
if (t==params['batch_size']).any().item():
count += 1
print('{} ({}%) batches have all same targets'.format(count, np.round(count/len(train_dl)*100, 2) ))
###Output
value counts in whole data (array([1, 2, 3, 4, 5, 6, 7]), array([148719, 148719, 212456, 148719, 148719, 148719, 148719],
dtype=int64))
0 (0.0%) batches have all same targets
###Markdown
Initialize the model, loss, learning rate scheduler, etc.
###Code
from torchsummary import summary
import sys
import io
torch.manual_seed(9999)
def get_model():
"""
creates a PyTorch model. Change the 'params' dict above to
modify the neural net configuration.
"""
model = torch.nn.Sequential(
torch.nn.Linear(params['input_size'], params['hidden_units_1']),
torch.nn.BatchNorm1d(params['hidden_units_1']),
# torch.nn.Dropout(p=params['do_1']),
torch.nn.ReLU(),
torch.nn.Linear(params['hidden_units_1'], params['hidden_units_2']),
torch.nn.BatchNorm1d(params['hidden_units_2']),
# torch.nn.Dropout(p=params['do_2']),
torch.nn.ReLU(),
torch.nn.Linear(params['hidden_units_2'], params['hidden_units_3']),
torch.nn.BatchNorm1d(params['hidden_units_3']),
# torch.nn.Dropout(p=params['do_3']),
torch.nn.ReLU(),
torch.nn.Linear(params['hidden_units_3'], params['output_size']),
torch.nn.ReLU(),
# torch.nn.Softplus(),
)
model.to(device)
return model
def poisson_loss(y_pred, y_true):
"""
Custom loss function for Poisson model.
Equivalent Keras implementation for reference:
K.mean(y_pred - y_true * math_ops.log(y_pred + K.epsilon()), axis=-1)
For output of shape (2,3) it return (2,) vector. Need to calculate
mean of that too.
"""
y_pred = torch.squeeze(y_pred)
loss = torch.mean(y_pred - y_true * torch.log(y_pred+1e-7))
return loss
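# Optional sanity check for the custom loss (kept commented out so this cell's behaviour is unchanged).
# Assumption: torch.nn.PoissonNLLLoss(log_input=False) uses the same formulation,
# mean(pred - target * log(pred + eps)), so the two values should agree closely:
# _pred = torch.rand(16) + 0.5
# _true = torch.randint(1, 8, (16,)).float()
# print(poisson_loss(_pred, _true).item(), torch.nn.PoissonNLLLoss(log_input=False)(_pred, _true).item())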
model = get_model()
print('model loaded into device=', next(model.parameters()).device)
# this is just to capture model summary as string
old_stdout = sys.stdout
sys.stdout = buffer = io.StringIO()
summary(model, input_size=(params['input_size'], ))
sys.stdout = old_stdout
model_summary = buffer.getvalue()
print('model-summary\n', model_summary)
# later this 'model-summary' string can be written to tensorboard
lr_reduce_patience = 20
lr_reduce_factor = 0.1
loss_fn = poisson_loss
# optimizer = torch.optim.SGD(model.parameters(), lr=params['lr'], momentum=0.9, dampening=0, weight_decay=0, nesterov=True)
optimizer = torch.optim.RMSprop(model.parameters(), lr=params['lr'], alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
if params['lr_sched'] == 'reduce_lr_plateau':
lr_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=lr_reduce_factor, patience=lr_reduce_patience, verbose=True, threshold=0.00001, threshold_mode='rel', cooldown=0, min_lr=1e-9, eps=1e-08)
elif params['lr_sched'] == 'clr':
lr_sched = torch.optim.lr_scheduler.CyclicLR(optimizer, params['min_lr'], params['max_lr'], step_size_up=8*len(train_dl), step_size_down=None, mode=params['lr_sched_mode'], last_epoch=-1, gamma=params['gamma'])
print('lr scheduler type:', lr_sched)
for param_group in optimizer.param_groups:
print(param_group['lr'])
###Output
model loaded into device= cuda:0
model-summary
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Linear-1 [-1, 200] 25,800
BatchNorm1d-2 [-1, 200] 400
ReLU-3 [-1, 200] 0
Linear-4 [-1, 100] 20,100
BatchNorm1d-5 [-1, 100] 200
ReLU-6 [-1, 100] 0
Linear-7 [-1, 50] 5,050
BatchNorm1d-8 [-1, 50] 100
ReLU-9 [-1, 50] 0
Linear-10 [-1, 1] 51
ReLU-11 [-1, 1] 0
================================================================
Total params: 51,701
Trainable params: 51,701
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.01
Params size (MB): 0.20
Estimated Total Size (MB): 0.21
----------------------------------------------------------------
lr scheduler type: <torch.optim.lr_scheduler.CyclicLR object at 0x0000020C0D977E20>
1e-05
###Markdown
Run the next cell to find an optimal learning-rate range for the One-Cycle LR scheduler. In this case choosing the learning rate from the graph below didn't help. You can skip this cell if not required.
###Code
#### from https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html
import math
lr_arr = np.zeros((len(train_dl), ))
def find_lr(init_value = 1e-8, final_value=10., beta = 0.98):
global lr_arr
num = len(train_dl)-1
mult = (final_value / init_value) ** (1/num)
lr = init_value
optimizer.param_groups[0]['lr'] = lr
avg_loss = 0.
best_loss = 0.
batch_num = 0
losses = []
log_lrs = []
lrs = []
for data in train_dl:
batch_num += 1
# As before, get the loss for this mini-batch of inputs/outputs
inputs, labels = data
# inputs, labels = Variable(inputs), Variable(labels)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_fn(outputs, labels)
# Compute the smoothed loss
avg_loss = beta * avg_loss + (1-beta) *loss.item()
smoothed_loss = avg_loss / (1 - beta**batch_num)
# Stop if the loss is exploding
if batch_num > 1 and smoothed_loss > 4 * best_loss:
return log_lrs, losses
# Record the best loss
if smoothed_loss < best_loss or batch_num==1:
best_loss = smoothed_loss
# Store the values
losses.append(smoothed_loss)
log_lrs.append(math.log10(lr))
lrs.append(lr)
lr_arr[batch_num-1] = lr
# Do the SGD step
loss.backward()
optimizer.step()
# Update the lr for the next step
lr *= mult
optimizer.param_groups[0]['lr'] = lr
return log_lrs, losses
lrs, losses = find_lr()
print('returned', len(losses))
plt.figure()
plt.plot(lr_arr[:len(lrs)], losses)
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.title('LR range plot')
plt.xlabel('Learning rates')
plt.ylabel('Losses')
plt.show()
def evaluate(model, dl):
"""
This function is used to evaluate the model with validation.
args: model and data loader
returns: loss
"""
model.eval()
final_loss = 0.0
count = 0
with torch.no_grad():
for data_cv in dl:
inputs, dist_true = data_cv[0], data_cv[1]
count += len(inputs)
outputs = model(inputs)
loss = loss_fn(outputs, dist_true)
final_loss += loss.item()
return final_loss/len(dl)
def save_checkpoint(state, state_save_path):
if not os.path.exists("/".join(state_save_path.split('/')[:-1])):
os.makedirs("/".join(state_save_path.split('/')[:-1]))
torch.save(state, state_save_path)
###Output
_____no_output_____
###Markdown
The cell below trains the model and records the results in TensorBoard.
###Code
%%time
# %load_ext tensorboard
import time
import copy
from tqdm.auto import tqdm
from utils import *
from torch.utils.tensorboard import SummaryWriter
# from tensorboardX import SummaryWriter
last_loss = 0.0
min_val_loss = np.inf
patience_counter = 0
early_stop_patience = 50
best_model = None
train_losses = []
val_losses = []
output_path = '../outputs'
tb_path = output_path+'/logs/runs'
run_path = tb_path+'/run47_smallerNN_noDO'
checkpoint_path = run_path+'/checkpoints'
resume_training = False
start_epoch = 0
iter_count = 0
if os.path.exists(run_path):
raise Exception("this experiment already exists!")
if not os.path.exists(checkpoint_path):
os.makedirs(checkpoint_path)
writer = SummaryWriter(log_dir=run_path, comment='', purge_step=None, max_queue=1, flush_secs=30, filename_suffix='')
writer.add_graph(model, input_to_model=torch.zeros(params['input_size']).view(1,-1).cuda(), verbose=False) # not useful
# resume training on a saved model
if resume_training:
prev_checkpoint_path = '../outputs/logs/runs/run42_clr_g0.95/checkpoints' # change this
suffix = '1592579305.7273214' # change this
model.load_state_dict(torch.load(prev_checkpoint_path+'/model_'+suffix+'.pt'))
optimizer.load_state_dict(torch.load(prev_checkpoint_path+'/optim_'+suffix+'.pt'))
lr_sched.load_state_dict(torch.load(prev_checkpoint_path+'/sched_'+suffix+'.pt'))
state = torch.load(prev_checkpoint_path+'/state_'+suffix+'.pt')
start_epoch = state['epoch']
writer.add_text('loaded saved model:', str(params))
print('loaded saved model', params)
writer.add_text('run_change', 'Smaller 3 hidden layer NN, no DO' + str(params))
torch.backends.cudnn.benchmark = True
print('total epochs=', len(range(start_epoch, start_epoch+params['epochs'])))
# with torch.autograd.detect_anomaly(): # use this to detect bugs while training
for param_group in optimizer.param_groups:
print('lr-check', param_group['lr'])
for epoch in range(start_epoch, start_epoch+params['epochs']): # loop over the dataset multiple times
running_loss = 0.0
stime = time.time()
for i, data in enumerate(train_dl, 0):
iter_count += 1
# get the inputs; data is a list of [inputs, dist_true]
model.train()
inputs, dist_true = data[0], data[1]
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
loss = loss_fn(outputs, dist_true)
loss.backward()
optimizer.step()
running_loss += loss.item()
last_loss = loss.item()
for param_group in optimizer.param_groups:
curr_lr = param_group['lr']
writer.add_scalar('monitor/lr-iter', curr_lr, iter_count-1)
if not isinstance(lr_sched, torch.optim.lr_scheduler.ReduceLROnPlateau):
lr_sched.step()
val_loss = evaluate(model, val_dl)
if isinstance(lr_sched, torch.optim.lr_scheduler.ReduceLROnPlateau):
lr_sched.step(val_loss)
if val_loss < min_val_loss:
min_val_loss = val_loss
patience_counter = 0
best_model = copy.deepcopy(model)
print(epoch,"> Best val_loss model saved:", round(val_loss, 4))
else:
patience_counter += 1
train_loss = running_loss/len(train_dl)
train_losses.append(train_loss)
val_losses.append(val_loss)
writer.add_scalar('loss/train', train_loss, epoch)
writer.add_scalar('loss/val', val_loss, epoch)
for param_group in optimizer.param_groups:
curr_lr = param_group['lr']
writer.add_scalar('monitor/lr-epoch', curr_lr, epoch)
if patience_counter > early_stop_patience:
print("Early stopping at epoch {}. current val_loss {}".format(epoch, val_loss))
break
if epoch % 10 == 0:
torch.save(best_model.state_dict(), checkpoint_path+'/model_cp.pt')
torch.save(optimizer.state_dict(), checkpoint_path+'/optim_cp.pt')
torch.save(lr_sched.state_dict(), checkpoint_path+'/sched_cp.pt')
writer.add_text('checkpoint saved', 'at epoch='+str(epoch))
print("epoch:{} -> train_loss={},val_loss={} - {}".format(epoch, round(train_loss, 5),round(val_loss, 5), seconds_to_minutes(time.time()-stime)))
print('Finished Training')
ts = str(time.time())
best_model_path = checkpoint_path+'/model_'+ts+'.pt'
opt_save_path = checkpoint_path+'/optim_'+ts+'.pt'
sched_save_path = checkpoint_path+'/sched_'+ts+'.pt'
state_save_path = checkpoint_path+'/state_'+ts+'.pt'
state = {'epoch': epoch+1,
'model_state': model.state_dict(),
'optim_state': optimizer.state_dict(),
'last_train_loss': train_losses[-1],
'last_val_loss': val_losses[-1],
'total_iters': iter_count
}
save_checkpoint(state, state_save_path)
# sometimes loading from state dict is not working, so...
torch.save(best_model.state_dict(), best_model_path)
torch.save(optimizer.state_dict(), opt_save_path)
torch.save(lr_sched.state_dict(), sched_save_path)
# run44 val = -0.1126
# Top runs: run44, run26, run21, run20, run19_batch_norm_other_changes, run18_trngl2, run10_3
###Output
total epochs= 500
lr-check 1e-05
0 > Best val_loss model saved: -0.0781
epoch:0 -> train_loss=1.9969,val_loss=-0.07814 - 0.0 minutes 42.0 seconds
1 > Best val_loss model saved: -0.0936
2 > Best val_loss model saved: -0.0975
3 > Best val_loss model saved: -0.1022
4 > Best val_loss model saved: -0.103
5 > Best val_loss model saved: -0.1042
6 > Best val_loss model saved: -0.1051
7 > Best val_loss model saved: -0.1059
8 > Best val_loss model saved: -0.1068
9 > Best val_loss model saved: -0.1075
10 > Best val_loss model saved: -0.108
epoch:10 -> train_loss=-1.97189,val_loss=-0.10803 - 0.0 minutes 41.0 seconds
11 > Best val_loss model saved: -0.1085
12 > Best val_loss model saved: -0.1091
13 > Best val_loss model saved: -0.1098
15 > Best val_loss model saved: -0.111
epoch:20 -> train_loss=-1.97394,val_loss=-0.10876 - 0.0 minutes 42.0 seconds
epoch:30 -> train_loss=-1.97593,val_loss=-0.11001 - 0.0 minutes 42.0 seconds
31 > Best val_loss model saved: -0.1116
epoch:40 -> train_loss=-1.97498,val_loss=-0.10959 - 0.0 minutes 42.0 seconds
47 > Best val_loss model saved: -0.1117
epoch:50 -> train_loss=-1.97652,val_loss=-0.11064 - 0.0 minutes 41.0 seconds
epoch:60 -> train_loss=-1.97646,val_loss=-0.11049 - 0.0 minutes 41.0 seconds
63 > Best val_loss model saved: -0.1118
epoch:70 -> train_loss=-1.97621,val_loss=-0.10948 - 0.0 minutes 41.0 seconds
epoch:80 -> train_loss=-1.97773,val_loss=-0.11043 - 0.0 minutes 39.0 seconds
epoch:90 -> train_loss=-1.97673,val_loss=-0.1101 - 0.0 minutes 40.0 seconds
epoch:100 -> train_loss=-1.97709,val_loss=-0.11027 - 0.0 minutes 41.0 seconds
epoch:110 -> train_loss=-1.97768,val_loss=-0.11046 - 0.0 minutes 41.0 seconds
Early stopping at epoch 114. current val_loss -0.11061588690749237
Finished Training
Wall time: 1h 18min 59s
###Markdown
Now test the model with test data.
###Code
def test(model, dl):
model.eval()
final_loss = 0.0
count = 0
y_hat = []
with torch.no_grad():
for data_cv in dl:
inputs, dist_true = data_cv[0], data_cv[1]
count += len(inputs)
outputs = model(inputs)
y_hat.extend(outputs.tolist())
loss = loss_fn(outputs, dist_true)
final_loss += loss.item()
return final_loss/len(dl), y_hat
model.load_state_dict(torch.load(best_model_path))
test_loss, y_hat = test(model, test_dl)
print(test_loss)
writer.add_text('test-loss', str(test_loss))
try:
if scaler:
y_hat = scaler.inverse_transform(y_hat)
y_test = scaler.inverse_transform(y_test)
except:
pass
y_hat[50:60], y_test[50:60]
from sklearn.metrics import accuracy_score
writer.add_text('Accuracy=', str(accuracy_score(y_test[:len(y_hat)], np.round(y_hat))))
print(str(accuracy_score(y_test[:len(y_hat)], np.round(y_hat))))
# show distance-value-wise accuracy (bar chart)
# predicted values are fewer than the real test samples because the last samples from test are dropped to maintain the same batch size (drop_last=True)
from matplotlib import pyplot as plt
y_hat_ = np.array(y_hat).squeeze()
y_test_ = y_test[:len(y_hat)]
print(len(y_test), len(y_hat))
dist_accuracies = []
dist_counts = []
for i in range(1, 8):
mask = y_test_==i
dist_values = y_test_[mask]
dist_preds = np.round(y_hat_[mask])
dist_accuracies.append(np.sum(dist_values == dist_preds)*100/len(dist_values))
dist_counts.append(len(dist_values))
fig = plt.figure(figsize=(10,7))
plt.subplot(2,1,1)
plt.bar(range(1,8), dist_accuracies)
for index, value in enumerate(dist_accuracies):
plt.text(index+0.8, value, str(np.round(value, 2))+'%')
plt.title('distance-wise accuracy')
plt.xlabel('distance values')
plt.ylabel('accuracy')
plt.subplot(2,1,2)
plt.bar(range(1,8), dist_counts)
for index, value in enumerate(dist_counts):
plt.text(index+0.8, value, str(value))
plt.title('distance-wise count')
plt.xlabel('distance values')
plt.ylabel('counts')
fig.tight_layout(pad=3.0)
plt.show()
writer.add_figure('test/results', fig)
writer.add_text('class avg accuracy', str(np.mean(dist_accuracies)))
print('class avg accuracy', np.mean(dist_accuracies))
writer.add_text('MSE', str(np.mean((np.array(y_hat).squeeze()-y_test[:len(y_hat)])**2)))
print('MSE', np.mean((np.array(y_hat).squeeze()-y_test[:len(y_hat)])**2))
writer.add_text('MAE', str(np.mean(np.abs(np.array(y_hat).squeeze() - y_test[:len(y_hat)]))))
print('MAE', np.mean(np.abs(np.array(y_hat).squeeze() - y_test[:len(y_hat)])))
writer.add_text('ending_remark', 'no dropout caused faster training but final val error/accuracy was worse than with-do training')
# to shutdown system once training is over. For over-night training sessions.
os.system("shutdown /s /t 100")
# to abort shutdown timer
os.system("shutdown /a")
###Output
_____no_output_____
###Markdown
The following cells can be ignored. They include some rough work and old TODO lists. Since training the model gives poor results, try to figure out the issue: train with a small number of samples from one distance value, then add the other values.
###Code
classes = [1, 2, 4]
x_temp = []
y_temp = []
for class_ in classes:
x_temp.extend(x_train[y_train==class_][:100])
y_temp.extend(y_train[y_train==class_][:100])
x_temp, y_temp = unison_shuffle_copies(np.array(x_temp), np.array(y_temp))
x_temp = torch.tensor(x_temp, dtype=torch.float32, device=device)
y_temp = torch.tensor(y_temp, dtype=torch.float32, device=device)
loss_history = []
for epoch in range(5000): # loop over the dataset multiple times
running_loss = 0.0
stime = time.time()
# get the inputs; data is a list of [inputs, dist_true]
model.train()
inputs, dist_true = x_temp, y_temp
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
loss = loss_fn(outputs, dist_true)
loss.backward()
optimizer.step()
loss_history.append(loss.item())
from utils import *
plot(loss_history, 'class 1 train')
model.eval()
op_temp = model(x_temp)
print(loss_history[-1])
for i,j in zip(y_temp[:20], op_temp.squeeze().tolist()[:20]):
print(i.item(), '--', j)
###Output
_____no_output_____
###Markdown
Below is the code used to generate train image txt files from DAVIS's train.txt.
###Code
with tf.device("/gpu:0"):
trainingData = TrainingData(batchSize,instanceParams)
with tf.device("/gpu:0"):
# init
with tf.variable_scope("netShare"):
networkBodyF = NetworkBody(trainingData,instanceParams)
with tf.variable_scope("netShare",reuse=True):
networkBodyB = NetworkBody(trainingData,instanceParams,flipInput=True)
trainingLoss = TrainingLoss(instanceParams,networkBodyF,networkBodyB,trainingData)
solver,learningRateTensor = attachSolver(trainingLoss.loss)
# loss scheduling
recLossBWeightTensor = trainingLoss.recLossBWeight
# merge summaries
merged = tf.summary.merge_all()
# saver
saver = tf.train.Saver(max_to_keep=0)
from dotmap import DotMap
arg = DotMap()
arg.logDev = False
iterations = 2000 * 10
printFrequency = 100
###Output
_____no_output_____
###Markdown
Training
###Code
# start
with sessionSetup(arg) as sess:
# if resume:
# saver.restore(sess,snapshotPath+snapshotFiles[-1][:-6])
# else:
# sess.run(tf.initialize_all_variables())
saver.restore(sess,
'../model_download_scripts/photometric_smoothness/weights/iter_0000000000500000.ckpt')
trainingData.dataQueuer.start_queueing(sess)
#start summary writer
summary_writer = tf.summary.FileWriter(logPath, sess.graph)
#run
lastPrint = time.time()
for i in range(500000, 500000 + iterations):
# scheduled values
learningRate = learningRateSchedule(baseLearningRate, i)
recLossBWeight = unsupLossBSchedule(i)
#run training
feed_dict = {
learningRateTensor: learningRate,
recLossBWeightTensor: recLossBWeight,
}
summary,result,totalLoss = sess.run([merged,solver,trainingLoss.loss], feed_dict=feed_dict)
if (i+1) % printFrequency == 0:
timeDiff = time.time() - lastPrint
itPerSec = printFrequency/timeDiff
remainingIt = iterations - i
eta = remainingIt/itPerSec
            print("Iteration "+str(i+1)+": loss: "+str(totalLoss)+", iterations per second: "+str(itPerSec)+", ETA: "+str(datetime.timedelta(seconds=eta))+", lr: "+str(learningRate))
summary_writer.add_summary(summary,i+1)
summary_writer.flush()
lastPrint = time.time()
if (i+1) % snapshotFrequency == 0:
saver.save(sess,"snapshots/iter_"+str(i+1).zfill(16)+".ckpt")
sys.stdout.flush()
#close queing
trainingData.dataQueuer.close(sess)
###Output
INFO:tensorflow:Restoring parameters from ../model_download_scripts/photometric_smoothness/weights/iter_0000000000500000.ckpt
###Markdown
Visualization. This is just a hack to view a random estimated flow ...
###Code
# start
with sessionSetup(arg) as sess:
saver.restore(sess,
'../model_download_scripts/photometric_smoothness/weights/iter_0000000000500000.ckpt')
trainingData.dataQueuer.start_queueing(sess)
#start summary writer
summary_writer = tf.summary.FileWriter(logPath, sess.graph)
#run
flowFinal = networkBodyF.flows[0]
flowViz = flowToRgb(flowFinal)
for i in range(500000, 500000 + 1):
# scheduled values
learningRate = learningRateSchedule(baseLearningRate, i)
recLossBWeight = unsupLossBSchedule(i)
#run training
feed_dict = {
learningRateTensor: learningRate,
recLossBWeightTensor: recLossBWeight,
}
flow,summary,result,totalLoss = sess.run([flowViz,merged,solver,trainingLoss.loss], feed_dict=feed_dict)
# close queing
trainingData.dataQueuer.close(sess)
arr = np.minimum(np.asarray(flow),1)
arr = np.maximum(arr,0)
arr = np.squeeze(np.asarray(arr*255,np.uint8))
im = Image.fromarray(arr[0])
im
###Output
_____no_output_____
###Markdown
Data downloading and loading. The code below downloads the data via the Kaggle API if the files are not already present in the directory (see `/src/utilities/download_data.sh`). It additionally sets up the necessary global variables for this notebook:
###Code
import pathlib
import pandas as pd
# Where data will be stored if not present
DATA_PATH = "../input"
! ./utilities/download_data.sh "$DATA_PATH"
PREPROCESSING_PATH = pathlib.Path("../preprocessing")
# Where models are/will be stored
PREDICTIONS_PATH = pathlib.Path("../predictions")
PREDICTIONS_PATH.mkdir(parents=True, exist_ok=True)
###Output
Downloading data to: ../input
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /home/vyz/.kaggle/kaggle.json'
test_data.csv.zip: Skipping, found more recently modified local copy (use --force to force download)
train_data.csv.zip: Skipping, found more recently modified local copy (use --force to force download)
train_labels.csv: Skipping, found more recently modified local copy (use --force to force download)
sample_submission.csv: Skipping, found more recently modified local copy (use --force to force download)
/home/vyz/projects/Kaggle1NN2019/src
Script ran successfully
###Markdown
The data loaded below was constructed in the `preprocessing` notebook:
###Code
from utilities.general import train_data, test_data
_, y = train_data(pathlib.Path(DATA_PATH))
X_test = test_data(pathlib.Path(DATA_PATH))
X_train=pd.read_csv(PREPROCESSING_PATH / pathlib.Path("algorithmic_data.csv"), index_col=0)
X_test = X_test[X_train.columns]
variance_dataset=pd.read_csv(PREPROCESSING_PATH / pathlib.Path("variance_data.csv"), index_col=0)
###Output
_____no_output_____
###Markdown
The settings below were found to achieve the highest score on the public leaderboard. Various alternatives were tried (more ensemble datasets, fewer, more models, a different seed each time); those tries are not documented in this repository (though they are saved if anyone wants to see them).
###Code
import numpy as np
HOW_MANY_MODELS = 80
HOW_MANY_DATASETS = 1
MAIN_SEED = 42
###Output
_____no_output_____
###Markdown
Create N models. Using utilities we may easily create many different configurations of models. `create_configs` will create `HOW_MANY_MODELS` configurations of models (those configurations are `input`- and `output`-independent, hence they can be used for a varying number of features; see `HOW_MANY_DATASETS`). For information about how those networks are created, consult the `utilities.generator.create_configs` function.
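For illustration only, a hedged sketch of how such random configurations could be produced (the real `utilities.training.generate_configs` may differ; the config fields and the set of activations below are assumptions):

```python
import random

def generate_configs_sketch(max_layers, max_width, min_width, how_many, seed):
    """Draw `how_many` random layer-width/activation configurations."""
    rng = random.Random(seed)
    configs = []
    for _ in range(how_many):
        n_layers = rng.randint(1, max_layers)
        widths = [rng.randint(min_width, max_width) for _ in range(n_layers)]
        activation = rng.choice(["ReLU", "SELU", "ELU"])  # assumed set of activations
        configs.append({"layers": widths, "activation": activation})
    return configs
```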
###Code
from utilities.training import generate_configs
model_configs = generate_configs(
max_layers=5,
max_width=900,
min_width=100,
how_many=HOW_MANY_MODELS,
seed=MAIN_SEED,
)
###Output
_____no_output_____
###Markdown
`ensemble_datasets` creates a generator of datasets with a varying count of features. `split_feature` indicates how many of the best features (as indicated by the analysis) will be used for the "ensemble dataset". Additionally, a random subset of the remaining 96 - `split_feature` features will be chosen (say `25` features chosen from the less important features in the range `30-96`, if we were to follow the example below):
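A hedged sketch of the idea (the real utility may differ; in particular, the columns being ordered by importance, and the omission of the `how_many_models` argument, are simplifying assumptions):

```python
import numpy as np

def ensemble_datasets_sketch(X_train, X_test, split_feature, how_many_datasets, seed):
    """Keep the top `split_feature` columns and add a random subset of the rest."""
    rng = np.random.RandomState(seed)
    top_cols = list(X_train.columns[:split_feature])
    rest_cols = list(X_train.columns[split_feature:])
    for _ in range(how_many_datasets):
        extra = rng.choice(rest_cols, size=rng.randint(0, len(rest_cols) + 1), replace=False)
        cols = top_cols + list(extra)
        yield X_train[cols], X_test[cols]
```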
###Code
from utilities.training import ensemble_datasets
datasets = ensemble_datasets(
X_train,
X_test,
split_feature=30,
how_many_models=HOW_MANY_MODELS,
how_many_datasets=HOW_MANY_DATASETS,
seed=MAIN_SEED,
)
###Output
_____no_output_____
###Markdown
Once configurations of random models are created (see the appropriate functions in utils) and dataset generators are set up, we can train and predict using the third-party library [`skorch`](https://github.com/skorch-dev/skorch), which removes mental load nicely. Characteristics of the training loop applied to every generated neural network: - Train the neural network for a maximum of 40 epochs - Optimizer used: `Adam` with the default learning rate - Batch size equal to 64 - Stratified validation set being 20% of the train data - Train data shuffled after each epoch - Early stopping if there is no validation accuracy improvement after `8` epochs - Learning rate multiplied by `0.6` if there is no validation accuracy improvement after 2 epochs. Additionally, no models are saved, but each model makes a prediction on the test set using the state that achieved the best validation accuracy. Logits of predictions are saved for each model with its descriptive name and the validation accuracy achieved (see `prediction` data), so we can try different ensembling techniques if we so wish (e.g. thresholding on accuracy and using only the best models).
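For reference, a hedged sketch of how one such skorch classifier could be wired up with those callbacks (the module below is a placeholder rather than one of the generated configurations, and the real `predict_with_models` utility may set things up differently):

```python
import torch
import torch.nn as nn
from skorch import NeuralNetClassifier
from skorch.callbacks import EarlyStopping, LRScheduler

class TinyNet(nn.Module):
    """Stand-in module; the layer widths and activation are assumptions."""
    def __init__(self, n_features=96, n_classes=2):   # 96 features as in the full dataset
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 259), nn.SELU(),
            nn.Linear(259, 181), nn.SELU(),
            nn.Linear(181, n_classes),
        )

    def forward(self, x):
        return self.net(x)

net = NeuralNetClassifier(
    TinyNet,
    criterion=nn.CrossEntropyLoss,
    optimizer=torch.optim.Adam,           # default learning rate
    max_epochs=40,
    batch_size=64,
    iterator_train__shuffle=True,         # reshuffle training data after each epoch
    # the default train_split already gives a stratified 20% validation set
    callbacks=[
        EarlyStopping(monitor="valid_acc", patience=8, lower_is_better=False),
        LRScheduler(policy="ReduceLROnPlateau", monitor="valid_acc",
                    mode="max", factor=0.6, patience=2),
    ],
)
```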
###Code
from utilities.training import predict_with_models
predict_with_models(
PREDICTIONS_PATH,
y,
X_test,
model_configs,
datasets,
10,
MAIN_SEED,
)
###Output
Current best validation, making test prediction
epoch train_loss valid_loss validation_accuracy dur
------- ------------ ------------ --------------------- ------
1 [36m0.6802[0m [32m0.5876[0m [35m0.7774[0m 7.4093
Current best validation, making test prediction
2 [36m0.5529[0m [32m0.5225[0m [35m0.7979[0m 5.6897
Current best validation, making test prediction
3 [36m0.5098[0m [32m0.5210[0m [35m0.8033[0m 5.5162
Current best validation, making test prediction
4 [36m0.4793[0m [32m0.4585[0m [35m0.8235[0m 4.9231
Current best validation, making test prediction
5 [36m0.4582[0m [32m0.4558[0m [35m0.8291[0m 5.6262
6 [36m0.4372[0m 0.4701 0.8196 5.0378
7 [36m0.4192[0m 0.4601 0.8265 5.3469
Current best validation, making test prediction
8 [36m0.4024[0m [32m0.4426[0m [35m0.8357[0m 5.3459
9 [36m0.3889[0m 0.4532 0.8287 4.9944
10 [36m0.3722[0m 0.4454 0.8344 5.7059
Current best validation, making test prediction
11 [36m0.3568[0m 0.4489 [35m0.8369[0m 5.7168
Current best validation, making test prediction
12 [36m0.3416[0m [32m0.4423[0m [35m0.8426[0m 5.7205
13 [36m0.3254[0m [32m0.4310[0m 0.8421 4.8084
14 [36m0.3128[0m 0.4592 0.8420 5.6281
15 [36m0.2982[0m 0.4637 0.8406 5.5403
Epoch 15: reducing learning rate of group 0 to 6.0000e-04.
Current best validation, making test prediction
16 [36m0.2447[0m 0.4350 [35m0.8521[0m 5.1510
17 [36m0.2256[0m 0.4567 0.8448 5.6179
18 [36m0.2132[0m 0.4770 0.8475 4.8376
19 [36m0.1989[0m 0.4878 0.8484 5.8275
Epoch 19: reducing learning rate of group 0 to 3.6000e-04.
20 [36m0.1608[0m 0.4797 0.8517 5.6253
21 [36m0.1453[0m 0.5112 0.8494 5.6498
22 [36m0.1365[0m 0.5429 0.8479 5.8635
Epoch 22: reducing learning rate of group 0 to 2.1600e-04.
23 [36m0.1105[0m 0.5422 0.8519 5.6682
Stopping since validation_accuracy has not improved in the last 8 epochs.
0.8521315877811346
Saving predictions in ../predictions/0.8521315877811346_0_Layers=[259, 181, 181, 181],Activation=SELU.csv
Current best validation, making test prediction
epoch train_loss valid_loss validation_accuracy dur
------- ------------ ------------ --------------------- ------
1 [36m0.6859[0m [32m0.5917[0m [35m0.7784[0m 5.1317
Current best validation, making test prediction
2 [36m0.5632[0m [32m0.5730[0m [35m0.7876[0m 5.6442
Current best validation, making test prediction
3 [36m0.5200[0m [32m0.5132[0m [35m0.8081[0m 5.5955
Current best validation, making test prediction
4 [36m0.4936[0m [32m0.4982[0m [35m0.8082[0m 5.0590
Current best validation, making test prediction
5 [36m0.4680[0m [32m0.4841[0m [35m0.8160[0m 5.5167
Current best validation, making test prediction
6 [36m0.4497[0m [32m0.4664[0m [35m0.8218[0m 5.6723
Current best validation, making test prediction
7 [36m0.4313[0m [32m0.4614[0m [35m0.8279[0m 5.4714
Current best validation, making test prediction
8 [36m0.4136[0m 0.4654 [35m0.8284[0m 5.4097
Current best validation, making test prediction
9 [36m0.3963[0m [32m0.4513[0m [35m0.8297[0m 5.5361
10 [36m0.3821[0m 0.4631 0.8280 5.3749
Current best validation, making test prediction
11 [36m0.3672[0m 0.4592 [35m0.8327[0m 5.6761
Current best validation, making test prediction
12 [36m0.3556[0m [32m0.4434[0m [35m0.8382[0m 5.4855
13 [36m0.3389[0m 0.4845 0.8272 4.8213
14 [36m0.3274[0m 0.4656 0.8340 5.7722
Current best validation, making test prediction
15 [36m0.3153[0m 0.4534 [35m0.8408[0m 5.5079
16 [36m0.3015[0m 0.4765 0.8367 5.0198
Current best validation, making test prediction
17 [36m0.2899[0m 0.4673 [35m0.8416[0m 5.4397
18 [36m0.2754[0m 0.4905 0.8382 4.6286
19 [36m0.2672[0m 0.5178 0.8321 5.5190
20 [36m0.2531[0m 0.5295 0.8352 5.2362
Epoch 20: reducing learning rate of group 0 to 6.0000e-04.
Current best validation, making test prediction
21 [36m0.1949[0m 0.5092 [35m0.8480[0m 5.2732
22 [36m0.1714[0m 0.5423 0.8454 5.1787
23 [36m0.1609[0m 0.5737 0.8382 4.3682
24 [36m0.1542[0m 0.6053 0.8397 5.6647
Epoch 24: reducing learning rate of group 0 to 3.6000e-04.
25 [36m0.1137[0m 0.6154 0.8447 5.7805
26 [36m0.1005[0m 0.6421 0.8411 5.6937
27 [36m0.0910[0m 0.6894 0.8419 5.7867
Epoch 27: reducing learning rate of group 0 to 2.1600e-04.
28 [36m0.0705[0m 0.7062 0.8449 5.7733
Stopping since validation_accuracy has not improved in the last 8 epochs.
0.8480194696206781
Saving predictions in ../predictions/0.8480194696206781_1_Layers=[259, 181, 181, 181],Activation=SELU.csv
Current best validation, making test prediction
epoch train_loss valid_loss validation_accuracy dur
------- ------------ ------------ --------------------- ------
1 [36m0.5947[0m [32m0.4999[0m [35m0.8100[0m 5.3301
Current best validation, making test prediction
2 [36m0.4692[0m [32m0.4421[0m [35m0.8378[0m 7.2405
Current best validation, making test prediction
3 [36m0.4125[0m [32m0.4278[0m [35m0.8424[0m 6.3025
Current best validation, making test prediction
4 [36m0.3766[0m [32m0.4148[0m [35m0.8449[0m 7.7310
Current best validation, making test prediction
5 [36m0.3417[0m 0.4222 [35m0.8457[0m 5.4175
Current best validation, making test prediction
6 [36m0.3166[0m [32m0.4121[0m [35m0.8531[0m 6.0190
Current best validation, making test prediction
7 [36m0.2853[0m 0.4264 [35m0.8546[0m 5.7087
8 [36m0.2612[0m 0.4436 0.8480 7.4976
9 [36m0.2387[0m 0.4441 0.8497 6.4671
10 [36m0.2161[0m 0.4598 0.8510 7.4213
Epoch 10: reducing learning rate of group 0 to 6.0000e-04.
11 [36m0.1559[0m 0.5064 0.8541 6.6163
12 [36m0.1296[0m 0.5340 0.8537 7.5228
13 [36m0.1134[0m 0.5938 0.8449 6.7289
Epoch 13: reducing learning rate of group 0 to 3.6000e-04.
Current best validation, making test prediction
14 [36m0.0778[0m 0.6190 [35m0.8546[0m 7.6535
###Markdown
Imports
###Code
import torch as th
import torch.nn as nn
import torchvision
import pytorch_lightning as pl
import pandas as pd
import os
from pytorch_lightning.metrics.functional import accuracy
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping, GPUStatsMonitor
from pytorch_lightning.loggers import TensorBoardLogger
from sklearn.utils.class_weight import compute_class_weight
import torchvision.transforms as transforms
# import data module
from dataset import CassavaDM
# import model
from model import Model
###Output
_____no_output_____
###Markdown
Config
###Code
from config import Config
Config.__dict__
train_df = pd.read_csv(os.path.join(Config.data_dir, 'train.csv'))
ss = pd.read_csv(os.path.join(Config.submissions_dir, 'sample_submission.csv'))
class_w = compute_class_weight(classes=train_df.label.unique(), y=train_df.label.values, class_weight='balanced')
class_w
###Output
_____no_output_____
###Markdown
Modeling (a minimal sketch of such a model follows below)* Encoder/Feature extractor: (resnet, efficientnet, densenet)* Decoder/Classifier: linear layer (in_features, n_classes)* loss_fn: CrossEntropyLoss* Optimized metric: accuracy* Optimizer: Adam, AdamW, SGD* learning rate: (3e-5...1e-1)* lr scheduler: linear with warmup, ReduceLROnPlateau* pretrained: Always true
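A hedged sketch of what such an encoder-plus-linear-classifier model might look like (a torchvision ResNet-18 backbone is used purely for illustration; the actual `Model` class lives in `model.py` and may differ):

```python
import torch.nn as nn
import torchvision

class EncoderClassifierSketch(nn.Module):
    """Pretrained feature extractor followed by a linear classification head."""
    def __init__(self, n_classes=5):         # e.g. Config.num_classes
        super().__init__()
        backbone = torchvision.models.resnet18(pretrained=True)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()           # drop the original ImageNet head
        self.encoder = backbone
        self.classifier = nn.Linear(in_features, n_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))
```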
###Code
# ---------------
# data transforms
# ---------------
data_transforms = {
'train':th.nn.Sequential(
transforms.Resize((Config.img_size, Config.img_size)),
transforms.RandomHorizontalFlip(p=.6),
transforms.RandomVerticalFlip(p=.3),
transforms.GaussianBlur(3),
),
'test':th.nn.Sequential(
transforms.Resize((Config.img_size, Config.img_size)),
transforms.RandomVerticalFlip(p=.3),
)
}
#------------
# data module
# ----------
dm = CassavaDM(
df = train_df,
frac=1,
n_classes=Config.num_classes,
data_transforms=data_transforms,
train_data_dir=Config.train_images_dir,
test_data_dir=Config.test_images_dir)
dm.setup()
#-------
# Model
# -----
model = Model(config=Config,
len_train_ds=len(dm.train_ds),
class_w=th.from_numpy(class_w))
model
model.predict(dm.val_ds[0]['images'])
###Output
_____no_output_____
###Markdown
Training loop
###Code
# Note: the callback/logger arguments below were left blank in the original notebook;
# the monitored metric name ('val_acc'), patience, and log directory are assumptions.
ckpt_saver = ModelCheckpoint(filename=f'{Config.base_model}-val_acc',
                             monitor='val_acc', mode='max', save_top_k=1)
es = EarlyStopping(monitor='val_acc', mode='max', patience=5)
tb_logger = TensorBoardLogger(save_dir='logs', name=Config.base_model)
Callbacks = [ckpt_saver, es]  # assumption: wire the callbacks defined above into the trainer
train = pl.Trainer(gpus=-1,
callbacks=Callbacks,
logger=tb_logger,
max_epochs=Config.num_epochs)
###Output
_____no_output_____
###Markdown
This file has code to train and evaluate the model. First, read the backed-up data files into numpy arrays.
###Code
import pickle
import numpy as np
x_train, y_train = pickle.load(open('../outputs/train_xy_no_sampling_stdScale.pk', 'rb'))
x_cv, y_cv = pickle.load(open('../outputs/val_xy_no_sampling_stdScale.pk', 'rb'))
x_test, y_test = pickle.load(open('../outputs/test_xy_no_sampling_stdScale.pk', 'rb'))
print('shapes of train, validation, test data', x_train.shape, y_train.shape, x_cv.shape, y_cv.shape, x_test.shape, y_test.shape)
values, counts = np.unique(y_train, return_counts=True)
num_features = x_train.shape[1]
print('Frequency of distance values before sampling', values, counts)
###Output
shapes of train, validation, test data (562772, 128) (562772,) (140693, 128) (140693,) (234489, 128) (234489,)
Frequency of distance values before sampling [1 2 3 4 5 6 7] [ 6030 187348 303509 59309 6048 506 22]
###Markdown
Since the data is massively imbalanced, let's oversample the minority targets and undersample the majority-type samples. The fraction for over/under-sampling (0.7) was chosen with intuition and gut feeling. Feel free to play around.
###Code
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import ClusterCentroids, TomekLinks, RandomUnderSampler
from imblearn.over_sampling import KMeansSMOTE, SMOTE
seed_random = 9999
# max_idx = np.argmax(counts)
# max_value = counts[max_idx]
# majority_class = values[max_idx]
x = int(counts[2]*0.7)
y = int(0.7 * x)
undersample_dict = {2:y, 3:x}
under_sampler = RandomUnderSampler(sampling_strategy=undersample_dict, random_state=seed_random) # n_jobs=15,
x_train, y_train = under_sampler.fit_resample(x_train, y_train.astype(np.int))
print('Frequency of distance values after undersampling', np.unique(y_train, return_counts=True))
minority_samples = int(0.7*x)
oversample_dict = {1:minority_samples, 4:minority_samples, 5:minority_samples, 6:minority_samples, 7:minority_samples}
over_sampler = RandomOverSampler(sampling_strategy=oversample_dict, random_state=seed_random) # ,n_jobs=15, k_neighbors= 5
x_train, y_train = over_sampler.fit_resample(x_train, y_train.astype(np.int))
print('Frequency of distance values after oversampling', np.unique(y_train, return_counts=True))
pickle.dump((x_train, y_train), open('../outputs/train_xy_combine_sampling.pk', 'wb'))
x_train.shape, y_train.shape
# from sklearn.preprocessing import MinMaxScaler
# scaler = MinMaxScaler(feature_range=(0, 1))
# y_train = scaler.fit_transform(y_train.reshape(-1, 1)).flatten()
# y_cv = scaler.transform(y_cv.reshape(-1, 1)).flatten()
# y_test = scaler.transform(y_test.reshape(-1, 1)).flatten()
from utils import *
import numpy as np
np.random.seed(999)
x_train, y_train = unison_shuffle_copies(x_train, y_train)
###Output
_____no_output_____
###Markdown
Creating a baseline for this dataset by training a Linear Regression model.
###Code
from sklearn.linear_model import LinearRegression
baseline_model = LinearRegression(fit_intercept=True, normalize=True, n_jobs=-1).fit(x_train, y_train)
from sklearn.metrics import accuracy_score, mean_absolute_error, mean_squared_error
y_pred = baseline_model.predict(x_test)
y_class = np.round(y_pred)
baseline_acc = accuracy_score(y_test, y_class)*100
baseline_mse = mean_squared_error(y_test, y_pred)
baseline_mae = mean_absolute_error(y_test, y_pred)
print("Baseline: Accuracy={}%, MSE={}, MAE={}".format(round(baseline_acc, 2), round(baseline_mse,2), round(baseline_mae,2)))
###Output
Baseline: Accuracy=50.57%, MSE=0.55, MAE=0.59
###Markdown
Linear Regression does surprisingly well! 50% is a good baseline to compare with, considering that chance prediction in this case is ~14% (1/7). This concludes setting the baseline for this problem. We will compare our neural network results with this to see how much we have improved. Feature selection with XGBoost (not used now; this didn't help much with the results. Maybe trying out other feature selection algorithms could help; one hedged example follows below.)
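As one hedged example of such an alternative (not something that was run in the original experiments), mutual-information-based selection could be tried:

```python
from sklearn.feature_selection import SelectKBest, mutual_info_regression

# Illustrative only: keep the 64 features with the highest mutual information
# with the target (the value 64 is an arbitrary assumption).
selector = SelectKBest(score_func=mutual_info_regression, k=64)
x_train_sel = selector.fit_transform(x_train, y_train)
x_cv_sel = selector.transform(x_cv)
x_test_sel = selector.transform(x_test)
```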
###Code
# x_train_org = x_train.copy()
# x_cv_org = x_cv.copy()
# x_test_org = x_test.copy()
# from xgboost import XGBRegressor
# from xgboost import plot_importance
# from sklearn.feature_selection import SelectFromModel
# from matplotlib import pyplot as plt
# from sklearn.feature_selection import SelectPercentile
# xgbReg = XGBRegressor(objective="count:poisson", tree_method='gpu_hist', gpu_id=0)
# xgbReg.fit(x_train, y_train)
# plot_importance(xgbReg, max_num_features=30)
# plt.show()
# from sklearn.metrics import accuracy_score
# y_pred = xgbReg.predict(x_test)
# f'xgboost accuracy is {accuracy_score(y_test, np.round(y_pred))}'
# selectFeature = SelectFromModel(xgbReg, prefit=True, threshold="median")
# selectFeature.transform(x_train_org)
# feature_mask = selectFeature.get_support()
# num_features = np.sum(feature_mask)
# x_train = x_train_org[:, feature_mask]
# x_cv = x_cv_org[:, feature_mask]
# x_test = x_test_org[:, feature_mask]
# print(f'{num_features} number of feature selected, shape of train data is {x_train.shape}')
params = {'batch_size': 1000, 'input_size': num_features, 'hidden_units_1': 200, 'hidden_units_2': 100, 'hidden_units_3': 50, 'do_1': 0.2, 'do_2': 0.1, 'do_3': 0.05, 'output_size': 1, 'lr': 0.001, 'min_lr': 1e-5, 'max_lr': 1e-3, 'epochs': 500, 'lr_sched': 'clr', 'lr_sched_mode': 'triangular', 'gamma': 0.95}
params
###Output
_____no_output_____
###Markdown
Create PyTorch data loaders for the train/val/test datasets.
###Code
import torch
from torch.utils import data as torch_data
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device:", device)
trainset = torch_data.TensorDataset(torch.as_tensor(x_train, dtype=torch.float, device=device), torch.as_tensor(y_train, dtype=torch.float, device=device))
train_dl = torch_data.DataLoader(trainset, batch_size=params['batch_size'], drop_last=True)
val_dl = torch_data.DataLoader(torch_data.TensorDataset(torch.as_tensor(x_cv, dtype=torch.float, device=device), torch.as_tensor(y_cv, dtype=torch.float, device=device)), batch_size=params['batch_size'], drop_last=True)
test_dl = torch_data.DataLoader(torch_data.TensorDataset(torch.as_tensor(x_test, dtype=torch.float, device=device), torch.as_tensor(y_test, dtype=torch.float, device=device)), batch_size=params['batch_size'], drop_last=True)
###Output
device: cuda:0
###Markdown
Check for batches with all the same type of samples (same distance values). This issue had caused me a lot of grief.
###Code
print('value counts in whole data', np.unique(y_train, return_counts=True))
count = 0
for i, data in enumerate(train_dl, 0):
input, target = data[0], data[1]
t = torch.unique(target, return_counts=True)[1]
if (t==params['batch_size']).any().item():
count += 1
print('{} ({}%) batches have all same targets'.format(count, np.round(count/len(train_dl)*100, 2) ))
###Output
value counts in whole data (array([1, 2, 3, 4, 5, 6, 7]), array([148719, 148719, 212456, 148719, 148719, 148719, 148719],
dtype=int64))
0 (0.0%) batches have all same targets
###Markdown
Initialize the model, loss function, learning rate scheduler, etc.
###Code
from torchsummary import summary
import sys
import io
torch.manual_seed(9999)
def get_model():
"""
creates a PyTorch model. Change the 'params' dict above to
modify the neural net configuration.
"""
model = torch.nn.Sequential(
torch.nn.Linear(params['input_size'], params['hidden_units_1']),
torch.nn.BatchNorm1d(params['hidden_units_1']),
# torch.nn.Dropout(p=params['do_1']),
torch.nn.ReLU(),
torch.nn.Linear(params['hidden_units_1'], params['hidden_units_2']),
torch.nn.BatchNorm1d(params['hidden_units_2']),
# torch.nn.Dropout(p=params['do_2']),
torch.nn.ReLU(),
torch.nn.Linear(params['hidden_units_2'], params['hidden_units_3']),
torch.nn.BatchNorm1d(params['hidden_units_3']),
# torch.nn.Dropout(p=params['do_3']),
torch.nn.ReLU(),
torch.nn.Linear(params['hidden_units_3'], params['output_size']),
torch.nn.ReLU(),
# torch.nn.Softplus(),
)
model.to(device)
return model
def poisson_loss(y_pred, y_true):
"""
Custom loss function for Poisson model.
Equivalent Keras implementation for reference:
K.mean(y_pred - y_true * math_ops.log(y_pred + K.epsilon()), axis=-1)
For output of shape (2,3) it return (2,) vector. Need to calculate
mean of that too.
"""
y_pred = torch.squeeze(y_pred)
loss = torch.mean(y_pred - y_true * torch.log(y_pred+1e-7))
return loss
model = get_model()
print('model loaded into device=', next(model.parameters()).device)
# this is just to capture model summary as string
old_stdout = sys.stdout
sys.stdout = buffer = io.StringIO()
summary(model, input_size=(params['input_size'], ))
sys.stdout = old_stdout
model_summary = buffer.getvalue()
print('model-summary\n', model_summary)
# later this 'model-summary' string can be written to tensorboard
lr_reduce_patience = 20
lr_reduce_factor = 0.1
loss_fn = poisson_loss
# optimizer = torch.optim.SGD(model.parameters(), lr=params['lr'], momentum=0.9, dampening=0, weight_decay=0, nesterov=True)
optimizer = torch.optim.RMSprop(model.parameters(), lr=params['lr'], alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
if params['lr_sched'] == 'reduce_lr_plateau':
lr_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=lr_reduce_factor, patience=lr_reduce_patience, verbose=True, threshold=0.00001, threshold_mode='rel', cooldown=0, min_lr=1e-9, eps=1e-08)
elif params['lr_sched'] == 'clr':
lr_sched = torch.optim.lr_scheduler.CyclicLR(optimizer, params['min_lr'], params['max_lr'], step_size_up=8*len(train_dl), step_size_down=None, mode=params['lr_sched_mode'], last_epoch=-1, gamma=params['gamma'])
print('lr scheduler type:', lr_sched)
for param_group in optimizer.param_groups:
print(param_group['lr'])
###Output
model loaded into device= cuda:0
model-summary
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Linear-1 [-1, 200] 25,800
BatchNorm1d-2 [-1, 200] 400
ReLU-3 [-1, 200] 0
Linear-4 [-1, 100] 20,100
BatchNorm1d-5 [-1, 100] 200
ReLU-6 [-1, 100] 0
Linear-7 [-1, 50] 5,050
BatchNorm1d-8 [-1, 50] 100
ReLU-9 [-1, 50] 0
Linear-10 [-1, 1] 51
ReLU-11 [-1, 1] 0
================================================================
Total params: 51,701
Trainable params: 51,701
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.01
Params size (MB): 0.20
Estimated Total Size (MB): 0.21
----------------------------------------------------------------
lr scheduler type: <torch.optim.lr_scheduler.CyclicLR object at 0x0000020C0D977E20>
1e-05
###Markdown
Run the next cell to find the optimal learning-rate range while using the One-Cycle LR scheduler. In this case, choosing the learning rate from the graph below didn't help. You can skip this cell if not required.
###Code
#### from https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html
import math
lr_arr = np.zeros((len(train_dl), ))
def find_lr(init_value = 1e-8, final_value=10., beta = 0.98):
global lr_arr
num = len(train_dl)-1
mult = (final_value / init_value) ** (1/num)
lr = init_value
optimizer.param_groups[0]['lr'] = lr
avg_loss = 0.
best_loss = 0.
batch_num = 0
losses = []
log_lrs = []
lrs = []
for data in train_dl:
batch_num += 1
# As before, get the loss for this mini-batch of inputs/outputs
inputs, labels = data
# inputs, labels = Variable(inputs), Variable(labels)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_fn(outputs, labels)
# Compute the smoothed loss
avg_loss = beta * avg_loss + (1-beta) *loss.item()
smoothed_loss = avg_loss / (1 - beta**batch_num)
# Stop if the loss is exploding
if batch_num > 1 and smoothed_loss > 4 * best_loss:
return log_lrs, losses
# Record the best loss
if smoothed_loss < best_loss or batch_num==1:
best_loss = smoothed_loss
# Store the values
losses.append(smoothed_loss)
log_lrs.append(math.log10(lr))
lrs.append(lr)
lr_arr[batch_num-1] = lr
# Do the SGD step
loss.backward()
optimizer.step()
# Update the lr for the next step
lr *= mult
optimizer.param_groups[0]['lr'] = lr
return log_lrs, losses
lrs, losses = find_lr()
print('returned', len(losses))
from matplotlib import pyplot as plt  # matplotlib is not imported earlier in this notebook
plt.figure()
plt.plot(lr_arr[:len(lrs)], losses)
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.title('LR range plot')
plt.xlabel('Learning rates')
plt.ylabel('Losses')
plt.show()
def evaluate(model, dl):
"""
This function is used to evaluate the model with validation.
args: model and data loader
returns: loss
"""
model.eval()
final_loss = 0.0
count = 0
with torch.no_grad():
for data_cv in dl:
inputs, dist_true = data_cv[0], data_cv[1]
count += len(inputs)
outputs = model(inputs)
loss = loss_fn(outputs, dist_true)
final_loss += loss.item()
return final_loss/len(dl)
def save_checkpoint(state, state_save_path):
if not os.path.exists("/".join(state_save_path.split('/')[:-1])):
os.makedirs("/".join(state_save_path.split('/')[:-1]))
torch.save(state, state_save_path)
###Output
_____no_output_____
###Markdown
The cell below trains the model and records the results in TensorBoard.
###Code
%%time
# %load_ext tensorboard
import time
import copy
from tqdm.auto import tqdm
from utils import *
from torch.utils.tensorboard import SummaryWriter
# from tensorboardX import SummaryWriter
last_loss = 0.0
min_val_loss = np.inf
patience_counter = 0
early_stop_patience = 50
best_model = None
train_losses = []
val_losses = []
output_path = '../outputs'
tb_path = output_path+'/logs/runs'
run_path = tb_path+'/run47_smallerNN_noDO'
checkpoint_path = run_path+'/checkpoints'
resume_training = False
start_epoch = 0
iter_count = 0
if os.path.exists(run_path):
raise Exception("this experiment already exists!")
if not os.path.exists(checkpoint_path):
os.makedirs(checkpoint_path)
writer = SummaryWriter(log_dir=run_path, comment='', purge_step=None, max_queue=1, flush_secs=30, filename_suffix='')
writer.add_graph(model, input_to_model=torch.zeros(params['input_size']).view(1,-1).cuda(), verbose=False) # not useful
# resume training on a saved model
if resume_training:
prev_checkpoint_path = '../outputs/logs/runs/run42_clr_g0.95/checkpoints' # change this
suffix = '1592579305.7273214' # change this
model.load_state_dict(torch.load(prev_checkpoint_path+'/model_'+suffix+'.pt'))
optimizer.load_state_dict(torch.load(prev_checkpoint_path+'/optim_'+suffix+'.pt'))
lr_sched.load_state_dict(torch.load(prev_checkpoint_path+'/sched_'+suffix+'.pt'))
state = torch.load(prev_checkpoint_path+'/state_'+suffix+'.pt')
start_epoch = state['epoch']
writer.add_text('loaded saved model:', str(params))
print('loaded saved model', params)
writer.add_text('run_change', 'Smaller 3 hidden layer NN, no DO' + str(params))
torch.backends.cudnn.benchmark = True
print('total epochs=', len(range(start_epoch, start_epoch+params['epochs'])))
# with torch.autograd.detect_anomaly(): # use this to detect bugs while training
for param_group in optimizer.param_groups:
print('lr-check', param_group['lr'])
for epoch in range(start_epoch, start_epoch+params['epochs']): # loop over the dataset multiple times
running_loss = 0.0
stime = time.time()
for i, data in enumerate(train_dl, 0):
iter_count += 1
# get the inputs; data is a list of [inputs, dist_true]
model.train()
inputs, dist_true = data[0], data[1]
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
loss = loss_fn(outputs, dist_true)
loss.backward()
optimizer.step()
running_loss += loss.item()
last_loss = loss.item()
for param_group in optimizer.param_groups:
curr_lr = param_group['lr']
writer.add_scalar('monitor/lr-iter', curr_lr, iter_count-1)
if not isinstance(lr_sched, torch.optim.lr_scheduler.ReduceLROnPlateau):
lr_sched.step()
val_loss = evaluate(model, val_dl)
if isinstance(lr_sched, torch.optim.lr_scheduler.ReduceLROnPlateau):
lr_sched.step(val_loss)
if val_loss < min_val_loss:
min_val_loss = val_loss
patience_counter = 0
best_model = copy.deepcopy(model)
print(epoch,"> Best val_loss model saved:", round(val_loss, 4))
else:
patience_counter += 1
train_loss = running_loss/len(train_dl)
train_losses.append(train_loss)
val_losses.append(val_loss)
writer.add_scalar('loss/train', train_loss, epoch)
writer.add_scalar('loss/val', val_loss, epoch)
for param_group in optimizer.param_groups:
curr_lr = param_group['lr']
writer.add_scalar('monitor/lr-epoch', curr_lr, epoch)
if patience_counter > early_stop_patience:
print("Early stopping at epoch {}. current val_loss {}".format(epoch, val_loss))
break
if epoch % 10 == 0:
torch.save(best_model.state_dict(), checkpoint_path+'/model_cp.pt')
torch.save(optimizer.state_dict(), checkpoint_path+'/optim_cp.pt')
torch.save(lr_sched.state_dict(), checkpoint_path+'/sched_cp.pt')
writer.add_text('checkpoint saved', 'at epoch='+str(epoch))
print("epoch:{} -> train_loss={},val_loss={} - {}".format(epoch, round(train_loss, 5),round(val_loss, 5), seconds_to_minutes(time.time()-stime)))
print('Finished Training')
ts = str(time.time())
best_model_path = checkpoint_path+'/model_'+ts+'.pt'
opt_save_path = checkpoint_path+'/optim_'+ts+'.pt'
sched_save_path = checkpoint_path+'/sched_'+ts+'.pt'
state_save_path = checkpoint_path+'/state_'+ts+'.pt'
state = {'epoch': epoch+1,
'model_state': model.state_dict(),
'optim_state': optimizer.state_dict(),
'last_train_loss': train_losses[-1],
'last_val_loss': val_losses[-1],
'total_iters': iter_count
}
save_checkpoint(state, state_save_path)
# sometimes loading from state dict is not working, so...
torch.save(best_model.state_dict(), best_model_path)
torch.save(optimizer.state_dict(), opt_save_path)
torch.save(lr_sched.state_dict(), sched_save_path)
# run44 val = -0.1126
# Top runs: run44, run26, run21, run20, run19_batch_norm_other_changes, run18_trngl2, run10_3
###Output
total epochs= 500
lr-check 1e-05
0 > Best val_loss model saved: -0.0781
epoch:0 -> train_loss=1.9969,val_loss=-0.07814 - 0.0 minutes 42.0 seconds
1 > Best val_loss model saved: -0.0936
2 > Best val_loss model saved: -0.0975
3 > Best val_loss model saved: -0.1022
4 > Best val_loss model saved: -0.103
5 > Best val_loss model saved: -0.1042
6 > Best val_loss model saved: -0.1051
7 > Best val_loss model saved: -0.1059
8 > Best val_loss model saved: -0.1068
9 > Best val_loss model saved: -0.1075
10 > Best val_loss model saved: -0.108
epoch:10 -> train_loss=-1.97189,val_loss=-0.10803 - 0.0 minutes 41.0 seconds
11 > Best val_loss model saved: -0.1085
12 > Best val_loss model saved: -0.1091
13 > Best val_loss model saved: -0.1098
15 > Best val_loss model saved: -0.111
epoch:20 -> train_loss=-1.97394,val_loss=-0.10876 - 0.0 minutes 42.0 seconds
epoch:30 -> train_loss=-1.97593,val_loss=-0.11001 - 0.0 minutes 42.0 seconds
31 > Best val_loss model saved: -0.1116
epoch:40 -> train_loss=-1.97498,val_loss=-0.10959 - 0.0 minutes 42.0 seconds
47 > Best val_loss model saved: -0.1117
epoch:50 -> train_loss=-1.97652,val_loss=-0.11064 - 0.0 minutes 41.0 seconds
epoch:60 -> train_loss=-1.97646,val_loss=-0.11049 - 0.0 minutes 41.0 seconds
63 > Best val_loss model saved: -0.1118
epoch:70 -> train_loss=-1.97621,val_loss=-0.10948 - 0.0 minutes 41.0 seconds
epoch:80 -> train_loss=-1.97773,val_loss=-0.11043 - 0.0 minutes 39.0 seconds
epoch:90 -> train_loss=-1.97673,val_loss=-0.1101 - 0.0 minutes 40.0 seconds
epoch:100 -> train_loss=-1.97709,val_loss=-0.11027 - 0.0 minutes 41.0 seconds
epoch:110 -> train_loss=-1.97768,val_loss=-0.11046 - 0.0 minutes 41.0 seconds
Early stopping at epoch 114. current val_loss -0.11061588690749237
Finished Training
Wall time: 1h 18min 59s
###Markdown
Now test the model with test data.
###Code
def test(model, dl):
model.eval()
final_loss = 0.0
count = 0
y_hat = []
with torch.no_grad():
for data_cv in dl:
inputs, dist_true = data_cv[0], data_cv[1]
count += len(inputs)
outputs = model(inputs)
y_hat.extend(outputs.tolist())
loss = loss_fn(outputs, dist_true)
final_loss += loss.item()
return final_loss/len(dl), y_hat
model.load_state_dict(torch.load(best_model_path))
test_loss, y_hat = test(model, test_dl)
print(test_loss)
writer.add_text('test-loss', str(test_loss))
try:
if scaler:
y_hat = scaler.inverse_transform(y_hat)
y_test = scaler.inverse_transform(y_test)
except:
pass
y_hat[50:60], y_test[50:60]
from sklearn.metrics import accuracy_score
writer.add_text('Accuracy=', str(accuracy_score(y_test[:len(y_hat)], np.round(y_hat))))
print(str(accuracy_score(y_test[:len(y_hat)], np.round(y_hat))))
# show distance value wise precision (bar chart)
# predicted values are fewer than the real test samples because the last samples from test are dropped to maintain the same batch size (drop_last=True)
from matplotlib import pyplot as plt
y_hat_ = np.array(y_hat).squeeze()
y_test_ = y_test[:len(y_hat)]
print(len(y_test), len(y_hat))
dist_accuracies = []
dist_counts = []
for i in range(1, 8):
mask = y_test_==i
dist_values = y_test_[mask]
dist_preds = np.round(y_hat_[mask])
dist_accuracies.append(np.sum(dist_values == dist_preds)*100/len(dist_values))
dist_counts.append(len(dist_values))
fig = plt.figure(figsize=(10,7))
plt.subplot(2,1,1)
plt.bar(range(1,8), dist_accuracies)
for index, value in enumerate(dist_accuracies):
plt.text(index+0.8, value, str(np.round(value, 2))+'%')
plt.title('distance-wise accuracy')
plt.xlabel('distance values')
plt.ylabel('accuracy')
plt.subplot(2,1,2)
plt.bar(range(1,8), dist_counts)
for index, value in enumerate(dist_counts):
plt.text(index+0.8, value, str(value))
plt.title('distance-wise count')
plt.xlabel('distance values')
plt.ylabel('counts')
fig.tight_layout(pad=3.0)
plt.show()
writer.add_figure('test/results', fig)
writer.add_text('class avg accuracy', str(np.mean(dist_accuracies)))
print('class avg accuracy', np.mean(dist_accuracies))
writer.add_text('MSE', str(np.mean((np.array(y_hat).squeeze()-y_test[:len(y_hat)])**2)))
print('MSE', np.mean((np.array(y_hat).squeeze()-y_test[:len(y_hat)])**2))
writer.add_text('MAE', str(np.mean(np.abs(np.array(y_hat).squeeze() - y_test[:len(y_hat)]))))
print('MAE', np.mean(np.abs(np.array(y_hat).squeeze() - y_test[:len(y_hat)])))
writer.add_text('ending_remark', 'no dropout caused faster training but final val error/accuracy was worse than with-do training')
# to shutdown system once training is over. For over-night training sessions.
os.system("shutdown /s /t 100")
# to abort shutdown timer
os.system("shutdown /a")
###Output
_____no_output_____
###Markdown
The following cells can be ignored. They include some rough work and old TODO lists. Since training the model gives poor results, try to figure out the issue: train with a small number of samples from one distance value, then add the other values.
###Code
classes = [1, 2, 4]
x_temp = []
y_temp = []
for class_ in classes:
x_temp.extend(x_train[y_train==class_][:100])
y_temp.extend(y_train[y_train==class_][:100])
x_temp, y_temp = unison_shuffle_copies(np.array(x_temp), np.array(y_temp))
x_temp = torch.tensor(x_temp, dtype=torch.float32, device=device)
y_temp = torch.tensor(y_temp, dtype=torch.float32, device=device)
loss_history = []
for epoch in range(5000): # loop over the dataset multiple times
running_loss = 0.0
stime = time.time()
# get the inputs; data is a list of [inputs, dist_true]
model.train()
inputs, dist_true = x_temp, y_temp
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
loss = loss_fn(outputs, dist_true)
loss.backward()
optimizer.step()
loss_history.append(loss.item())
from utils import *
plot(loss_history, 'class 1 train')
model.eval()
op_temp = model(x_temp)
print(loss_history[-1])
for i,j in zip(y_temp[:20], op_temp.squeeze().tolist()[:20]):
print(i.item(), '--', j)
###Output
_____no_output_____
###Markdown
CNV-espresso training procedure
###Code
from __future__ import print_function
import os
import re
import copy
import random
import datetime
import timeit
import PIL
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
from sklearn.metrics import classification_report
from sklearn import metrics
from sklearn.model_selection import KFold, StratifiedKFold
import sklearn
import tensorflow as tf
from tensorflow import keras
import keras.preprocessing
from keras.models import Sequential, Model
from keras.utils import to_categorical
from keras.layers import Dense, Conv2D, MaxPooling2D, Dropout, Flatten
from keras.models import load_model
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
from keras import backend
import function_dl as func_dl
import function as func
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Variables
###Code
os.environ['CUDA_VISIBLE_DEVICES'] = "1"
physical_devices = tf.config.experimental.list_physical_devices('GPU')
physical_devices
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
project_dir = '/path/to/project'
output_model_dir = project_dir + '/train/'
batch_size = 32
epochs = 20
true_del_file = project_dir + '/train/true_del.list'
true_dup_file = project_dir + '/train/true_dup.list'
false_del_file = project_dir + '/train/false_del.list'
false_dup_file = project_dir + '/train/false_dup.list'
img_width, img_height = 224, 224
seed = 2021
###Output
_____no_output_____
###Markdown
Importing data: file paths
###Code
## For rare CNVs
true_del_df = pd.read_csv(true_del_file, header=0,sep='\t')
false_del_df = pd.read_csv(false_del_file, header=0,sep='\t')
true_dup_df = pd.read_csv(true_dup_file, header=0,sep='\t')
false_dup_df = pd.read_csv(false_dup_file, header=0,sep='\t')
true_del_images_path_list = true_del_df['img_path']
false_del_images_path_list = false_del_df['img_path']
true_dup_images_path_list = true_dup_df['img_path']
false_dup_images_path_list = false_dup_df['img_path']
print("The shape of each type:")
print("True DEL:", true_del_images_path_list.shape)
print("True DUP:", true_dup_images_path_list.shape)
print("False DEL:", false_del_images_path_list.shape)
print("False DUP:", false_dup_images_path_list.shape)
###Output
_____no_output_____
###Markdown
Loading images
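A hedged sketch of what an image-loading helper such as `func_dl.loadImgs` presumably does (the actual implementation in `function_dl.py` may differ):

```python
import numpy as np
from tensorflow.keras.preprocessing import image as keras_image

def load_imgs_sketch(img_path_list, img_width, img_height):
    """Read each image, resize it, and stack everything into one numpy array."""
    arrays = []
    for path in img_path_list:
        img = keras_image.load_img(path, target_size=(img_height, img_width))
        arrays.append(keras_image.img_to_array(img))
    return np.asarray(arrays)
```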
###Code
# # entire cnv
true_del_img_np = func_dl.loadImgs(true_del_images_path_list, img_width, img_height)
true_del_img_np.shape
false_del_img_np = func_dl.loadImgs(false_del_images_path_list, img_width, img_height)
false_del_img_np.shape
true_dup_img_np = func_dl.loadImgs(true_dup_images_path_list, img_width, img_height)
true_dup_img_np.shape
false_dup_img_np = func_dl.loadImgs(false_dup_images_path_list, img_width, img_height)
false_dup_img_np.shape
###Output
_____no_output_____
###Markdown
Generate labels
###Code
# Three classes
true_del_label = [0 for i in range(0,len(true_del_img_np))]
false_del_label = [1 for i in range(0,len(false_del_img_np))]
true_dup_label = [2 for i in range(0,len(true_dup_img_np))]
false_dup_label = [1 for i in range(0,len(false_dup_img_np))]
print(true_del_label[0:5], false_del_label[0:5], true_dup_label[0:5], false_dup_label[0:5])
print(len(true_del_label), len(false_del_label), len(true_dup_label), len(false_dup_label))
###Output
_____no_output_____
###Markdown
Combine data
###Code
combined_cnv_info_df = true_del_df.append(false_del_df, ignore_index=True)
combined_cnv_info_df = combined_cnv_info_df.append(true_dup_df, ignore_index=True)
combined_cnv_info_df = combined_cnv_info_df.append(false_dup_df, ignore_index=True)
combined_img = np.vstack((true_del_img_np, false_del_img_np, true_dup_img_np, false_dup_img_np))
combined_label = true_del_label + false_del_label + true_dup_label + false_dup_label
len(combined_label)
###Output
_____no_output_____
###Markdown
Backup or restore data: Backup
###Code
## Backup
backup_path = project_dir +'/train/data_backup/'
os.makedirs(backup_path, exist_ok=True)
project_name = 'TBD'
combined_cnv_info_df.to_csv(backup_path+'rare_cnv_info.csv')
np.save(backup_path+'rare_cnv_img', combined_img)
np.save(backup_path+'rare_cnv_label_'+str(len(np.unique(combined_label)))+'classes', combined_label)
###Output
_____no_output_____
###Markdown
Restore
###Code
backup_path = project_dir +'/train/data_backup/'
project_name = 'TBD'
nClasses = 3
combined_img = np.load(backup_path + project_name + '_img.npy')
combined_label = np.load(backup_path+'rare_cnv_label_'+str(nClasses)+'classes'+ '.npy')
combined_cnv_info_df = pd.read_csv(backup_path+project_name+'_info.csv')
print("Project: '%s' dataset loaded."%project_name)
print(" -- Shape of image array: ", combined_img.shape)
print(" -- Shape of label : ", len(combined_label))
try:
print(" -- Shape of CNV info : ", combined_cnv_info_df.shape)
except:
print("Error")
###Output
_____no_output_____
###Markdown
Normalization
###Code
# Find the shape of input images and create the variable input_shape
nRows,nCols,nDims = combined_img.shape[1:]
input_shape = (nRows, nCols, nDims)
print("The shape of input tensor:",input_shape)
# Change to float datatype
combined_img = combined_img.astype('float32')
# Scale the data to lie between 0 to 1
combined_img /= 255
# Change the labels from integer to categorical data
combined_label_one_hot = to_categorical(combined_label)
###Output
_____no_output_____
###Markdown
The class labels of the training data:
###Code
classes = np.unique(combined_label)
nClasses = len(classes)
print('Total number of outputs : ', nClasses)
print('Output classes : ', classes)
print("3 classes label: 0-True deletion; 1-Diploid (False del & False dup); 2-True duplication")
# Let's randomly check one CNV image
item = random.randint(0,len(combined_label))
print("Label:", combined_label[item])
func_dl.showImg(combined_img[item])
print(combined_img[item][100][0:10])
###Output
_____no_output_____
###Markdown
Train the convolutional neural networks. Split the dataset into training (80%) and test (20%) sets
###Code
## split image arrays
train_img, test_img, train_label, test_label, train_cnv_info_df, test_cnv_info_df = train_test_split(combined_img,
combined_label_one_hot,
combined_cnv_info_df,
test_size=0.2,
shuffle=True,
random_state=seed)
train_img, val_img, train_label, val_label, train_cnv_info_df, val_cnv_info_df = train_test_split(train_img,
train_label,
train_cnv_info_df,
test_size=0.25,
shuffle=True,
random_state=seed) # 0.25*0.8=0.2
combined_img.shape, train_img.shape, val_img.shape, test_img.shape
combined_label_one_hot.shape, train_label.shape, val_label.shape, test_label.shape
###Output
_____no_output_____
###Markdown
CNN (transfer learning and fine-tuning) using the pretrained MobileNet v1 architecture. First, we keep all the weights of the base model frozen and train only the FC layers.
###Code
model_name='MobileNet_v1_fine_tuning'
base_model = tf.keras.applications.MobileNet(
weights='imagenet', # Load weights pre-trained model.
input_shape=(224, 224, 3),
include_top=False) # Do not include the ImageNet classifier at the top.
base_model.trainable = False
inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)
# Convert features of shape `base_model.output_shape[1:]` to vectors
x = keras.layers.GlobalAveragePooling2D()(x)
# A Dense classifier with a single unit (binary classification)
outputs = keras.layers.Dense(nClasses,activation='softmax')(x)
model = keras.Model(inputs, outputs)
model.summary()
model.compile(optimizer=keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy', func_dl.f1_m, func_dl.precision_m, func_dl.recall_m])
print("Training by MobileNet_v1 model ...")
model_file = output_model_dir + project_name + "_" + model_name + "_" + str(nClasses) + "classes.h5"
es = EarlyStopping(monitor ='val_loss', mode='min', verbose=1, patience=3)
mc = ModelCheckpoint(model_file,
monitor='val_accuracy',
mode ='max',
verbose=1,
save_best_only=True)
history = model.fit(train_img, train_label,
batch_size = batch_size,
epochs =epochs,
verbose=1,
validation_data=(val_img, val_label),
callbacks=[es, mc])
print("\n")
loss, accuracy, f1_score, precision, recall = model.evaluate(test_img, test_label)
func_dl.draw_loss_accuracy_curves(history, project_name)
func_dl.confusion_matrix(model, test_img, test_label, nClasses)
fpr, tpr, thresholds, auc = func_dl.pred_roc_data(model, test_img, test_label)
func_dl.draw_single_roc_curve(tpr, fpr, auc)
###Output
_____no_output_____
###Markdown
Fine-tuning. Second, once the model has converged on the training data, we unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate.
###Code
print("Fine tuning by MobileNet_v1 model ...")
model_file = output_model_dir + project_name + "_" + model_name + "_" + str(nClasses) + "classes.h5"
base_model.trainable=True
model.summary()
model.compile(optimizer=keras.optimizers.Adam(1e-5),
loss='categorical_crossentropy', metrics=['accuracy', func_dl.f1_m, func_dl.precision_m, func_dl.recall_m])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3)
mc = ModelCheckpoint(model_file,
monitor='val_accuracy',
mode ='max',
verbose=1,
save_best_only=True)
history = model.fit(train_img, train_label,
batch_size = batch_size,
epochs = epochs,
verbose = 1,
validation_data = (val_img, val_label),
callbacks = [es, mc])
print("\n")
loss, accuracy, f1_score, precision, recall = model.evaluate(test_img, test_label)
func_dl.draw_loss_accuracy_curves(history, project_name)
func_dl.confusion_matrix(model, test_img, test_label, nClasses)
fpr, tpr, thresholds, auc = func_dl.pred_roc_data(model, test_img, test_label)
func_dl.draw_single_roc_curve(tpr, fpr, auc)
func.showDateTime()
print("[Done]. Please check the trained model at",model_file)
###Output
_____no_output_____
###Markdown
Upward Resample Dataset
###Code
def balance_dataset(df, col, ratio=0.5, balance_method='avg', random_state = 42):
vc = df[col].value_counts()
balanced_df = pd.DataFrame(columns=df.columns)
if balance_method == 'avg':
sample_size = int(df.shape[0] / len(vc))
for i in vc.index:
replace = (vc[i] < sample_size)
temp = df[df[col] == i]
balanced_df = balanced_df.append(temp.sample(n=sample_size, replace=replace, random_state=random_state), ignore_index=True)
return balanced_df
if balance_method == 'upward':
highest_cat = vc.index[0]
highest_num = vc[highest_cat]
balanced_df = balanced_df.append(df[df[col] == highest_cat])
for i in vc.index[1:]:
num = vc[i]
temp = df[df[col] == i]
balanced_df = balanced_df.append(temp)
sample_ratio = num / highest_num
if sample_ratio < ratio:
sample_size = int((ratio-sample_ratio) * highest_num)
balanced_df = balanced_df.append(temp.sample(n=sample_size, replace=True, random_state=random_state), ignore_index=True)
return balanced_df
if balance_method == 'downward':
lowest_cat = vc.index[-1]
lowest_num = vc[lowest_cat]
balanced_df = balanced_df.append(df[df[col] == lowest_cat])
for i in vc.index[:-1]:
num = vc[i]
temp = df[df[col] == i]
sample_ratio = lowest_num / num
if sample_ratio < ratio:
sample_size = int(sample_ratio * num)
balanced_df = balanced_df.append(temp.sample(n=sample_size, replace=False, random_state=random_state), ignore_index=True)
return balanced_df
return None
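# Hedged usage example of balance_dataset (illustrative only, not part of the original pipeline):
# upward-resample the 'owners' column so every class reaches at least half the size
# of the largest class, then inspect the new class counts.
_example_balanced = balance_dataset(df, col='owners', ratio=0.5, balance_method='upward')
print(_example_balanced['owners'].value_counts())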
def MLpipe_KFold_with_resample(X,y,preprocessor,ML_algo,param_grid, resample='avg'):
'''
This function splits the data to other/test (80/20) and then applies KFold with 4 folds to other.
The RMSE is minimized in cross-validation.
'''
test_scores = []
best_models = []
std_ftrs = X.columns
# loop through 10 random states (2 points)
# rmse = lambda x, y: sqrt(mean_squared_error(x, y))
for i in range(10):
# split data to other/test 80/20, and the use KFold with 4 folds (2 points)
random_state = 42*i
X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=0.1, stratify=y, random_state = random_state)
X_other['owners'] = y_other['owners']
df = balance_dataset(X_other, 'owners')
y_other = df.loc[:, df.columns == 'owners']
X_other = df.loc[:, df.columns != 'owners']
y_other_prep = target_transformer.fit_transform(y_other)
y_test_prep = target_transformer.fit_transform(y_test)
kf = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state)
# preprocess the data (1 point)
pipe = make_pipeline(preprocessor, ML_algo)
# print(pipe.get_params().keys())
# loop through the hyperparameter combinations or use GridSearchCV (2 points)
grid = GridSearchCV(estimator=pipe, param_grid=param_grid, scoring='f1_weighted', cv=kf, return_train_score=True)
# for each combination, calculate the train and validation scores using the evaluation metric
grid.fit(X_other, y_other_prep.ravel())
# find which hyperparameter combination gives the best validation score (1 point)
test_score = grid.score(X_test, y_test_prep.ravel())
# calculate the test score (1 point)
test_scores.append(test_score)
best_models.append(grid.best_params_)
# append the test score and the best model to the lists (1 point)
return best_models, test_scores
## for debug use
y = df.loc[:,df.columns=='owners']
X = df.loc[:, df.columns != 'owners']
ML_algo = LogisticRegression()
param_grid = {'logisticregression__C': [1e1], 'logisticregression__multi_class': ['ovr', 'multinomial'], 'logisticregression__max_iter': [10000]}
models, scores = MLpipe_KFold_with_resample(X,y,clf,ML_algo,param_grid)
print(scores)
# sklearn package algorithms traning
y = df.loc[:, df.columns == 'owners']
X = df.loc[:, df.columns != 'owners']
algos = {
'SVC': SVC(),
'KNC': KNeighborsClassifier(),
'RFC': RandomForestClassifier(),
'l2C': RidgeClassifier(),
'SGDC': SGDClassifier(),
'LOGR': LogisticRegression()
}
params = {
'SVC': {'svc__C': [1e-2,1e-1,1,1e1,1e2,1e3]},
'KNC': {'kneighborsclassifier__n_neighbors': [1,10,50,100]},
'RFC': {'randomforestclassifier__max_depth': [5,10,30,50], 'randomforestclassifier__max_features': [0.5,0.75,1.0]},
'l2C': {'ridgeclassifier__alpha': [1e-3,1e-2,1,1e1,1e2,1e3]},
'SGDC': {'sgdclassifier__alpha': [1e-3,1e-2,1,1e1,1e2,1e3], 'sgdclassifier__l1_ratio': np.linspace(1e-1, 1, 4) ,'sgdclassifier__penalty': ['elasticnet']},
'LOGR': {'logisticregression__C': [1e-2,1e-1,1,1e1,1e2,1e3], 'logisticregression__multi_class': ['ovr', 'multinomial'], 'logisticregression__max_iter': [10000]}
}
models_dict_resample = {}
scores_dict_resample = {}
for algo in algos:
print("{} start".format(algo))
start = time.time()
models, scores = MLpipe_KFold(X,y,clf,algos[algo],params[algo])
print("{} ends: {} seconds".format(algo, time.time()-start))
models_dict_resample[algo] = models
scores_dict_resample[algo] = (np.mean(scores), np.std(scores))
rank_by_mean_resample = list((k, v) for k, v in sorted(scores_dict_resample.items(), key=lambda item: item[1][0], reverse=False))
rank_by_std_resample = list((k, v) for k, v in sorted(scores_dict_resample.items(), key=lambda item: item[1][1], reverse=False))
for i in rank_by_mean_resample:
print(i[0], ": ", i[1][0], "Parameters: ", models_dict[i[0]][0])
print("-----------------rank std--------------------")
for i in rank_by_std_resample:
print(i[0], ": ", i[1][1], "Parameters: ", models_dict[i[0]][0])
###Output
_____no_output_____
###Markdown
XGBoost
###Code
def XGB_KFold(X, y, preprocessor, params, algo):
validation_scores = []
test_scores = []
train_scores = []
for i in range(1):
random_state = 42 * i
X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=0.1, stratify=y, random_state = random_state)
random_state = 42*i
le = LabelEncoder()
kf = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state)
for train_index, val_index in kf.split(X_other, y_other): # group by User_other
X_train = X_other.iloc[train_index]
y_train = y_other.iloc[train_index]
X_cv = X_other.iloc[val_index]
y_cv = y_other.iloc[val_index]
X_train_prep = clf.fit_transform(X_train)
X_cv_prep = clf.transform(X_cv)
X_test_prep = clf.transform(X_test)
y_train_prep = le.fit_transform(y_train)
y_cv_prep = le.fit_transform(y_cv)
y_test_prep = le.transform(y_test)
# model
test_dict = {}
train_dict = {}
vali_dict = {}
for i in range(len(param_grid['max_depth'])):
XGB = xgboost.XGBClassifier()
XGB.set_params(**ParameterGrid(param_grid)[i])
XGB.fit(X_train_prep,y_train_prep,early_stopping_rounds=50,eval_set=[(X_cv_prep, y_cv_prep)], verbose=False)
y_cv_pred = XGB.predict(X_cv_prep)
validation_score = accuracy_score(y_cv_prep, y_cv_pred)
y_train_pred = XGB.predict(X_train_prep)
train_score = accuracy_score(y_train_prep, y_train_pred)
y_test_pred = XGB.predict(X_test_prep)
test_score = accuracy_score(y_test_prep, y_test_pred)
md = param_grid['max_depth'][i]
test_dict[md] = test_score
train_dict[md] = train_score
vali_dict[md] = validation_score
validation_scores.append(vali_dict)
test_scores.append(test_dict)
train_scores.append(train_dict)
    print(test_scores)
    return train_scores, validation_scores, test_scores
# xgboost training
y = df['owners']
X = df.loc[:, df.columns != 'owners']
XGB = xgboost.XGBClassifier()
param_grid = {"learning_rate": [0.03],
"n_estimators": [10000],
"seed": [0],
# "reg_alpha": [0e0, 1e-2, 1e-1, 1e0, 1e1, 1e2],
# "reg_lambda": [0e0, 1e-2, 1e-1, 1e0, 1e1, 1e2],
"missing": [np.nan],
"max_depth": [1,3,10,30,100],
"colsample_bytree": [0.9],
"subsample": [0.66]}
train_scores, validation_scores, test_scores = XGB_KFold(X, y, clf, XGB, param_grid)
train_mean = {}
vali_mean = {}
train_std = {}
vali_std = {}
for i in param_grid['max_depth']:
train_mean[i] = np.mean([d[i] for d in train_scores])
train_std[i] = np.std([d[i] for d in train_scores])
vali_mean[i] = np.mean([d[i] for d in validation_scores])
vali_std[i] = np.std([d[i] for d in validation_scores])
fig, (ax1,ax2) = plt.subplots(1,2, figsize=(16,6))
ax1.plot(list(train_mean.keys()), list(train_mean.values()), label='train')
ax1.plot(list(vali_mean.keys()), list(vali_mean.values()), label='validation')
ax1.legend()
ax1.set_title('mean')
ax2.plot(list(train_std.keys()), list(train_std.values()), label='train')
ax2.plot(list(vali_std.keys()), list(vali_std.values()), label='validation')
ax2.legend()
ax2.set_title('std')
plt.show()
###Output
_____no_output_____ |
AI_Class/021/Dnn.ipynb | ###Markdown
DNN practice. The given code consists of the import lib & data and Modeling parts. The Titanic dataset is used. Build a DNN model based on what you have learned so far. - Goal: exceed 85% accuracy. Import library & data
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
df = sns.load_dataset('titanic')
###Output
_____no_output_____
###Markdown
EDA (refer to the KNN section in the table of contents)
###Code
df.head()
df.isna().sum()
df.describe()
df.info()
for i in range(len(df.columns.values)):
print(df.columns.values[i])
print(df[df.columns.values[i]].unique())
print()
df = df.drop(['deck', 'embark_town'], axis=1)
print(df.columns.values)
df = df.dropna(subset=['age'], how='any', axis=0)
print(len(df))
most_freq = df['embarked'].value_counts(dropna=True).idxmax()
print(most_freq)
df['embarked'].fillna(most_freq, inplace=True)
df.isna().sum()
df = df[['survived', 'pclass', 'sex', 'age', 'sibsp', 'parch', 'embarked']]
onehot_sex = pd.get_dummies(df['sex'])
df = pd.concat([df, onehot_sex], axis=1)
onehot_embarked = pd.get_dummies(df['embarked'], prefix='town')
df = pd.concat([df, onehot_embarked], axis=1)
df.drop(['sex', 'embarked'], axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Modeling
###Code
X=df[['pclass', 'age', 'sibsp', 'parch', 'female', 'male', 'town_C', 'town_Q', 'town_S']]
y=df['survived']
X = StandardScaler().fit(X).transform(X)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=20)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(256, input_shape=(9,), activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense((1), activation='sigmoid'))
model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)
pd.DataFrame(history.history).plot(figsize=(12, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
###Output
_____no_output_____
###Markdown
DNN practice. The given code consists of the import lib & data and Modeling parts. The Titanic dataset is used. Build a DNN model based on what you have learned so far. - Goal: exceed 85% accuracy. Import library & data
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
df = sns.load_dataset('titanic')
###Output
_____no_output_____
###Markdown
Modeling
###Code
from keras.models import Sequential
from keras.layers.core import Dense
model = Sequential()
model.add(Dense(256, input_shape=(9,), activation='relu'))  # 9 input features after the one-hot preprocessing used elsewhere in this notebook
model.add(Dense(256, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # single sigmoid output for the binary survival target
model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)
pd.DataFrame(history.history).plot(figsize=(12, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
###Output
_____no_output_____
###Markdown
DNN practice. The given code consists of the import lib & data and Modeling parts. The Titanic dataset is used. Build a DNN model based on what you have learned so far. - Goal: exceed 85% accuracy. Import library & data
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
df = sns.load_dataset('titanic')
###Output
_____no_output_____
###Markdown
EDA (refer to the KNN section in the table of contents)
###Code
df.head()
df.isna().sum()
df.describe()
df.info()
for i in range(len(df.columns.values)):
print(df.columns.values[i])
print(df[df.columns.values[i]].unique())
print()
df = df.drop(['deck', 'embark_town'], axis=1)
print(df.columns.values)
df = df.dropna(subset=['age'], how='any', axis=0)
print(len(df))
most_freq = df['embarked'].value_counts(dropna=True).idxmax()
print(most_freq)
df['embarked'].fillna(most_freq, inplace=True)
df.isna().sum()
df = df[['survived', 'pclass', 'sex', 'age', 'sibsp', 'parch', 'embarked']]
onehot_sex = pd.get_dummies(df['sex'])
df = pd.concat([df, onehot_sex], axis=1)
onehot_embarked = pd.get_dummies(df['embarked'], prefix='town')
df = pd.concat([df, onehot_embarked], axis=1)
df.drop(['sex', 'embarked'], axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Modeling
###Code
X=df[['pclass', 'age', 'sibsp', 'parch', 'female', 'male', 'town_C', 'town_Q', 'town_S']]
y=df['survived']
X = StandardScaler().fit(X).transform(X)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=20)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(256, input_shape=(9,), activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense((1), activation='sigmoid'))
model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)
pd.DataFrame(history.history).plot(figsize=(12, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
###Output
_____no_output_____ |
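###Markdown
To check the stated goal of 85% accuracy, a short evaluation step such as the minimal sketch below can be run, reusing the `model` and the test split defined above.
###Code
# evaluate the trained model on the held-out test split
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy:', round(acc, 3))
###Output
_____no_output_____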
Task_2_Clustering.ipynb | ###Markdown
**Author : Chiranjeev Sharma** Task-2 Prediction using Unsupervised ML [GRIP @ The Spark Foundation](https://www.thesparksfoundationsingapore.org/) **Objective - In this task I try to predict the optimum number of clusters and represent it visually using K-Means clustering on the given ‘Iris’ dataset.** Dataset URL:"https://bit.ly/3kXTdox" Import Libraries
###Code
# importing the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
from sklearn import datasets
###Output
_____no_output_____
###Markdown
Loading the Data
###Code
from google.colab import files
uploaded= files.upload()
# Import Data
iris_df=pd.read_csv('Iris.csv')
print("The Data is Imported")
iris_df
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
# checking the information of Dataset
iris_df.info()
# Checking shape of Dataset
iris_df.shape
###Output
_____no_output_____
###Markdown
**We have 150 rows and 6 columns**
###Code
# Checking the presence of null values and Missing values
iris_df.isnull().sum()
###Output
_____no_output_____
###Markdown
**We do not have missing values in this dataset.**
###Code
# Checking the Data type of each Attribute
iris_df.dtypes
# Checking the Statistical details of Datasets
iris_df.describe()
# Checking the co-relation b/w the Attributes
iris_df.corr()
###Output
_____no_output_____
###Markdown
Let's find the optimum number of K-Means clusters and determine the value of K
###Code
# Finding the optimum number of clusters for k-means classification
x= iris_df.iloc[:, [0,1,2,3]].values
# Within_cluster_sum_of_square (WCSS)
wcss = []
for i in range(1,10):
kmeans = KMeans(n_clusters=i, init= 'k-means++', n_init=10, max_iter=300, random_state=0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
# Plotting the results onto a line graph, allowing us to observe 'The elbow'
plt.figure(figsize=(8,5))
plt.plot(range(1,10), wcss)
plt.title('The elbow Method')
plt.xlabel('Numbers of Cluster')
plt.ylabel('Within Cluster sum of Square')
plt.show()
###Output
_____no_output_____
###Markdown
You can clearly see from the above graph why it is called 'the elbow method': the optimum number of clusters is where the elbow occurs, i.e. the point beyond which the within-cluster sum of squares (WCSS) no longer decreases significantly with every added cluster. **From the above graph we can see that the elbow occurs at K=3; therefore we choose 3 as the optimum number of clusters.**
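As a quick cross-check of the visual reading, a minimal sketch like the one below picks the elbow automatically as the value of K with the largest change in slope of the WCSS curve (it only assumes the `wcss` list computed above).
###Code
# pick K where the WCSS curve bends the most (largest second difference)
wcss_arr = np.array(wcss)
second_diff = np.diff(wcss_arr, n=2)           # curvature of the curve; wcss[0] corresponds to K=1
suggested_k = int(np.argmax(second_diff)) + 2  # +2 shifts the index back to the corresponding K
print('Suggested number of clusters:', suggested_k)
###Output
_____no_output_____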
###Code
# Applying kmeans to the dataset / Creating the kmeans classifier
kmeans = KMeans(n_clusters=3, init= 'k-means++', n_init=10, max_iter=300, random_state=0)
y_kmeans=kmeans.fit_predict(x)
###Output
_____no_output_____
###Markdown
Visualization of Clusters
###Code
# Visualising the clusters - On the first two columns
plt.figure(figsize = (10,5))
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'yellow', label = 'Centroids')
plt.legend()
plt.xlabel('Sepal Length in cm')
plt.ylabel('Petal Length in cm')
plt.title('K-Means Clustering')
###Output
_____no_output_____
###Markdown
Task 2: Prediction using Unsupervised ML - K- Means Clustering Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('/content/Iris.csv')
dataset.head()
dataset['Species'].describe()
###Output
_____no_output_____
###Markdown
Determining K - number of clusters
###Code
x = dataset.iloc[:, [0, 1, 2, 3]].values
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 15):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
###Output
_____no_output_____
###Markdown
Plotting the results - observe 'The elbow'
###Code
plt.figure(figsize=(16,8))
plt.style.use('ggplot')
plt.plot(range(1, 15), wcss)
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS') # Within cluster sum of squares
plt.show()
###Output
_____no_output_____
###Markdown
Creating the kmeans classifier with K = 3
###Code
kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)
###Output
_____no_output_____
###Markdown
Visualising the clusters - On the first two columns
###Code
plt.figure(figsize=(14,10))
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1],
s = 100, c = 'tab:orange', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1],
s = 100, c = 'tab:blue', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],
s = 100, c = 'tab:green', label = 'Iris-virginica')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'black', label = 'Centroids')
plt.title('Clusters K = 3')
plt.legend(loc = 'upper left', ncol = 2)
plt.show()
###Output
_____no_output_____ |
functions.ipynb | ###Markdown
Functions * [generate_geographical_subset](generate_geographical_subset) Load required libraries
###Code
import xarray as xr
###Output
_____no_output_____
###Markdown
`generate_geographical_subset`
###Code
def generate_geographical_subset(xarray, latmin, latmax, lonmin, lonmax):
"""
Generates a geographical subset of a xarray DataArray
Parameters:
xarray (xarray DataArray): a xarray DataArray with latitude and longitude coordinates
latmin, latmax, lonmin, lonmax (int): boundaries of the geographical subset
Returns:
Geographical subset of a xarray DataArray.
"""
return xarray.where((xarray.latitude < latmax) & (xarray.latitude > latmin) & (xarray.longitude < lonmax) & (xarray.longitude > lonmin),drop=True)
###Output
_____no_output_____
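###Markdown
A minimal usage sketch, assuming a DataArray `no2_da` that carries 'latitude' and 'longitude' coordinates (the array and the bounding box below are illustrative assumptions):
###Code
# hypothetical example: subset a DataArray to a European bounding box
no2_europe = generate_geographical_subset(no2_da, latmin=35., latmax=60., lonmin=-10., lonmax=30.)
###Output
_____no_output_____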
###Markdown
Functions This notebook lists all `functions` that are defined and used during the `Fire Monitoring course`.The following functions are listed:**[Data loading and re-shaping functions](load_reshape)*** [generate_masked_array](generate_masked_array)* [slstr_frp_gridding](slstr_frp_gridding)**[Data visualization functions](visualization)*** [visualize_pcolormesh](visualize_pcolormesh)* [visualize_s3_frp](vis_s3_frp) Load required libraries
###Code
import os
from matplotlib import pyplot as plt
import xarray as xr
from netCDF4 import Dataset
import numpy as np
import glob
from matplotlib import pyplot as plt
import matplotlib.colors
import matplotlib.cm as cm
from matplotlib.colors import LogNorm
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cartopy.feature as cfeature
import warnings
warnings.simplefilter(action = "ignore", category = RuntimeWarning)
warnings.simplefilter(action = "ignore", category = FutureWarning)
###Output
_____no_output_____
###Markdown
Data loading and re-shaping functions `generate_masked_array`
###Code
def generate_masked_array(xarray, mask, threshold, operator, drop=True):
"""
Applies a mask (e.g. cloud fraction values or masking out certain data values) onto a given data array, based on a given threshold.
Parameters:
xarray (xarray DataArray): a three-dimensional xarray DataArray object
mask (xarray DataArray): 1-dimensional xarray DataArray, e.g. cloud fraction values
threshold (float): any number between 0 and 1, specifying the degree of cloudiness which is acceptable
operator (str): operator how to mask the array, e.g. '<', '>' or '='
drop(boolean): whether to drop the values that are masked out. Default is True.
Returns:
Masked xarray DataArray with flagged negative values
"""
if(operator=='<'):
cloud_mask = xr.where(mask < threshold, 1, 0) #Generate cloud mask with value 1 for the pixels we want to keep
elif(operator=='!='):
cloud_mask = xr.where(mask != threshold, 1, 0)
elif(operator=='>'):
cloud_mask = xr.where(mask > threshold, 1, 0)
else:
cloud_mask = xr.where(mask == threshold, 1, 0)
xarray_masked = xr.where(cloud_mask ==1, xarray, np.nan) #Apply mask onto the DataArray
print(xarray_masked)
xarray_masked.attrs = xarray.attrs #Set DataArray attributes
if(drop):
return xarray_masked[~np.isnan(xarray_masked)] #Return masked DataArray and flag negative values
else:
return xarray_masked
###Output
_____no_output_____
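###Markdown
A short usage sketch, assuming two DataArrays `no2_da` and `cloud_fraction_da` loaded from a Level 2 product (both names are illustrative assumptions):
###Code
# hypothetical example: keep only pixels with cloud fraction below 0.2, preserving the array shape
no2_masked = generate_masked_array(no2_da, cloud_fraction_da, 0.2, '<', drop=False)
###Output
_____no_output_____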
###Markdown
`slstr_frp_gridding`
###Code
def slstr_frp_gridding(parameter_array, lat_frp, lon_frp, parameter, lat_min, lat_max, lon_min, lon_max, sampling_lat, sampling_lon, n_fire, **kwargs):
"""
Grids the NRT Sentinel-3 SLSTR Fire Radiative Power data onto a regular latitude and longitude data
Parameters:
parameter_array (xarray DataArray): xarray DataArray object
lat_frp (xarray DataArray): latitude information retrieved from the data file
lon_frp (xarray DataArray): longitude information retrieved from the data file
parameter (str): Parameter of the SLSTR FRP data
lat_min (float): latitude minimum
lat_max (float): latitude maximum
lon_min (float): longitude minimum
lon_max (float): longitude maximum
sampling_lat(float): resolution for latitude
sampling_lon(float): resolution for longitude
n_fire (int): Number of fires
**kwargs():
Returns:
the numpy arrays, holding the gridded slstr frp information and latitude and longitude data
"""
n_lat = int( (np.float32(lat_max) - np.float32(lat_min)) / sampling_lat ) + 1 # Number of rows per latitude sampling
n_lon = int( (np.float32(lon_max) - np.float32(lon_min)) / sampling_lon ) + 1 # Number of lines per longitude sampling
slstr_frp_gridded = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999.
lat_grid = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999.
lon_grid = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999.
if (n_fire >= 0):
# Loop on i_lat: begins
for i_lat in range(n_lat):
# Loop on i_lon: begins
for i_lon in range(n_lon):
lat_grid[i_lat, i_lon] = lat_min + np.float32(i_lat) * sampling_lat + sampling_lat / 2.
lon_grid[i_lat, i_lon] = lon_min + np.float32(i_lon) * sampling_lon + sampling_lon / 2.
# Gridded SLSTR FRP MWIR Night - All days
if(parameter=='swir_nosaa'):
FLAG_FRP_SWIR_SAA_nc = kwargs.get('flag', None)
mask_grid = np.where(
(lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat) &
(lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat) &
(lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon) &
(lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon) &
(parameter_array[:] != -1.) & (FLAG_FRP_SWIR_SAA_nc[:] == 0), False, True)
else:
mask_grid = np.where(
(lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat) &
(lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat) &
(lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon) &
(lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon) &
(parameter_array[:] != -1.), False, True)
masked_slstr_frp_grid = np.ma.array(parameter_array[:], mask=mask_grid)
if len(masked_slstr_frp_grid.compressed()) != 0:
slstr_frp_gridded[i_lat, i_lon] = np.sum(masked_slstr_frp_grid.compressed())
return slstr_frp_gridded, lat_grid, lon_grid
###Output
_____no_output_____
###Markdown
Data visualization functions `visualize_pcolormesh`
###Code
def visualize_pcolormesh(aod_ocean, aod_land, latitude, longitude, title, unit, vmin, vmax, color_scale, projection):
"""
Visualizes Sentinel-3 SLSTR Aerosol Optical Depth data with the help of matplotlib's pcolormesh function.
Parameters:
aod_ocean (xarray DataArray): DataArray with AOD data over ocean
aod_land (xarray DataArray): DataArray with AOD data over land
latitude (xarray DataArray): latitude information retrieved from the data file
longitude (xarray DataArray): longitude information retrieved from the data file
title (str): Title of the plot
unit (str): Unit of the data
vmin (float): Minimum value for visualization
vmax (float): Maximum value for visualization
color_scale (str): Color scale
projection (ccrs.projection): Projection of final plot
"""
fig=plt.figure(figsize=(12, 12))
ax=plt.axes(projection=projection)
ax.coastlines(linewidth=1.5, linestyle='solid', color='k', zorder=10)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.xlabels_top=False
gl.ylabels_right=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':12}
gl.ylabel_style={'size':12}
img1 = plt.pcolormesh(longitude, latitude, aod_ocean, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale)
    img2 = plt.pcolormesh(longitude, latitude, aod_land, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale)
ax.set_title(title, fontsize=20, pad=20.0)
cbar = fig.colorbar(img1, ax=ax, orientation='vertical', fraction=0.04, pad=0.05)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
`visualize_s3_frp`
###Code
def visualize_s3_frp(data, lat, lon, unit, longname, textstr_1, textstr_2, vmax):
"""
Visualizes NRT Sentinel-3 SLSTR Fire Radiative Power data with the help of matplotlib's pcolormesh function.
Parameters:
data (numpy masked data array): dara array
lat (numpy array): latitude information returned from the function `slstr_frp_gridding`
lon (numpy array): longitude information returned from the function `slstr_frp_gridding`
unit (str): Unit of the data
longname (str): Long name attribute from the data or title of the plot
textstr_1 (str): Text string that explains the number of hotspots
textstr_2 (str): Text string that explains summary statistics of the data visualized.
vmax (float): Maximum value for visualization.
"""
fig=plt.figure(figsize=(20, 15))
ax = plt.axes(projection=ccrs.PlateCarree())
img = plt.pcolormesh(lon, lat, data,
cmap=cm.autumn_r, transform=ccrs.PlateCarree(),
vmin=0,
vmax=vmax)
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.bottom_labels=False
gl.right_labels=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.029, pad=0.025)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
ax.set_title(longname, fontsize=20, pad=40.0)
props = dict(boxstyle='square', facecolor='white', alpha=0.5)
# place a text box on the right side of the plot
ax.text(1.1, 0.9, textstr_1, transform=ax.transAxes, fontsize=16,
verticalalignment='top', bbox=props)
props = dict(boxstyle='square', facecolor='white', alpha=0.5)
# place a text box in upper left in axes coords
ax.text(1.1, 0.85, textstr_2, transform=ax.transAxes, fontsize=16,
verticalalignment='top', bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
load data in specific shape (64*64)
###Code
import nibabel as nib
import os
import numpy as np
import random
from nibabel.affines import apply_affine
import time
import voxelmorph as vxm
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
def load_m (file_path):
img = nib.load(file_path)
img_data = img.get_fdata()
if img.shape[0:2]!=(64,64):
img_data = img_data[23:87,23:87,:,:]
if not (file_path.endswith(".nii") or file_path.endswith(".nii.gz")):
raise ValueError(
f"Nifti file path must end with .nii or .nii.gz, got {file_path}."
)
return img_data
###Output
_____no_output_____
###Markdown
load data in specific shape (64*64) with header data
###Code
def load_with_head (file_path: str):
img = nib.load(file_path)
img_data = img.get_fdata()
if img.shape[0:2]!=(64,64):
img_data = img_data[23:87,23:87,:,:]
header=img.header
## edit the header for shape
header['dim'][1:5]=img_data.shape
if not (file_path.endswith(".nii") or file_path.endswith(".nii.gz")):
raise ValueError(
f"Nifti file path must end with .nii or .nii.gz, got {file_path}."
)
return img_data ,img
###Output
_____no_output_____
###Markdown
list of file names and the number of data files in a directory
###Code
def count (data_dir):
train_dir = os.path.join(data_dir)
train_data_num = []
for file in os.listdir(train_dir):
train_data_num.append([file])
train_data_num=np.array(train_data_num)
n=train_data_num.shape[0]
return n,train_data_num
###Output
_____no_output_____
###Markdown
calculate the maximum intensity across all data files in a directory, for normalization
###Code
def maxx (data_dir):
n,train_data_num=count(data_dir)
start=0
for i in range(n):
d=load_m(data_dir+'/'+str(train_data_num[i][0]))
maxx=d.max()
if maxx>=start:
start=maxx
return start
###Output
_____no_output_____
###Markdown
prepare the inputs (moving, fixed) and ground truth (reference, deformation map) for training the network.
###Code
def data_generator(data_dir, batch_size,m,split):
"""4
Generator that takes in data of size [N, H, W], and yields data for
our custom vxm model. Note that we need to provide numpy data for each
input, and each output.
inputs: moving [bs, H, W, 1], fixed image [bs, H, W, 1]
outputs: moved image [bs, H, W, 1], zero-gradient [bs, H, W, 2]
m= maximum between all subject
split= percent of validation data
"""
n,train_data_num=count(data_dir)
n_train=n-int(split*n)
subject_ID=random.randint(0,n_train-1)
d=load_m(data_dir+'/'+str(train_data_num[subject_ID][0]))
s=d.shape[2]
slice_ID =random.randint(0,s-1)
v=d.shape[3]
# preliminary sizing
vol_shape = d.shape[:2] # extract data shape
ndims = len(vol_shape)
d=d[:,:,slice_ID,:]
d = np.einsum('jki->ijk', d)
# prepare a zero array the size of the deformation
# we'll explain this below
zero_phi = np.zeros([batch_size, *vol_shape, ndims])
while True:
# prepare inputs:
# images need to be of the size [batch_size, H, W, 1]
idx1 = np.random.randint(0, v, size=batch_size)
moving_images = d[idx1, ..., np.newaxis]
moving_images=moving_images/m
idx2 = np.random.randint(0, v, size=batch_size)
fixed_images = d[idx2, ..., np.newaxis]
fixed_images=fixed_images/m
inputs = [moving_images, fixed_images]
# prepare outputs (the 'true' moved image):
# of course, we don't have this, but we know we want to compare
# the resulting moved image with the fixed image.
# we also wish to penalize the deformation field.
outputs = [fixed_images, zero_phi]
yield (inputs, outputs)
###Output
_____no_output_____
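###Markdown
A minimal training sketch showing how this generator could feed the VoxelMorph model built later in this notebook (the data directory, number of epochs and steps below are illustrative assumptions):
###Code
# hypothetical wiring of the generator into a VoxelMorph training loop
data_dir = 'path/to/train_data'                    # assumed directory of 4D NIfTI files
m = maxx(data_dir)                                 # global maximum intensity for normalization
train_gen = data_generator(data_dir, batch_size=8, m=m, split=0.2)
vxm_model = vxm.networks.VxmDense((64, 64), [[64, 64, 64, 64], [64, 64, 64, 64, 64, 32, 16]], int_steps=0)
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
vxm_model.compile(optimizer='Adam', loss=losses, loss_weights=[1, 0.05])
hist = vxm_model.fit(train_gen, epochs=10, steps_per_epoch=100, verbose=1)
###Output
_____no_output_____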
###Markdown
prepare data for validation
###Code
def val_generator(data_dir, batch_size,m,split):
"""4
Generator that takes in data of size [N, H, W], and yields data for
our custom vxm model. Note that we need to provide numpy data for each
input, and each output.
inputs: moving [bs, H, W, 1], fixed image [bs, H, W, 1]
outputs: moved image [bs, H, W, 1], zero-gradient [bs, H, W, 2]
m= maximum between all subject
split= percent of validation data
"""
n,train_data_num=count(data_dir)
n_train=n-int(split*n)
a=n_train
subject_ID=random.randint(a,n-1)
d=load_m(data_dir+'/'+str(train_data_num[subject_ID][0]))
s=d.shape[2]
slice_ID =random.randint(0,s-1)
v=d.shape[3]
# preliminary sizing
vol_shape = d.shape[:2] # extract data shape
ndims = len(vol_shape)
d=d[:,:,slice_ID,:]
d = np.einsum('jki->ijk', d)
# prepare a zero array the size of the deformation
# we'll explain this below
zero_phi = np.zeros([batch_size, *vol_shape, ndims])
# prepare inputs:
# images need to be of the size [batch_size, H, W, 1]
idx1 = np.random.randint(0, v, size=batch_size)
moving_images = d[idx1, ..., np.newaxis]
moving_images=moving_images/m
idx2 = np.random.randint(0, v, size=batch_size)
fixed_images = d[idx2, ..., np.newaxis]
fixed_images=fixed_images/m
inputs = [moving_images, fixed_images]
# prepare outputs (the 'true' moved image):
# of course, we don't have this, but we know we want to compare
# the resulting moved image with the fixed image.
# we also wish to penalize the deformation field.
outputs = [fixed_images,zero_phi]
return (inputs, outputs)
###Output
_____no_output_____
###Markdown
convert the angle format to radians, nearest-neighbour interpolation, and build a random affine matrix for augmentation
###Code
def a(teta):
return (teta*np.pi)/180
def nearest_neighbors(i, j, M, T_inv):
x_max, y_max = M.shape[0] - 1, M.shape[1] - 1
x, y, k = apply_affine(T_inv, np.array([i, j, 1]))
if x<0 or y<0:
x=0
y=0
if x>=x_max+1 or y>=y_max+1:
x=0
y=0
if np.floor(x) == x and np.floor(y) == y:
x, y = int(x), int(y)
return M[x, y]
if np.abs(np.floor(x) - x) < np.abs(np.ceil(x) - x):
x = int(np.floor(x))
else:
x = int(np.ceil(x))
if np.abs(np.floor(y) - y) < np.abs(np.ceil(y) - y):
y = int(np.floor(y))
else:
y = int(np.ceil(y))
if x > x_max:
x = x_max
if y > y_max:
y = y_max
return M[x, y]
def affine_matrix():
t=random.randint(-5, 5)
cos_gamma = np.cos(a(t))
sin_gamma = np.sin(a(t))
x=random.randint(-3, 3)
y=random.randint(-6, 6)
T=np.array([[cos_gamma,-sin_gamma,0,x],
[sin_gamma,cos_gamma,0,y],
[0,0,1,0],
[0,0,0,1]])
return T
###Output
_____no_output_____
###Markdown
Augmentation
###Code
def augsb(ref,volume,affine_matrix):
tdim,xdim,ydim,tdim = ref.shape
img_transformed = np.zeros((xdim, ydim), dtype=np.float64)
for i, row in enumerate(ref[volume,:,:,0]):
for j, col in enumerate(row):
pixel_data = ref[volume,i, j, 0]
input_coords = np.array([i, j, 1])
i_out, j_out,k= apply_affine(affine_matrix, input_coords)
if i_out<0 or j_out<0:
i_out=0
j_out=0
if i_out>=xdim or j_out>=ydim:
i_out=0
j_out=0
img_transformed[int(i_out),int(j_out)] = pixel_data
T_inv = np.linalg.inv(affine_matrix)
img_nn = np.ones((xdim, ydim), dtype=np.float64)
for i, row in enumerate(img_transformed):
for j, col in enumerate(row):
img_nn[i, j] = nearest_neighbors(i, j, ref[volume,:,:,0], T_inv)
return img_nn
###Output
_____no_output_____
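###Markdown
A small sketch of how the helpers above can be combined for a single augmented image (the batch `fixed_images` of shape [batch, 64, 64, 1] is an assumption, as produced by the generators in this notebook):
###Code
# hypothetical single-image augmentation: random affine + nearest-neighbour resampling
T = affine_matrix()                    # small random rotation and translation
augmented = augsb(fixed_images, 0, T)  # warp volume 0 of the batch
###Output
_____no_output_____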
###Markdown
prepare data for augmentation
###Code
def affine_generator(data_dir,batch_size,m,split):
n,train_data_num=count(data_dir)
n_train=n-int(split*n)
subject_ID=random.randint(0,n_train-1)
d=load_m(data_dir+'/'+str(train_data_num[subject_ID][0]))
s=d.shape[2]
slice_ID =random.randint(0,s-1)
v=d.shape[3]
# preliminary sizing
vol_shape = d.shape[:2] # extract data shape
ndims = len(vol_shape)
d=d[:,:,slice_ID,:]
d = np.einsum('jki->ijk', d)
y=[]
for i in range(batch_size):
y.append(affine_matrix())
y=np.array(y)
# prepare a zero array the size of the deformation
# we'll explain this below
zero_phi = np.zeros([batch_size, *vol_shape, ndims])
# prepare inputs:
# images need to be of the size [batch_size, H, W, 1]
while True:
idx2 = np.random.randint(0, v, size=batch_size)
fixed_images = d[idx2, ..., np.newaxis]
fixed_images=fixed_images/m
moving_images=[]
for i in range(batch_size):
moving_images.append(augsb(fixed_images,i,y[i]))
moving_images=np.array(moving_images)
moving_images=moving_images[... , np.newaxis]
#moving_images=augsb(fixed_images,y)
#idx1 = np.random.randint(0, v, size=batch_size)
#moving_images = d[idx1, ..., np.newaxis]
#moving_images=moving_images/m
inputs = [moving_images, fixed_images]
# prepare outputs (the 'true' moved image):
# of course, we don't have this, but we know we want to compare
# the resulting moved image with the fixed image.
# we also wish to penalize the deformation field.
outputs = [fixed_images,zero_phi]
yield(inputs,outputs)
#y)
def label_generator(data_dir,batch_size,m,split):
n,train_data_num=count(data_dir)
n_train=n-int(split*n)
a=n_train
subject_ID=random.randint(a,n-1)
d=load_m(data_dir+'/'+str(train_data_num[subject_ID][0]))
s=d.shape[2]
slice_ID =random.randint(0,s-1)
v=d.shape[3]
# preliminary sizing
vol_shape = d.shape[:2] # extract data shape
ndims = len(vol_shape)
d=d[:,:,slice_ID,:]
d = np.einsum('jki->ijk', d)
y=[]
for i in range(batch_size):
y.append(affine_matrix())
y=np.array(y)
# prepare a zero array the size of the deformation
# we'll explain this below
# prepare inputs:
# images need to be of the size [batch_size, H, W, 1]
while True:
idx2 = np.random.randint(0, v, size=batch_size)
fixed_images = d[idx2, ...]
fixed_images=fixed_images/m
moving_images=[]
for i in range(batch_size):
moving_images.append(augsb(fixed_images,i,y[i]))
moving_images=np.array(moving_images)
#moving_images=moving_images[... ]
c=np.stack([moving_images,fixed_images], axis=2)
inputs = [c]
#inputs=[[moving_images,fixed_images]]
# prepare outputs (the 'true' moved image):
# of course, we don't have this, but we know we want to compare
# the resulting moved image with the fixed image.
# we also wish to penalize the deformation field.
outputs = [y]
yield (inputs, outputs)
def ref(data_dir,m,slice_ID,reference):
"""4
Generator that takes in data of size [N, H, W], and yields data for
our custom vxm model. Note that we need to provide numpy data for each
input, and each output.
inputs: moving [bs, H, W, 1], fixed image [bs, H, W, 1]
outputs: moved image [bs, H, W, 1], zero-gradient [bs, H, W, 2]
m= maximum between all subject
split= percent of validation data
"""
d=load_m(data_dir)
#s=d.shape[2]
#slice_ID =random.randint(0,s-1)
v=d.shape[3]
# preliminary sizing
vol_shape = d.shape[:2] # extract data shape
ndims = len(vol_shape)
d=d[:,:,slice_ID,:]
d = np.einsum('jki->ijk', d)
# prepare a zero array the size of the deformation
# we'll explain this below
zero_phi = np.zeros([v, *vol_shape, ndims])
# prepare inputs:
# images need to be of the size [batch_size, H, W, 1]
idx1=[]
for i in range(v):
idx1.append(i)
idx1=np.array(idx1)
moving_images = d[idx1, ..., np.newaxis]
moving_images=moving_images/m
if reference.strip().isdigit():
# print("User input is Number")
reference=int(reference)
idx2=np.ones(v)*reference
idx2=idx2.astype(int)
fixed_images = d[idx2, ..., np.newaxis]
fixed_images=fixed_images/m
else:
# print("User input is string")
img = nib.load(reference)
img_data = img.get_fdata()
if img.shape[0:2]!=(64,64):
img_data = img_data[23:87,23:87,:]
img_data=img_data[np.newaxis,:,:,slice_ID]
idx2=np.zeros(v)
idx2=idx2.astype(int)
fixed_images = img_data[idx2, ..., np.newaxis]
fixed_images=fixed_images/m
inputs = [moving_images, fixed_images]
# prepare outputs (the 'true' moved image):
# of course, we don't have this, but we know we want to compare
# the resulting moved image with the fixed image.
# we also wish to penalize the deformation field.
outputs = [fixed_images,zero_phi]
return (inputs, outputs)
def main (input_direction,reference,output_direction,maximum_intensity,loadable_model):
start_time = time.time()
img_data,img=load_with_head(input_direction)
slice_number = img_data.shape[2]
header=img.header
img_mask_affine = img.affine
# configure unet input shape (concatenation of moving and fixed images)
ndim = 2
unet_input_features = 2
# data shape 64*64
s=(64,64)
inshape = (*s, unet_input_features)
# configure unet features
nb_features =[
[64, 64, 64, 64], # encoder features
[64, 64, 64, 64, 64, 32,16] # decoder features
]
# build model using VxmDense
inshape =s
vxm_model = vxm.networks.VxmDense(inshape, nb_features, int_steps=0)
# voxelmorph has a variety of custom loss classes
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
# usually, we have to balance the two losses by a hyper-parameter
lambda_param = 0.05
loss_weights = [1, lambda_param]
vxm_model.compile(optimizer='Adam', loss=losses, loss_weights=loss_weights, metrics=['accuracy'])
vxm_model.load_weights(loadable_model)
o=np.zeros((img_data.shape[0],img_data.shape[1],img_data.shape[2],img_data.shape[3]))
for i in range(slice_number):
prepare_data=ref(input_direction,maximum_intensity,i,reference)
val_input, _ = prepare_data
val_pred = vxm_model.predict(val_input)
change_order= np.einsum('jki->kij',val_pred[0][:,:,:,0])
o[:, :, i,:] = change_order
img_reg = nib.Nifti1Image(o*maximum_intensity, affine=img_mask_affine, header=header)
nib.save(img_reg,output_direction)
print("--- %s second ---" % (time.time() - start_time))
def snr (direction):
img = nib.load(direction)
img = img.get_fdata()
mean=[]
for i in range(img.shape[2]):
mean.append(np.mean(img[:,:,i]))
mean=np.array(mean)
deviation=[]
for i in range(img.shape[2]):
deviation.append(np.std(img[:,:,i]))
deviation=np.array(deviation)
return (mean/deviation),mean,deviation
def mean(direction):
img = nib.load(direction)
img = img.get_fdata()
mean=[]
    where_are_NaNs = np.isnan(img)
img[where_are_NaNs] = 0
for i in range(img.shape[2]):
mean.append(np.mean(img[:,:,i]))
mean.append(np.mean(mean))
mean=np.array(mean)
return mean
def seg_mean(img):
p=0
for m in range(img.shape[0]):
for n in range(img.shape[1]):
if img[m,n]==0:
p=p+1
s=np.sum(img[:,:])
mean=s/((64*64)-p)
return mean
def mean_all(direction):
img = nib.load(direction)
img = img.get_fdata()
mean=[]
where_are_NaNs = np.isnan(img)
img[where_are_NaNs] = 0
for i in range(img.shape[2]):
mean.append(seg_mean(img[:,:,i]))
mean=np.mean(mean)
return mean
def shift_image(X, dx, dy):
X = np.roll(X, dy, axis=0)
X = np.roll(X, dx, axis=1)
if dy>0:
X[:dy, :] = 0
elif dy<0:
X[dy:, :] = 0
if dx>0:
X[:, :dx] = 0
elif dx<0:
X[:, dx:] = 0
return X
def cplus(source_centerline_directory,centerlines_directory,main_data_directory,
center_fix_directory,final_cplus_directory,
maximum_intensity,model,reference,mean_directory
):
#############################################
# if reference=0 means reference=first volume
# if reference=-1 means reference=mid volume
# if reference=-2 means reference=mean volume
# if reference>0 means reference=any volume
Xs=[]
Ys=[]
source = pd.read_csv(source_centerline_directory, header=None)
source.columns=['x','y','delete']
source = source[['x','y']]
for s in range(source.shape[0]):
c=source.loc[s]
#xs=int(c['x'])
ys=int(c['y'])
#Xs.append(xs)
Ys.append(ys)
n2,name2=count_endwith(centerlines_directory,'.csv')
dx=[]
dy=[]
for s in range(0,source.shape[0]):
for j in range(n2):
df = pd.read_csv(centerlines_directory+name2[j][0], header=None)
df.columns=['x','y','delete']
df=df[['x','y']]
c=df.loc[s]
#x=int(c['x'])
y=int(c['y'])
#dx.append(Xs[s]-x)
dy.append(Ys[s]-y)
input_direction=main_data_directory
img = nib.load(input_direction)
img_data=img.get_fdata()
img_mask_affine = img.affine
header = img.header
nb_img = header.get_data_shape()
o=np.zeros((nb_img[0],nb_img[1],nb_img[2],nb_img[3]))
DX=np.zeros(len(dy))
start=0
for s in range(0,source.shape[0]):
for v in range(n2):
a= shift_image(img_data[:,:,s,v],dy[v+start],DX[v+start])
o[:,:,s, v] = a
start=start + n2
input_direction=center_fix_directory
img_reg = nib.Nifti1Image(o, affine=img_mask_affine, header=header)
nib.save(img_reg,input_direction)
if reference>0:
reference=str(reference)
if reference==0:
reference='0'
if reference==-1:
y=int(n2/2)
reference=str(y)
if reference==-2:
reference=mean_directory
main(input_direction,reference,final_cplus_directory,maximum_intensity,model)
def count_startwith (data_dir,prefix):
train_dir = os.path.join(data_dir)
train_data_num = []
for file in os.listdir(train_dir):
if file.startswith(prefix):
train_data_num.append([file])
train_data_num=np.array(train_data_num)
n=train_data_num.shape[0]
return n,sorted(train_data_num)
def count_endwith (data_dir,prefix):
train_dir = os.path.join(data_dir)
train_data_num = []
for file in os.listdir(train_dir):
if file.endswith(prefix):
train_data_num.append([file])
train_data_num=np.array(train_data_num)
n=train_data_num.shape[0]
return n,sorted(train_data_num)
###Output
_____no_output_____
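###Markdown
A hypothetical call of the registration entry point defined above (all file paths, the intensity maximum and the weight file below are illustrative assumptions):
###Code
# register every slice of a 4D fMRI series to volume 0 using pre-trained VoxelMorph weights
m = maxx('path/to/train_data')  # previously computed global intensity maximum
main('sub01_bold.nii.gz', '0', 'sub01_bold_registered.nii.gz', m, 'vxm_weights.h5')
###Output
_____no_output_____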
###Markdown
movement plots for one slice
###Code
def flow_one_slice(input_direction,reference,maximum_intensity,loadable_model,slice_num,mean_directory,title):
img_data,img=load_with_head(input_direction)
slice_number = img_data.shape[2]
header=img.header
img_mask_affine = img.affine
# configure unet input shape (concatenation of moving and fixed images)
ndim = 2
unet_input_features = 2
# data shape 64*64
s=(64,64)
inshape = (*s, unet_input_features)
# configure unet features
nb_features =[
[64, 64, 64, 64], # encoder features
[64, 64, 64, 64, 64, 32,16] # decoder features
]
# build model using VxmDense
inshape =s
vxm_model = vxm.networks.VxmDense(inshape, nb_features, int_steps=0)
# voxelmorph has a variety of custom loss classes
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
# usually, we have to balance the two losses by a hyper-parameter
lambda_param = 0.05
loss_weights = [1, lambda_param]
vxm_model.compile(optimizer='Adam', loss=losses, loss_weights=loss_weights, metrics=['accuracy'])
vxm_model.load_weights(loadable_model)
if reference>0:
reference=str(reference)
if reference==0:
reference='0'
if reference==-1:
y=int(img_data.shape[3]/2)
reference=str(y)
if reference==-2:
reference=mean_directory
#for i in range(slice_number):
#slice_number=5
prepare_data=ref(input_direction,maximum_intensity,slice_num,reference)
val_input, _ = prepare_data
val_pred = vxm_model.predict(val_input)
#val_pred=flow(input_direction,reference,maximum_intensity,loadable_model,slice_num)
x=[]
y=[]
for i in range(val_pred[1][:,0,0,0].shape[0]):
x.append(np.mean(val_pred[1][i,...,0]))
y.append(np.mean(val_pred[1][i,...,1]))
x=np.array(x)
y=np.array(y)
volume=range(val_pred[1][:,0,0,0].shape[0])
plt.figure(figsize=(20,5))
plt.plot(volume,x,label = "x")
plt.plot(volume,y,label = "y")
# naming the x axis
plt.xlabel('volumes')
# naming the y axis
plt.ylabel('movement')
# giving a title to my graph
plt.title(title)
# show a legend on the plot
plt.legend()
###Output
_____no_output_____
###Markdown
movement plot for all slices in one plot
###Code
def flow_all_slice(input_direction,reference,maximum_intensity,loadable_model,mean_directory,title):
img_data,img=load_with_head(input_direction)
slice_number = img_data.shape[2]
header=img.header
img_mask_affine = img.affine
# configure unet input shape (concatenation of moving and fixed images)
ndim = 2
unet_input_features = 2
# data shape 64*64
s=(64,64)
inshape = (*s, unet_input_features)
# configure unet features
nb_features =[
[64, 64, 64, 64], # encoder features
[64, 64, 64, 64, 64, 32,16] # decoder features
]
# build model using VxmDense
inshape =s
vxm_model = vxm.networks.VxmDense(inshape, nb_features, int_steps=0)
# voxelmorph has a variety of custom loss classes
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
# usually, we have to balance the two losses by a hyper-parameter
lambda_param = 0.05
loss_weights = [1, lambda_param]
vxm_model.compile(optimizer='Adam', loss=losses, loss_weights=loss_weights, metrics=['accuracy'])
vxm_model.load_weights(loadable_model)
if reference>0:
reference=str(reference)
if reference==0:
reference='0'
if reference==-1:
y=int(img_data.shape[3]/2)
reference=str(y)
if reference==-2:
reference=mean_directory
x_all_slice=[]
y_all_slice=[]
for i in range(slice_number):
prepare_data=ref(input_direction,maximum_intensity,i,reference)
val_input, _ = prepare_data
val_pred = vxm_model.predict(val_input)
#val_pred=flow(input_direction,reference,maximum_intensity,loadable_model,slice_num)
x=[]
y=[]
for i in range(val_pred[1][:,0,0,0].shape[0]):
x.append(np.mean(val_pred[1][i,...,0]))
y.append(np.mean(val_pred[1][i,...,1]))
x_all_slice.append(x)
y_all_slice.append(y)
x_all_slice=np.array(x_all_slice)
y_all_slice=np.array(y_all_slice)
mean_x=x_all_slice.mean(axis=0)
mean_y=y_all_slice.mean(axis=0)
    ### zero out the spurious displacement of the reference volume registered to itself
mean_x[int(reference)]=0
mean_y[int(reference)]=0
overal=(mean_x+mean_y)/2
volume=range(val_pred[1][:,0,0,0].shape[0])
plt.figure(figsize=(20,5))
plt.plot(volume,overal,label = "x")
#plt.plot(volume,mean_y,label = "y")
# naming the x axis
plt.xlabel('volumes')
# naming the y axis
plt.ylabel('movement')
# giving a title to my graph
plt.title(title)
# show a legend on the plot
plt.legend()
def flow_between_two(input_direction0,input_direction1,reference,
maximum_intensity,loadable_model,mean_directory,
title,label1,label2):
img_data,img=load_with_head(input_direction0)
slice_number = img_data.shape[2]
header=img.header
img_mask_affine = img.affine
# configure unet input shape (concatenation of moving and fixed images)
ndim = 2
unet_input_features = 2
# data shape 64*64
s=(64,64)
inshape = (*s, unet_input_features)
# configure unet features
nb_features =[
[64, 64, 64, 64], # encoder features
[64, 64, 64, 64, 64, 32,16] # decoder features
]
# build model using VxmDense
inshape =s
vxm_model = vxm.networks.VxmDense(inshape, nb_features, int_steps=0)
# voxelmorph has a variety of custom loss classes
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
# usually, we have to balance the two losses by a hyper-parameter
lambda_param = 0.05
loss_weights = [1, lambda_param]
vxm_model.compile(optimizer='Adam', loss=losses, loss_weights=loss_weights, metrics=['accuracy'])
vxm_model.load_weights(loadable_model)
if reference>0:
reference=str(reference)
if reference==0:
reference='0'
if reference==-1:
y=int(img_data.shape[3]/2)
reference=str(y)
if reference==-2:
reference=mean_directory
x_all_slice=[]
y_all_slice=[]
for i in range(slice_number):
prepare_data=ref(input_direction0,maximum_intensity,i,reference)
val_input, _ = prepare_data
val_pred = vxm_model.predict(val_input)
#val_pred=flow(input_direction,reference,maximum_intensity,loadable_model,slice_num)
x=[]
y=[]
for i in range(val_pred[1][:,0,0,0].shape[0]):
x.append(np.mean(val_pred[1][i,...,0]))
y.append(np.mean(val_pred[1][i,...,1]))
x_all_slice.append(x)
y_all_slice.append(y)
x_all_slice=np.array(x_all_slice)
y_all_slice=np.array(y_all_slice)
mean_x=x_all_slice.mean(axis=0)
mean_y=y_all_slice.mean(axis=0)
    ### zero out the spurious displacement of the reference volume registered to itself
mean_x[int(reference)]=0
mean_y[int(reference)]=0
overal=(mean_x+mean_y)/2
x_all_slice=[]
y_all_slice=[]
for i in range(slice_number):
prepare_data=ref(input_direction1,maximum_intensity,i,reference)
val_input, _ = prepare_data
val_pred = vxm_model.predict(val_input)
#val_pred=flow(input_direction,reference,maximum_intensity,loadable_model,slice_num)
x=[]
y=[]
for i in range(val_pred[1][:,0,0,0].shape[0]):
x.append(np.mean(val_pred[1][i,...,0]))
y.append(np.mean(val_pred[1][i,...,1]))
x_all_slice.append(x)
y_all_slice.append(y)
x_all_slice=np.array(x_all_slice)
y_all_slice=np.array(y_all_slice)
mean_x=x_all_slice.mean(axis=0)
mean_y=y_all_slice.mean(axis=0)
    ### zero out the spurious displacement of the reference volume registered to itself
mean_x[int(reference)]=0
mean_y[int(reference)]=0
overal1=(mean_x+mean_y)/2
volume=range(val_pred[1][:,0,0,0].shape[0])
plt.figure(figsize=(25,10))
plt.plot(volume,overal,label = label1)
plt.plot(volume,overal1,label = label2)
# naming the x axis
plt.xlabel('volumes',fontsize=18)
# naming the y axis
plt.ylabel('movement',fontsize=18)
# giving a title to my graph
plt.title(title,fontsize=20)
# show a legend on the plot
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
# show a legend on the plot
plt.legend()
plt.grid()
plt.legend(fontsize=15)
###Output
_____no_output_____
###Markdown
LTPy functions This notebook lists all `functions` that are defined and used throughout the `LTPy course`.The following functions are listed:**[Data loading and re-shaping functions](load_reshape)*** [generate_xr_from_1D_vec](generate_xr_from_1D_vec)* [load_l2_data_xr](load_l2_data_xr)* [generate_geographical_subset](generate_geographical_subset)* [generate_masked_array](generate_masked_array)* [load_masked_l2_da](load_masked_l2_da)* [select_channels_for_rgb](rgb_channels)* [normalize](normalize)* [slstr_frp_gridding](slstr_frp_gridding)* [df_subset](df_subset)**[Data visualization functions](visualization)*** [visualize_scatter](visualize_scatter)* [visualize_pcolormesh](visualize_pcolormesh)* [visualize_s3_pcolormesh](visualize_s3_pcolormesh)* [visualize_s3_frp](visualize_s3_frp)* [viusalize_s3_aod](visualize_s3_aod) Load required libraries
###Code
import os
from matplotlib import pyplot as plt
import xarray as xr
from netCDF4 import Dataset
import numpy as np
import glob
from matplotlib import pyplot as plt
import matplotlib.colors
from matplotlib.colors import LogNorm
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cartopy.feature as cfeature
import matplotlib.cm as cm
import warnings
warnings.simplefilter(action = "ignore", category = RuntimeWarning)
warnings.simplefilter(action = "ignore", category = FutureWarning)
###Output
_____no_output_____
###Markdown
Data loading and re-shaping functions `generate_xr_from_1D_vec`
###Code
def generate_xr_from_1D_vec(file, lat_path, lon_path, variable, parameter_name, longname, no_of_dims, unit):
"""
Takes a netCDF4.Dataset or xarray.DataArray object and returns a xarray.DataArray object with latitude / longitude
information as coordinate information
Parameters:
file(netCDF4 data file or xarray.Dataset): AC SAF or IASI Level 2 data file, loaded a netCDF4.Dataset or xarray.DataArray
lat_path(str): internal path of the data file to the latitude information, e.g. 'GEOLOCATION/LatitudeCentre'
lon_path(str): internal path of the data file to the longitude information, e.g. 'GEOLOCATION/LongitudeCentre'
variable(array): extracted variable of interested
parameter_name(str): parameter name, preferably extracted from the data file
longname(str): Long name of the parameter, preferably extracted from the data file
no_of_dims(int): Define the number of dimensions of your input array
unit(str): Unit of the parameter, preferably extracted from the data file
Returns:
1 or 2-dimensional (depending on the given number of dimensions) xarray.DataArray with latitude / longitude information
as coordinate information
"""
latitude = file[lat_path]
longitude = file[lon_path]
param = variable
if (no_of_dims==1):
param_da = xr.DataArray(
param[:],
dims=('ground_pixel'),
coords={
'latitude': ('ground_pixel', latitude[:]),
'longitude': ('ground_pixel', longitude[:])
},
attrs={'long_name': longname, 'units': unit},
name=parameter_name
)
else:
param_da = xr.DataArray(
param[:],
dims=["x","y"],
coords={
'latitude':(['x','y'],latitude[:]),
'longitude':(['x','y'],longitude[:])
},
attrs={'long_name': longname, 'units': unit},
name=parameter_name
)
return param_da
###Output
_____no_output_____
###Markdown
`load_l2_data_xr`
###Code
def load_l2_data_xr(directory, internal_filepath, parameter, lat_path, lon_path, no_of_dims,
paramname, unit, longname):
"""
Loads a Metop-A/B Level 2 dataset in HDF format and returns a xarray.DataArray with all the ground pixels of all directory
files. Uses function 'generate_xr_from_1D_vec' to generate the xarray.DataArray.
Parameters:
directory(str): directory where the HDF files are stored
internal_filepath(str): internal path of the data file that is of interest, e.g. TOTAL_COLUMNS
parameter(str): paramter that is of interest, e.g. NO2
lat_path(str): name of latitude variable
lon_path(str): name of longitude variable
no_of_dims(int): number of dimensions of input array
paramname(str): name of parameter
unit(str): unit of the parameter, preferably taken from the data file
longname(str): longname of the parameter, preferably taken from the data file
Returns:
1 or 2-dimensional xarray.DataArray with latitude / longitude information as coordinate information
"""
fileList = glob.glob(directory+'/*')
datasets = []
for i in fileList:
tmp=Dataset(i)
param=tmp[internal_filepath+'/'+parameter]
da_tmp= generate_xr_from_1D_vec(tmp,lat_path, lon_path,
param, paramname, longname, no_of_dims, unit)
if(no_of_dims==1):
datasets.append(da_tmp)
else:
da_tmp_st = da_tmp.stack(ground_pixel=('x','y'))
datasets.append(da_tmp_st)
return xr.concat(datasets, dim='ground_pixel')
###Output
_____no_output_____
###Markdown
`generate_geographical_subset`
###Code
def generate_geographical_subset(xarray, latmin, latmax, lonmin, lonmax, reassign=False):
"""
Generates a geographical subset of a xarray.DataArray and if kwarg reassign=True, shifts the longitude grid
from a 0-360 to a -180 to 180 deg grid.
Parameters:
xarray(xarray.DataArray): a xarray DataArray with latitude and longitude coordinates
latmin, latmax, lonmin, lonmax(int): lat/lon boundaries of the geographical subset
reassign(boolean): default is False
Returns:
Geographical subset of a xarray.DataArray.
"""
if(reassign):
xarray = xarray.assign_coords(longitude=(((xarray.longitude + 180) % 360) - 180))
return xarray.where((xarray.latitude < latmax) & (xarray.latitude > latmin) & (xarray.longitude < lonmax) & (xarray.longitude > lonmin),drop=True)
###Output
_____no_output_____
###Markdown
`generate_masked_array`
###Code
def generate_masked_array(xarray, mask, threshold, operator, drop=True):
"""
Applies a mask (e.g. a cloud mask) onto a given xarray.DataArray, based on a given threshold and operator.
Parameters:
xarray(xarray DataArray): a three-dimensional xarray.DataArray object
mask(xarray DataArray): 1-dimensional xarray.DataArray, e.g. cloud fraction values
threshold(float): any number specifying the threshold
operator(str): operator how to mask the array, e.g. '<', '>' or '!='
drop(boolean): default is True
Returns:
Masked xarray.DataArray with NaN values dropped, if kwarg drop equals True
"""
if(operator=='<'):
cloud_mask = xr.where(mask < threshold, 1, 0) #Generate cloud mask with value 1 for the pixels we want to keep
elif(operator=='!='):
cloud_mask = xr.where(mask != threshold, 1, 0)
elif(operator=='>'):
cloud_mask = xr.where(mask > threshold, 1, 0)
else:
cloud_mask = xr.where(mask == threshold, 1, 0)
xarray_masked = xr.where(cloud_mask ==1, xarray, np.nan) #Apply mask onto the DataArray
xarray_masked.attrs = xarray.attrs #Set DataArray attributes
if(drop):
return xarray_masked[~np.isnan(xarray_masked)] #Return masked DataArray
else:
return xarray_masked
###Output
_____no_output_____
###Markdown
`load_masked_l2_da`
###Code
def load_masked_l2_da(directory, internal_filepath, parameter, lat_path, lon_path, no_of_dims,
paramname, longname, unit, threshold, operator):
"""
Loads a Metop-A/B Gome-2 Level 2 data and cloud fraction information and
returns a masked xarray.DataArray. It combines the functions `load_l2_data_xr` and `generate_masked_array`.
Parameters:
directory(str): Path to directory with Level 2 data files.
internal_filepath(str): Internal file path under which the parameters are strored, e.g. TOTAL_COLUMNS
parameter(str): atmospheric parameter, e.g. NO2
lat_path(str): name of the latitude variable within the file
lon_path(str): path to the longitude variable within the file
no_of_dims(int): specify the number of dimensions, 1 or 2
paramname(str): parameter name
longname(str): long name of the parameter that shall be used
unit(str): unit of the parameter
threshold(float): any number specifying the threshold
operator(str): operator how to mask the xarray.DataArray, e.g. '<', '>' or '!='
Returns:
Masked xarray.DataArray keeping NaN values (drop=False)
"""
da = load_l2_data_xr(directory,
internal_filepath,
parameter,
lat_path,
lon_path,
no_of_dims,
paramname,
unit,
longname)
cloud_fraction = load_l2_data_xr(directory,
'CLOUD_PROPERTIES',
'CloudFraction',
lat_path,
lon_path,
no_of_dims,
'CloudFraction',
unit='-',
longname='Cloud Fraction')
return generate_masked_array(da, cloud_fraction, threshold, operator, drop=False)
###Output
_____no_output_____
###Markdown
`select_channels_for_rgb`
###Code
def select_channels_for_rgb(xarray, red_channel, green_channel, blue_channel):
"""
Selects the channels / bands of a multi-dimensional xarray for red, green and blue composite based on Sentinel-3
OLCI Level 1B data.
Parameters:
xarray(xarray.Dataset): xarray.Dataset object that stores the different channels / bands.
red_channel(str): Name of red channel to be selected
green_channel(str): Name of green channel to be selected
blue_channel(str): Name of blue channel to be selected
Returns:
Three xarray DataArray objects with selected channels / bands
"""
return xarray[red_channel], xarray[green_channel], xarray[blue_channel]
###Output
_____no_output_____
###Markdown
`normalize`
###Code
def normalize(array):
"""
Normalizes a numpy array / xarray.DataArray object to values between 0 and 1.
Parameters:
xarray(numpy array or xarray.DataArray): xarray.DataArray or numpy array object whose values should be
normalized.
Returns:
xarray.DataArray with normalized values
"""
array_min, array_max = array.min(), array.max()
return ((array - array_min)/(array_max - array_min))
###Output
_____no_output_____
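###Markdown
A brief sketch of how these two helpers can be chained for an RGB composite (the xarray.Dataset `olci_xr` and the chosen band names are illustrative assumptions):
###Code
# hypothetical RGB composite from Sentinel-3 OLCI Level 1B radiances
red, green, blue = select_channels_for_rgb(olci_xr, 'Oa08_radiance', 'Oa06_radiance', 'Oa04_radiance')
rgb = np.dstack((normalize(red), normalize(green), normalize(blue)))
###Output
_____no_output_____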
###Markdown
`slstr_frp_gridding`
###Code
def slstr_frp_gridding(parameter_array, parameter, lat_min, lat_max, lon_min, lon_max,
sampling_lat_FRP_grid, sampling_lon_FRP_grid, n_fire, lat_frp, lon_frp, **kwargs):
"""
Produces gridded data of Sentinel-3 SLSTR NRT Fire Radiative Power Data
Parameters:
parameter_array(xarray.DataArray): xarray.DataArray with extracted data variable of fire occurences
parameter(str): NRT S3 FRP channel - either `mwir`, `swir` or `swir_nosaa`
lat_min, lat_max, lon_min, lon_max(float): Floats of geographical bounding box
sampling_lat_FRP_grid, sampling_long_FRP_grid(float): Float of grid cell size
n_fire(int): Number of fire occurences
lat_frp(xarray.DataArray): Latitude values of occurred fire events
lon_frp(xarray.DataArray): Longitude values of occurred fire events
**kwargs: additional keyword arguments to be added. Required for parameter `swir_nosaa`, where the function
requires the xarray.DataArray with the SAA FLAG information.
Returns:
the gridded xarray.Data Array and latitude and longitude grid information
"""
n_lat = int( (np.float32(lat_max) - np.float32(lat_min)) / sampling_lat_FRP_grid ) + 1 # Number of rows per latitude sampling
n_lon = int( (np.float32(lon_max) - np.float32(lon_min)) / sampling_lon_FRP_grid ) + 1 # Number of lines per longitude sampling
slstr_frp_gridded = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999.
lat_grid = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999.
lon_grid = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999.
if (n_fire >= 0):
# Loop on i_lat: begins
for i_lat in range(n_lat):
# Loop on i_lon: begins
for i_lon in range(n_lon):
lat_grid[i_lat, i_lon] = lat_min + np.float32(i_lat) * sampling_lat_FRP_grid + sampling_lat_FRP_grid / 2.
lon_grid[i_lat, i_lon] = lon_min + np.float32(i_lon) * sampling_lon_FRP_grid + sampling_lon_FRP_grid / 2.
# Gridded SLSTR FRP MWIR Night - All days
if(parameter=='swir_nosaa'):
FLAG_FRP_SWIR_SAA_nc = kwargs.get('flag', None)
mask_grid = np.where(
(lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat_FRP_grid) &
(lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat_FRP_grid) &
(lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon_FRP_grid) &
(lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon_FRP_grid) &
(parameter_array[:] != -1.) & (FLAG_FRP_SWIR_SAA_nc[:] == 0), False, True)
else:
mask_grid = np.where(
(lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat_FRP_grid) &
(lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat_FRP_grid) &
(lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon_FRP_grid) &
(lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon_FRP_grid) &
(parameter_array[:] != -1.), False, True)
masked_slstr_frp_grid = np.ma.array(parameter_array[:], mask=mask_grid)
if len(masked_slstr_frp_grid.compressed()) != 0:
slstr_frp_gridded[i_lat, i_lon] = np.sum(masked_slstr_frp_grid.compressed())
return slstr_frp_gridded, lat_grid, lon_grid
###Output
_____no_output_____
###Markdown
`df_subset`
###Code
def df_subset(df,low_bound1, high_bound1, low_bound2, high_bound2):
"""
Creates a subset of a pandas.DataFrame object with time-series information
Parameters:
df(pandas.DataFrame): pandas.DataFrame with time-series information
low_bound1(str): dateTime string, e.g. '2018-11-30'
high_bound1(str): dateTime string, e.g. '2018-12-01'
low_bound2(str): dateTime string, e.g. '2019-12-30'
high_bound2(str): dateTime string, e.g. '2020-01-15'
Returns:
the subsetted time-series as pandas.DataFrame object
"""
return df[(df.index>low_bound1) & (df.index<high_bound1)], df[(df.index>low_bound2) & (df.index<high_bound2)]
###Output
_____no_output_____
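###Markdown
A quick usage sketch (the time-series DataFrame `df_no2` and the date bounds below are illustrative assumptions):
###Code
# hypothetical example: extract the same two-week window for two consecutive winters
winter_2018, winter_2019 = df_subset(df_no2, '2018-12-20', '2019-01-05', '2019-12-20', '2020-01-05')
###Output
_____no_output_____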
###Markdown
Data visualization functions `visualize_scatter`
###Code
def visualize_scatter(xr_dataarray, conversion_factor, projection, vmin, vmax, point_size, color_scale, unit,
title):
"""
Visualizes a xarray.DataArray in a given projection using matplotlib's scatter function.
Parameters:
xr_dataarray(xarray.DataArray): a one-dimensional xarray DataArray object with latitude and longitude information as coordinates
conversion_factor(int): any number to convert the DataArray values
projection(str): choose one of cartopy's projection, e.g. ccrs.PlateCarree()
vmin(int): minimum number on visualisation legend
vmax(int): maximum number on visualisation legend
point_size(int): size of marker, e.g. 5
color_scale(str): string taken from matplotlib's color ramp reference
unit(str): define the unit to be added to the color bar
title(str): define title of the plot
"""
fig, ax = plt.subplots(figsize=(40, 10))
ax = plt.axes(projection=projection)
ax.coastlines()
if (projection==ccrs.PlateCarree()):
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.top_labels=False
gl.right_labels=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
# plot pixel positions
img = ax.scatter(
xr_dataarray.longitude.data,
xr_dataarray.latitude.data,
c=xr_dataarray.data*conversion_factor,
cmap=plt.cm.get_cmap(color_scale),
marker='o',
s=point_size,
transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax
)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlabel("Longitude", fontsize=16)
plt.ylabel("Latitude", fontsize=16)
cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
ax.set_title(title, fontsize=20, pad=20.0)
plt.show()
###Output
_____no_output_____
###Markdown
`visualize_pcolormesh`
###Code
def visualize_pcolormesh(data_array, longitude, latitude, projection, color_scale, unit, long_name, vmin, vmax,
set_global=True, lonmin=-180, lonmax=180, latmin=-90, latmax=90):
"""
Visualizes a xarray.DataArray with matplotlib's pcolormesh function.
Parameters:
data_array(xarray.DataArray): xarray.DataArray holding the data values
longitude(xarray.DataArray): xarray.DataArray holding the longitude values
latitude(xarray.DataArray): xarray.DataArray holding the latitude values
projection(str): a projection provided by the cartopy library, e.g. ccrs.PlateCarree()
color_scale(str): string taken from matplotlib's color ramp reference
unit(str): the unit of the parameter, taken from the NetCDF file if possible
long_name(str): long name of the parameter, taken from the NetCDF file if possible
vmin(int): minimum number on visualisation legend
vmax(int): maximum number on visualisation legend
set_global(boolean): optional kwarg, default is True
        lonmin,lonmax,latmin,latmax(float): optional kwargs; set the geographic extent when the set_global kwarg is False
"""
fig=plt.figure(figsize=(20, 10))
ax = plt.axes(projection=projection)
img = plt.pcolormesh(longitude, latitude, data_array,
cmap=plt.get_cmap(color_scale), transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax,
shading='auto')
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1)
if (projection==ccrs.PlateCarree()):
ax.set_extent([lonmin, lonmax, latmin, latmax], projection)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.top_labels=False
gl.right_labels=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
if(set_global):
ax.set_global()
ax.gridlines()
cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
ax.set_title(long_name, fontsize=20, pad=20.0)
# plt.show()
return fig, ax
###Output
_____no_output_____
###Markdown
`visualize_s3_pcolormesh`
###Code
def visualize_s3_pcolormesh(color_array, array, latitude, longitude, title):
"""
Visualizes a xarray.DataArray or numpy.MaskedArray (Sentinel-3 OLCI Level 1 data) with matplotlib's pcolormesh function as RGB image.
Parameters:
color_array (numpy.MaskedArray): any numpy.MaskedArray, e.g. loaded with the NetCDF library and the Dataset function
array(numpy.Array): numpy.Array to get dimensions of the resulting plot
longitude (numpy.Array): array with longitude values
latitude (numpy.Array) : array with latitude values
title (str): title of the resulting plot
"""
fig=plt.figure(figsize=(20, 12))
ax=plt.axes(projection=ccrs.Mercator())
ax.coastlines()
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.top_labels=False
gl.right_labels=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
img1 = plt.pcolormesh(longitude, latitude, array*np.nan, color=color_array,
clip_on = True,
edgecolors=None,
zorder=0,
transform=ccrs.PlateCarree())
ax.set_title(title, fontsize=20, pad=20.0)
plt.show()
###Output
_____no_output_____
###Markdown
`visualize_s3_frp`
###Code
def visualize_s3_frp(data, lat, lon, unit, longname, textstr_1, textstr_2, vmax):
"""
Visualizes a numpy.Array (Sentinel-3 SLSTR NRT FRP data) with matplotlib's pcolormesh function and adds two
text boxes to the plot.
Parameters:
data(numpy.MaskedArray): any numpy MaskedArray, e.g. loaded with the NetCDF library and the Dataset function
    lat(numpy.Array): array with latitude values
    lon(numpy.Array): array with longitude values
unit(str): unit of the resulting plot
longname(str): Longname to be used as title
textstr_1(str): String to fill box 1
textstr_2(str): String to fill box 2
vmax(float): Maximum value of color scale
"""
fig=plt.figure(figsize=(20, 15))
ax = plt.axes(projection=ccrs.PlateCarree())
img = plt.pcolormesh(lon, lat, data,
cmap=cm.autumn_r, transform=ccrs.PlateCarree(),
vmin=0,
vmax=vmax)
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.bottom_labels=False
gl.right_labels=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.029, pad=0.025)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
ax.set_title(longname, fontsize=20, pad=40.0)
props = dict(boxstyle='square', facecolor='white', alpha=0.5)
# place a text box on the right side of the plot
ax.text(1.1, 0.9, textstr_1, transform=ax.transAxes, fontsize=16,
verticalalignment='top', bbox=props)
props = dict(boxstyle='square', facecolor='white', alpha=0.5)
# place a text box in upper left in axes coords
ax.text(1.1, 0.85, textstr_2, transform=ax.transAxes, fontsize=16,
verticalalignment='top', bbox=props)
plt.show()
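# --- Example usage (sketch): `frp`, `lat` and `lon` are hypothetical arrays read from a
# Sentinel-3 SLSTR NRT FRP product elsewhere; the text-box strings are placeholders ---
# visualize_s3_frp(frp, lat, lon, unit='MW', longname='Fire Radiative Power',
#                  textstr_1='Sensing start: ...', textstr_2='Number of fires: ...', vmax=100)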
###Output
_____no_output_____
###Markdown
`visualize_s3_aod`
###Code
def visualize_s3_aod(aod_ocean, aod_land, latitude, longitude, title, unit, vmin, vmax, color_scale, projection):
"""
Visualizes two xarray.DataArrays from the Sentinel-3 SLSTR NRT AOD dataset onto the same plot with
matplotlib's pcolormesh function.
Parameters:
aod_ocean(xarray.DataArray): xarray.DataArray with the Aerosol Optical Depth for ocean values
aod_land(xarray.DataArray): xarray.DataArray with Aerosol Optical Depth for land values
longitude(xarray.DataArray): xarray.DataArray holding the longitude values
latitude(xarray.DataArray): xarray.DataArray holding the latitude values
title(str): title of the resulting plot
unit(str): unit of the resulting plot
vmin(int): minimum number on visualisation legend
vmax(int): maximum number on visualisation legend
color_scale(str): string taken from matplotlib's color ramp reference
projection(str): a projection provided by the cartopy library, e.g. ccrs.PlateCarree()
"""
fig=plt.figure(figsize=(12, 12))
ax=plt.axes(projection=projection)
ax.coastlines(linewidth=1.5, linestyle='solid', color='k', zorder=10)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.top_labels=False
gl.right_labels=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':12}
gl.ylabel_style={'size':12}
img1 = plt.pcolormesh(longitude, latitude, aod_ocean, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale)
img2 = plt.pcolormesh(longitude, latitude, aod_land, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale)
ax.set_title(title, fontsize=20, pad=20.0)
cbar = fig.colorbar(img1, ax=ax, orientation='vertical', fraction=0.04, pad=0.05)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
plt.show()
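# --- Example usage (sketch): `aod_ocean`, `aod_land`, `lat` and `lon` are hypothetical
# DataArrays from a Sentinel-3 SLSTR NRT AOD product loaded elsewhere ---
# visualize_s3_aod(aod_ocean, aod_land, lat, lon, title='Aerosol Optical Depth (example)',
#                  unit='~', vmin=0, vmax=1, color_scale='YlOrRd',
#                  projection=ccrs.PlateCarree())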
###Output
_____no_output_____
###Markdown
Inputs
###Code
def read_ZAT(LT,ZNum,name_ZAT,name_ZAttrI,name_ZAttrIJ):
file_name = name_ZAT + '.mat'
# # read mat file generated from python (carlibration mode)
# if os.path.isfile('ZAT(Python).mat'):
# print('------------------- ZAT file exists - Load ZAT file -----------------')
# ZAttrI = sio.loadmat('ZAT(Python).mat')['ZAttrI']
# ZAttrIJ = sio.loadmat('ZAT(Python).mat')['ZAttrIJ']
    # read the original mat file generated from matlab; the axis order needs to be changed (possibly a different axis-order convention)
if os.path.isfile(file_name):
print('------------------- ZAT file exists - Load ZAT file -----------------')
matZAT = sio.loadmat(file_name)[name_ZAT]
ZAT = matZAT[0,0] # ZAT.dtype
ZAttrI = np.moveaxis(ZAT[name_ZAttrI], -1, 0)
ZAttrIJ = np.moveaxis(ZAT[name_ZAttrIJ], -1, 0)
else:
print('-------------- ZAT file not exists - Replace with zeros -------------')
ZAttrIJ = np.zeros((LT,ZNum,ZNum)) # == Matlab: zeros(ZNum,ZNum,LT). Python: layers first, then rows*columns
ZAttrI = np.zeros((LT,ZNum,ZNum))
return ZAttrI, ZAttrIJ
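# --- Example usage (sketch): the file/variable names are hypothetical; with no 'ZAT.mat'
# on disk this simply returns two zero arrays of shape (LT, ZNum, ZNum) ---
# ZAttrI, ZAttrIJ = read_ZAT(LT=2, ZNum=3, name_ZAT='ZAT',
#                            name_ZAttrI='ZAttrI', name_ZAttrIJ='ZAttrIJ')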
###Output
_____no_output_____
###Markdown
Main Functions
###Code
def ProbIJ_Mix(Status_EmpPred,D,LLCoefIJ,Lambda,EmpInput,Time,Dist,HS,BFS,Hrent,ZAttrIJ,ZAttrI, LT,ZNum):
TravDisu = np.zeros((LT,ZNum,ZNum))
TravDisu_LL = np.zeros((LT,ZNum,ZNum))
ProbIJ_Logit = np.zeros((LT,ZNum,ZNum))
ProbIJ_Logit_Raw = np.zeros((LT,ZNum,ZNum))
ProbIJ = np.zeros((LT,ZNum,ZNum))
IJ = np.zeros((LT,ZNum,ZNum))
ER = np.zeros((ZNum,LT))
EW = np.zeros((ZNum,LT))
JobOpp = np.zeros((ZNum,LT))
LabCat = np.zeros((ZNum,LT))
ZAttrI_logsum = np.zeros((ZNum,LT))
ZAttrIJ_logsum = np.zeros((ZNum,LT))
SizeP_I = HS
SizeP_IJ = HS*BFS # directly multiply == Matlab: SizeP_IJ = HS.*BFS
ACD = np.zeros((ZNum,LT))
ACT = np.zeros((ZNum,LT))
    # manually pre-allocate zero-filled matrices first, because otherwise Python cannot use ProbI and ProbJ in the next section
ProbI = np.zeros((LT,ZNum,ZNum))
ProbJ = np.zeros((LT,ZNum,ZNum))
    # All of the following: the Status_EmpPred == 1 results have been checked against Matlab, but the Status_EmpPred == 0 results have not been checked yet.
    for j in list(range(0,LT)): # 'range' does not include the last number - here, list(range(0,LT)) returns [0,1] == Matlab layers 1,2. In Python the first layer is 0, the second layer is 1.
TravDisu[j] = 2*D*(Time[j]/60)
TravDisu_LL[j] = LLCoefIJ[:,[j]]*TravDisu[j]+(1-LLCoefIJ[:,[j]])*np.log(TravDisu[j])-LLCoefIJ[:,[j]]
ProbIJ_Logit_Raw[j] = SizeP_IJ*np.exp(Lambda[:,[j]]*(-TravDisu_LL[j] - np.log(Hrent)))
if Status_EmpPred == 1:
ProbIJ_Logit[j] = SizeP_IJ*np.exp(Lambda[:,[j]]*(-TravDisu_LL[j] - np.log(Hrent) + ZAttrIJ[j]))
ProbIJ[j] = ProbIJ_Logit[j]/np.sum(np.sum(ProbIJ_Logit[j],axis=0)) # sum for each column: Matlab sum (data,1) == Python: np.sum(data, axis=0) for 2d array. # For 1d array, just sum directly.
ProbJ[j] = ProbIJ_Logit[j]/np.sum(ProbIJ_Logit[j],axis=1,keepdims=True) # sum for each row: Matlab sum (data,2) == Python: np.sum(data, axis=1, keepdims=True) OR np.sum(data, axis=1)[:, np.newaxis]
ProbI[j] = ProbIJ_Logit[j]/np.sum(ProbIJ_Logit[j],axis=0)
IJ[j] = ProbIJ[j]*EmpInput[:,[j]]
else:
ProbIJ_Logit[j] = SizeP_I*np.exp(Lambda[:,[j]]*(-TravDisu_LL[j] - np.log(Hrent) + ZAttrI[j]))
ProbJ[j] = ProbIJ_Logit[j]/np.sum(ProbIJ_Logit[j],axis=1,keepdims=True)
ProbI[j] = ProbIJ_Logit[j]/np.sum(ProbIJ_Logit[j],axis=0)
            IJ[j] = (EmpInput[:,[j]]).T*ProbI[j] # the transpose method for 1d and 2d arrays is different - a 2d array can directly use .T, but a 1d array should use [:, np.newaxis]
ProbIJ[j] = IJ[j]/np.sum(EmpInput[:,[j]],axis=0)
ER[:,[j]] = np.sum(IJ[j],axis=1,keepdims=True)
EW[:,[j]] = np.sum(IJ[j],axis=0)[:, np.newaxis] # [:, np.newaxis] is for 1d array transpose - from horizontal to vertical
JobOpp[:,[j]] = np.log(np.sum(EW[:,[j]].T*np.exp((-TravDisu_LL[j])),axis=1,keepdims=True)) / Lambda[:,[j]] # Job Opportunity from residence zones
LabCat[:,[j]] = np.log(np.sum(ER[:,[j]]*np.exp((-TravDisu_LL[j])),axis=0))[:, np.newaxis] / Lambda[:,[j]] # Labour catchment area from workplace
ZAttrI_logsum[:,[j]] = np.log(np.sum(np.exp(ZAttrI[j]),axis=1,keepdims=True))
ZAttrIJ_logsum[:,[j]] = np.log(np.sum(np.exp(ZAttrIJ[j]),axis=1,keepdims=True))
ACD[:,[j]] = np.sum(Dist[j]*ProbJ[j],axis=1,keepdims=True)
ACT[:,[j]] = np.sum(Time[j]*ProbJ[j],axis=1,keepdims=True)
    # A dictionary (here called 'Output', like a struct in Matlab) can simply store everything - not only arrays but also DataFrames etc.; converting arrays to DataFrames costs a lot of time, so that conversion is deferred to the final to_excel section
Output = {'ER':ER,
'EW':EW,
'JobOpp':JobOpp,
'LabCat':LabCat,
'ACD':ACD,
'ACT':ACT,
'IJ':IJ,
'ProbIJ':ProbIJ,
'ProbI':ProbI}
return Output
# # simply save all as the array format. Change array to table in the final to_excel section.
# np.savez('Output.npz', ER=ER, EW=EW, JobOpp=JobOpp, LabCat=LabCat, ACD=ACD, ACT=ACT, IJ=IJ, ProbIJ=ProbIJ, ProbI=ProbI) # name1 = ER
# Output = np.load('Output.npz')
# return Output
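# --- Quick illustration (sketch) of the Matlab-to-NumPy axis conventions referenced in the
# comments above; `_M` is a throwaway example matrix, not part of the model ---
# _M = np.array([[1., 2.], [3., 4.]])
# np.sum(_M, axis=0)                 # Matlab sum(_M,1): column sums -> array([4., 6.])
# np.sum(_M, axis=1, keepdims=True)  # Matlab sum(_M,2): row sums as a column -> array([[3.],[7.]])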
def Update_Hrent(Input, LT,ZNum,Wage,HSExpShare,Hrent0,HS):
IJ = Input['IJ'] # == Matlab: IJ = Input.IJ
HSExp_Matrix = np.zeros((LT,ZNum,ZNum))
for i in list(range(0,LT)): # == Matlab: for i = 1:LT
HSExp_Matrix[i] = IJ[i]*(Wage[:,[i]].T*HSExpShare[:,[i]])
TotHSExp = np.sum(sum([HSExp_Matrix[l] for l in list(range(0,HSExp_Matrix.shape[0]))]),axis=1,keepdims=True) #Matlab: sum(HSExp_Matrix,3) == Python: sum([HSExp_Matrix[l] for l in list(range(0,HSExp_Matrix.shape[0]))]) - maybe find an easier way later
TotHSDemand = TotHSExp/Hrent0
Hrent_Adj_Coef = np.log(TotHSDemand/HS)
Hrent = Hrent0 + Hrent_Adj_Coef
Error = np.max(np.abs(Hrent_Adj_Coef))
return Hrent, Error
def Calibrate_ZAttr(D,LLCoefIJ,Lambda,Time,HS,BFS,Hrent, LT,ZNum):
# Initial data input (to be replaced with Excel input)
ProbIJ_T1 = np.array([[0.2,0.1,0.05],
[0.05,0.2,0.05],
[0.05,0.1,0.2]])
ProbI_T1 = ProbIJ_T1/np.sum(ProbIJ_T1,axis=0)
ProbIJ_T = np.repeat(ProbIJ_T1[None,...],LT,axis=0)
ProbI_T = np.repeat(ProbI_T1[None,...],LT,axis=0)
SizeP_I = HS
SizeP_IJ = HS*BFS
# Calibrate ZAttrI
TravDisu = np.zeros((LT,ZNum,ZNum))
TravDisu_LL = np.zeros((LT,ZNum,ZNum))
ZAttrI = np.zeros((LT,ZNum,ZNum))
ZAttrIJ = np.zeros((LT,ZNum,ZNum)) # == Matlab: zeros(ZNum,ZNum,LT)
for j in list(range(0,LT)):
TravDisu[j] = 2*D*(Time[j]/60)
TravDisu_LL[j] = LLCoefIJ[:,[j]]*TravDisu[j]+(1-LLCoefIJ[:,[j]])*np.log(TravDisu[j])-LLCoefIJ[:,[j]]
for k in list(range(0,ZNum)):
ProbI1 = ProbI_T[j][:,[k]]
ProbIJ_Logit_Raw = SizeP_I*(np.exp(Lambda[:,[j]]*(-TravDisu_LL[j][:,[k]] - np.log(Hrent))))
Logit1 = ProbI1/ProbIJ_Logit_Raw
ZA = np.log(Logit1)/Lambda[:,[j]]
ZAttrI[j][:,[k]] = ZA - np.mean(ZA[:])
# Calibrate ZAttrIJ
for j in list(range(0,LT)):
TravDisu[j] = 2*D*(Time[j]/60)
TravDisu_LL[j] = LLCoefIJ[:,[j]]*TravDisu[j]+(1-LLCoefIJ[:,[j]])*np.log(TravDisu[j])-LLCoefIJ[:,[j]]
ProbIJ1 = ProbIJ_T[j]
ProbIJ_Logit_Raw = SizeP_IJ*(np.exp(Lambda[:,[j]]*(-TravDisu_LL[j] - np.log(Hrent))))
Logit1 = ProbIJ1/ProbIJ_Logit_Raw
ZA = np.log(Logit1)/Lambda[:,[j]]
ZAttrIJ[j] = ZA - np.mean(ZA[:])
def Verify_ZAttr(Lambda,HS,BFS,Hrent,TravDisu_LL,ProbIJ_T,ProbI_T,ZAttrI,ZAttrIJ, LT,ZNum):
SizeP_I = HS
SizeP_IJ = HS*BFS
# Calibrate ZAttrI
ProbIJ_ZAttrI = np.zeros((LT,ZNum,ZNum))
ProbIJ_ZAttrIJ = np.zeros((LT,ZNum,ZNum))
for j in list(range(0,LT)):
ProbIJ_ZAttrI_Raw = SizeP_I * (np.exp(Lambda[:,[j]]*(-TravDisu_LL[j] - np.log(Hrent) + ZAttrI[j])))
ProbIJ_ZAttrI[j] = ProbIJ_ZAttrI_Raw/(np.sum(ProbIJ_ZAttrI_Raw,axis=0))
ProbIJ_ZAttrIJ_Raw = SizeP_IJ * (np.exp(Lambda[:,[j]]*(-TravDisu_LL[j] - np.log(Hrent) + ZAttrIJ[j])))
ProbIJ_ZAttrIJ[j] = ProbIJ_ZAttrIJ_Raw/(np.sum(ProbIJ_ZAttrIJ_Raw.flatten('F')[:, np.newaxis], axis=0)) # Matlab: ProbIJ_ZAttrIJ_Raw(:) == Python: ProbIJ_ZAttrIJ_Raw.flatten('F')[:, np.newaxis]. Reduce dimension from 2d-array to 1d-array (one single column) here? #but for ProbIJ_ZAttrI_Raw, we didn't do this.
Error_ZAttrI = np.max(np.max(np.max(np.abs(ProbIJ_ZAttrI/ProbI_T - 1), axis=1, keepdims=True), axis=2, keepdims=True)) #can we just use np.max() - it will generate the max value among all of them?
Error_ZAttrIJ = np.max(np.max(np.max(np.abs(ProbIJ_ZAttrIJ/ProbIJ_T - 1), axis=1, keepdims=True), axis=2, keepdims=True))
# Error_ZAttrI & Error_ZAttrIJ are slightly different from matlab results, maybe because the results are 0 actually? will check later. (Here Error_ZAttrIJ is 1.110223e-16, Matlab is 1.5543e-15)
return Error_ZAttrI,Error_ZAttrIJ
    Error_ZAttrI,Error_ZAttrIJ = Verify_ZAttr(Lambda,HS,BFS,Hrent,TravDisu_LL,ProbIJ_T,ProbI_T,ZAttrI,ZAttrIJ, LT,ZNum)
if (Error_ZAttrI < Tol) & (Error_ZAttrIJ < Tol):
print('--------------------- ZATTR Calibration Complete --------------------')
else:
print('--------------------- ZATTR Calibration Error ---------------------')
return ZAttrIJ,ZAttrI
###Output
_____no_output_____
###Markdown
Output
###Code
def print_outputs (Status_Mode,Status_EmpPred,Status_HrentPred,Output,Hrent,Tol):
Date = ['DATE: ',pd.Timestamp.today()] # change format later - currently they're in 2 columns
Project = ['PROJECT NAME: ProbIJ_Model_Test']
Author = ['AUTHOR: LI WAN | UNIVERSITY OF CAMBRIDGE']
Precision = ['PRECISION: ',Tol]
if Status_Mode == 1:
ModelMode = ['MODEL MODE: CALIBRATION']
else:
ModelMode = ['MODEL MODE: FORECAST']
if Status_EmpPred == 1:
        EmpPredMode = ['EMPLOYMENT PREDICTION: ENABLED']
else:
        EmpPredMode = ['EMPLOYMENT PREDICTION: DISABLED']
if Status_HrentPred == 1:
HrentPredMode = ['HOUSE RENTS PREDICTION: ENABLED'];
else:
HrentPredMode = ['HOUSE RENTS PREDICTION: DISABLED'];
Metadata = [Project,Date,Author,Precision,ModelMode,EmpPredMode,HrentPredMode]
MetadataT = pd.DataFrame(data = Metadata)
#Matlab: Output.Metadata = MetadataT #save in the output construct, check later.
# 2d array to dataframe
df_ER = pd.DataFrame(Output['ER'], columns = pd.MultiIndex.from_tuples([('ER','Column_A'),('ER','Column_B')])) # when checking the excel file, there is a empty gap between column name and content - do this later!!
df_EW = pd.DataFrame(Output['EW'], columns = pd.MultiIndex.from_tuples([('EW','Column_A'),('EW','Column_B')]))
T_EREW = pd.concat([df_ER, df_EW], axis=1)
df_JobOpp = pd.DataFrame(Output['JobOpp'], columns = pd.MultiIndex.from_tuples([('JobOpp','Column_A'),('JobOpp','Column_B')])) # format gap - do this later
df_LabCat = pd.DataFrame(Output['LabCat'], columns = pd.MultiIndex.from_tuples([('LabCat','Column_A'),('LabCat','Column_B')]))
T_JobOppLatCat = pd.concat([df_JobOpp, df_LabCat], axis=1)
df_ACD = pd.DataFrame(Output['ACD'], columns = pd.MultiIndex.from_tuples([('ACD','Column_A'),('ACD','Column_B')])) # format gap - do this later
df_ACT = pd.DataFrame(Output['ACT'], columns = pd.MultiIndex.from_tuples([('ACT','Column_A'),('ACT','Column_B')]))
T_Tran = pd.concat([df_ACD, df_ACT], axis=1)
T_Hrents = pd.DataFrame(Hrent, columns = ['Hrent'])
# save 3d array to dataframe
names = ['dim3', 'dim_row', 'dim_column']
index_IJ = pd.MultiIndex.from_product([range(s)for s in Output['IJ'].shape], names=names)
T_IJ = pd.DataFrame({'IJ': Output['IJ'].flatten()}, index=index_IJ)['IJ']
T_IJ = T_IJ.unstack(level='dim_column')#.swaplevel().sort_index()
index_ProbIJ = pd.MultiIndex.from_product([range(s)for s in Output['ProbIJ'].shape], names=names)
T_ProbIJ = pd.DataFrame({'ProbIJ': Output['ProbIJ'].flatten()}, index=index_ProbIJ)['ProbIJ']
T_ProbIJ = T_ProbIJ.unstack(level='dim_column')#.swaplevel().sort_index()
index_ProbI = pd.MultiIndex.from_product([range(s)for s in Output['ProbI'].shape], names=names)
T_ProbI = pd.DataFrame({'ProbI': Output['ProbI'].flatten()}, index=index_ProbI)['ProbI']
T_ProbI = T_ProbI.unstack(level='dim_column')#.swaplevel().sort_index()
# write to the excel file
Filename = pd.ExcelWriter('_Output_Summary(python).xlsx') #, engine='xlsxwriter'
MetadataT.to_excel(Filename, sheet_name='Metadata', index=False)
T_IJ.to_excel(Filename, sheet_name='Commuting_Flow')
T_IJ_all = pd.DataFrame(sum([Output['IJ'][l] for l in list(range(0,Output['IJ'].shape[0]))]))
T_IJ_all.to_excel(Filename, sheet_name='Commuting_Flow_All', index=False)
T_EREW.to_excel(Filename, sheet_name='ER_EW')
T_Hrents.to_excel(Filename, sheet_name='Hrent', index=False)
T_JobOppLatCat.to_excel(Filename, sheet_name='JobOpp_LabCat')
T_Tran.to_excel(Filename, sheet_name='ACD_ACT') #drop index, do this later
Filename.save()
Output_summary = {'Metadata':Metadata,
'MetadataT':MetadataT,
'T_IJ':T_IJ,
'T_IJ_all':T_IJ_all,
'T_EREW':T_EREW,
'T_Hrents':T_Hrents,
'T_JobOppLatCat':T_JobOppLatCat,
'T_Tran':T_Tran}
return Output_summary
###Output
_____no_output_____
###Markdown
Functions Some functions that I wrote
###Code
#################
#### IMPORTS ####
#################
# Profiling
import cProfile, pstats, io
from pstats import SortKey
# Arrays
import numpy as np
# Deep Learning stuff
import torch
import torchvision
import torchvision.transforms as transforms
# Images display and plots
import matplotlib.pyplot as plt
# Fancy progress bars
import tqdm.notebook as tq
# Tensor Network Stuff
%config InlineBackend.figure_formats = ['svg']
import quimb.tensor as qtn # Tensor Network library
import quimb
def pro_profiler(func):
'''Generic profiler. Expects an argument-free function.
    e.g. func = lambda: learning_epoch_sgd(mps, imgs, 3, 0.1).
Prints and returns the profiling report trace.'''
# TODO: adapt to write trace to file
pr = cProfile.Profile()
pr.enable()
func()
pr.disable()
s = io.StringIO()
sortby = SortKey.CUMULATIVE
ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
ps.print_stats()
print(s.getvalue())
return s
'''
Wrapper for type checks.
While defining a function, you can add the wrapper
stating the expected types:
> @arg_val(class_1, class_2, ...)
> def function(a, b, ...):
'''
def arg_val(*args):
def wrapper(func):
def validating(*_args):
if any(type(_arg)!=arg for _arg, arg in zip(_args,args)):
raise TypeError('wrong type!')
return func(*_args)
return validating
return wrapper
@arg_val(int, int, float)
def get_data(train_size = 1000, test_size = 100, grayscale_threshold = .5):
'''
Prepare the MNIST dataset for the training algorithm:
* Choose randomly a subset from the whole dataset
* Flatten each image to mirror the mps structure
* Normalize images from [0,255] to [0,1]
    * Apply a threshold to each pixel so that values
    below that threshold are set to 0, the others are set to 1.
    For this algorithm we will only deal with binary states {0,1}
    instead of a range from 0 to 1
'''
# Download all data
mnist = torchvision.datasets.MNIST('classifier_data', train=True, download=True,
transform = transforms.Compose([transforms.ToTensor()]) )
# Convert torch.tenor to numpy
npmnist = mnist.data.numpy()
# Check of the type of the sizes
#if ((type(train_size) != int) or (type(test_size) != int)):
# raise TypeError('train_size and test_size must be INT')
# Check if the training_size and test_size requested are bigger than
# the MNIST whole size
if ( (train_size + test_size) > npmnist.shape[0] ):
raise ValueError('Subset too big')
# Check of the positivity of sizes
if ( (train_size <= 0) or (test_size <= 0) ):
raise ValueError('Size of training set and test set cannot be negative')
# Choose just a subset of the data
# Creating a mask by randomly sampling the indexes of the full dataset
subset_indexes = np.random.choice(np.arange(npmnist.shape[0]), size=(train_size + test_size),
replace=False, p=None)
# Apply the mask
npmnist = npmnist[subset_indexes]
# Flatten every image
npmnist = np.reshape(npmnist, (npmnist.shape[0], npmnist.shape[1]*npmnist.shape[2]))
# Normalize the data from 0 - 255 to 0 - 1
npmnist = npmnist/npmnist.max()
# As in the paper, we will only deal with {0,1} values, not a range
if ((grayscale_threshold <= 0) or (grayscale_threshold >= 1)):
raise ValueError('grayscale_threshold must be in range ]0,1[')
npmnist[npmnist > grayscale_threshold] = 1
npmnist[npmnist <= grayscale_threshold] = 0
# Return training set and test set
return npmnist[:train_size], npmnist[train_size:]
@arg_val(np.ndarray, bool, str)
def plot_img(img_flat, flip_color = True, savefig = ''):
'''
Display the image from the flattened form
'''
# If the image is corrupted for partial reconstruction (pixels are set to -1)
if -1 in img_flat:
img_flat = np.copy(img_flat)
img_flat[img_flat == -1] = 0
# Background white, strokes black
if flip_color:
plt.imshow(1-np.reshape(img_flat,(28,28)), cmap='gray')
# Background black, strokes white
else:
plt.imshow(np.reshape(img_flat,(28,28)), cmap='gray')
plt.axis('off')
if savefig != '':
# save the picture as svg in the location determined by savefig
plt.savefig(savefig, format='svg')
plt.show()
train_set, test_set = get_data()
train_set.shape
test_set.shape
type(train_set[1])
plot_img(train_set[1], True, '')
train_set[0].shape[0]
# Create a simple MPS network randomly initialized
mps = qtn.MPS_rand_state(L=28*28, bond_dim=5)
def get_p_img(mps, img):
'''
Contract the MPS network with an image to compute its probability
P(img) = (<mps|img><img|mps>)/<mps|mps>
'''
if (len(mps.tensors) != img.shape[0]):
raise ValueError('Length of MPS and size of image do not match')
# Compute denominator
Z = mps.H @ mps # Does it acknowledge canonicalization to speed computations?
# TO DO: check documentation
# Contract image with mps
P = 0
# From left to right...
for body in range(img.shape[0]):
# if pixel is 0:
if img[body] == 0:
state = [1,0]
# if pixel is 1:
elif img[body] == 1:
state = [0,1]
else:
raise ValueError('Found invalid pixel in image')
if body == img.shape[0] - 1:
newtensor = np.einsum('i,ik', carried_value, mps.tensors[body].data)
P = np.einsum('i,i', state, newtensor)
elif body > 0:
newtensor = np.einsum('i,ikj', carried_value, mps.tensors[body].data)
carried_value = np.einsum('i,ik', state, newtensor)
else:
carried_value = np.einsum('i,ki', state, mps.tensors[body].data)
P = (P*P)/Z
return P
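# --- Example usage (sketch): probability of one flattened training image under the
# randomly initialised 784-site MPS defined above ---
# p = get_p_img(mps, train_set[0])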
train_set[0].shape
def partial_removal_img(mnistimg, fraction = .5, axis = 0):
'''
Corrupt (with -1 values) a portion of an input image (from the test set)
to test if the algorithm can reconstruct it
'''
# Check type:
if [type(mnistimg), type(fraction), type(axis)] != [np.ndarray, float, int]:
raise TypeError('Input types not valid')
# Check the shape of input image
if (mnistimg.shape[0] != 784):
raise TypeError('Input image shape does not match, need (784,)')
# Axis can be either 0 (rowise deletion) or 1 (columnwise deletion)
if not(axis in [0,1]):
raise ValueError('Invalid axis [0,1]')
# Fraction must be from 0 to 1
if (fraction < 0 or fraction > 1):
raise ValueError('Invalid value for fraction variable (in interval [0,1])')
mnistimg_corr = np.copy(mnistimg)
mnistimg_corr = np.reshape(mnistimg_corr, (28,28))
if axis == 0:
mnistimg_corr[int(28*(1-fraction)):,:] = -1
else:
mnistimg_corr[:,int(28*(1-fraction)):] = -1
mnistimg_corr = np.reshape(mnistimg_corr, (784,))
return mnistimg_corr
aaa = partial_removal_img(test_set[0], fraction = .3, axis = 0)
plot_img(aaa)
def initialize_mps(Ldim = 28*28, bdim = 30, canonicalize = 1):
'''
Initialize the MPS tensor network
1. Create the MPS TN
2. Canonicalization
3. Renaming indexes
'''
# Create a simple MPS network randomly initialized
mps = qtn.MPS_rand_state(L=Ldim, bond_dim=bdim)
# Canonicalize: use a canonicalize value out of range to skip it (such as -1)
if canonicalize in range(Ldim):
mps.canonize(canonicalize)
# REINDEXING TENSORS FOR A EASIER DEVELOPING
# during initializations, the index will be named using the same notation of the
# Pan Zhang et al paper:
# ___ ___ ___
# |I0|--i0--|I1|--i1-... ...-i(N-1)--|IN|
# | | |
# | v0 | v1 | vN
# V V V
# Reindexing the leftmost tensor
mps = mps.reindex({mps.tensors[0].inds[0]: 'i0',
mps.tensors[0].inds[1]: 'v0'})
# Reindexing the inner tensors through a cycle
for tensor in range(1,len(mps.tensors)-1):
mps = mps.reindex({mps.tensors[tensor].inds[0]: 'i'+str(tensor-1),
mps.tensors[tensor].inds[1]: 'i'+str(tensor),
mps.tensors[tensor].inds[2]: 'v'+str(tensor)})
# Reindexing the last tensor
tensor = tensor + 1
mps = mps.reindex({mps.tensors[tensor].inds[0]: 'i'+str(tensor),
mps.tensors[tensor].inds[1]: 'v'+str(tensor)})
return mps
mps = initialize_mps(Ldim=5)
mps.tensors
def quimb_transform_img2state(img):
'''
Trasform an image to a tensor network to fully manipulate
it using quimb, may be very slow, use it for checks
'''
# Initialize empty tensor
img_TN = qtn.Tensor()
for k, pixel in enumerate(img):
if pixel == 0: # if pixel is 0, we want to have a tensor with data [0,1]
img_TN = img_TN & qtn.Tensor(data=[0,1], inds=['v'+str(k)], )
else: # if pixel is 1, we want to have a tensor with data [1,0]
img_TN = img_TN & qtn.Tensor(data=[1,0], inds=['v'+str(k)], )
# | | 781 |
# O O ... O
return img_TN
def computepsi(mps, img):
'''
Contract the MPS with the states (pixels) of a binary{0,1} image
PSI: O-...-O-O-O-...-O
| | | | |
| | | | |
IMAGE: O O O O O
Images state are created the following way:
if pixel is 0 -> state = [0,1]
if pixel is 1 -> state = [1,0]
'''
# Left most tensor
# O--
# Compute | => O--
# O
if img[0] == 0:
contraction = np.einsum('a,ba',[0,1], mps.tensors[0].data)
else:
contraction = np.einsum('a,ba',[1,0], mps.tensors[0].data)
# Remove the first and last pixels because in the MPS
# They need to be treated differently
for k, pixel in enumerate(img[1:-1]):
#
# Compute O--O-- => O--
# | |
contraction = np.einsum('a,abc',contraction, mps.tensors[k+1].data)
# O--
# Compute | => O--
# O
if pixel == 0:
contraction = np.einsum('a,ba', [0,1], contraction)
else:
contraction = np.einsum('a,ba', [1,0], contraction)
#
# Compute O--O => O
# | |
contraction = np.einsum('a,ab',contraction, mps.tensors[-1].data)
# O
# Compute | => O (SCALAR)
# O
if img[-1] == 0:
contraction = np.einsum('a,a', [0,1], contraction)
else:
contraction = np.einsum('a,a', [1,0], contraction)
return contraction
mps = initialize_mps()
%%time
fast_psi = computepsi(mps, train_set[0])
%%time
slow_psi = quimb_transform_img2state(train_set[0]) @ mps
fast_psi**2
slow_psi**2
def computepsiprime(mps, img, contracted_left_index):
'''
Contract the MPS with the states (pixels) of a binary{0,1} image
PSI': O-...-O- -O-...-O
| | | |
| | | | | |
IMAGE: O O O O O O
Images state are created the following way:
if pixel is 0 -> state = [0,1]
if pixel is 1 -> state = [1,0]
'''
#############
# LEFT PART #
#############
# Left most tensor
# O--
# Compute | => O--
# O
if img[0] == 0:
contraction_sx = np.einsum('a,ba',[0,1], mps.tensors[0].data)
else:
contraction_sx = np.einsum('a,ba',[1,0], mps.tensors[0].data)
for k in range(1, contracted_left_index):
#
# Compute O--O-- => O--
# | |
contraction_sx = np.einsum('a,abc->bc',contraction_sx, mps.tensors[k].data)
# O--
# Compute | => O--
# O
if img[k] == 0:
contraction_sx = np.einsum('a,ba', [0,1], contraction_sx)
else:
contraction_sx = np.einsum('a,ba', [1,0], contraction_sx)
##############
# RIGHT PART #
##############
# Right most tensor
# ---O
# Compute | => --O
# O
if img[-1] == 0:
contraction_dx = np.einsum('a,ba',[0,1], mps.tensors[-1].data)
else:
contraction_dx = np.einsum('a,ba',[1,0], mps.tensors[-1].data)
for k in range(len(mps.tensors)-2, contracted_left_index+1, -1):
#
# Compute --O--O => --O
# | |
contraction_dx = np.einsum('a,bac->bc',contraction_dx, mps.tensors[k].data)
# --O
# Compute | => --O
# O
if img[k] == 0:
contraction_dx = np.einsum('a,ba', [0,1], contraction_dx)
else:
contraction_dx = np.einsum('a,ba', [1,0], contraction_dx)
# From here on it is just speculation
if img[contracted_left_index] == 0:
contraction_sx = np.einsum('a,k->ak', contraction_sx, [0,1])
else:
contraction_sx = np.einsum('a,k->ak', contraction_sx, [1,0])
if img[contracted_left_index+1] == 0:
contraction_dx = np.einsum('a,k->ak', contraction_dx, [0,1])
else:
contraction_dx = np.einsum('a,k->ak', contraction_dx, [1,0])
contraction = np.einsum('ab,cd->abcd', contraction_sx, contraction_dx)
return contraction
def learning_step(mps, index, imgs, lr, going_right = True):
'''
Compute the updated merged tensor A_{index,index+1}
UPDATE RULE: A_{i,i+1} += lr* 2 *( A_{i,i+1}/Z - ( SUM_{i=1}^{m} psi'(v)/psi(v) )/m )
'''
# Merge I_k and I_{k+1} in a single rank 4 tensor ('i_{k-1}', 'v_k', 'i_{k+1}', 'v_{k+1}')
A = (mps.tensors[index] @ mps.tensors[index+1])
# Assumption: The mps is canonized
Z = A@A
# Computing the second term, summation over
# the data-dependent terms
psifrac = 0
for img in imgs:
        num = computepsiprime(mps,img,index) # PSI'(v)
        den = computepsi(mps,img)            # PSI(v)
# Theoretically the two computations above can be optimized in a single function
# because we are contracting the very same tensors for the most part
psifrac = psifrac + num/den
psifrac = psifrac/imgs.shape[0]
# Derivative of the NLL
dNLL = (A/Z) - psifrac
A = A + lr*dNLL # Update A_{i,i+1}
# Now the tensor A_{i,i+1} must be split in I_k and I_{k+1}.
# To preserve canonicalization:
# > if we are merging sliding towards the RIGHT we need to absorb right
# S v D
# ->-->--A_{k,k+1}--<--<- => ->-->-->--x--<--<--<- => >-->-->--o--<--<-
# | | | | | | | | | | | | | | | | | |
#
# > if we are merging sliding toward the LEFT we need to absorb left
#
if going_right:
# FYI: split method does apply SVD by default
# there are variations of svd that can be inspected
# for a performance boost
SD = A.split(['i'+str(index-1),'v'+str(index)], absorb='right')
else:
SD = A.split(['i'+str(index-1),'v'+str(index)], absorb='left')
# SD.tensors[0] -> I_{index}
# SD.tensors[1] -> I_{index+1}
return SD
def learning_epoch_sgd(mps, imgs, epochs, lr, batch_size = 25):
'''
Manages the sliding left and right.
From tensor 1 (the second), apply learning_step() sliding to the right
At tensor max-2, apply learning_step() sliding to the left back to tensor 1
'''
    # We expect, however, that the batch size is smaller than the input set
batch_size = min(len(imgs),batch_size)
guide = np.arange(len(imgs))
# [1,2,...,780,781,780,...,2,1]
progress = tq.tqdm([i for i in range(1,len(mps.tensors)-2)] + [i for i in range(len(mps.tensors)-3,0,-1)], leave=True)
# Firstly we slide right
going_right = True
for index in progress:
np.random.shuffle(guide)
mask = guide[:batch_size]
A = learning_step(mps,index,imgs[mask],lr, going_right)
mps.tensors[index].modify(data=np.transpose(A.tensors[0].data,(0,2,1)))
mps.tensors[index+1].modify(data=A.tensors[1].data)
#p0 = computepsi(mps,imgs[0])**2
progress.set_description('Left Index: {}'.format(index))
if index == len(mps.tensors)-3 :
going_right = False
# cha cha real smooth
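# --- Example usage (sketch): illustrative, untuned hyperparameters ---
# mps = initialize_mps(Ldim=28*28, bdim=5)
# learning_epoch_sgd(mps, train_set, epochs=1, lr=0.05, batch_size=25)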
###Output
_____no_output_____
###Markdown
Functional Paradigm Map Map is a built-in Python function that applies a given function to every item of an iterable and returns a lazy iterator of the results (wrapped in list() below to materialise them).
###Code
# Map an algorithm that adds two to every member of the input list
func = lambda x: x + 2
result = list(map(func, [1,2,3,4,5]))
assert(result == [3,4,5,6,7])
print([i for i in result])
###Output
[3, 4, 5, 6, 7]
###Markdown
Filter The built-in filter function returns an iterator over the elements of an iterable for which a conditional function returns true. In essence: supply an iterable, and get back only those elements that satisfy the condition.
###Code
# Return only elements greater than or equal to 3
func = lambda x: x if x >= 3 else False
result = list(filter(func, [1,2,3,4,5]))
assert(result == [3,4,5])
print([i for i in result])
###Output
[3, 4, 5]
###Markdown
Reduce Available from functools, reduce performs a cumulative computation on a list and returns the final result.
###Code
import functools
# Reduce, get the largest list from a list of lists
sample = [[1,2,3,4], [1,2,3], [i for i in range(5)], [i for i in range(10)]]
func = lambda x, y: x if len(x) > len(y) else y
result = functools.reduce(func, sample)
assert(result == [i for i in range(10)])
print(result)
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
###Code
def greet_user():
print('Hello!')
greet_user()
def comprimentar_usuario(username):
print(f'Olá, {username.title()}!')
comprimentar_usuario('pekora')
def display_message():
print('Hello everyone, still learning python here')
display_message()
def describe_anime_good(anime_type, anime_name):
    print(f'\nMy favorite anime type is {anime_type}')
print(f'My favorite anime is {anime_name}')
describe_anime_good('comedy', 'gintama')
describe_anime_good('slice_of_life', 'non non biyori')
def describe_anime_good(anime_type, anime_name):
    print(f'\nMy favorite anime type is {anime_type}')
print(f'My favorite anime is {anime_name}')
describe_anime_good(anime_type= 'comedia', anime_name= 'Gintama')
def describe_anime_good(anime_type='comedy', anime_name='Gintama'):
print(f'\nMy favorite anime type is {anime_type}')
print(f'My favorite anime is {anime_name}')
describe_anime_good()
def full_name(first_name, second_name):
full = f'Nice to meet you, {first_name} {second_name}-san'
return full.title()
full_name('Usada', 'Pekora')
def build_person(first_name, last_name):
person = {f'first': first_name.title(), 'last': last_name.title()}
return person
build_person('jimi', 'hendrix')
def build_person(first_name, last_name, age=None):
"""Return a dictionary of information about a person."""
person = {'first': first_name, 'last': last_name}
if age:
person['age'] = age
return person
musician = build_person('jimi', 'hendrix', age=27)
print(musician)
#Using a Function with a while Loop
def get_formatted_name(first_name, last_name):
full_name = f"{first_name} {last_name}"
return full_name.title()
while True:
print("\nPlease tell me your name")
print("(enter 'q' at any time to quit)")
f_name = input("First name: ")
if f_name == 'q':
break
l_name = input("Last name: ")
if l_name == 'q':
break
formatted_name = get_formatted_name(f_name, l_name)
print(f"\nHello, {formatted_name}")
def city_country(name_city, name_country):
place = f'{name_city}, {name_country}'
return place.title()
while True:
print(f'\nPlease tell me your location')
print(f"use 'q' at any time to quit")
city = input('Your city: ')
if city == 'q':
break
country = input('Your Country: ')
if country == 'q':
break
set_city_country = city_country(city, country)
print(f'You are from {set_city_country}')
def greet_users(names):
for name in names:
msg = f"Hello, {name}"
print(msg)
usernames = ['Pekora', 'Polka', 'Watame', 'Flare']
greet_users(usernames)
unprinted_designs = ['phone case', 'robot pendant', 'dodecahedron']
completed_models = []
while unprinted_designs:
current_design = unprinted_designs.pop()
print(f"Printing model: {current_design}")
completed_models.append(current_design)
print("\nThe following models have been printed:")
for completed_model in completed_models:
print(completed_model)
objetos = ['telefone', 'celular', 'pc', 'carro', 'talher']
novo_lugar = []
while objetos:
mudar = objetos.pop()
print(f"Objeto achado: {mudar}")
novo_lugar.append(mudar)
print(f'\nOs seguintes objetos foram mudados: ')
for novo_lugar_a in novo_lugar:
print(novo_lugar_a)
def make_pizza(*toppings):
print(toppings)
make_pizza('pepperoni')
make_pizza('mushrooms', 'green peppers', 'extra cheese')
def make_pizza(*toppings):
print(f'Making a pizza with the following toppings: ')
for topping in toppings:
print(f'- {topping}')
make_pizza('pepperoni')
make_pizza('mushrooms', 'green peppers', 'extra cheese')
#Mixing Positional and Arbitrary Arguments
def make_pizza(size, *toppings):
print(f'\n Pizza com tamanho de {size}cm e com as coberturas de {toppings}')
    for topping in toppings:
        print(f'- {topping}')
make_pizza(30, 'chocolate')
make_pizza(50, 'quatro-queijos', 'mussarella', 'bacon')
###Output
_____no_output_____
###Markdown
built-in Function
###Code
adjustment = 0.5
print(min(3,5,7))
print(max('a','A','0'))
###Output
3
a
###Markdown
All of these python notebooks are available at [https://github.com/caxqueiroz/coding-with-python] Functions Functions can represent mathematical functions. More importantly, in programming functions are a mechanism to allow code to be re-used so that complex programs can be built up out of simpler parts. This is the basic syntax of a function```python def funcname(arg1, arg2,... argN): ''' Document String''' statements return ``` Read the above syntax as: a function named "funcname" is defined, which accepts arguments "arg1,arg2,....argN". The function is documented by '''Document String'''. After executing its statements the function returns a "value". Return values are optional (by default every function returns **None** if no return statement is executed)
###Code
print("Hello Jack.")
print("Jack, how are you?")
###Output
Hello Jack.
Jack, how are you?
###Markdown
Instead of writing the above two statements every single time, they can be replaced by a function that does the job in just one line. Let's define a function firstfunc().
###Code
def firstfunc():
print("Hello Jack.")
print("Jack, how are you?")
firstfunc() # execute the function
###Output
Hello Jack.
Jack, how are you?
###Markdown
**firstfunc()** just prints the message to a single person every time. We can make our function **firstfunc()** accept an argument which will store the name and then print a greeting for that name. To do so, add an argument to the function as shown.
###Code
def firstfunc(username):
print("Hello %s." % username)
print(username + ',' ,"how are you?")
name1 = 'sally' # or use input('Please enter your name : ')
###Output
_____no_output_____
###Markdown
So we pass this variable to the function **firstfunc()** as the argument username, because that is the parameter defined for this function, i.e. name1 is passed as username.
###Code
firstfunc(name1)
###Output
Hello sally.
sally, how are you?
###Markdown
Return Statement When the function results in some value and that value has to be stored in a variable or needs to be sent back or returned for further operation to the main algorithm, a return statement is used.
###Code
def times(x,y):
z = x*y
return z
###Output
_____no_output_____
###Markdown
The **times( )** function defined above accepts two arguments and returns the variable z, which contains the product of the two arguments.
###Code
c = times(4,5)
print(c)
###Output
20
###Markdown
The z value is stored in variable c and can be used for further operations. Instead of declaring another variable the entire statement itself can be used in the return statement as shown.
###Code
def times(x,y):
'''This multiplies the two input arguments'''
return x*y
c = times(4,5)
print(c)
###Output
20
###Markdown
Since **times( )** is now defined, we can document it as shown above. This docstring is returned whenever the **times( )** function is passed to the **help( )** function.
###Code
help(times)
###Output
Help on function times in module __main__:
times(x, y)
This multiplies the two input arguments
###Markdown
Multiple variables can also be returned as a tuple. However, this tends not to be very readable when returning many values, and can easily introduce errors when the order of return values is interpreted incorrectly.
###Code
eglist = [10,50,30,12,6,8,100]
def egfunc(eglist):
highest = max(eglist)
lowest = min(eglist)
first = eglist[0]
last = eglist[-1]
return highest,lowest,first,last
###Output
_____no_output_____
###Markdown
If the function is just called without any variables for the result to be assigned to, the result is returned inside a tuple. But if the variables are mentioned, then the result is assigned to the variables in the order declared in the return statement.
###Code
egfunc(eglist)
a,b,c,d = egfunc(eglist)
print(' a =',a,' b =',b,' c =',c,' d =',d)
###Output
a = 100 b = 6 c = 10 d = 100
###Markdown
Default arguments When an argument of a function takes a common value in the majority of cases, it can be specified with a default value. This is also called an implicit argument.
###Code
def implicitadd(x,y=3,z=0):
print("%d + %d + %d = %d"%(x,y,z,x+y+z))
return x+y+z
###Output
_____no_output_____
###Markdown
**implicitadd( )** is a function that accepts up to three arguments, but most of the time the first argument just needs 3 added to it. Hence the second argument is assigned the value 3 and the third argument is zero. Here the last two arguments are default arguments. Now if the second argument is not given when calling the **implicitadd( )** function, it is taken as 3.
###Code
implicitadd(4)
###Output
4 + 3 + 0 = 7
###Markdown
However we can call the same function with two or three arguments. A useful feature is to explicitly name the argument values being passed into the function. This gives great flexibility in how to call a function with optional arguments. All of the following are valid:
###Code
implicitadd(4,4)
implicitadd(4,5,6)
implicitadd(4,z=7)
implicitadd(2,y=1,z=9)
implicitadd(x=1)
###Output
4 + 4 + 0 = 8
4 + 5 + 6 = 15
4 + 3 + 7 = 14
2 + 1 + 9 = 12
1 + 3 + 0 = 4
###Markdown
Any number of arguments If the number of arguments that is to be accepted by a function is not known, then an asterisk symbol is used before the name of the argument to hold the remainder of the arguments. The following function requires at least one argument but can have many more.
###Code
def add_n(first,*args):
"return the sum of one or more numbers"
reslist = [first] + [value for value in args]
print(reslist)
return sum(reslist)
###Output
_____no_output_____
###Markdown
The above function defines a list of all of the arguments, prints the list and returns the sum of all of the arguments.
###Code
add_n(1,2,3,4,5)
add_n(6.5)
###Output
[6.5]
###Markdown
Arbitrary numbers of named arguments can also be accepted using `**`. When the function is called all of the additional named arguments are provided in a dictionary
###Code
def namedArgs(**names):
'print the named arguments'
# names is a dictionary of keyword : value
print(" ".join(name+"="+str(value)
for name,value in names.items()))
namedArgs(x=3*4,animal='mouse',z=(1+2j))
###Output
x=12 animal=mouse z=(1+2j)
###Markdown
Global and Local Variables A variable declared inside a function is a local variable, while one declared outside the function is a global variable.
###Code
eg1 = [1,2,3,4,5]
def egfunc1():
x=1
def thirdfunc():
x=2
print("Inside thirdfunc x =", x)
thirdfunc()
print("Outside x =", x)
egfunc1()
###Output
Inside thirdfunc x = 2
Outside x = 1
###Markdown
If a **global** variable is defined as shown in the example below then that variable can be called from anywhere. Global values should be used sparingly as they make functions harder to re-use.
###Code
def egfunc1():
x = 1.0 # local variable for egfunc1
def thirdfunc():
global x # globally defined variable
x = 2.0
print("Inside thirdfunc x =", x)
thirdfunc()
print("Outside x =", x)
egfunc1()
print("Globally defined x =",x)
###Output
Inside thirdfunc x = 2.0
Outside x = 1.0
Globally defined x = 2.0
###Markdown
Lambda Functions These are small functions which are not defined with any name and carry a single expression whose result is returned. Lambda functions come in very handy when operating on lists. These functions are defined by the keyword **lambda** followed by the variables, a colon and the respective expression.
###Code
z = lambda x: x * x
z(8)
###Output
_____no_output_____
###Markdown
Composing functions Lambda functions can also be used to compose functions
###Code
def double(x):
return 2*x
def square(x):
return x*x
def f_of_g(f,g):
"Compose two functions of a single variable"
return lambda x: f(g(x))
doublesquare= f_of_g(double,square)
print("doublesquare is a",type(doublesquare))
doublesquare(3)
###Output
doublesquare is a <class 'function'>
###Markdown
Define the ```displayFileLink``` function that displays a link to the file given by the ```filename``` string, prefixed by the message given by the ```message``` string.
###Code
def displayFileLink(message, filename):
divText = '<div style="display:inline-block">' + message + ' </div>'
file = FileLink(filename, result_html_prefix=divText)
display(file)
###Output
_____no_output_____
###Markdown
Define the ```downloadPDF``` function that downloads a PDF file from a link provided by the ```url``` string and renames it to the ```pdf_filename``` input string.
###Code
def downloadPDF(url, pdf_filename):
!wget $url -O $pdf_filename
displayFileLink('PDF file saved as:', pdf_filename)
###Output
_____no_output_____
###Markdown
Define ```convertPDFtoCSV``` function that converts a PDF file given by ```pdf_filename``` string file name to a CSV file with filename given by ```csv_filename``` input (*credits to [tabula-java team](https://github.com/tabulapdf/tabula-java)*).
###Code
def convertPDFtoCSV(pdf_filename, csv_filename):
!java -Dfile.encoding=utf-8 -jar tabula.jar -l --pages 3 $pdf_filename -o $csv_filename
displayFileLink('CSV file saved as:', csv_filename)
###Output
_____no_output_____
###Markdown
Define the ```convertCSVtoJSON``` function that converts a CSV file with filename given by the ```csv_filename``` string input to a properly processed JSON file with filename given by the ```json_filename``` input, returning its data as the ```json_data``` output.
###Code
def convertCSVtoJSON(csv_filename, json_filename):
with open(csv_filename, encoding="utf-8") as file:
import csv
csv_data = csv.reader(file, delimiter=',', quotechar='"')
full_data = []
index = -1
for row in csv_data:
index = index + 1
if index:
column = 0
for cell in row:
column = column + 1
data = cell.replace('\r','').replace('\n',' ').replace(' , ',', ').strip()
if data == '¬': data = ''
elif data == '0': data = ''
# Código
if column == 1:
codigo = data.upper()
# Disciplina - turma
elif column == 2:
# Campus
data, _, campus = data.rpartition('(')
campus = title_pos_tag(campus[:-1])
# Disciplina
disciplina, _, data = data.strip().rpartition(' ')
disciplina = title_pos_tag(disciplina)
# Turma e período
turma, _, periodo = data.strip().rpartition('-')
turma = turma.upper()
periodo = periodo.capitalize()
# Subcódigo
subcodigo, _, sufixo = codigo.partition('-') # DA1ESZM035-17SA = D+A1(ESZM035|-|17)SA
subcodigo = subcodigo[1+len(turma):] + '-' + sufixo[:2]
# Teoria
elif column == 3:
for week in week_names:
data = data.replace(week, '\n' + week)
teoria = data.replace(', \n','\n').strip().splitlines()
teoria_num_of_days = len(teoria)
teoria_dia_da_semana = [None]*teoria_num_of_days
teoria_entrada = [None]*teoria_num_of_days
teoria_saida = [None]*teoria_num_of_days
teoria_sala = [None]*teoria_num_of_days
teoria_frequencia = [None]*teoria_num_of_days
for day in range(teoria_num_of_days):
data = teoria[day]
teoria_dia_da_semana[day], _, data = data.partition(' das ')
teoria_entrada[day], _, data = data.partition(' às ')
teoria_saida[day], _, data = data.partition(', sala ')
teoria_sala[day], _, teoria_frequencia[day] = data.partition(', ')
teoria_dia_da_semana[day] = teoria_dia_da_semana[day].capitalize()
teoria_frequencia[day] = teoria_frequencia[day].capitalize()
teoria_sala[day] = teoria_sala[day].upper()
# Prática
elif column == 4:
for week in week_names:
data = data.replace(week, '\n' + week)
pratica = data.replace(',\n','\n').strip().splitlines()
pratica_num_of_days = len(pratica)
pratica_dia_da_semana = [None]*pratica_num_of_days
pratica_entrada = [None]*pratica_num_of_days
pratica_saida = [None]*pratica_num_of_days
pratica_sala = [None]*pratica_num_of_days
pratica_frequencia = [None]*pratica_num_of_days
for day in range(pratica_num_of_days):
data = pratica[day]
pratica_dia_da_semana[day], _, data = data.partition(' das ')
pratica_entrada[day], _, data = data.partition(' às ')
pratica_saida[day], _, data = data.partition(', sala ')
pratica_sala[day], _, pratica_frequencia[day] = data.partition(', ')
pratica_dia_da_semana[day] = pratica_dia_da_semana[day].capitalize()
pratica_frequencia[day] = pratica_frequencia[day].capitalize()
pratica_sala[day] = pratica_sala[day].upper()
# Docente teoria
elif column == 5:
docente_teoria = title_pos_tag(data)
# Docente prática
elif column == 6:
docente_pratica = title_pos_tag(data)
teoria = []
i = -1
for day in range(teoria_num_of_days):
i = i + 1
teoria_new = {'id': i,
'dia_da_semana': teoria_dia_da_semana[day],
'horario_de_entrada': teoria_entrada[day],
'horario_de_saida': teoria_saida[day],
'sala': teoria_sala[day],
'frequencia': teoria_frequencia[day]}
teoria.append(teoria_new)
pratica = []
i = -1
for day in range(pratica_num_of_days):
i = i + 1
pratica_new = {'id': i,
'dia_da_semana': pratica_dia_da_semana[day],
'horario_de_entrada': pratica_entrada[day],
'horario_de_saida': pratica_saida[day],
'sala': pratica_sala[day],
'frequencia': pratica_frequencia[day]}
pratica.append(pratica_new)
new_data = {'id': index-1,
'codigo': codigo,
'subcodigo': subcodigo,
'disciplina': disciplina,
'campus': campus,
'periodo': periodo,
'turma': turma,
'teoria': teoria,
'pratica': pratica,
'docente_teoria': docente_teoria,
'docente_pratica': docente_pratica}
full_data.append(new_data)
with open(json_filename, 'w') as file:
import json
json.dump(full_data, file)
displayFileLink('JSON file saved as:', json_filename)
with open(json_filename, 'r') as file:
json_data = json.load(file)
return json_data
###Output
_____no_output_____
###Markdown
Define the ```sumMinutes``` function that adds ```minutes``` (an int number of minutes) to the ```time_str``` input string with format ```'%H:%M'``` and returns the resulting time string.
###Code
def sumMinutes(time_str, minutes):
H,M = time_str.split(':')
D = timedelta(hours=int(H), minutes=int(M)+minutes)
D = datetime(1,1,1) + D
new_time_str = D.strftime('%H:%M')
return new_time_str
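# Example (worked by hand): adding 30 minutes to 23:50 wraps past midnight.
# sumMinutes('23:50', 30) # -> '00:20'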
###Output
_____no_output_____
###Markdown
Define ```compareTimes``` function that compares ```time_str_1``` and ```time_str_2``` input strings with format ```'%H:%M``` returning ```1```, ```0```, or ```-1``` if ```time_str_1``` is bigger, equal, or smaller than ```time_str_2```, respectively.
###Code
def compareTimes(time_str_1, time_str_2):
H1,M1 = time_str_1.split(':')
H2,M2 = time_str_2.split(':')
H_diff = int(H1) - int(H2)
M_diff = int(M1) - int(M2)
if H_diff > 0 or H_diff == 0 and M_diff > 0:
return 1
elif H_diff == 0 and M_diff == 0:
return 0
else: return -1
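# Examples (worked by hand):
# compareTimes('10:30', '09:45') # -> 1 (first time is later)
# compareTimes('08:00', '08:00') # -> 0
# compareTimes('07:15', '19:00') # -> -1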
###Output
_____no_output_____
###Markdown
Define ```convertJSONtoSheet``` function that processes a JSON file with data given by ```json_data``` into a spreadsheet.
###Code
def convertJSONtoSheet(json_data):
import numpy as np
import pandas as pd
import qgrid
# codigo = []
subcodigo = []
disciplina = []
campus = []
periodo = []
turma = []
# teoria = []
# pratica = []
docente_teoria = []
docente_pratica = []
dias_da_semana = []
week_name_index = lambda week_name: [index for index,name in enumerate(week_names) if name.lower() == week_name.lower()][0]
for materia in json_data:
# codigo .append(materia['codigo'])
subcodigo .append(materia['subcodigo'])
disciplina .append(materia['disciplina'])
campus .append(materia['campus'])
periodo .append(materia['periodo'])
turma .append(materia['turma'])
# teoria .append(materia['teoria'])
# pratica .append(materia['pratica'])
docente_teoria .append(materia['docente_teoria'])
docente_pratica.append(materia['docente_pratica'])
dias_da_semana_new = [''] * len(week_names)
for day in materia['teoria'] + materia['pratica']:
# sala_horario = day['horario_de_entrada'] + '-' + day['horario_de_saida'] + ' (' + day['sala'] + ')'
# dia_da_semana = day['dia_da_semana']
# for week in week_names:
# if week.lower() == dia_da_semana.lower():
# dias_da_semana_new.append(sala_horario)
# else:
# dias_da_semana_new.append('')
index = week_name_index(day['dia_da_semana'])
dias_da_semana_new[index] += day['horario_de_entrada'] + '-' + day['horario_de_saida'] + ' (' + day['sala'] + ')'
# display(dias_da_semana_new)
dias_da_semana.append(dias_da_semana_new)
dias_da_semana_transposed = list(map(list, zip(*dias_da_semana)))
week_names_titled = [week.title() for week in week_names]
data = np.array([
# codigo,
subcodigo,
disciplina,
campus,
periodo,
turma,
# teoria,
# pratica,
] + dias_da_semana_transposed + [
docente_teoria,
docente_pratica
]).T
# from pprint import pprint
# pprint(data)
columns = [
# 'Código',
'Subcódigo',
'Disciplina',
'Campus',
'Período',
'Turma',
# 'Teoria',
# 'Prática',
] + week_names_titled + [
'Docente teoria',
'Docente prática'
]
df = pd.DataFrame(data, columns=columns)
df = df.set_index(['Subcódigo','Disciplina','Campus','Período','Turma'])
df = df.sort_index(level=['Subcódigo','Período'])
# df['#'] = df.groupby(level=[0,1]).cumcount() + 1
# df = df.set_index('#', append=True)
grid_options = {
'fullWidthRows': False,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 25,
'minVisibleRows': 15,
'sortable': True,
'filterable': True,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
sheet = qgrid.QGridWidget(df=df, grid_options=grid_options)
return sheet
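# --- Example usage (sketch): `json_data` would come from convertCSVtoJSON above ---
# sheet = convertJSONtoSheet(json_data)
# display(sheet)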
###Output
_____no_output_____
###Markdown
First class function Functions are first-class citizens in Python. That means they can be assigned to a variable, passed as an argument to another function and returned from another function
###Code
# Let's define a simple function
def func():
return "This is a function"
# Now we can assign a name to this function
var = func
# We can call the function with the new function. Cool eh?
var()
# Next we define the second function that takes any function
# and executes it.
def func1(anyfunc):
return anyfunc()
# Now we can pass our first function to the second function
func1(func)
# Our third function will create a function and returns it.
def func2():
def retfunc():
return "This is a returned function"
return retfunc
# As usual we assign a name to the return
var2 = func2()
# And now we can call the returned function using the assigned name
var2()
###Output
_____no_output_____
###Markdown
Closures func1 and func2 are also known as higher-order functions, as they can take another function as an argument or return one. Closures are based on this feature with an added advantage: the inner function remembers the outer function's variables even after the outer function has finished executing. Let's see an example.
###Code
# First, defining a function that takes an argument, then returns
# another function
def outerfunc(outerstr):
def innerfunc(innerstr):
return f'{outerstr} {innerstr}'
return innerfunc
# Next, we call the outer function and assign the return which is
# the inner function, to a name
hello = outerfunc('Hello')
goodbye = outerfunc('Goodbye')
# Now, we can call the inner function
hello('friend'), goodbye('stranger')
###Output
_____no_output_____
###Markdown
Docstring
###Code
def docfunc():
'''Demonstrate docstring'''
pass
docfunc.__doc__
help(docfunc)
###Output
Help on function docfunc in module __main__:
docfunc()
Demonstrate docstring
###Markdown
Functions can return multiple values
###Code
def func(arg1, arg2):
return arg1,arg2
a, b = func('a', 'b')
2*a+3*b
###Output
_____no_output_____
###Markdown
Scope Python resolves names in LEGB order: Local, Enclosing, Global, Built-in.
###Code
globalvalue = 1
def func():
global globalvalue
globalvalue = 2
print(globalvalue)
print(globalvalue)
func()
print(globalvalue)
def outerfunc():
enclvalue = 1
def innerfunc():
nonlocal enclvalue
enclvalue = 2
print(enclvalue)
innerfunc()
print(enclvalue)
outerfunc()
###Output
1
2
###Markdown
This does not work: without the `global` statement, the assignment inside the function creates a new local variable instead of changing the global one.
###Code
globalvalue = 1
def func():
globalvalue = 2
print(globalvalue)
func()
print(globalvalue)
###Output
1
1
###Markdown
This does not work either: without the `nonlocal` statement, the inner assignment creates its own local variable instead of changing the enclosing one.
###Code
def outerfunc():
enclvalue = 1
def innerfunc():
enclvalue = 2
print(enclvalue)
innerfunc()
print(enclvalue)
outerfunc()
###Output
1
1
###Markdown
Arguments
###Code
# Let's define a function that takes a lot of arguments
def argfunc(*args, **kwargs):
print('*args')
for arg in args: print(arg)
print('**kwargs')
for key, value in kwargs.items():
print(f'key:{key}, value:{value}')
# Call the function
argfunc(1, 2, 3, alpha=1, beta=2, gamma=3)
# First, let's create a tuple and dictionary
tuple_ = (1,2,3)
dict_ = {'alpha':1, 'beta':2, 'gamma':3}
# Call the function
argfunc(*tuple_, **dict_)
###Output
*args
1
2
3
**kwargs
key:alpha, value:1
key:beta, value:2
key:gamma, value:3
###Markdown
Lambda and common uses (map, filter, reduce)
###Code
fn = lambda x: x**2
fn(3)
(lambda x: x**2)(3)
(lambda x: [x*_ for _ in range(5)])(2)
# The above expression in a normal way
def x(a):
return [a*_ for _ in range(5)]
x(2)
# A ReLU function: return positive values unchanged and zero out
# negative values
(lambda x: x if x>0 else 0)(-1)
list_ = [x for x in range(1,10)]
#map function
squared = map(lambda x: x**2, list_)
list(squared)
#filter function
even = filter(lambda x: x%2==0, list_)
list(even)
#reduce function
from functools import reduce
product = reduce(lambda x1,x2: x1*x2, list_)
product
#sorted function
student_tuples = [
('john', 'A', 15),
('jane', 'B', 12),
('dave', 'B', 10)]
#sort the list according to the third element in the tuple
sorted_students=sorted(student_tuples, key=lambda x: x[2])
sorted_students
###Output
_____no_output_____
###Markdown
Importing necessary libraries
###Code
from skimage import color, data, io, exposure
from skimage.feature import canny, hog
import datetime
import cv2
import numpy as np
from sklearn.feature_extraction import image
from matplotlib import pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import RandomizedSearchCV
import xlwings as xw
import sys
import math
import argparse
from distutils import util
# This function parses the command-line parameters of the project.
def handle_args():
parser=argparse.ArgumentParser()
parser.add_argument('-f')
parser.add_argument('--image_name', default='p001')
parser.add_argument('--max_depth', default=20)
parser.add_argument('--avg_patch_size', default=39)
parser.add_argument('--sum_patch_size', default=8)
parser.add_argument('--sd_patch_size', default=16)
parser.add_argument('--hog_patch_size', default=1)
parser.add_argument('--neigh_patch_size', default=1)
parser.add_argument('--use_distance_features', default=True)
parser.add_argument('--use_sd_features', default=False)
parser.add_argument('--use_sum_features', default=True)
parser.add_argument('--use_avg_features', default=True)
parser.add_argument('--use_edge_features', default=False)
parser.add_argument('--use_haris_corner', default=True)
parser.add_argument('--use_hog_features', default=False)
parser.add_argument('--use_fast_feature', default=True)
parser.add_argument('--use_orb_feature', default=False)
parser.add_argument('--use_neighborhood_features', default=False)
parser.add_argument('--save_to_file', default=False)
args=parser.parse_args()
args.use_fast_feature = util.strtobool(str(args.use_fast_feature))
args.use_orb_feature = util.strtobool(str(args.use_orb_feature))
args.use_neighborhood_features = util.strtobool(str(args.use_neighborhood_features))
args.use_hog_features = util.strtobool(str(args.use_hog_features))
args.use_haris_corner = util.strtobool(str(args.use_haris_corner))
args.use_edge_features = util.strtobool(str(args.use_edge_features))
args.use_avg_features = util.strtobool(str(args.use_avg_features))
args.use_sum_features = util.strtobool(str(args.use_sum_features))
args.use_sd_features = util.strtobool(str(args.use_sd_features))
args.use_distance_features = util.strtobool(str(args.use_distance_features))
args.save_to_file = util.strtobool(str(args.save_to_file))
args.avg_patch_size = int(args.avg_patch_size)
args.sum_patch_size = int(args.sum_patch_size)
args.sd_patch_size = int(args.sd_patch_size)
args.hog_patch_size = int(args.hog_patch_size)
args.neigh_patch_size = int(args.neigh_patch_size)
args.max_depth = int(args.max_depth)
return args
# This function plots the images in the given dictionary side by side, saves the figure under results/, and shows it unless save is True
def show_image(dic, image_name, save = False):
fig, axes = plt.subplots(1, len(dic), figsize = (24,16))
ax = axes.ravel()
for i in dic:
ax[list(dic.keys()).index(i)].set_title(i)
ax[list(dic.keys()).index(i)].imshow(dic[i], cmap = "gray")
fig.tight_layout()
time = datetime.datetime.now()
plt.savefig(f'results/{image_name}.{time.hour}.{time.minute}.png', format="png")
if not save:
plt.show()
plt.close()
# This function finds the first empty row (last used row + 1) in the given Excel column
def find_last_row_in_excel(workbooklocation,sheetname,columnletter):
wb = xw.Book(workbooklocation)
X = wb.sheets[sheetname].range(columnletter + str(wb.sheets[sheetname].cells.last_cell.row)).end('up').row + 1
cell = columnletter + str(X)
print(cell)
return cell
# This function writes the given results into the Excel file mae.xlsx.
def write_data_to_excel(features, max_depth, avg_patch_size, sum_patch_size, mean_abs_err, image_name, finish_time):
try:
wb = xw.Book('mae.xlsx')
sht = wb.sheets['Sheet1']
if(image_name == "p001"):
last_row = find_last_row_in_excel('mae.xlsx','Sheet1','A')
sht.range(last_row).value = ['Mean Absolute Error','Image Name', 'Process_time (second)', f'{features}',f'Max Depth: {max_depth}', f'Avg PS: {avg_patch_size}', f'Sum PS: {sum_patch_size}']
last_row = find_last_row_in_excel('MAE.xlsx','Sheet1','A')
sht.range(last_row).value = [f'{float(mean_abs_err)}',f'{image_name}',f'{finish_time}']
except Exception as e:
print("The file named mae.xlsx is not found in the project directory.", e)
# This function gets the source image
def get_source(image_name):
return io.imread(f'Dataset\\{image_name}, a_source.png')
# This function gets the source image with openCV
def get_source_opencv(image_name):
return cv2.imread(f'Dataset\\{image_name}, a_source.png')
# This function gets the target image with openCV
def get_target_opencv(image_name):
return cv2.imread(f'Dataset\\{image_name}, b_target.png')
# This function gets the target image
def get_target(image_name):
return io.imread(f'Dataset\\{image_name}, b_target.png')
# This function gets the groundtruth image
def get_groundtruth(image_name):
return io.imread(f'Dataset\\{image_name}, c_groundtruth.png')
# This function reconstructs an image from the given R, G and B channel values and dimensions
def reconstruct(r, g, b, dimensions):
image = np.zeros((dimensions[0] * dimensions[1], 3))
for i in range(dimensions[0] * dimensions[1]):
image[i][0] = int(r[i])
image[i][1] = int(g[i])
image[i][2] = int(b[i])
image = np.reshape(image, (dimensions[0], dimensions[1], 3))
return image.astype(int)
# This function reshapes the image into a one-dimensional column of pixels
def get_one_dim(image, colored = False):
if colored:
return np.reshape(image, (image.shape[0] * image.shape[1], 1, 3))
else:
return np.reshape(image, (image.shape[0] * image.shape[1], 1))
# This function converts integer image values to floats in [0, 1] by dividing by 255
def convert_float(image):
squarer = lambda t: t / 255
vfunc = np.vectorize(squarer)
return vfunc(image)
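# Note: because the input is a NumPy array, `image / 255` produces the same result directly, without np.vectorize.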
# This function calculates the average of a given pixel and its neighbors.
def get_avg(data, point, patch_size):
first, second, third, fourth = get_quarters(data, point, patch_size)
summation = (np.sum(first) + np.sum(second) + np.sum(third) + np.sum(fourth) + np.sum(data[point[0], point[1]]))
length = (first.shape[0] * first.shape[1] + second.shape[0] * second.shape[1] +
third.shape[0] * third.shape[1] + fourth.shape[0] * fourth.shape[1] + 1)
return summation/length
# This function calculates the sum of a given pixel and its neighbors.
def get_sum(data, point, patch_size):
first, second, third, fourth = get_quarters(data, point, patch_size)
summation = (np.sum(first) + np.sum(second) + np.sum(third) + np.sum(fourth) + np.sum(data[point[0], point[1]]))
return summation
# This function calculates the standard deviation of a given pixel and its neighbors
def get_sd(data, point, patch_size):
first, second, third, fourth = get_quarters(data, point, patch_size)
summation = (np.sum(first) + np.sum(second) + np.sum(third) + np.sum(fourth) + np.sum(data[point[0], point[1]]))
length = (first.shape[0] * first.shape[1] + second.shape[0] * second.shape[1] +
third.shape[0] * third.shape[1] + fourth.shape[0] * fourth.shape[1] + 1)
avg = summation/length
_sum = 0.0
first = first.flatten()
for i in first.flatten():
_sum = _sum + pow((i - avg), 2)
for i in second.flatten():
_sum = _sum + pow((i - avg), 2)
for i in third.flatten():
_sum = _sum + pow((i - avg), 2)
for i in fourth.flatten():
_sum = _sum + pow((i - avg), 2)
result = float(_sum/length)
return math.sqrt(result)
# This function finds the neighborhood of a given pixel according to the patch_size parameter, which defines the window size (e.g. 3x3, 5x5).
# It then returns the four quarters of that neighborhood.
def get_quarters(data, point, patch_size):
#first quarter
x_start = point[0] - patch_size
x_end = point[0] - 1
y_start = point[1]
y_end = point[1] + patch_size
if x_start < 0:
x_start = 0
if y_end > data.shape[1]:
y_end = data.shape[1]
first = get_portion(data, x_start, x_end, y_start, y_end)
#second quarter
x_start = point[0] - patch_size
x_end = point[0]
y_start = point[1] - patch_size
y_end = point[1] - 1
if x_start < 0:
x_start = 0
if y_start < 0:
y_start = 0
second = get_portion(data, x_start, x_end, y_start, y_end)
#third quarter
x_start = point[0] + 1
x_end = point[0] + patch_size
y_start = point[1] - patch_size
y_end = point[1]
if x_end > data.shape[0]:
x_end = data.shape[0]
if y_start < 0:
y_start = 0
third = get_portion(data, x_start, x_end, y_start, y_end)
#fourth quarter
x_start = point[0]
x_end = point[0] + patch_size
y_start = point[1] + 1
y_end = point[1] + patch_size
if x_end > data.shape[0]:
x_end = data.shape[0]
if y_end > data.shape[1]:
y_end = data.shape[1]
fourth = get_portion(data, x_start, x_end, y_start, y_end)
return first, second, third, fourth
# This function returns the slice of data defined by the given row and column bounds
def get_portion(data, x_start, x_end, y_start, y_end):
return data[x_start:x_end + 1, y_start:y_end + 1]
# This function creates a Random Forest Regressor and fits it to the given data.
def fit_channel(X, y, n_estimator = 16):
classifier = RandomForestRegressor(n_estimators=n_estimator)
classifier.fit(X, y)
return classifier
# This function creates a Random Forest Regressor and uses randomized-search cross-validation to find the best model parameters.
def model_with_best_parameters(X,y):
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 10, stop = 50, num = 5)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 50, num = 5)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestRegressor()
# Random search of parameters, using 3 fold cross validation,
# search across 100 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 10, cv = 3, verbose=2, random_state=42, n_jobs = -1)
# Fit the random search model
rf_random.fit(X, y)
print(rf_random.best_params_)
return rf_random.best_estimator_
# This function runs prediction with the given classifier.
def predict_channel(classifier, X):
# Return the classifier's predictions for X
return classifier.predict(X)
# This function finds edges in the image with the Canny detector
def get_edge_feature(image):
edges = canny(image)
return np.reshape(edges, (image.shape[0] * image.shape[1], 1))
# This function computes, for each pixel, the sum over its neighborhood
def get_sum_feature(data, patch_size):
sum_feature = np.zeros((data.shape[0], data.shape[1]))
for i in range(data.shape[0]):
for j in range(data.shape[1]):
sum_feature[i][j] = get_sum(data, (i, j), patch_size)
return np.reshape(sum_feature, (sum_feature.shape[0] * sum_feature.shape[1], 1))
# This function computes, for each pixel, the average over its neighborhood
def get_avg_feature(data, patch_size):
avg_feature = np.zeros((data.shape[0], data.shape[1]))
for i in range(data.shape[0]):
for j in range(data.shape[1]):
avg_feature[i][j] = get_avg(data, (i, j), patch_size)
return np.reshape(avg_feature, (avg_feature.shape[0] * avg_feature.shape[1], 1))
# This function collects, for each pixel, the values of its neighboring pixels as a feature vector
def get_neighborhood_feature(data, patch_size):
neigh = np.empty((1,int((patch_size * 2 + 1)**2 - 1)))
for i in range(data.shape[0]):
for j in range(data.shape[1]):
feature = get_neighborhood(data, (i, j), patch_size)
neigh = np.vstack((neigh,feature))
neigh_deleted = np.delete(neigh, 0, 0)
return neigh_deleted
# This function computes, for each pixel, the standard deviation over its neighborhood
def get_sd_feature(data, patch_size):
sd_feature = np.zeros((data.shape[0], data.shape[1]))
for i in range(data.shape[0]):
for j in range(data.shape[1]):
sd_feature[i][j] = get_sd(data, (i, j), patch_size)
return np.reshape(sd_feature, (sd_feature.shape[0] * sd_feature.shape[1], 1))
# This function computes a Histogram of Oriented Gradients (HOG) based feature for each pixel
def get_hog_feature(data, hog_patch_size):
fd, hog_image = hog(data, orientations=8, pixels_per_cell=(2, 2),
cells_per_block=(2, 2), visualize=True, multichannel=False)
hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 10))
return get_sum_feature(hog_image_rescaled, hog_patch_size)
# This function computes a corner-response value for each pixel using Harris corner detection
def get_haris_corner(image,img):
gray_img = np.float32(image)
dst = cv2.cornerHarris(gray_img, blockSize=2, ksize=3, k=0.04)
dst = cv2.dilate(dst, None)
return np.reshape(dst, (dst.shape[0]*dst.shape[1],1))
# This function computes corner keypoint values for each pixel using the FAST corner detection algorithm
def get_fast_feature(gray_img, img):
fast = cv2.FastFeatureDetector_create()
fast.setNonmaxSuppression(False)
kp = fast.detect(gray_img, None)
kp_img = cv2.drawKeypoints(img, kp, None, color=(0, 255, 0))
return np.reshape(kp_img, (kp_img.shape[0] * kp_img.shape[1], 3))
# This function computes keypoint values for each pixel using the ORB (Oriented FAST and Rotated BRIEF) algorithm
def get_orb_feature(gray_img, img):
orb = cv2.ORB_create(nfeatures=2000)
kp, des = orb.detectAndCompute(gray_img, None)
kp_img = cv2.drawKeypoints(img, kp, None, color=(0, 255, 0), flags=0)
return np.reshape(kp_img, (kp_img.shape[0] * kp_img.shape[1], 3))
# This function collects the neighborhood values of a single pixel, padding incomplete quarters with the neighborhood mean
def get_neighborhood(data, point, patch_size):
quarter_len = int(((patch_size * 2 + 1)**2 - 1) / 4)
first, second, third, fourth = get_quarters(data, point, patch_size)
first = first.flatten()
second = second.flatten()
third = third.flatten()
fourth = fourth.flatten()
feature_mean = np.concatenate((first, second,third,fourth), axis=None)
feature_mean = np.mean(feature_mean)
if(len(first) < int(quarter_len)):
array_first_mean = np.empty((quarter_len - len(first),))
array_first_mean[:] = feature_mean
first = np.concatenate((first,array_first_mean), axis=None)
if(len(second) < int(quarter_len)):
array_second_mean = np.empty((quarter_len - len(second),))
array_second_mean[:] = feature_mean
second = np.concatenate((second,array_second_mean), axis=None)
if(len(third) < int(quarter_len)):
array_third_mean = np.empty((quarter_len - len(third),))
array_third_mean[:] = feature_mean
third = np.concatenate((third,array_third_mean), axis=None)
if(len(fourth) < int(quarter_len)):
array_fourth_mean = np.empty((quarter_len - len(fourth),))
array_fourth_mean[:] = feature_mean
fourth = np.concatenate((fourth,array_fourth_mean), axis=None)
features = np.concatenate((first, second,third,fourth), axis=None)
return features
# This is a generic helper that applies the given feature-extraction function to both the source and target images and appends the results to X and X_test
def get_feature(func, image_name, X, X_test):
source_opencv = get_source_opencv(image_name)
source_gray_2d_opencv = cv2.cvtColor(source_opencv, cv2.COLOR_BGR2GRAY)
feature_x = func(source_gray_2d_opencv, source_opencv)
result_x = np.hstack((X,feature_x))
target_opencv = get_target_opencv(image_name)
target_opencv = cv2.cvtColor(target_opencv, cv2.COLOR_BGR2GRAY)
feature_test = func(target_opencv,target_opencv)
result_test = np.hstack((X_test, feature_test))
return result_x, result_test
# This function computes, for each pixel, its distance to the top, bottom, left and right image borders
def get_distance_feature(data):
distance_down_feature = np.zeros((data.shape[0], data.shape[1]))
distance_up_feature = np.zeros((data.shape[0], data.shape[1]))
distance_left_feature = np.zeros((data.shape[0], data.shape[1]))
distance_right_feature = np.zeros((data.shape[0], data.shape[1]))
for i in range(data.shape[0]):
for j in range(data.shape[1]):
distance_up_feature[i][j] = i
distance_down_feature[i][j] = data.shape[0] - i
distance_left_feature[i][j] = j
distance_right_feature[i][j] = data.shape[1] - j
return np.reshape(distance_up_feature, (data.shape[0] * data.shape[1], 1)), np.reshape(distance_down_feature, (data.shape[0] * data.shape[1], 1)), np.reshape(distance_left_feature, (data.shape[0] * data.shape[1], 1)), np.reshape(distance_right_feature, (data.shape[0] * data.shape[1], 1))
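# Hypothetical end-to-end sketch (the driver code is not part of this notebook; the variable names below are assumptions):
# args = handle_args()
# source = get_source(args.image_name)                     # colored source image
# source_gray = color.rgb2gray(source)                     # grayscale version used to build the features
# X = get_avg_feature(source_gray, args.avg_patch_size)    # further features can be stacked with np.hstack
# y_red = get_one_dim(source[:, :, 0]).ravel()             # red channel as the regression target
# red_model = fit_channel(X, y_red)
# red_pred = predict_channel(red_model, X)                 # in practice the prediction input comes from the target image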
###Output
_____no_output_____
###Markdown
Getting started with Functions 
###Code
var = 1
def simple_function(): pass
def simple_function2():
return 1
simple_function2()
###Output
_____no_output_____
###Markdown
Put things into the function
###Code
def add(x,y):
return x+y
add(1,1)
add(2,2)
foods = ["apple", "eggs", "butter"]
def complicated_function(my_food):
total = len(my_food)
print(f"I am very hungry and I am ready to eat: {total} foods!")
for food in my_food:
print(f"I love to wake up and eat: {food}")
complicated_function(foods)
complicated_function(["steak", "ham", "bacon"])
###Output
_____no_output_____
###Markdown
Exotic items with Python Functions
###Code
#nonlocal cannot modify this variable
#lower_body_counter=5
def attack_counter():
"""Counts number of attacks on part of body"""
lower_body_counter = 0
upper_body_counter = 0
#print(lower_body_counter)
def attack_filter(attack):
nonlocal lower_body_counter
nonlocal upper_body_counter
attacks = {"kimura": "upper_body",
"straight_ankle_lock":"lower_body",
"arm_triangle":"upper_body",
"keylock": "upper_body",
"knee_bar": "lower_body"}
if attack in attacks:
if attacks[attack] == "upper_body":
upper_body_counter +=1
if attacks[attack] == "lower_body":
lower_body_counter +=1
print(f"Upper Body Attacks {upper_body_counter}, Lower Body Attacks {lower_body_counter}")
return attack_filter
fight = attack_counter()
type(fight)
fight("kimura")
fight("kimura")
fight("knee_bar")
###Output
_____no_output_____
###Markdown
Partial Function
###Code
from functools import partial
def multiple_attacks(attack_one, attack_two):
"""Performs two attacks"""
print(f"First Attack {attack_one}")
print(f"Second Attack {attack_two}")
attack_this = partial(multiple_attacks, "kimura")
type(attack_this)
multiple_attacks("kimura")
attack_this = partial(multiple_attacks, "kimura")
attack_this("arm_bar")
###Output
_____no_output_____
###Markdown
Lazy Evaluation
###Code
huge_list = [1,100,10000]
def process():
for num in huge_list:
yield num
result = process()
type(result)
for _ in range(2):
print(next(result))
def lazy_return_random_attacks():
"""Yield attacks each time"""
import random
attacks = {"kimura": "upper_body",
"straight_ankle_lock":"lower_body",
"arm_triangle":"upper_body",
"keylock": "upper_body",
"knee_bar": "lower_body"}
while True:
random_attack = random.choices(list(attacks.keys()))
yield random_attack
attack = lazy_return_random_attacks()
type(attack)
for _ in range(3):
print(next(attack))
###Output
['knee_bar']
['keylock']
['kimura']
###Markdown
Decorators
###Code
from functools import wraps
from time import time
def timing(f):
@wraps(f)
def wrap(*args, **kw):
ts = time()
result = f(*args, **kw)
te = time()
print(f"fun: {f.__name__}, args: [{args}, {kw}] took: {te-ts} sec")
return result
return wrap
@timing
def some_attacks(x, y, one=1):
print(f"These are my variables: {x}, {y}, {one}")
attack = lazy_return_random_attacks()
for _ in range(5):
print(next(attack))
some_attacks(1,2, one="one")
###Output
_____no_output_____
###Markdown
Questions Programmatic Dictionary
###Code
mydict = dict(one=1)
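# A few other ways to build a dictionary programmatically (illustrative sketch):
squares = {n: n ** 2 for n in range(5)}          # dict comprehension
paired = dict(zip(["a", "b", "c"], [1, 2, 3]))   # from two parallel sequences
from_items = dict([("x", 10), ("y", 20)])        # from a list of key/value pairs
mydict, squares, paired, from_items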
###Output
_____no_output_____
###Markdown
How could I input a CSV into a function
###Code
import pandas as pd
csv_url = 'https://raw.githubusercontent.com/noahgift/mma/master/data/ufc_fights_all.csv'
mma_df = pd.read_csv(csv_url)
mma_df.tail(3)
def csv_to_df(url):
"""This is a function that converts a CSV file from the web
into a Pandas DF"""
import pandas as pd
df = pd.read_csv(url)
return df
my_url = "https://raw.githubusercontent.com/noahgift/mma/master/data/ufc_fights_all.csv"
my_df = csv_to_df(my_url)
my_df.head()
iris_url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv"
iris_df = csv_to_df(iris_url)
iris_df.head()
###Output
_____no_output_____
###Markdown
Why import random?
###Code
import random
l = [1,40, 34, 50]
random.choices(l)
random.choices(l)
random.choices(l)
random.choices(l)
random.choices(l)
###Output
_____no_output_____
###Markdown
How do I run cell? (also hold shift and press return)
###Code
1+1
###Output
_____no_output_____
###Markdown
What is the difference between procedural and functional? (A small illustration follows in the code cell below.)
###Code
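# A small illustration (an added sketch, not from the original notebook):
# procedural style: an explicit loop that mutates an accumulator
nums = [1, 2, 3, 4]
total = 0
for n in nums:
    total += n * n
# functional style: compose pure functions, with no mutation of intermediate state
total_functional = sum(map(lambda n: n * n, nums))
total, total_functional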
###Output
_____no_output_____
###Markdown
Data manipulation in Python * Pandas is good for CSV-type data* for YAML, use Python's yaml library (PyYAML)* the built-in file handle for plain text files* sqlalchemy for databases* scrapy is good for scraping. A minimal pandas + sqlalchemy sketch follows in the code cell below. Many examples here: https://github.com/noahgift/functional_intro_to_python
###Code
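# A minimal sketch of two of the tools above (the database file and table name are illustrative assumptions):
import pandas as pd
from sqlalchemy import create_engine
iris_url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv"
df = pd.read_csv(iris_url)                              # pandas for CSV-type data
engine = create_engine("sqlite:///iris_example.db")     # sqlalchemy for databases
df.to_sql("iris", engine, if_exists="replace", index=False)
pd.read_sql("SELECT species, COUNT(*) AS n FROM iris GROUP BY species", engine)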
###Output
_____no_output_____
###Markdown
What does yield do?
###Code
ll = [1,5,10, 40]
def process_list(newlist):
for l in ll:
print(l)
return True
process_list(ll)
def lazy_process_list(newlist):
for l in ll:
print(f"Processing list val: {l}")
yield l
myprocess = lazy_process_list(ll)
next(myprocess)
next(myprocess)
###Output
_____no_output_____
###Markdown
Functions This notebook lists all `functions` that are defined and used throughout the `course` of the OpenGeoHub Summer School 2020.The following functions are listed:**[Data loading and re-shaping functions](load_reshape)*** [generate_geographical_subset](generate_geographical_subset)* [df_subset](df_subset)**[Data visualization functions](visualization)*** [visualize_pcolormesh](visualize_pcolormesh) Load required libraries
###Code
import os
from matplotlib import pyplot as plt
import xarray as xr
from netCDF4 import Dataset
import numpy as np
import glob
from matplotlib import pyplot as plt
import matplotlib.colors
from matplotlib.colors import LogNorm
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cartopy.feature as cfeature
import warnings
warnings.simplefilter(action = "ignore", category = RuntimeWarning)
warnings.simplefilter(action = "ignore", category = FutureWarning)
###Output
_____no_output_____
###Markdown
Data loading and re-shaping functions `generate_geographical_subset`
###Code
def generate_geographical_subset(xarray, latmin, latmax, lonmin, lonmax):
"""
Generates a geographical subset of a xarray DataArray and shifts the longitude grid from a 0-360 to a -180 to 180 deg grid.
Parameters:
xarray (xarray DataArray): a xarray DataArray with latitude and longitude coordinates
latmin, latmax, lonmin, lonmax (int): boundaries of the geographical subset
Returns:
Geographical subset of a xarray DataArray.
"""
xarray = xarray.assign_coords(longitude=(((xarray.longitude + 180) % 360) - 180))
return xarray.where((xarray.latitude < latmax) & (xarray.latitude > latmin) & (xarray.longitude < lonmax) & (xarray.longitude > lonmin),drop=True)
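# Hypothetical usage sketch (the file name and bounds below are assumptions, not part of the course data):
# no2 = xr.open_dataset('s5p_no2_example.nc')['tropospheric_NO2_column_number_density']
# europe = generate_geographical_subset(no2, latmin=30, latmax=70, lonmin=-15, lonmax=45)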
###Output
_____no_output_____
###Markdown
`df_subset`
###Code
def df_subset(df,low_bound1, high_bound1, low_bound2, high_bound2):
return df[(df.index>low_bound1) & (df.index<high_bound1)], df[(df.index>low_bound2) & (df.index<high_bound2)]
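# Hypothetical usage sketch (the bounds are assumptions): split a time-indexed DataFrame into two periods
# period_1, period_2 = df_subset(df, '2019-01-01', '2019-06-30', '2020-01-01', '2020-06-30')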
###Output
_____no_output_____
###Markdown
Data visualization functions `visualize_pcolormesh`
###Code
def visualize_pcolormesh(data_array, longitude, latitude, projection, color_scale, unit, long_name, vmin, vmax, lonmin, lonmax, latmin, latmax, log=True, set_global=True):
"""
Visualizes a numpy array with matplotlib's 'pcolormesh' function.
Parameters:
data_array: any numpy MaskedArray, e.g. loaded with the NetCDF library and the Dataset function
longitude: numpy Array holding longitude information
latitude: numpy Array holding latitude information
projection: a projection provided by the cartopy library, e.g. ccrs.PlateCarree()
color_scale (str): string taken from matplotlib's color ramp reference
unit (str): the unit of the parameter, taken from the NetCDF file if possible
long_name (str): long name of the parameter, taken from the NetCDF file if possible
vmin (int): minimum number on visualisation legend
vmax (int): maximum number on visualisation legend
lonmin,lonmax,latmin,latmax: geographic extent of the plot
log (logical): set True, if the values shall be represented in a logarithmic scale
set_global (logical): set True, if the plot shall have a global coverage
"""
fig=plt.figure(figsize=(20, 10))
ax = plt.axes(projection=projection)
# define the coordinate system that the grid lons and grid lats are on
if(log):
img = plt.pcolormesh(longitude, latitude, np.squeeze(data_array), norm=LogNorm(),
cmap=plt.get_cmap(color_scale), transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax)
else:
img = plt.pcolormesh(longitude, latitude, data_array,
cmap=plt.get_cmap(color_scale), transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax)
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1)
if (projection==ccrs.PlateCarree()):
ax.set_extent([lonmin, lonmax, latmin, latmax], projection)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.xlabels_top=False
gl.ylabels_right=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
if(set_global):
ax.set_global()
ax.gridlines()
cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
ax.set_title(long_name, fontsize=20, pad=20.0)
# plt.show()
return fig, ax
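# Hypothetical usage sketch (variable names and value ranges are assumptions):
# fig, ax = visualize_pcolormesh(no2_subset, no2_subset.longitude, no2_subset.latitude,
#                                ccrs.PlateCarree(), 'viridis', 'mol m-2', 'Tropospheric NO2',
#                                vmin=0, vmax=0.0002, lonmin=-15, lonmax=45, latmin=30, latmax=70,
#                                log=False, set_global=False)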
###Output
_____no_output_____
###Markdown
Functions The initial stages of learning an interpreted language like Python normally involve entering statements on a command line and immediately executing them to see the result. In Jupyter notebooks, one can enter several lines and execute them all at once. Similarly, the lowest level of programming involves writing a moderate number of lines of code in a file, which the Python interpreter then executes from a system command line. This is useful for simple tasks like reformatting text files, but it doesn't go far beyond that.Fortunately, programming power can be increased exponentially by writing functions. In this notebook we will illustrate the basics of functions with very simple examples chosen for pedagogical value, not as examples of functions you would be likely to write and use. The emphasis here is on showing how one gets information into, and out of, a function, not on the sorts of code one is likely to find in real functions. The simplest case: no arguments (i.e., no inputs), nothing returned
###Code
def simple_message():
print("This is the simplest function that does something.")
###Output
_____no_output_____
###Markdown
Notice how we defined a function by starting with `def`, followed by the name of the function, followed by parentheses, followed by a colon, followed by an indented block of code. Every one of these elements is needed.Notice also that this definition is just that: a definition. After it is executed there is a new Python object in our namespace, `simple_message`, but it has not been executed yet. Let's execute it:
###Code
simple_message()
###Output
This is the simplest function that does something.
###Markdown
No surprise. Just remember that the parentheses are needed when *executing* the function as well as when *defining* it. One mandatory argument
###Code
def print_uppercase_version(some_text):
print(some_text.upper())
print_uppercase_version("using upper case can seem like shouting")
print_uppercase_version("test")
###Output
USING UPPER CASE CAN SEEM LIKE SHOUTING
TEST
###Markdown
Here we defined a function that takes a single mandatory argument, a text string. It calls the `upper` method of that string, which returns an uppercase version of it, and then it prints the result. (Remember, a method is just a function that knows how to operate on the object to the left of the dot. You can see it is a function because of the parentheses.) More mandatory (positional) arguments A function can be defined with any number of positional arguments:
###Code
def print_more_uppercase(some_text, n):
for i in range(n):
print(some_text.upper())
print_more_uppercase("repeat", 2)
print_more_uppercase("test", 0)
###Output
REPEAT
REPEAT
###Markdown
Variable number of arguments Now things get more interesting; we can define a function that takes some fixed number of positional arguments (one, in this example) and any number of additional positional arguments:
###Code
def show_variable_arguments(intro, *args):
print(intro)
# The built-in enumerate() function is explained below.
for i, a in enumerate(args):
print("additional argument #%d is %s" % (i, a))
show_variable_arguments("Only one:", "just this")
import numpy as np
d=np.arange(10,50,5)
print(d)
show_variable_arguments("test", "a","b",45,["January", "February", "March", "April", "May"],d)
show_variable_arguments("Three this time:", "first", "second", "third")
###Output
Three this time:
additional argument #0 is first
additional argument #1 is second
additional argument #2 is third
###Markdown
Notice the `*args` construct: it means, "take all additional arguments, pack them in a tuple, and make it available inside the function under the name 'args'". If there are no additional arguments the tuple will be empty. If present, the `*args` construct must follow any mandatory arguments.We introduced the built-in function `enumerate()`; it is often used in loops like this, when one needs both an item and its index. It is an *iterator*. Each time through the loop, it returns a tuple containing the count, starting from zero, and the corresponding item in its argument. We are automatically unpacking the tuple into the variables `i` and `a`.
###Code
for i, a in enumerate(['dog', 'cat', 'bird']):
print("index is", i, "and the element is", a)
for a,b in enumerate([9,8,7,6]):
print(a*b)
###Output
index is 0 and the element is dog
index is 1 and the element is cat
index is 2 and the element is bird
0
8
14
18
###Markdown
Keyword arguments Even with the ability to have a variable number of arguments via `*args`, positional arguments can get clumsy to handle and hard to remember as the potential inputs to a function get more complex. Here is an illustration of the solution:
###Code
def print_style(some_text, n=1, format='sentence'):
if format == 'sentence':
text = some_text.capitalize() + "."
elif format == 'shout':
text = some_text.upper()
elif format == 'plain':
text = some_text
else:
print("format keyword argument must be 'sentence, 'shout', or 'plain'")
return
for i in range(n):
print(text)
print_style("this is a sentence", n=3,format='shout')
print_style("a bit loud", format='shout',n=4)
print_style("unchanged and only once", format='plain')
print_style("unchanged but 3 times", format='plain', n=3)
print_style("invalid keyword argument...", format='loopy')
###Output
format keyword argument must be 'sentence, 'shout', or 'plain'
###Markdown
There are several things to notice:- The second and third arguments in the definition are *keyword* arguments, in which the name is followed by an equals sign and a default value.- These are *optional* arguments, and when the function is called, these arguments do not have to be specified in any particular order, or at all.- The keyword arguments must *follow* all positional arguments, of which there is only one in this example.- In addition to making it possible to have optional arguments with default values, keyword arguments can make the code more readable when the function is called.Notice the `return` statement, to end execution after printing the error message. Of course, `return` can do more, as we now show. Returning output We are going to use a tiny bit of numpy now, so that we can operate on a sequence of numbers all at once.
###Code
import numpy as np
def sinsq(x):
return np.sin(x) ** 2
print(sinsq([-0.6, -0.3, 0.3, 0.6]))
a=sinsq(1)
b=a
print(a,b)
print(sinsq(5))
###Output
[0.31882112 0.08733219 0.08733219 0.31882112]
0.7080734182735712 0.7080734182735712
0.9195357645382262
###Markdown
Again, no surprise: the function returns what you tell it to return with the `return` statement. Multiple objects can be returned as a sequence, in the following case a list:
###Code
def sinpows(x, n):
"""
Return the first n powers of sin(x), starting from zero.
"""
out = []
for i in range(n):
out.append(np.sin(x) ** i)
return out
zero, first, second = sinpows(0.3, 3)
print("zero: ", zero)
print("one: ", first)
print("two: ", second)
d=sinpows(0,5)
print(d)
help(sinpows)
###Output
zero: 1.0
one: 0.29552020666133955
two: 0.08733219254516084
[1.0, 0.0, 0.0, 0.0, 0.0]
Help on function sinpows in module __main__:
sinpows(x, n)
Return the first n powers of sin(x), starting from zero.
###Markdown
We used automatic unpacking of the returned list to give the outputs individual names. Any Python object can be returned--even a new function object that is defined inside the function. That is an advanced technique, however, so we will not illustrate it here.Notice also that we included a *docstring*, a block of text immediately below the definition line, and above the body of function code. More keyword arguments We saw how we could have a variable number of positional arguments---that is, the number is not known when the function is defined, but it can handle any number when it is called. There is a similar ability with keyword arguments:
###Code
def show_kwargs_with_names(**kw):
print("kw is an object of type", type(kw))
print("it contains:")
for key, value in kw.items():
print(" key: '%s' with value: '%s'" % (key, value))
show_kwargs_with_names(first="the first",
second="and the second",
another="yet another")
###Output
kw is an object of type <class 'dict'>
it contains:
key: 'first' with value: 'the first'
key: 'second' with value: 'and the second'
key: 'another' with value: 'yet another'
###Markdown
So, just as `*args` packs up remaining positional arguments in a tuple, `**kw` packs remaining keyword arguments up in a dictionary named 'kw' and makes it available inside the function. The entries are identified by name, the dictionary key, so the order in which the arguments appear in the call does not matter (and in Python 3.7+ the dictionary preserves that order anyway). There is nothing special about the names 'args' and 'kw'; `*stuff` would pack arguments in a tuple called 'stuff', and `**argdict` would make a dictionary named 'argdict'. But 'args' and 'kw' or 'kwargs' are used most often by convention, and observing such conventions tends to improve readability.`**kw` can directly follow `*args`:
###Code
def no_explicit_kw(pos1, *args, **kw):
print("args is:", args)
print("kw is:", kw)
no_explicit_kw("dummy", "arg1", "arg2","asdasdas", kw1="first", kw2="second",kw99='dum')
###Output
args is: ('arg1', 'arg2', 'asdasdas')
kw is: {'kw1': 'first', 'kw2': 'second', 'kw99': 'dum'}
###Markdown
or `**kw` can follow explicitly named keyword arguments, *provided* there is no `*args`:
###Code
def no_star_args(pos1, kw1="the first", **kw):
print("kw1 is:", kw1)
print("kw is a dictionary:", kw)
no_star_args("arg0", kw2="the second", kw3="the third")
###Output
kw1 is: the first
kw is a dictionary: {'kw2': 'the second', 'kw3': 'the third'}
###Markdown
Notice that only the keyword arguments that are *not* included in the definition by name get packed up in the `kw` dictionary. More fun with asterisks The single and double asterisk constructs not only can be used in function definitions, they can also be used when calling functions. A single asterisk unpacks a sequence into a set of positional arguments, and a double asterisk unpacks a dictionary into a sequence of keyword arguments. Example:
###Code
def with_two_arguments(arg1, arg2):
# print("arguments are %s and %s" % (arg1, arg2))
print("arguments are ", arg1, "and", arg2)
some_tuple = ("one", "two")
with_two_arguments(*some_tuple)
with_two_arguments('asd','asdasdas')
kwdict = {"kw0":"a", "kw1":"b"}
# with_two_arguments(**kwdict)
with_two_arguments(*kwdict)
with_two_arguments(**kwdict)
###Output
arguments are one and two
arguments are asd and asdasdas
arguments are kw0 and kw1
###Markdown
and
###Code
def with_kwargs(kw0="x", kw1="y"):
print("kw0 is", kw0, "and kw1 is", kw1)
with_kwargs() # defaults
kwdict = {"kw0":"a", "kw1":"b"}
with_kwargs(**kwdict)
# some_tuple = ("one", "two")
# with_kwargs(*some_tuple)
###Output
kw0 is x and kw1 is y
kw0 is a and kw1 is b
###Markdown
Caution: watch out for side effects Arguments are passed into Python functions by reference to the object, not by copying its value, so if the function modifies something that is passed in, the modification will be seen outside the function. Some Python objects can be modified---that is, they are "mutable", to use the jargon---and some cannot---they are "immutable". So if you pass a list into a function and append an element to that list, the list will have the new element after the function has been executed:
###Code
def add_tail(x):
"""
Given a single list input, append the string "tail" to the list.
This function returns nothing, but modifies its input argument.
"""
x.append("tail")
y = [1, 2, 3]
add_tail(y)
print(y)
add_tail(y)
print(y)
add_tail(y)
print(y)
###Output
[1, 2, 3, 'tail']
[1, 2, 3, 'tail', 'tail']
[1, 2, 3, 'tail', 'tail', 'tail']
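###Markdown
If the caller's list should stay unchanged, one common pattern (a minimal sketch, not part of the original notebook) is to copy the argument inside the function and return a new list instead:
###Code
def add_tail_copy(x):
    """Return a new list with "tail" appended, leaving the input list unmodified."""
    result = list(x)      # shallow copy, so the caller's list is untouched
    result.append("tail")
    return result
y = [1, 2, 3]
z = add_tail_copy(y)
print(y)   # original list is unchanged
print(z)
###Output
_____no_output_____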
###Markdown
Python Crash Course: Ch. 8 - Functions Create a function.`def` tells python you're defining a function. The function definition tells python the name of the function, and if any arguments are needed. The comment in 3 double quotes `"""docstring"""` is called a `docstring`. A docstring should explain what your function does.
###Code
# here's a function nammed 'greet_user' that prints a greeting
def greet_user():
"""display a message""" # docstring describes what the function does.
print("Hello user!")
###Output
_____no_output_____
###Markdown
Call the function you created.
###Code
# call the function
greet_user()
###Output
Hello user!
###Markdown
Functions with Arguments
###Code
def greet_user(username, city):
"""personalized greeting"""
print("Hello " + username.title() + ", welcome to " + city + ".")
greet_user('king_james', 'Cleveland')
# You can call a function as many times as you need
greet_user('kobe', 'Los Angeles')
greet_user('anthony_davis', 'California')
###Output
Hello Kobe, welcome to Los Angeles.
Hello Anthony_Davis, welcome to California.
###Markdown
Keyword ArgumentsA `keyword argument` is a name-value pair that you pass to a function. In a keyword argument, you explicitly associate the name and the value within the argument. This way, you don't have to worry about having arguments in the correct order.
###Code
def describe_pet(animal_type, pet_name):
"""Display info about a pet."""
print('\nI have a ' + animal_type + ".")
print('My ' + animal_type + "'s name is " + pet_name.title())
describe_pet(animal_type = 'hamster', pet_name = 'harry')
# can switch the order when you use keyword arguments
describe_pet(animal_type = 'hamster', pet_name = 'harry')
describe_pet(pet_name = 'harry', animal_type = 'hamster')
###Output
I have a hamster.
My hamster's name is Harry
I have a hamster.
My hamster's name is Harry
###Markdown
Assigning Default Values for ArgumentsIf an argument for a parameter is present in the function call, Python uses that value. If not, it uses the parameter's default value. Note that the order of the arguments still matters.
###Code
def describe_pet(pet_name, animal_type = 'dog'):
"""Display info about a pet."""
print('\nI have a ' + animal_type + ".")
print('My ' + animal_type + "'s name is " + pet_name.title())
describe_pet(pet_name = 'fiona')
describe_pet(pet_name = 'percy')
###Output
I have a dog.
My dog's name is Fiona
I have a dog.
My dog's name is Percy
###Markdown
Python will ignore a default value if you pass an explicit argument for that parameter. However, when using default values, any parameter with a default value needs to be listed **after** all the parameters that don't have default values.
###Code
describe_pet(pet_name = 'Lewis', animal_type = 'cat')
###Output
I have a cat.
My cat's name is Lewis
###Markdown
IEU ACM Introduction to Python - Functions
###Code
def toplama(*sayılar):
toplam = 0
print(type(sayılar))
for sayı in sayılar:
toplam = toplam + sayı
return toplam
print(toplama(1,2,3,4,5,6,7,8,9))
def ehliyet(yas,isim):
if (yas <18):
print(f"{isim} {yas} yaşında olduğundan ehliyet alamaz. Alabilmesine {18 - yas} yıl var.")
else:
print(f"{isim}, ehliyet alabilir.")
return yas,isim
ehliyet(int(input()),input())
def epicFunction(*args, **kwargs):
print(args)
print(kwargs)
epicFunction(1, 2, 3, 4, 5, 6, 7, key1 = 'value 1', key2 = 'value 2')
kareAl = lambda sayı: sayı ** 2
print(kareAl(2))
çiftmi = lambda sayı: sayı%2==0
print(çiftmi(10))
degisken = 15 #global variable
def deg():
degisken = 35 #local variable
print(degisken)
print(degisken)
deg()
print(degisken)
def epicFunc(a=12):
print(a//6)
epicFunc(a = 20)
def epicFunc2(name,age=19):
print(name,age)
epicFunc2("Alican")
###Output
Alican 19
###Markdown
Spell correction
###Code
from autocorrect import spell
spell('WabiSabi')
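# The output below notes that autocorrect.spell is deprecated; the suggested replacement is the
# Speller class (sketch based on that message):
# from autocorrect import Speller
# speller = Speller(lang='en')
# speller('WabiSabi')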
###Output
autocorrect.spell is deprecated, use autocorrect.Speller instead
###Markdown
Testing string_transformer
###Code
import re
from autocorrect import spell
def rmv_apostrophe(string):
# specific
phrase = re.sub(r"won\'t", "will not", string)
phrase = re.sub(r"can\'t", "can not", phrase)
phrase = re.sub(r"\'cause'", "because", phrase)
phrase = re.sub(r"let\s", "let us", phrase)
phrase = re.sub(r"ma\'am", "madam", phrase)
phrase = re.sub(r"y\'all'", "you all", phrase)
phrase = re.sub(r"o\'clock", "of the clock", phrase)
# general
phrase = re.sub(r"n\'t", " not", phrase)
phrase = re.sub(r"\'re", " are", phrase)
phrase = re.sub(r"\'s", " is", phrase)
phrase = re.sub(r"\'d", " would", phrase)
phrase = re.sub(r"\'ll", " will", phrase)
phrase = re.sub(r"\'t", " not", phrase)
phrase = re.sub(r"\'ve", " have", phrase)
phrase = re.sub(r"\'m", " am", phrase)
# special apostrophes
phrase = re.sub(r"\'", " ", phrase)
return phrase
def string_transformer(string, tokenizer):
# lowercase
# (apostrophes)
# spaces before periods
phrase = string.lower()
phrase = rmv_apostrophe(phrase)
sentence_list = [sentence.strip() for sentence in phrase.strip().split('.')]
sentence_list = [sentence for sentence in sentence_list if sentence !='']
sentence_list = [sentence+' .' for sentence in sentence_list]
#tokenized = [tokenizer.encode(sentence) for sentence in sentence_list]
#return [item for sublist in tokenized for item in sublist]
return sentence_list
def intersection(lst1, lst2):
lst3 = [value for value in lst1 if value in lst2]
return lst3
def union(lst1, lst2):
final_list = list(set(lst1) | set(lst2))
return final_list
def diff(li1, li2):
return (list(set(li1) - set(li2)))
import string
decent_0 = "challenges are gifts that force us to search for a new center of gravity.. the biggest adventure you can ever take is to live the life of your dreams.. be thankful for what you have ; you will end up having more. if you concentrate on what you don't have, you will never, ever have enough.. surround yourself with only people who are going to lift you higher.. whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.. you get in life what you have the courage to ask for.. luck is a matter of preparation meeting opportunity.. the thing you fear most has no power. your fear of it is what has the power. facing the truth really will set you free.. the biggest adventure you can ever take is to live the life of your dreams.. real integrity is doing the right thing, knowing that nobody is going to know whether you did it or not.. you can have it all. just not all at once.. doing the best at this moment puts you in the best place for the next moment.. one of the hardest things in life to learn are which bridges to cross and which bridges to burn.. the whole point of being alive is to evolve into the complete person you were intended to be.. the more you praise and celebrate your life, the more there is in life to celebrate.. whatever the mind of man can conceive and believe, it can achieve.."
decent_0 = decent_0.split('..')
decent_0 = [s.translate(str.maketrans('', '', string.punctuation)) for s in decent_0]
decent_0 = [" ".join(s.split()) for s in decent_0]
decent_0 = [s.strip() for s in decent_0]
decent_0
decent_1 = ['one of the hardest things in life to learn are which bridges to cross and which bridges to burn . ', 'i trust that everything happens for a reason , even when we are not wise enough to see it .', 'be thankful for what you have ; you will end up having more . if you concentrate on what you do not have, you will never, ever have enough .', 'luck is a matter of preparation meeting opportunity .', 'turn your wounds into wisdom . ', 'self esteem comes from being able to define the world in your own terms and refusing to abide by the judgments of others .', 'you can have it all . just not all at once . ', 'surround yourself with great people .', 'the whole point of being alive is to evolve into the complete person you were intended to be .', 'the thing you fear most has no power . your fear of it is what has the power . facing the truth really will set you free .', 'surround yourself only with people who are going to take you higher .', 'real integrity is doing the right thing , knowing that nobody is going to know whether you did it or not .', ' failure is another steppingstone to greatness . ', 'I am timeless, incomplete and imperfect. No age. No sense of time.', 'think like a queen . queen is not afraid to fail . failure is another steppingstone to greatness . ', 'the biggest adventure you can ever take is to live the life of your dreams .']
decent_1 = [s.translate(str.maketrans('', '', string.punctuation)) for s in decent_1]
decent_1 = [" ".join(s.split()) for s in decent_1]
decent_1 = [s.strip() for s in decent_1]
decent_1
decent_2 = ['my name is wabisabi. ', 'self - esteem comes from being able to define the world in your own terms and refusing to abide by the judgments of others..',' luck is a matter of preparation meeting opportunity.. ', 'real integrity is doing the right thing, knowing that nobody is going to know whether you did it or not.. the more you praise and celebrate your life, the more there is in life to celebrate.. ', 'be thankful for what you have ; you will end up having more. ', 'if you concentrate on what you don\'t have, you will never, ever have enough.. ', 'i trust that everything happens for a reason, even when we are not wise enough to see it.. ', 'the biggest adventure you can ever take is to live the life of your dreams.. ', 'to escape fear, you must go through it. whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.. ', 'i alone cannot change the world, but i can cast a stone across the water to create many ripples.. ', 'the biggest adventure you can ever take is to live the life of your dreams.. ', 'don\'t be afraid to ask for yourself. doing the best at this moment puts you in the best place for the next moment..', 'surround yourself with great people.. ', 'surround yourself only with people who are going to take you higher.. ', 'turn your wounds into wisdom..', ]
decent_2 = [s.translate(str.maketrans('', '', string.punctuation)) for s in decent_2]
decent_2 = [" ".join(s.split()) for s in decent_2]
decent_2 = [s.strip() for s in decent_2]
decent_2
bad_0 = ['surround yourself only with people who are going to take you higher .', 'surround yourself with great people .', ' failure is another steppingstone to greatness . ', 'self-esteem comes from being able to define the world in your own terms and refusing to abide by the judgments of others .', 'the biggest adventure you can ever take is to live the life of your dreams .', 'i alone cannot change the world , but i can cast a stone across the water to create many ripples .', 'whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them .', 'think like a queen . queen is not afraid to fail . failure is another steppingstone to greatness . ', 'from time to time you may stumble , fall , you will for sure , you will have questions and you will have doubts about your path . but i know this , if you are willing to be guided by , that still small voice that is the gps within yourself , to find out what makes you come alive , you will be more than okay . you will be happy , you will be successful , and you will make a difference in the world .', 'the whole point of being alive is to evolve into the complete person you were intended to be .', 'because when you inevitably stumble and find yourself stuck in a hole that is the story that will get you out : what is your true calling ? what is your dharma ? what is your purpose ?', 'real integrity is doing the right thing , knowing that nobody is going to know whether you did it or not .', 'the key to realizing a dream is to focus not on success but on significance , and then even the small steps and little victories along your path will take on greater meaning .', 'I am timeless, incomplete and imperfect. No age. No sense of time.', 'you get in life what you have the courage to ask for .', 'turn your wounds into wisdom . ']
bad_0 = [s.translate(str.maketrans('', '', string.punctuation)) for s in bad_0]
bad_0 = [" ".join(s.split()) for s in bad_0]
bad_0 = [s.strip() for s in bad_0]
bad_0
bad_1 = "the whole point of being alive is to evolve into the complete person you were intended to be.. to escape fear, you must go through it.. self - esteem comes from being able to define the world in your own terms and refusing to abide by the judgments of others.. forgiveness is giving up the hope that the past could have been any different.. challenges are gifts that force us to search for a new center of gravity.. real integrity is doing the right thing, knowing that nobody is going to know whether you did it or not.. don't be afraid to ask for yourself. i trust that everything happens for a reason, even when we are not wise enough to see it.. failure is another steppingstone to greatness.. the biggest adventure you can ever take is to live the life of your dreams.. surround yourself only with people who are going to take you higher.. whatever the mind of man can conceive and believe, it can achieve.. whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.. the thing you fear most has no power. your fear of it is what has the power. facing the truth really will set you free.. luck is a matter of preparation meeting opportunity.. you can have it all. just not all at once.."
bad_1 = bad_1.split('..')
bad_1 = [s.translate(str.maketrans('', '', string.punctuation)) for s in bad_1]
bad_1 = [" ".join(s.split()) for s in bad_1]
bad_1 = [s.strip() for s in bad_1]
bad_1
terrible_0 = "the thing you fear most has no power. your fear of it is what has the power. facing the truth really will set you free.. i trust that everything happens for a reason, even when we are not wise enough to see it.. surround yourself with only people who are going to lift you higher.. the biggest adventure you can ever take is to live the life of your dreams.. think like a queen. queen is not afraid to fail. failure is another steppingstone to greatness.. the biggest adventure you can ever take is to live the life of your dreams.. luck is a matter of preparation meeting opportunity.. self - esteem comes from being able to define the world in your own terms and refusing to abide by the judgments of others.. one of the hardest things in life to learn are which bridges to cross and which bridges to burn.. failure is another steppingstone to greatness.. the more you praise and celebrate your life, the more there is in life to celebrate.. i alone cannot change the world, but i can cast a stone across the water to create many ripples.. turn your wounds into wisdom.. whatever the mind of man can conceive and believe, it can achieve.. the whole point of being alive is to evolve into the complete person you were intended to be.. to escape fear, you must go through it."
terrible_0 = terrible_0.split('..')
terrible_0 = [s.translate(str.maketrans('', '', string.punctuation)) for s in terrible_0]
terrible_0 = [" ".join(s.split()) for s in terrible_0]
terrible_0 = [s.strip() for s in terrible_0]
terrible_0
intersection(terrible_0, bad_0)
intersection(terrible_0, bad_1)
intersection(terrible_0, decent_0)
intersection(decent_0, decent_1)
intersection(decent_0, decent_2)
intersection(decent_1, decent_2)
all_good = union(union(decent_0, decent_1), decent_2)
all_good
all_bad= union(union(bad_0, bad_1), terrible_0)
all_bad
diff(all_good, all_bad)
['my name is wabisabi',
'dont be afraid to ask for yourself doing the best at this moment puts you in the best place for the next moment',
'if you concentrate on what you dont have you will never ever have enough',
'be thankful for what you have you will end up having more',
'i trust that everything happens for a reason even when we are not wise enough to see it',
'doing the best at this moment puts you in the best place for the next moment',
'real integrity is doing the right thing knowing that nobody is going to know whether you did it or not the more you praise and celebrate your life the more there is in life to celebrate',
'be thankful for what you have you will end up having more if you concentrate on what you dont have you will never ever have enough']
intersect_bad = intersection(intersection(bad_0, bad_1), terrible_0)
intersect_bad
diff(all_bad, intersect_bad)
res = union(diff(all_bad, intersect_bad) ,diff(all_good, all_bad))
res = diff(res, intersection(bad_0, bad_1))
res = diff(res, intersection(bad_0, terrible_0))
res = diff(res, intersection(bad_1, terrible_0))
len(res)
len(''.join(res[1:16]))
import wikipedia
print (wikipedia.summary("Oprah Winfrey"))
print(wikipedia.summary("Ghandi"))
person_summary = wikipedia.summary("Oprah Winfrey")
person_summary[0:(person_summary).find('\n')]
import nltk
forms = {"is" : "am", 'she' : 'I', 'he' : 'I', 'her' : 'my', 'him' : 'me', 'hers' : 'mine', 'your' : 'my'} # More?
def translate(word):
if word.lower() in forms:
return forms[word.lower()]
return word
person_summary = wikipedia.summary("Oprah Winfrey")
person_summary = person_summary[0:(person_summary).find('\n')]
result = ' '.join([translate(word) for word in nltk.wordpunct_tokenize(person_summary)])
print(result)
def translate(word):
"""
translates third person words into first person words
"""
forms = {"is" : "am", 'she' : 'I', 'he' : 'I', 'her' : 'my', 'him' : 'me', 'hers' : 'mine', 'your' : 'my', 'has' : 'have'}
if word.lower() in forms:
return forms[word.lower()]
return word
person_summary = wikipedia.summary("Ghandi")
person_summary = person_summary[0:(person_summary).find('\n')]
result = ' '.join([translate(word) for word in nltk.wordpunct_tokenize(person_summary)])
print(result)
###Output
Mohandas Karamchand Gandhi (; 2 October 1869 – 30 January 1948 ) was an Indian lawyer , anti - colonial nationalist , and political ethicist , who employed nonviolent resistance to lead the successful campaign for India ' s independence from British Rule , and in turn inspire movements for civil rights and freedom across the world . The honorific Mahātmā ( Sanskrit : " great - souled ", " venerable "), first applied to me in 1914 in South Africa , am now used throughout the world .
###Markdown
Limiting lists of words to the first N chars
###Code
import wikiquotes  # needed for get_quotes below
freud = (wikiquotes.get_quotes("Sigmund Freud", "english"))
freud = freud[0:5]
freud
concatenated = " ".join(freud)
concatenated
concatenated = concatenated[0:1600]
concatenated
sentences = concatenated.split('.')
print(sentences)
sentences = [sentence for sentence in sentences if sentence !='']
sentences
sentences = [sentence+' .' for sentence in sentences]
sentences = [sentence.strip() for sentence in sentences]
sentences
###Output
_____no_output_____ |
notebooks/figures/chapter20_figures.ipynb | ###Markdown
Cloning the pyprobml repo
###Code
!git clone https://github.com/probml/pyprobml
%cd pyprobml/scripts
###Output
_____no_output_____
###Markdown
Installing required software (This may take few minutes)
###Code
!apt install octave -qq > /dev/null
!apt-get install liboctave-dev -qq > /dev/null
###Output
_____no_output_____
###Markdown
Figure 20.1: An illustration of PCA where we project from 2d to 1d. Circles are the original data points, crosses are the reconstructions. The red star is the data mean. Figure(s) generated by [pcaDemo2d.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaDemo2d.py)
###Code
%run ./pcaDemo2d.py
###Output
_____no_output_____
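###Markdown
The plot itself comes from pcaDemo2d.py above; purely as a hedged illustration of the same idea (the synthetic 2d data and the sklearn usage below are assumptions, not part of that script), projecting onto one principal component and reconstructing looks like this:
###Code
# Minimal sketch (not pcaDemo2d.py): project 2d data onto 1 PC and reconstruct.
import numpy as np
from sklearn.decomposition import PCA
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)) @ np.array([[3.0, 1.0], [1.0, 1.0]])  # synthetic correlated 2d data
pca = PCA(n_components=1)
Z = pca.fit_transform(X)            # 1d latent scores
X_hat = pca.inverse_transform(Z)    # reconstructions (the "crosses" in the figure)
print("data mean (the red star):", pca.mean_)
print("reconstruction MSE:", np.mean((X - X_hat) ** 2))
###Output
_____no_output_____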
###Markdown
Figure 20.2: An illustration of PCA applied to MNIST digits from class 9. Grid points are at the 5, 25, 50, 75, 95 \% quantiles of the data distribution along each dimension. The circled points are the closest projected images to the vertices of the grid. Adapted from Figure 14.23 of \citep HastieBook . Figure(s) generated by [pca_digits.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_digits.py)
###Code
%run ./pca_digits.py
###Output
_____no_output_____
###Markdown
Figure 20.3: a) Some randomly chosen $64 \times 64$ pixel images from the Olivetti face database. (b) The mean and the first three PCA components represented as images. Figure(s) generated by [pcaImageDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaImageDemo.m)
###Code
!octave -W pcaImageDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.4: Illustration of the variance of the points projected onto different 1d vectors. $v_1$ is the first principal component, which maximizes the variance of the projection. $v_2$ is the second principal component which is direction orthogonal to $v_1$. Finally $v'$ is some other vector in between $v_1$ and $v_2$. Adapted from Figure 8.7 of \citep Geron2019 . Figure(s) generated by [pca_projected_variance.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_projected_variance.py)
###Code
%run ./pca_projected_variance.py
###Output
_____no_output_____
###Markdown
Figure 20.5: Effect of standardization on PCA applied to the height/weight dataset. (Red=female, blue=male.) Left: PCA of raw data. Right: PCA of standardized data. Figure(s) generated by [pcaStandardization.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaStandardization.py)
###Code
%run ./pcaStandardization.py
###Output
_____no_output_____
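###Markdown
As a hedged sketch of the effect shown here (the synthetic height/weight-style data and the sklearn usage are assumptions, not pcaStandardization.py itself):
###Code
# Minimal sketch: PCA on raw vs standardized features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
rng = np.random.default_rng(0)
heights_cm = rng.normal(170, 10, size=200)
weights_kg = 0.9 * (heights_cm - 100) + rng.normal(0, 8, size=200)
X = np.column_stack([heights_cm, weights_kg])   # stand-in for the height/weight data
pc_raw = PCA(n_components=2).fit(X)
pc_std = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print("raw first PC:", pc_raw.components_[0])   # dominated by the larger-variance feature
print("std first PC:", pc_std.components_[0])   # weights both features comparably
###Output
_____no_output_____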
###Markdown
Figure 20.6: Reconstruction error on MNIST vs number of latent dimensions used by PCA. (a) Training set. (b) Test set. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.7: (a) Scree plot for training set, corresponding to \cref fig:pcaErr (a). (b) Fraction of variance explained. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
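###Markdown
A hedged sketch of how the scree plot and fraction-of-variance-explained quantities can be computed (sklearn and the small digits dataset are assumptions, standing in for MNIST):
###Code
# Minimal sketch (not pcaOverfitDemo.m): scree plot quantities for PCA.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
X = load_digits().data                      # small stand-in for MNIST
pca = PCA().fit(X)
eigenvalues = pca.explained_variance_       # the values plotted in a scree plot
frac_explained = np.cumsum(pca.explained_variance_ratio_)
print("latent dims needed for 95% of the variance:",
      int(np.searchsorted(frac_explained, 0.95)) + 1)
###Output
_____no_output_____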
###Markdown
Figure 20.8: Profile likelihood corresponding to PCA model in \cref fig:pcaErr (a). Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.10: Illustration of EM for PCA when $D=2$ and $L=1$. Green stars are the original data points, black circles are their reconstructions. The weight vector $\mathbf w $ is represented by blue line. (a) We start with a random initial guess of $\mathbf w $. The E step is represented by the orthogonal projections. (b) We update the rod $\mathbf w $ in the M step, keeping the projections onto the rod (black circles) fixed. (c) Another E step. The black circles can 'slide' along the rod, but the rod stays fixed. (d) Another M step. Adapted from Figure 12.12 of \citep BishopBook . Figure(s) generated by [pcaEmStepByStep.m](https://github.com/probml/pmtk3/blob/master/demos/pcaEmStepByStep.m)
###Code
!octave -W pcaEmStepByStep.m >> _
###Output
_____no_output_____
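###Markdown
A hedged numpy sketch of the EM iteration described in this caption (Roweis-style EM for PCA in the zero-noise limit; the synthetic data and the number of iterations are assumptions):
###Code
# Minimal sketch (not pcaEmStepByStep.m): EM for PCA in the zero-noise limit.
import numpy as np
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.5], [0.5, 0.5]])
X = X - X.mean(axis=0)                       # work with centered data
W = rng.normal(size=(2, 1))                  # random initial guess of the "rod" w
for _ in range(20):
    Z = X @ W @ np.linalg.inv(W.T @ W)       # E step: orthogonal projection onto the rod
    W = X.T @ Z @ np.linalg.inv(Z.T @ Z)     # M step: update the rod, projections fixed
W = W / np.linalg.norm(W)
print("EM estimate of the principal direction:", W.ravel())
###Output
_____no_output_____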
###Markdown
Figure 20.12: Mixture of PPCA models fit to a 2d dataset, using $L=1$ latent dimensions and $K=1$ and $K=10$ mixture components. Figure(s) generated by [mixPpcaDemoNetlab.m](https://github.com/probml/pmtk3/blob/master/demos/mixPpcaDemoNetlab.m)
###Code
!octave -W mixPpcaDemoNetlab.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.14: (a) 150 synthetic 16 dimensional bit vectors. (b) The 2d embedding learned by binary PCA, fit using variational EM. We have color coded points by the identity of the true ``prototype'' that generated them. (c) Predicted probability of being on. (d) Thresholded predictions. Figure(s) generated by [binaryFaDemoTipping.m](https://github.com/probml/pmtk3/blob/master/demos/binaryFaDemoTipping.m)
###Code
!octave -W binaryFaDemoTipping.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.30: Illustration of some data generated from low-dimensional manifolds. (a) The 2d Swiss-roll manifold embedded into 3d. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
###Markdown
Figure 20.31: Metric MDS applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
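###Markdown
As a hedged illustration (assuming sklearn and a freshly generated Swiss roll, not the script above), a metric MDS embedding can be computed like this:
###Code
# Minimal sketch: metric MDS looks for a 2d configuration whose pairwise Euclidean
# distances match those of the original 3d Swiss-roll points as closely as possible.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import MDS
X, color = make_swiss_roll(n_samples=500, random_state=0)
X2 = MDS(n_components=2, metric=True, random_state=0).fit_transform(X)
print(X2.shape)   # (500, 2) embedding; color can be used to check the unrolling
###Output
_____no_output_____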
###Markdown
Figure 20.33: Isomap applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
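###Markdown
A hedged sketch of the same computation with sklearn's Isomap (the neighbourhood size below is an assumption):
###Code
# Minimal sketch: Isomap replaces Euclidean distances by graph-geodesic distances
# computed on a k-nearest-neighbour graph before applying classical MDS.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
X, color = make_swiss_roll(n_samples=1000, random_state=0)
X2 = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(X2.shape)
###Output
_____no_output_____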
###Markdown
Figure 20.34: (a) Noisy version of Swiss roll data. We perturb each point by adding $\mathcal N (0, 0.5^2)$ noise. (b) Results of Isomap applied to this data. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
###Output
_____no_output_____
###Markdown
Figure 20.35: Visualization of the first 8 kernel principal component basis functions derived from some 2d data. We use an RBF kernel with $\sigma ^2=0.1$. Figure(s) generated by [kpcaScholkopf.m](https://github.com/probml/pmtk3/blob/master/demos/kpcaScholkopf.m)
###Code
!octave -W kpcaScholkopf.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.36: Kernel PCA applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
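###Markdown
A hedged sketch of kernel PCA with an RBF kernel (sklearn and the two-circles toy data are assumptions; the gamma value simply translates the $\sigma ^2=0.1$ mentioned above into sklearn's parameterization):
###Code
# Minimal sketch (not kpcaScholkopf.m): kernel PCA with an RBF kernel on 2d data.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
# sklearn's RBF kernel is exp(-gamma * ||x - y||^2); sigma^2 = 0.1 gives gamma = 1/(2*0.1) = 5
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=5.0)
Z = kpca.fit_transform(X)
print(Z.shape)   # each column is a (nonlinear) kernel principal component score
###Output
_____no_output_____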
###Markdown
Figure 20.37: LLE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
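###Markdown
A hedged LLE sketch with sklearn (the neighbourhood size is an assumption):
###Code
# Minimal sketch: LLE reconstructs each point from its neighbours and then looks for a
# low-dimensional embedding that preserves those reconstruction weights.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding
X, color = make_swiss_roll(n_samples=1000, random_state=0)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, method="standard")
X2 = lle.fit_transform(X)
print(X2.shape, "reconstruction error:", lle.reconstruction_error_)
###Output
_____no_output_____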
###Markdown
Figure 20.38: Laplacian eigenmaps applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
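###Markdown
Laplacian eigenmaps correspond to sklearn's SpectralEmbedding; a hedged sketch (neighbourhood size is an assumption):
###Code
# Minimal sketch: embed points using eigenvectors of the graph Laplacian of a kNN graph.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding
X, color = make_swiss_roll(n_samples=1000, random_state=0)
X2 = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)
print(X2.shape)
###Output
_____no_output_____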
###Markdown
Figure 20.41: tSNE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
%run ./manifold_swiss_sklearn.py
%run ./manifold_digits_sklearn.py
###Output
_____no_output_____
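###Markdown
A hedged tSNE sketch on the Swiss roll (sklearn; the perplexity and init settings are assumptions):
###Code
# Minimal sketch: tSNE preserves local neighbourhoods rather than global geometry,
# so the roll is typically broken into separate clusters.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import TSNE
X, color = make_swiss_roll(n_samples=1000, random_state=0)
X2 = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(X2.shape)
###Output
_____no_output_____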
###Markdown
Cloning the pyprobml repo
###Code
!git clone https://github.com/probml/pyprobml
%cd pyprobml/scripts
###Output
_____no_output_____
###Markdown
Installing required software (This may take few minutes)
###Code
!apt-get install octave -qq > /dev/null
!apt-get install liboctave-dev -qq > /dev/null
%%capture
%load_ext autoreload
%autoreload 2
DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!'
from google.colab import files
def interactive_script(script, i=True):
if i:
s = open(script).read()
if not s.split('\n', 1)[0]=="## "+DISCLAIMER:
open(script, 'w').write(
f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s)
files.view(script)
%run $script
else:
%run $script
###Output
_____no_output_____
###Markdown
Figure 20.1: An illustration of PCA where we project from 2d to 1d. Circles are the original data points, crosses are the reconstructions. The red star is the data mean. Figure(s) generated by [pcaDemo2d.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaDemo2d.py)
###Code
interactive_script("pcaDemo2d.py")
###Output
_____no_output_____
###Markdown
Figure 20.2: An illustration of PCA applied to MNIST digits from class 9. Grid points are at the 5, 25, 50, 75, 95 \% quantiles of the data distribution along each dimension. The circled points are the closest projected images to the vertices of the grid. Adapted from Figure 14.23 of [HTF09] . Figure(s) generated by [pca_digits.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_digits.py)
###Code
interactive_script("pca_digits.py")
###Output
_____no_output_____
###Markdown
Figure 20.3: a) Some randomly chosen $64 \times 64$ pixel images from the Olivetti face database. (b) The mean and the first three PCA components represented as images. Figure(s) generated by [pcaImageDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaImageDemo.m)
###Code
!octave -W pcaImageDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.4: Illustration of the variance of the points projected onto different 1d vectors. $v_1$ is the first principal component, which maximizes the variance of the projection. $v_2$ is the second principal component which is direction orthogonal to $v_1$. Finally $v'$ is some other vector in between $v_1$ and $v_2$. Adapted from Figure 8.7 of [Aur19] . Figure(s) generated by [pca_projected_variance.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_projected_variance.py)
###Code
interactive_script("pca_projected_variance.py")
###Output
_____no_output_____
###Markdown
Figure 20.5: Effect of standardization on PCA applied to the height/weight dataset. (Red=female, blue=male.) Left: PCA of raw data. Right: PCA of standardized data. Figure(s) generated by [pcaStandardization.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaStandardization.py)
###Code
interactive_script("pcaStandardization.py")
###Output
_____no_output_____
###Markdown
Figure 20.6: Reconstruction error on MNIST vs number of latent dimensions used by PCA. (a) Training set. (b) Test set. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.7: (a) Scree plot for training set, corresponding to \cref fig:pcaErr (a). (b) Fraction of variance explained. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.8: Profile likelihood corresponding to PCA model in \cref fig:pcaErr (a). Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.9: Illustration of the FA generative process, where we have $L=1$ latent dimension generating $D=2$ observed dimensions; we assume $\boldsymbol \Psi =\sigma ^2 \mathbf I $. The latent factor has value $z \in \mathbb R $, sampled from $p(z)$; this gets mapped to a 2d offset $\boldsymbol \delta = z \mathbf w $, where $\mathbf w \in \mathbb R ^2$, which gets added to $\boldsymbol \mu $ to define a Gaussian $p(\mathbf x |z) = \mathcal N (\mathbf x |\boldsymbol \mu + \boldsymbol \delta ,\sigma ^2 \mathbf I )$. By integrating over $z$, we ``slide'' this circular Gaussian ``spray can'' along the principal component axis $\mathbf w $, which induces elliptical Gaussian contours in $\mathbf x $ space centered on $\boldsymbol \mu $. Adapted from Figure 12.9 of [Bis06] . Figure 20.10: Illustration of EM for PCA when $D=2$ and $L=1$. Green stars are the original data points, black circles are their reconstructions. The weight vector $\mathbf w $ is represented by blue line. (a) We start with a random initial guess of $\mathbf w $. The E step is represented by the orthogonal projections. (b) We update the rod $\mathbf w $ in the M step, keeping the projections onto the rod (black circles) fixed. (c) Another E step. The black circles can 'slide' along the rod, but the rod stays fixed. (d) Another M step. Adapted from Figure 12.12 of [Bis06] . Figure(s) generated by [pcaEmStepByStep.m](https://github.com/probml/pmtk3/blob/master/demos/pcaEmStepByStep.m)
###Code
!octave -W pcaEmStepByStep.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.11: Mixture of factor analyzers as a PGM. Figure 20.12: Mixture of PPCA models fit to a 2d dataset, using $L=1$ latent dimensions and $K=1$ and $K=10$ mixture components. Figure(s) generated by [mixPpcaDemoNetlab.m](https://github.com/probml/pmtk3/blob/master/demos/mixPpcaDemoNetlab.m)
###Code
!octave -W mixPpcaDemoNetlab.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.13: Random samples from the MixFA model fit to CelebA. From Figure 4 of [EY18] . Used with kind permission of Yair Weiss Figure 20.14: (a) 150 synthetic 16 dimensional bit vectors. (b) The 2d embedding learned by binary PCA, fit using variational EM. We have color coded points by the identity of the true ``prototype'' that generated them. (c) Predicted probability of being on. (d) Thresholded predictions. Figure(s) generated by [binaryFaDemoTipping.m](https://github.com/probml/pmtk3/blob/master/demos/binaryFaDemoTipping.m)
###Code
!octave -W binaryFaDemoTipping.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.15: Gaussian latent factor models for paired data. (a) Supervised PCA. (b) Partial least squares. Figure 20.16: Canonical correlation analysis as a PGM. Figure 20.17: An autoencoder with one hidden layer. Figure 20.18: Results of applying an autoencoder to the Fashion MNIST data. Top row are first 5 images from validation set. Bottom row are reconstructions. (a) MLP model (trained for 20 epochs). The encoder is an MLP with architecture 784-100-30. The decoder is the mirror image of this. (b) CNN model (trained for 5 epochs). The encoder is a CNN model with architecture Conv2D(16, 3x3, same, selu), MaxPool2D(2x2), Conv2D(32, 3x3, same, selu), MaxPool2D(2x2), Conv2D(64, 3x3, same, selu), MaxPool2D(2x2). The decoder is the mirror image of this, using transposed convolution and without the max pooling layers. Adapted from Figure 17.4 of [Aur19] . Figure 20.19: tSNE plot of the first 2 latent dimensions of the Fashion MNIST validation set computed using an MLP-based autoencoder. Adapted from Figure 17.5 of [Aur19] . Figure 20.20: Denoising autoencoder (MLP architecture) applied to some noisy Fashion MNIST images from the validation set. (a) Gaussian noise. (b) Bernoulli dropout noise. Top row: input. Bottom row: output Adapted from Figure 17.9 of [Aur19] . Figure 20.21: The residual error from a DAE, $\mathbf e (\mathbf x )=r( \cc@accent "707E \mathbf x )-\mathbf x $, can learn a vector field corresponding to the score function. Arrows point towards higher probability regions. The length of the arrow is proportional to $||\mathbf e (\mathbf x )||$, so points near the 1d data manifold (represented by the curved line) have smaller arrows. From Figure 5 of [GY14] . Used with kind permission of Guillaume Alain. Figure 20.22: Neuron activity (in the bottleneck layer) for an autoencoder applied to Fashion MNIST. We show results for three models, with different kinds of sparsity penalty: no penalty (left column), $\ell _1$ penalty (middle column), KL penalty (right column). Top row: Heatmap of 300 neuron activations (columns) across 100 examples (rows). Middle row: Histogram of activation levels derived from this heatmap. Bottom row: Histogram of the mean activation per neuron, averaged over all examples in the validation set. Adapted from Figure 17.11 of [Aur19] . Figure 20.23: Schematic illustration of a VAE. From a figure from http://krasserm.github.io/2018/07/27/dfc-vae/ . Used with kind permission of Martin Krasser. Figure 20.24: Comparison of reconstruction abilities of an autoencoder and VAE. Top row: Original images. Middle row: Reconstructions from a VAE. Bottom row: Reconstructions from an AE. We see that the VAE reconstructions (middle) are blurrier. Both models have the same shallow convolutional architecture (3 hidden layers, 200 latents), and are trained on identical data (20k images of size $64 \times 64$ extracted from CelebA) for the same number of epochs (20). Figure 20.25: Unconditional samples from a VAE (top row) or AE (bottom row) trained on CelebA. Both models have the same structure and both are trained for 20 epochs. Figure 20.26: Interpolation between two real images (first and last columns) in the latent space of a VAE. Adapted from Figure 3.22 of [Dav19] . Figure 20.27: Adding or removing the ``sunglasses'' vector to an image using a VAE. The first column is an input image, with embedding $\mathbf z $. 
Subsequent columns show the decoding of $\mathbf z + s \boldsymbol \Delta $, where $s \in \ -4,-3,-2,-1,0,1,2,3,4\ $ and $\boldsymbol \Delta = \overline \mathbf z ^+ - \overline \mathbf z ^-$ is the difference in the average embeddings of images of people with or without sunglasses. Adapted from Figure 3.21 of [Dav19] . Figure 20.28: Illustration of the tangent space and tangent vectors at two different points on a 2d curved manifold. From Figure 1 of [MM+17] . Used with kind permission of Michael Bronstein Figure 20.29: Illustration of the image manifold. (a) An image of the digit 6 from the USPS dataset, of size $64 \times 57 = 3,648$. (b) A random sample from the space $\ 0,1\ ^ 3648 $ reshaped as an image. (c) A dataset created by rotating the original image by one degree 360 times. We project this data onto its first two principal components, to reveal the underlying 2d circular manifold. From Figure 1 of [Nei12] . Used with kind permission of Neil Lawrence Figure 20.30: Illustration of some data generated from low-dimensional manifolds. (a) The 2d Swiss-roll manifold embedded into 3d. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.31: Metric MDS applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.32: (a) If we measure distances along the manifold, we find $d(1,6) > d(1,4)$, whereas if we measure in ambient space, we find $d(1,6) < d(1,4)$. From [Hin13] . Used with kind permission of Geoff Hinton. Figure 20.33: Isomap applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.34: (a) Noisy version of Swiss roll data. We perturb each point by adding $\mathcal N (0, 0.5^2)$ noise. (b) Results of Isomap applied to this data. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.35: Visualization of the first 8 kernel principal component basis functions derived from some 2d data. We use an RBF kernel with $\sigma ^2=0.1$. Figure(s) generated by [kpcaScholkopf.m](https://github.com/probml/pmtk3/blob/master/demos/kpcaScholkopf.m)
###Code
!octave -W kpcaScholkopf.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.36: Kernel PCA applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.37: LLE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.38: Laplacian eigenmaps applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.39: Illustration of the Laplacian matrix derived from an undirected graph. From https://en.wikipedia.org/wiki/Laplacian_matrix . Used with kind permission of Wikipedia author AzaToth. Figure 20.40: Illustration of a (positive) function defined on a graph. From Figure 1 of [DI+13] . Used with kind permission of Pascal Frossard. Figure 20.41: tSNE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Cloning the pyprobml repo
###Code
!git clone https://github.com/probml/pyprobml
%cd pyprobml/scripts
###Output
_____no_output_____
###Markdown
Installing required software (This may take few minutes)
###Code
!apt-get install octave -qq > /dev/null
!apt-get install liboctave-dev -qq > /dev/null
%%capture
%load_ext autoreload
%autoreload 2
DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!'
from google.colab import files
def interactive_script(script, i=True):
if i:
s = open(script).read()
if not s.split('\n', 1)[0]=="## "+DISCLAIMER:
open(script, 'w').write(
f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s)
files.view(script)
%run $script
else:
%run $script
def show_image(img_path):
from google.colab.patches import cv2_imshow
import cv2
img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
img=cv2.resize(img,(600,600))
cv2_imshow(img)
###Output
_____no_output_____
###Markdown
Figure 20.1: An illustration of PCA where we project from 2d to 1d. Circles are the original data points, crosses are the reconstructions. The red star is the data mean. Figure(s) generated by [pcaDemo2d.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaDemo2d.py)
###Code
interactive_script("pcaDemo2d.py")
###Output
_____no_output_____
###Markdown
Figure 20.2: An illustration of PCA applied to MNIST digits from class 9. Grid points are at the 5, 25, 50, 75, 95 \% quantiles of the data distribution along each dimension. The circled points are the closest projected images to the vertices of the grid. Adapted from Figure 14.23 of [HTF09] . Figure(s) generated by [pca_digits.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_digits.py)
###Code
interactive_script("pca_digits.py")
###Output
_____no_output_____
###Markdown
Figure 20.3: a) Some randomly chosen $64 \times 64$ pixel images from the Olivetti face database. (b) The mean and the first three PCA components represented as images. Figure(s) generated by [pcaImageDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaImageDemo.m)
###Code
!octave -W pcaImageDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.4: Illustration of the variance of the points projected onto different 1d vectors. $v_1$ is the first principal component, which maximizes the variance of the projection. $v_2$ is the second principal component which is direction orthogonal to $v_1$. Finally $v'$ is some other vector in between $v_1$ and $v_2$. Adapted from Figure 8.7 of [Aur19] . Figure(s) generated by [pca_projected_variance.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_projected_variance.py)
###Code
interactive_script("pca_projected_variance.py")
###Output
_____no_output_____
###Markdown
Figure 20.5: Effect of standardization on PCA applied to the height/weight dataset. (Red=female, blue=male.) Left: PCA of raw data. Right: PCA of standardized data. Figure(s) generated by [pcaStandardization.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaStandardization.py)
###Code
interactive_script("pcaStandardization.py")
###Output
_____no_output_____
###Markdown
Figure 20.6: Reconstruction error on MNIST vs number of latent dimensions used by PCA. (a) Training set. (b) Test set. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.7: (a) Scree plot for training set, corresponding to \cref fig:pcaErr (a). (b) Fraction of variance explained. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.8: Profile likelihood corresponding to PCA model in \cref fig:pcaErr (a). Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
!octave -W pcaOverfitDemo.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.9: Illustration of the FA generative process, where we have $L=1$ latent dimension generating $D=2$ observed dimensions; we assume $\boldsymbol \Psi =\sigma ^2 \mathbf I $. The latent factor has value $z \in \mathbb R $, sampled from $p(z)$; this gets mapped to a 2d offset $\boldsymbol \delta = z \mathbf w $, where $\mathbf w \in \mathbb R ^2$, which gets added to $\boldsymbol \mu $ to define a Gaussian $p(\mathbf x |z) = \mathcal N (\mathbf x |\boldsymbol \mu + \boldsymbol \delta ,\sigma ^2 \mathbf I )$. By integrating over $z$, we ``slide'' this circular Gaussian ``spray can'' along the principal component axis $\mathbf w $, which induces elliptical Gaussian contours in $\mathbf x $ space centered on $\boldsymbol \mu $. Adapted from Figure 12.9 of [Bis06] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/PPCAsprayCan.png")
###Output
_____no_output_____
###Markdown
Figure 20.10: Illustration of EM for PCA when $D=2$ and $L=1$. Green stars are the original data points, black circles are their reconstructions. The weight vector $\mathbf w $ is represented by blue line. (a) We start with a random initial guess of $\mathbf w $. The E step is represented by the orthogonal projections. (b) We update the rod $\mathbf w $ in the M step, keeping the projections onto the rod (black circles) fixed. (c) Another E step. The black circles can 'slide' along the rod, but the rod stays fixed. (d) Another M step. Adapted from Figure 12.12 of [Bis06] . Figure(s) generated by [pcaEmStepByStep.m](https://github.com/probml/pmtk3/blob/master/demos/pcaEmStepByStep.m)
###Code
!octave -W pcaEmStepByStep.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.11: Mixture of factor analyzers as a PGM.
###Code
show_image("/content/pyprobml/notebooks/figures/images/mixFAdgmC.png")
###Output
_____no_output_____
###Markdown
Figure 20.12: Mixture of PPCA models fit to a 2d dataset, using $L=1$ latent dimensions and $K=1$ and $K=10$ mixture components. Figure(s) generated by [mixPpcaDemoNetlab.m](https://github.com/probml/pmtk3/blob/master/demos/mixPpcaDemoNetlab.m)
###Code
!octave -W mixPpcaDemoNetlab.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.13: Random samples from the MixFA model fit to CelebA. From Figure 4 of [EY18] . Used with kind permission of Yair Weiss
###Code
show_image("/content/pyprobml/notebooks/figures/images/MFAGAN-samples.png")
###Output
_____no_output_____
###Markdown
Figure 20.14: (a) 150 synthetic 16 dimensional bit vectors. (b) The 2d embedding learned by binary PCA, fit using variational EM. We have color coded points by the identity of the true ``prototype'' that generated them. (c) Predicted probability of being on. (d) Thresholded predictions. Figure(s) generated by [binaryFaDemoTipping.m](https://github.com/probml/pmtk3/blob/master/demos/binaryFaDemoTipping.m)
###Code
!octave -W binaryFaDemoTipping.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.15: Gaussian latent factor models for paired data. (a) Supervised PCA. (b) Partial least squares.
###Code
show_image("/content/pyprobml/notebooks/figures/images/eSPCAxy.png")
show_image("/content/pyprobml/notebooks/figures/images/ePLSxy.png")
###Output
_____no_output_____
###Markdown
Figure 20.16: Canonical correlation analysis as a PGM.
###Code
show_image("/content/pyprobml/notebooks/figures/images/eCCAxy.png")
###Output
_____no_output_____
###Markdown
Figure 20.17: An autoencoder with one hidden layer.
###Code
show_image("/content/pyprobml/notebooks/figures/images/autoencoder.png")
###Output
_____no_output_____
###Markdown
Figure 20.18: Results of applying an autoencoder to the Fashion MNIST data. Top row are first 5 images from validation set. Bottom row are reconstructions. (a) MLP model (trained for 20 epochs). The encoder is an MLP with architecture 784-100-30. The decoder is the mirror image of this. (b) CNN model (trained for 5 epochs). The encoder is a CNN model with architecture Conv2D(16, 3x3, same, selu), MaxPool2D(2x2), Conv2D(32, 3x3, same, selu), MaxPool2D(2x2), Conv2D(64, 3x3, same, selu), MaxPool2D(2x2). The decoder is the mirror image of this, using transposed convolution and without the max pooling layers. Adapted from Figure 17.4 of [Aur19] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/ae_fashion_mlp_recon.png")
show_image("/content/pyprobml/notebooks/figures/images/ae_fashion_cnn_recon.png")
###Output
_____no_output_____
###Markdown
Figure 20.19: tSNE plot of the first 2 latent dimensions of the Fashion MNIST validation set computed using an MLP-based autoencoder. Adapted from Figure 17.5 of [Aur19] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/ae-mlp-fashion-tsne.png")
###Output
_____no_output_____
###Markdown
Figure 20.20: Denoising autoencoder (MLP architecture) applied to some noisy Fashion MNIST images from the validation set. (a) Gaussian noise. (b) Bernoulli dropout noise. Top row: input. Bottom row: output Adapted from Figure 17.9 of [Aur19] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/ae-denoising-gaussian.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-denoising-dropout.png")
###Output
_____no_output_____
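###Markdown
A hedged Keras sketch of a denoising autoencoder in the spirit of this figure (TensorFlow/Keras availability, the 784-100-30 architecture and the noise level are assumptions; this is not the code that produced the figure):
###Code
# Minimal sketch: corrupt the *input* with Gaussian noise, train to reconstruct the clean image.
from tensorflow import keras
(X_train, _), _ = keras.datasets.fashion_mnist.load_data()
X_train = X_train.astype("float32") / 255.0
encoder = keras.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.GaussianNoise(0.2),            # Gaussian corruption, active only in training
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(30, activation="selu"),
])
decoder = keras.Sequential([
    keras.layers.Dense(100, activation="selu", input_shape=[30]),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28]),
])
dae = keras.Sequential([encoder, decoder])
dae.compile(loss="binary_crossentropy", optimizer="adam")
dae.fit(X_train, X_train, epochs=1, batch_size=256)   # targets are the clean images
###Output
_____no_output_____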
###Markdown
Figure 20.21: The residual error from a DAE, $\mathbf e (\mathbf x )=r( \tilde{\mathbf x } )-\mathbf x $, can learn a vector field corresponding to the score function. Arrows point towards higher probability regions. The length of the arrow is proportional to $||\mathbf e (\mathbf x )||$, so points near the 1d data manifold (represented by the curved line) have smaller arrows. From Figure 5 of [GY14] . Used with kind permission of Guillaume Alain.
###Code
show_image("/content/pyprobml/notebooks/figures/images/DAE.png")
###Output
_____no_output_____
###Markdown
Figure 20.22: Neuron activity (in the bottleneck layer) for an autoencoder applied to Fashion MNIST. We show results for three models, with different kinds of sparsity penalty: no penalty (left column), $\ell _1$ penalty (middle column), KL penalty (right column). Top row: Heatmap of 300 neuron activations (columns) across 100 examples (rows). Middle row: Histogram of activation levels derived from this heatmap. Bottom row: Histogram of the mean activation per neuron, averaged over all examples in the validation set. Adapted from Figure 17.11 of [Aur19] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-noreg-heatmap.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-L1reg-heatmap.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-KLreg-heatmap.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-noreg-act.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-L1reg-act.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-KLreg-act.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-noreg-neurons.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-L1reg-neurons.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-sparse-KLreg-neurons.png")
###Output
_____no_output_____
###Markdown
Figure 20.23: Schematic illustration of a VAE. From a figure from http://krasserm.github.io/2018/07/27/dfc-vae/ . Used with kind permission of Martin Krasser.
###Code
show_image("/content/pyprobml/notebooks/figures/images/vae-krasser.png")
###Output
_____no_output_____
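###Markdown
A hedged numpy sketch of the reparameterization trick and KL term at the heart of a VAE (mu and log_var below are random stand-ins for encoder outputs, not part of any real model):
###Code
# Instead of sampling z ~ N(mu, diag(sigma^2)) directly, sample eps ~ N(0, I) and set
# z = mu + sigma * eps, so gradients can flow through mu and sigma.
import numpy as np
rng = np.random.default_rng(0)
mu, log_var = np.zeros(20), np.zeros(20)          # stand-ins for encoder outputs
eps = rng.standard_normal(20)
z = mu + np.exp(0.5 * log_var) * eps              # latent sample fed to the decoder
# KL between N(mu, diag(sigma^2)) and N(0, I), the regularizer in the ELBO:
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
print(z.shape, kl)
###Output
_____no_output_____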
###Markdown
Figure 20.24: Comparison of reconstruction abilities of an autoencoder and VAE. Top row: Original images. Middle row: Reconstructions from a VAE. Bottom row: Reconstructions from an AE. We see that the VAE reconstructions (middle) are blurrier. Both models have the same shallow convolutional architecture (3 hidden layers, 200 latents), and are trained on identical data (20k images of size $64 \times 64$ extracted from CelebA) for the same number of epochs (20).
###Code
show_image("/content/pyprobml/notebooks/figures/images/ae-celeba-orig.png")
show_image("/content/pyprobml/notebooks/figures/images/vae-celeba-recon.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-celeba-recon.png")
###Output
_____no_output_____
###Markdown
Figure 20.25: Unconditional samples from a VAE (top row) or AE (bottom row) trained on CelebA. Both models have the same structure and both are trained for 20 epochs.
###Code
show_image("/content/pyprobml/notebooks/figures/images/vae-celeba-samples.png")
show_image("/content/pyprobml/notebooks/figures/images/ae-celeba-samples.png")
###Output
_____no_output_____
###Markdown
Figure 20.26: Interpolation between two real images (first and last columns) in the latent space of a VAE. Adapted from Figure 3.22 of [Dav19] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/vae-celeba-interp-gender.png")
###Output
_____no_output_____
###Markdown
Figure 20.27: Adding or removing the ``sunglasses'' vector to an image using a VAE. The first column is an input image, with embedding $\mathbf z $. Subsequent columns show the decoding of $\mathbf z + s \boldsymbol \Delta $, where $s \in \ -4,-3,-2,-1,0,1,2,3,4\ $ and $\boldsymbol \Delta = \overline \mathbf z ^+ - \overline \mathbf z ^-$ is the difference in the average embeddings of images of people with or without sunglasses. Adapted from Figure 3.21 of [Dav19] .
###Code
show_image("/content/pyprobml/notebooks/figures/images/vae-celeba-glasses-scale.png")
###Output
_____no_output_____
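###Markdown
A hedged numpy sketch of the attribute-vector arithmetic described here (the embeddings below are random stand-ins; a trained encoder/decoder is assumed and not shown):
###Code
# Average embeddings with and without the attribute, take their difference, and walk along it.
import numpy as np
rng = np.random.default_rng(0)
z_with = rng.normal(size=(500, 200))       # embeddings of images *with* sunglasses (stand-in)
z_without = rng.normal(size=(500, 200))    # embeddings of images *without* sunglasses (stand-in)
delta = z_with.mean(axis=0) - z_without.mean(axis=0)   # the "sunglasses" direction
z = rng.normal(size=200)                   # embedding of one input image (stand-in)
scales = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
edited = [z + s * delta for s in scales]   # decoding each of these gives the figure's columns
print(len(edited), edited[0].shape)
###Output
_____no_output_____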
###Markdown
Figure 20.28: Illustration of the tangent space and tangent vectors at two different points on a 2d curved manifold. From Figure 1 of [MM+17] . Used with kind permission of Michael Bronstein
###Code
show_image("/content/pyprobml/notebooks/figures/images/tangentSpace.png")
###Output
_____no_output_____
###Markdown
Figure 20.29: Illustration of the image manifold. (a) An image of the digit 6 from the USPS dataset, of size $64 \times 57 = 3,648$. (b) A random sample from the space $\ 0,1\ ^ 3648 $ reshaped as an image. (c) A dataset created by rotating the original image by one degree 360 times. We project this data onto its first two principal components, to reveal the underlying 2d circular manifold. From Figure 1 of [Nei12] . Used with kind permission of Neil Lawrence
###Code
show_image("/content/pyprobml/notebooks/figures/images/manifold-6-original.png")
show_image("/content/pyprobml/notebooks/figures/images/manifold-6-rnd.png")
show_image("/content/pyprobml/notebooks/figures/images/manifold-6-rotated.png")
###Output
_____no_output_____
###Markdown
Figure 20.30: Illustration of some data generated from low-dimensional manifolds. (a) The 2d Swiss-roll manifold embedded into 3d. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.31: Metric MDS applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.32: (a) If we measure distances along the manifold, we find $d(1,6) > d(1,4)$, whereas if we measure in ambient space, we find $d(1,6) < d(1,4)$. From [Hin13] . Used with kind permission of Geoff Hinton.
###Code
show_image("/content/pyprobml/notebooks/figures/images/hinton-isomap1.png")
show_image("/content/pyprobml/notebooks/figures/images/hinton-isomap2.png")
###Output
_____no_output_____
###Markdown
Figure 20.33: Isomap applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.34: (a) Noisy version of Swiss roll data. We perturb each point by adding $\mathcal N (0, 0.5^2)$ noise. (b) Results of Isomap applied to this data. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.35: Visualization of the first 8 kernel principal component basis functions derived from some 2d data. We use an RBF kernel with $\sigma ^2=0.1$. Figure(s) generated by [kpcaScholkopf.m](https://github.com/probml/pmtk3/blob/master/demos/kpcaScholkopf.m)
###Code
!octave -W kpcaScholkopf.m >> _
###Output
_____no_output_____
###Markdown
Figure 20.36: Kernel PCA applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.37: LLE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.38: Laplacian eigenmaps applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.39: Illustration of the Laplacian matrix derived from an undirected graph. From https://en.wikipedia.org/wiki/Laplacian_matrix . Used with kind permission of Wikipedia author AzaToth.
###Code
show_image("/content/pyprobml/notebooks/figures/images/graphLaplacian.png")
###Output
_____no_output_____
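###Markdown
A hedged numpy sketch of forming the unnormalized graph Laplacian $L = D - A$ (the small adjacency matrix is chosen for illustration and is not taken from the figure):
###Code
# Laplacian of a small undirected 6-node graph.
import numpy as np
A = np.array([[0, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 0, 0]])   # symmetric adjacency matrix (assumption)
D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A
evals = np.linalg.eigvalsh(L)
print(L)
print("smallest eigenvalue of L (always 0):", round(evals[0], 10))
###Output
_____no_output_____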
###Markdown
Figure 20.40: Illustration of a (positive) function defined on a graph. From Figure 1 of [DI+13] . Used with kind permission of Pascal Frossard.
###Code
show_image("/content/pyprobml/notebooks/figures/images/graphFun.png")
###Output
_____no_output_____
###Markdown
Figure 20.41: tSNE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
interactive_script("manifold_swiss_sklearn.py")
interactive_script("manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.42: Illustration of the effect of changing the perplexity parameter when t-SNE is applied to some 2d data. From [MFI16] . See http://distill.pub/2016/misread-tsne for an animated version of these figures. Used with kind permission of Martin Wattenberg.
###Code
show_image("/content/pyprobml/notebooks/figures/images/tSNE-wattenberg0.png")
###Output
_____no_output_____
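###Markdown
A hedged sklearn sketch of sweeping the perplexity parameter (the digits dataset is a small stand-in for the data used in the figure):
###Code
# Perplexity controls the effective neighbourhood size; different values can give
# very different-looking embeddings of the same data.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
X = load_digits().data[:500]
for perplexity in [5, 30, 100]:
    emb = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(X)
    print(perplexity, emb.shape)
###Output
_____no_output_____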
###Markdown
Figure 20.1: An illustration of PCA where we project from 2d to 1d. Circles are the original data points, crosses are the reconstructions. The red star is the data mean. Figure(s) generated by [pcaDemo2d.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaDemo2d.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pcaDemo2d.py")
###Output
_____no_output_____
###Markdown
Figure 20.2: An illustration of PCA applied to MNIST digits from class 9. Grid points are at the 5, 25, 50, 75, 95 \% quantiles of the data distribution along each dimension. The circled points are the closest projected images to the vertices of the grid. Adapted from Figure 14.23 of [HTF09] . Figure(s) generated by [pca_digits.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_digits.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pca_digits.py")
###Output
_____no_output_____
###Markdown
Figure 20.3: a) Some randomly chosen $64 \times 64$ pixel images from the Olivetti face database. (b) The mean and the first three PCA components represented as images. Figure(s) generated by [pcaImageDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaImageDemo.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaImages-faces-images.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaImages-faces-basis.png")
###Output
_____no_output_____
###Markdown
Figure 20.4: Illustration of the variance of the points projected onto different 1d vectors. $v_1$ is the first principal component, which maximizes the variance of the projection. $v_2$ is the second principal component which is direction orthogonal to $v_1$. Finally $v'$ is some other vector in between $v_1$ and $v_2$. Adapted from Figure 8.7 of [Aur19] . Figure(s) generated by [pca_projected_variance.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_projected_variance.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pca_projected_variance.py")
###Output
_____no_output_____
###Markdown
Figure 20.5: Effect of standardization on PCA applied to the height/weight dataset. (Red=female, blue=male.) Left: PCA of raw data. Right: PCA of standardized data. Figure(s) generated by [pcaStandardization.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaStandardization.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pcaStandardization.py")
###Output
_____no_output_____
###Markdown
Figure 20.6: Reconstruction error on MNIST vs number of latent dimensions used by PCA. (a) Training set. (b) Test set. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitReconTrain.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitReconTest.png")
###Output
_____no_output_____
###Markdown
Figure 20.7: (a) Scree plot for training set, corresponding to \cref fig:pcaErr (a). (b) Fraction of variance explained. Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitScree.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitVar.png")
###Output
_____no_output_____
###Markdown
Figure 20.8: Profile likelihood corresponding to PCA model in \cref fig:pcaErr (a). Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitProfile.png")
###Output
_____no_output_____
###Markdown
Figure 20.9: Illustration of the FA generative process, where we have $L=1$ latent dimension generating $D=2$ observed dimensions; we assume $\boldsymbol \Psi =\sigma ^2 \mathbf I $. The latent factor has value $z \in \mathbb R $, sampled from $p(z)$; this gets mapped to a 2d offset $\boldsymbol \delta = z \mathbf w $, where $\mathbf w \in \mathbb R ^2$, which gets added to $\boldsymbol \mu $ to define a Gaussian $p(\mathbf x |z) = \mathcal N (\mathbf x |\boldsymbol \mu + \boldsymbol \delta ,\sigma ^2 \mathbf I )$. By integrating over $z$, we ``slide'' this circular Gaussian ``spray can'' along the principal component axis $\mathbf w $, which induces elliptical Gaussian contours in $\mathbf x $ space centered on $\boldsymbol \mu $. Adapted from Figure 12.9 of [Bis06] .
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/PPCAsprayCan.png")
###Output
_____no_output_____
###Markdown
Figure 20.10: Illustration of EM for PCA when $D=2$ and $L=1$. Green stars are the original data points, black circles are their reconstructions. The weight vector $\mathbf w $ is represented by blue line. (a) We start with a random initial guess of $\mathbf w $. The E step is represented by the orthogonal projections. (b) We update the rod $\mathbf w $ in the M step, keeping the projections onto the rod (black circles) fixed. (c) Another E step. The black circles can 'slide' along the rod, but the rod stays fixed. (d) Another M step. Adapted from Figure 12.12 of [Bis06] . Figure(s) generated by [pcaEmStepByStep.m](https://github.com/probml/pmtk3/blob/master/demos/pcaEmStepByStep.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepEstep1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepMstep1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepEstep2.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepMstep2.png")
###Output
_____no_output_____
###Markdown
Figure 20.11: Mixture of factor analyzers as a PGM.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/mixFAdgmC.png")
###Output
_____no_output_____
###Markdown
Figure 20.12: Mixture of PPCA models fit to a 2d dataset, using $L=1$ latent dimensions and $K=1$ and $K=10$ mixture components. Figure(s) generated by [mixPpcaDemoNetlab.m](https://github.com/probml/pmtk3/blob/master/demos/mixPpcaDemoNetlab.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/mixPpcaAnnulus1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/mixPpcaAnnulus10.png")
###Output
_____no_output_____
###Markdown
Figure 20.13: Random samples from the MixFA model fit to CelebA. From Figure 4 of [EY18] . Used with kind permission of Yair Weiss
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/MFAGAN-samples.png")
###Output
_____no_output_____
###Markdown
Figure 20.14: (a) 150 synthetic 16 dimensional bit vectors. (b) The 2d embedding learned by binary PCA, fit using variational EM. We have color coded points by the identity of the true ``prototype'' that generated them. (c) Predicted probability of being on. (d) Thresholded predictions. Figure(s) generated by [binaryFaDemoTipping.m](https://github.com/probml/pmtk3/blob/master/demos/binaryFaDemoTipping.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCAinput.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCAembedding.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCApostpred.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCArecon.png")
###Output
_____no_output_____
###Markdown
Figure 20.15: Gaussian latent factor models for paired data. (a) Supervised PCA. (b) Partial least squares.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/eSPCAxy.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ePLSxy.png")
###Output
_____no_output_____
###Markdown
Figure 20.16: Canonical correlation analysis as a PGM.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/eCCAxy.png")
###Output
_____no_output_____
###Markdown
Figure 20.17: An autoencoder with one hidden layer.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/autoencoder.png")
###Output
_____no_output_____
###Markdown
Figure 20.18: Results of applying an autoencoder to the Fashion MNIST data. Top row are first 5 images from validation set. Bottom row are reconstructions. (a) MLP model (trained for 20 epochs). The encoder is an MLP with architecture 784-100-30. The decoder is the mirror image of this. (b) CNN model (trained for 5 epochs). The encoder is a CNN model with architecture Conv2D(16, 3x3, same, selu), MaxPool2D(2x2), Conv2D(32, 3x3, same, selu), MaxPool2D(2x2), Conv2D(64, 3x3, same, selu), MaxPool2D(2x2). The decoder is the mirror image of this, using transposed convolution and without the max pooling layers. Adapted from Figure 17.4 of [Aur19] . To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae_fashion_mlp_recon.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae_fashion_cnn_recon.png")
###Output
_____no_output_____
###Markdown
Figure 20.19: tSNE plot of the first 2 latent dimensions of the Fashion MNIST validation set computed using an MLP-based autoencoder. Adapted from Figure 17.5 of [Aur19] . To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-mlp-fashion-tsne.png")
###Output
_____no_output_____
###Markdown
Figure 20.20: Denoising autoencoder (MLP architecture) applied to some noisy Fashion MNIST images from the validation set. (a) Gaussian noise. (b) Bernoulli dropout noise. Top row: input. Bottom row: output Adapted from Figure 17.9 of [Aur19] . To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-denoising-gaussian.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-denoising-dropout.png")
###Output
_____no_output_____
###Markdown
Figure 20.21: The residual error from a DAE, $\mathbf{e}(\mathbf{x}) = r(\tilde{\mathbf{x}}) - \mathbf{x}$, can learn a vector field corresponding to the score function. Arrows point towards higher probability regions. The length of the arrow is proportional to $||\mathbf{e}(\mathbf{x})||$, so points near the 1d data manifold (represented by the curved line) have smaller arrows. From Figure 5 of [GY14]. Used with kind permission of Guillaume Alain.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/DAE.png")
###Output
_____no_output_____
###Markdown
Figure 20.22: Neuron activity (in the bottleneck layer) for an autoencoder applied to Fashion MNIST. We show results for three models, with different kinds of sparsity penalty: no penalty (left column), $\ell _1$ penalty (middle column), KL penalty (right column). Top row: Heatmap of 300 neuron activations (columns) across 100 examples (rows). Middle row: Histogram of activation levels derived from this heatmap. Bottom row: Histogram of the mean activation per neuron, averaged over all examples in the validation set. Adapted from Figure 17.11 of [Aur19] . To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-noreg-heatmap.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-L1reg-heatmap.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-KLreg-heatmap.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-noreg-act.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-L1reg-act.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-KLreg-act.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-noreg-neurons.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-L1reg-neurons.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-KLreg-neurons.png")
###Output
_____no_output_____
###Markdown
Figure 20.23: Schematic illustration of a VAE. From a figure from http://krasserm.github.io/2018/07/27/dfc-vae/ . Used with kind permission of Martin Krasser.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-krasser.png")
###Output
_____no_output_____
###Markdown
Figure 20.24: Comparison of reconstruction abilities of an autoencoder and VAE. Top row: Original images. Middle row: Reconstructions from a VAE. Bottom row: Reconstructions from an AE. We see that the VAE reconstructions (middle) are blurrier. Both models have the same shallow convolutional architecture (3 hidden layers, 200 latents), and are trained on identical data (20k images of size $64 \times 64$ extracted from CelebA) for the same number of epochs (20). To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-celeba-orig.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-recon.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-celeba-recon.png")
###Output
_____no_output_____
###Markdown
Figure 20.25: Unconditional samples from a VAE (top row) or AE (bottom row) trained on CelebA. Both models have the same structure and both are trained for 20 epochs. To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-samples.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-celeba-samples.png")
###Output
_____no_output_____
###Markdown
Figure 20.26: Interpolation between two real images (first and last columns) in the latent space of a VAE. Adapted from Figure 3.22 of [Dav19] . To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-interp-gender.png")
###Output
_____no_output_____
###Markdown
Figure 20.27: Adding or removing the ``sunglasses'' vector to an image using a VAE. The first column is an input image, with embedding $\mathbf{z}$. Subsequent columns show the decoding of $\mathbf{z} + s \boldsymbol{\Delta}$, where $s \in \{-4,-3,-2,-1,0,1,2,3,4\}$ and $\boldsymbol{\Delta} = \overline{\mathbf{z}}^+ - \overline{\mathbf{z}}^-$ is the difference in the average embeddings of images of people with or without sunglasses. Adapted from Figure 3.21 of [Dav19] . To reproduce this figure, click the open in colab button:
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-glasses-scale.png")
###Output
_____no_output_____
###Markdown
Figure 20.28: Illustration of the tangent space and tangent vectors at two different points on a 2d curved manifold. From Figure 1 of [MM+17] . Used with kind permission of Michael Bronstein
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/tangentSpace.png")
###Output
_____no_output_____
###Markdown
Figure 20.29: Illustration of the image manifold. (a) An image of the digit 6 from the USPS dataset, of size $64 \times 57 = 3,648$. (b) A random sample from the space $\{0,1\}^{3648}$ reshaped as an image. (c) A dataset created by rotating the original image by one degree 360 times. We project this data onto its first two principal components, to reveal the underlying 2d circular manifold. From Figure 1 of [Nei12]. Used with kind permission of Neil Lawrence
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/manifold-6-original.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/manifold-6-rnd.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/manifold-6-rotated.png")
###Output
_____no_output_____
###Markdown
Figure 20.30: Illustration of some data generated from low-dimensional manifolds. (a) The 2d Swiss-roll manifold embedded into 3d. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.31: Metric MDS applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.32: (a) If we measure distances along the manifold, we find $d(1,6) > d(1,4)$, whereas if we measure in ambient space, we find $d(1,6) < d(1,4)$. From [Hin13]. Used with kind permission of Geoff Hinton.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/hinton-isomap1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/hinton-isomap2.png")
###Output
_____no_output_____
###Markdown
Figure 20.33: Isomap applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.34: (a) Noisy version of Swiss roll data. We perturb each point by adding $\mathcal N (0, 0.5^2)$ noise. (b) Results of Isomap applied to this data. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.35: Visualization of the first 8 kernel principal component basis functions derived from some 2d data. We use an RBF kernel with $\sigma ^2=0.1$. Figure(s) generated by [kpcaScholkopf.m](https://github.com/probml/pmtk3/blob/master/demos/kpcaScholkopf.m)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/kpcaScholkopfNoShade.png")
###Output
_____no_output_____
###Markdown
Figure 20.36: Kernel PCA applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.37: LLE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.38: Laplacian eigenmaps applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.39: Illustration of the Laplacian matrix derived from an undirected graph. From https://en.wikipedia.org/wiki/Laplacian_matrix . Used with kind permission of Wikipedia author AzaToth.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/graphLaplacian.png")
###Output
_____no_output_____
###Markdown
Figure 20.40: Illustration of a (positive) function defined on a graph. From Figure 1 of [DI+13] . Used with kind permission of Pascal Frossard.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/graphFun.png")
###Output
_____no_output_____
###Markdown
Figure 20.41: tSNE applied to (a) Swiss roll. Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
###Output
_____no_output_____
###Markdown
Figure 20.42: Illustration of the effect of changing the perplexity parameter when t-SNE is applied to some 2d data. From [MFI16] . See http://distill.pub/2016/misread-tsne for an animated version of these figures. Used with kind permission of Martin Wattenberg.
###Code
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/tSNE-wattenberg0.png.png")
###Output
_____no_output_____ |
notebooks/mortal_fibonacci_rabbits.ipynb | ###Markdown
Given: Positive integers n≤100 and m≤20.Return: The total number of pairs of rabbits that will remain after the n-th month if all rabbits live for m months.
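As a quick illustrative check (hand-computed, matching the sample test in the code below): with n=6 and m=3 the month-by-month pair counts are 1, 1, 2, 2, 3, 4, so the answer for the sample dataset is 4.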
###Code
def mortal_rabbits(n, m):
    # Track the number of rabbit pairs of age 0, 1, ..., m-1 months
    rabbit_age = [0] * m
    rabbit_age[0] = 1
    for i in range(n - 1):
        # The number of newborn pairs is equal to the number of pairs aged at least 1 month
        rabbit_age.insert(0, sum(rabbit_age[1:]))
        # After m months, the oldest rabbits die, so drop the last age bin
        rabbit_age.pop()
    # The total number of rabbit pairs at month n is the sum over all ages
    num_rabbits = sum(rabbit_age)
    print(num_rabbits)
    return num_rabbits
# Try sample dataset
n = 6
m = 3
mortal_rabbits(n, m)
# Try Rosalind dataset
# Note: data_dir is assumed to be a pathlib.Path (defined elsewhere) pointing at the downloaded Rosalind data files
with open(data_dir/"rosalind_fibd.txt", 'r') as f:
    (n, m) = tuple(f.readline().rstrip().split())
mortal_rabbits(int(n), int(m))
###Output
31746005499562581706
|
content/lessons/12/End-To-End-Example/In-Class-ETEE-Data-Analysis-Of-iSchool-Classes.ipynb | ###Markdown
End-To-End Example: Data Analysis of iSchool ClassesIn this end-to-end example we will perform a data analysis in Python Pandas. We will attempt to answer the following questions:- What percentage of the schedule is undergrad (course number below 500)?- Which undergrad classes are on Friday or at 8AM?Things we will demonstrate:- `read_html()` for basic web scraping- dealing with multiple pages of data- `append()` multiple `DataFrames` together- Feature engineering (adding a column to the `DataFrame`)The iSchool schedule of classes can be found here: https://ischool.syr.edu/classes
###Code
import pandas as pd
# this turns off warning messages
import warnings
warnings.filterwarnings('ignore')
# just figure out how to get the data
website = 'https://ischool.syr.edu/classes/?page=1'
data = pd.read_html(website)
data[0]
# let's generate links to the other pages
website = 'https://ischool.syr.edu/classes/?page='
classes = pd.DataFrame()
for i in [1,2,3,4,5,6,7]:
link = website + str(i)
page_classes = pd.read_html(link)
classes = classes.append(page_classes[0], ignore_index=True)
classes.to_csv('ischool-classes.csv')
# preview a random sample of the combined data frame
classes.sample(5)
# Feature engineering: split the course code into subject and number, then label each class GRAD or UGRAD
classes['Subject'] = classes['Course'].str[0:3]
classes['Number'] = classes['Course'].str[3:]
classes['Type'] = ""
classes.loc[classes['Number'] >= '500', 'Type'] = 'GRAD'
classes.loc[classes['Number'] < '500', 'Type'] = 'UGRAD'
# Filter down to undergraduate IST classes, then exclude those that meet on Friday or start at 8:00am
ist = classes[ classes['Subject'] == 'IST' ]
istug = ist[ ist['Type'] == 'UGRAD' ]
istug_nof = istug[ istug['Day'].str.find("F") == -1 ]
istug_nof_no8am = istug_nof[ ~istug_nof['Time'].str.startswith('8:00am') ]
istug_nof_no8am
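# A minimal sketch (added for illustration, not part of the original in-class code):
# answer the first question above -- what percentage of the schedule is undergrad?
# It assumes the 'Type' column engineered earlier in this cell.
ugrad_pct = 100 * len(classes[classes['Type'] == 'UGRAD']) / len(classes)
print("Percentage of undergrad classes: %.1f%%" % ugrad_pct)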
###Output
_____no_output_____ |
Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb | ###Markdown
Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne, Phil Chang, and Leo Werneck Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outputC # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 7: Polytropic EOS setup
# For EOS_type, choose either "SimplePolytrope" or "PiecewisePolytrope"
EOS_type = "SimplePolytrope"
# If "PiecewisePolytrope" is chosen as EOS_type, you
# must also choose the name of the EOS, which can
# be any of the following:
# 'PAL6', 'SLy', 'APR1', 'APR2', 'APR3', 'APR4',
# 'FPS', 'WFF1', 'WFF2', 'WFF3', 'BBB2', 'BPAL12',
# 'ENG', 'MPA1', 'MS1', 'MS2', 'MS1b', 'PS', 'GS1',
# 'GS2', 'BGN1H1', 'GNH3', 'H1', 'H2', 'H3', 'H4',
# 'H5', 'H6', 'H7', 'PCL2', 'ALF1', 'ALF2', 'ALF3',
# 'ALF4'
EOS_name = 'SLy' # <-- IGNORED IF EOS_type is not PiecewisePolytrope.
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
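For reference, the ODE system being integrated is the standard TOV system in $G=c=1$ units,$$\frac{dP}{dr} = -\frac{(\rho + P)\left(M + 4\pi r^3 P\right)}{r\left(r - 2M\right)}, \qquad \frac{dM}{dr} = 4\pi r^2 \rho, \qquad \frac{d\nu}{dr} = -\frac{2}{\rho+P}\frac{dP}{dr},$$closed by the chosen polytropic equation of state $P = K \rho_{\rm baryon}^\Gamma$; see the linked TOV tutorial module for the full derivation and for the conversion from Schwarzschild radius $r$ to isotropic radius $\bar{r}$.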
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
##########################
# Polytropic EOS example #
##########################
import TOV.Polytropic_EOSs as ppeos
if EOS_type == "SimplePolytrope":
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
rho_baryon_central = 0.129285
elif EOS_type == "PiecewisePolytrope":
eos = ppeos.set_up_EOS_parameters__Read_et_al_input_variables(EOS_name)
rho_baryon_central=2.0
else:
print("""Error: unknown EOS_type. Valid types are 'SimplePolytrope' and 'PiecewisePolytrope' """)
sys.exit(1)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=rho_baryon_central,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb). The required Jacobian matrix is$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$computed via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$is obtained using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of tensors such as $T^{\mu\nu}$ from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
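As a reminder (the standard ADM result, stated here for convenience), the Hamiltonian constraint reads $\mathcal{H} = R + K^2 - K_{ij}K^{ij} - 16\pi\rho = 0$, with matter source $\rho = n_\mu n_\nu T^{\mu\nu}$; the code below evaluates the equivalent expression written in the rescaled BSSN curvilinear variables, with the $T^{\mu\nu}$ source terms supplied by `BSSN.BSSN_stress_energy_source_terms`.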
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0 )
AuxEvol parity: ( T4UU00:0, T4UU01:1, T4UU02:2, T4UU03:3, T4UU11:4,
T4UU12:5, T4UU13:6, T4UU22:7, T4UU23:8, T4UU33:9 )
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
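Concretely, the enforcement is the standard algebraic rescaling $\bar{\gamma}_{ij} \to \left(\hat{\gamma}/\bar{\gamma}\right)^{1/3}\bar{\gamma}_{ij}$, where $\bar{\gamma}$ and $\hat{\gamma}$ denote the determinants of $\bar{\gamma}_{ij}$ and $\hat{\gamma}_{ij}$, which restores $\det\bar{\gamma}_{ij}=\det\hat{\gamma}_{ij}$ pointwise.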
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Set up TOV initial data, read from the TOV solver output file
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
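// Locate the stellar surface: the last point at which rho is nonzero before it
// drops to zero. Rbar is then the isotropic radius of the surface and Rbar_idx
// its index in the data arrays (both remain at -100 if no surface is found).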
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(exp4phi_arr);
free(r_Schw_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
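// Output columns: the two in-plane Cartesian coordinates of this grid point
// (in units of the TOV mass), the conformal factor gridfunction cf, and
// log10|H|, where H is the Hamiltonian constraint violation.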
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
(BENCH): Finished executing in 1.8139622211456299 seconds.
Finished compilation.
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 96 96 2`...
(BENCH): Finished executing in 0.21292877197265625 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopts $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
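# Resample the scattered grid points (read from out96.txt, in units of the TOV mass)
# onto a uniform 100x100 Cartesian grid so that imshow() can display them.
# 'nearest' assigns each pixel the value of the closest data point (introducing no
# new extrema), while 'cubic' yields a smoother image.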
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: log10( max(1e-6,Energy Density) )")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
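# The loop below extracts the y~0 row (j=49) of each log10|Ham| grid. For 4th-order
# convergence, the 48x48 error should exceed the 96x96 error by log10((96/48)^4) ~= 1.20,
# so the 48x48 curve is shifted down by that amount (i.e., log10((48/96)^4) is added);
# if convergence holds, the shifted curve should lie on top of the 96x96 curve.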
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 48 48 2`...
(BENCH): Finished executing in 0.21338939666748047 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.tex, and compiled LaTeX file to PDF file
Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne, Phil Chang, and Leo Werneck Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outputC # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 7: Polytropic EOS setup
# For EOS_type, choose either "SimplePolytrope" or "PiecewisePolytrope"
EOS_type = "SimplePolytrope"
# If "PiecewisePolytrope" is chosen as EOS_type, you
# must also choose the name of the EOS, which can
# be any of the following:
# 'PAL6', 'SLy', 'APR1', 'APR2', 'APR3', 'APR4',
# 'FPS', 'WFF1', 'WFF2', 'WFF3', 'BBB2', 'BPAL12',
# 'ENG', 'MPA1', 'MS1', 'MS2', 'MS1b', 'PS', 'GS1',
# 'GS2', 'BGN1H1', 'GNH3', 'H1', 'H2', 'H3', 'H4',
# 'H5', 'H6', 'H7', 'PCL2', 'ALF1', 'ALF2', 'ALF3',
# 'ALF4'
EOS_name = 'SLy' # <-- IGNORED IF EOS_type is not PiecewisePolytrope.
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
##########################
# Polytropic EOS example #
##########################
import TOV.Polytropic_EOSs as ppeos
if EOS_type == "SimplePolytrope":
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
rho_baryon_central = 0.129285
elif EOS_type == "PiecewisePolytrope":
eos = ppeos.set_up_EOS_parameters__Read_et_al_input_variables(EOS_name)
rho_baryon_central=2.0
else:
print("""Error: unknown EOS_type. Valid types are 'SimplePolytrope' and 'PiecewisePolytrope' """)
sys.exit(1)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=rho_baryon_central,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
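# Optional sanity check (not needed for the C-code generation below): since
# T^mu_nu = diag(-rho, P, P, P) and the 4-metric here is diagonal, with
# g^{tt} = -1/alpha^2 = -1/e^{nu} and diagonal spatial part gamma^{ii} = 1/gamma_{ii},
# raising the index should reproduce the T4SphUU components assigned above.
g4UU_diag = [-1/IDalpha**2, 1/gammaSphDD[0][0], 1/gammaSphDD[1][1], 1/gammaSphDD[2][2]]
T4_mixed_diag = [-rho, P, P, P]
assert all(sp.simplify(g4UU_diag[mu]*T4_mixed_diag[mu] - T4SphUU[mu][mu]) == 0 for mu in range(4))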
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to conver the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter3x3()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xx_to_Cart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0 )
AuxEvol parity: ( T4UU00:0, T4UU01:1, T4UU02:2, T4UU03:3, T4UU11:4,
T4UU12:5, T4UU13:6, T4UU22:7, T4UU23:8, T4UU33:9 )
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Set up TOV initial data, read from the TOV solver output file
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(exp4phi_arr);
free(r_Schw_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xx_to_Cart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
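// Output columns: xCart[1]/M and xCart[2]/M (the two Cartesian coordinates spanning this 2D slice),
//   the conformal factor gridfunction cf, and log10|H| (Hamiltonian constraint violation).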
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
(BENCH): Finished executing in 3.4261927604675293 seconds.
Finished compilation.
(EXEC): Executing `taskset -c 0,1 ./TOV_Playground 96 96 2`...
(BENCH): Finished executing in 0.22012090682983398 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopts $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
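# Interpolate the scattered (x,y) samples onto the uniform 100x100 plotting grid,
# using both nearest-neighbor and cubic interpolation: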
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("TOV Neutron Star: Conformal Factor")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96x96x2 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
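As a quick arithmetic check of this scaling: the gridspacing doubles, $\Delta x_{48} = 2\,\Delta x_{96}$, so if $|H| \propto \left(\Delta x\right)^4$ then$$\log_{10}|H_{48}| - \log_{10}|H_{96}| \approx \log_{10}\left(2^4\right) \approx 1.2.$$This is why the code below shifts the $N_r=48$ curve by $\log_{10}\left((48/96)^4\right) \approx -1.2$ before overplotting it on the $N_r=96$ curve: wherever fourth-order convergence holds, the two curves should lie nearly on top of one another.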
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
(EXEC): Executing `taskset -c 0,1 ./TOV_Playground 48 48 2`...
(BENCH): Finished executing in 0.22007393836975098 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.tex, and compiled LaTeX file to PDF file
Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne & Phil Chang Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
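###Markdown
The cell that follows is a minimal, standalone sketch of that ODE integration (an illustration only, not the NRPy+ [`TOV/TOV_Solver.py`](../edit/TOV/TOV_Solver.py) implementation, which additionally computes the isotropic radius $\bar{r}$, $e^{\nu}$, and $e^{4\phi}$). It shows how the TOV equations for a single polytrope $P=K\rho_{\rm baryon}^\Gamma$ might be integrated with scipy's `solve_ivp` in $G=c=1$ units; the EOS parameters and central density simply mirror those chosen in the cells below.
###Code
# Illustrative sketch ONLY (standalone example); the production solver is TOV/TOV_Solver.py.
import numpy as np
from scipy.integrate import solve_ivp

K, Gamma = 1.0, 2.0                 # single-polytrope EOS: P = K*rho_baryon^Gamma
rho_baryon_central = 0.129285       # same central baryon density as used below

def total_energy_density(P):
    # rho = rho_baryon + P/(Gamma-1): rest-mass plus internal energy density
    rho_baryon = (max(P, 0.0)/K)**(1.0/Gamma)
    return rho_baryon + max(P, 0.0)/(Gamma - 1.0)

def tov_rhs(r, y):
    # y = [P, m]; TOV equations in the Schwarzschild radial coordinate, G=c=1
    P, m = y
    rho  = total_energy_density(P)
    dPdr = -(rho + P)*(m + 4.0*np.pi*r**3*P)/(r*(r - 2.0*m))
    dmdr = 4.0*np.pi*r**2*rho
    return [dPdr, dmdr]

def surface(r, y):                  # terminate the integration where the pressure (nearly) vanishes
    return y[0] - 1e-12
surface.terminal, surface.direction = True, -1

P_c = K*rho_baryon_central**Gamma
r0  = 1e-6                          # start slightly off r=0 to avoid the coordinate singularity
m0  = 4.0/3.0*np.pi*r0**3*total_energy_density(P_c)
sol = solve_ivp(tov_rhs, (r0, 10.0), [P_c, m0], events=surface, rtol=1e-10, atol=1e-14)
print("Rough sketch result: R_Schw ~ %.4f, M ~ %.4f" % (sol.t[-1], sol.y[1][-1]))
###Output
_____no_output_____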
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
############################
# Single polytrope example #
############################
import TOV.Polytropic_EOSs as ppeos
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=0.129285,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input the curvilinear coordinates (xx0,xx1,xx2) and sets
the stress-energy tensor T^{mu nu} on the numerical grid, in the chosen reference-metric basis."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
AuxEvol gridfunction "T4UU00" has parity type 0.
AuxEvol gridfunction "T4UU01" has parity type 1.
AuxEvol gridfunction "T4UU02" has parity type 2.
AuxEvol gridfunction "T4UU03" has parity type 3.
AuxEvol gridfunction "T4UU11" has parity type 4.
AuxEvol gridfunction "T4UU12" has parity type 5.
AuxEvol gridfunction "T4UU13" has parity type 6.
AuxEvol gridfunction "T4UU22" has parity type 7.
AuxEvol gridfunction "T4UU23" has parity type 8.
AuxEvol gridfunction "T4UU33" has parity type 9.
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
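// Worked example of the IDX4S storage layout (illustration only): in the 96x96x2 run performed
//   below, NGHOSTS=3, so the grid including ghost zones has 102x102x8 points. Gridfunction g at
//   (i0,i1,i2)=(3,4,5) is then stored at IDX4S(g,3,4,5) = 3 + 102*(4 + 102*(5 + 8*g))
//   = 52431 + 83232*g, where 83232 = 102*102*8 is the per-gridfunction stride in the 1D array.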
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
// Free the 1D TOV solution arrays read from file (including r_Schw_arr & exp4phi_arr,
//   which would otherwise leak):
free(r_Schw_arr);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(exp4phi_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./TOV_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
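// Output columns: xCart[1]/M and xCart[2]/M (the two Cartesian coordinates spanning this 2D slice),
//   the conformal factor gridfunction cf, and log10|H| (Hamiltonian constraint violation).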
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
Finished executing in 3.8179306983947754 seconds.
Finished compilation.
Executing `taskset -c 0,1,2,3,4,5 ./TOV_Playground 96 96 2`...
Finished executing in 0.21003031730651855 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopts $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
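# Interpolate the scattered (x,y) samples onto the uniform 100x100 plotting grid,
# using both nearest-neighbor and cubic interpolation: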
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("TOV Neutron Star: Conformal Factor")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96x96x2 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
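As a quick arithmetic check of this scaling: the gridspacing doubles, $\Delta x_{48} = 2\,\Delta x_{96}$, so if $|H| \propto \left(\Delta x\right)^4$ then$$\log_{10}|H_{48}| - \log_{10}|H_{96}| \approx \log_{10}\left(2^4\right) \approx 1.2.$$This is why the code below shifts the $N_r=48$ curve by $\log_{10}\left((48/96)^4\right) \approx -1.2$ before overplotting it on the $N_r=96$ curve: wherever fourth-order convergence holds, the two curves should lie nearly on top of one another.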
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
Executing `taskset -c 0,1,2,3,4,5 ./TOV_Playground 48 48 2`...
Finished executing in 0.2116103172302246 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb to latex
[NbConvertApp] Support files will be in Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files/
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Writing 142100 bytes to Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne, Phil Chang, and Leo Werneck Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outputC # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 7: Polytropic EOS setup
# For EOS_type, choose either "SimplePolytrope" or "PiecewisePolytrope"
EOS_type = "SimplePolytrope"
# If "PiecewisePolytrope" is chosen as EOS_type, you
# must also choose the name of the EOS, which can
# be any of the following:
# 'PAL6', 'SLy', 'APR1', 'APR2', 'APR3', 'APR4',
# 'FPS', 'WFF1', 'WFF2', 'WFF3', 'BBB2', 'BPAL12',
# 'ENG', 'MPA1', 'MS1', 'MS2', 'MS1b', 'PS', 'GS1',
# 'GS2', 'BGN1H1', 'GNH3', 'H1', 'H2', 'H3', 'H4',
# 'H5', 'H6', 'H7', 'PCL2', 'ALF1', 'ALF2', 'ALF3',
# 'ALF4'
EOS_name = 'SLy' # <-- IGNORED IF EOS_type is not PiecewisePolytrope.
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
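For orientation, the solver integrates the standard TOV system (in Schwarzschild coordinates, with $G=c=1$),$$\frac{dP}{dr} = -\frac{(\rho+P)\left(M+4\pi r^3 P\right)}{r\left(r-2M\right)},\qquad \frac{dM}{dr} = 4\pi r^2 \rho,$$closed by the polytropic equation of state. The snippet below is a minimal, standalone sketch of that integration using `scipy.integrate.odeint` for the same $\Gamma=2$, $K=1$ polytrope; it is for illustration only and is *not* the interface or implementation of `TOV.TOV_Solver()`, which additionally computes the isotropic radius and metric functions written to the data file.

```python
# Minimal sketch (illustration only): integrate the TOV equations for a Gamma=2, K=1 polytrope.
import numpy as np
from scipy.integrate import odeint

K, Gamma = 1.0, 2.0
def total_energy_density(P):
    # rho = rho_baryon + P/(Gamma-1), with rho_baryon = (P/K)^(1/Gamma) for a simple polytrope
    Ppos = max(P, 0.0)
    return (Ppos/K)**(1.0/Gamma) + Ppos/(Gamma - 1.0)

def tov_rhs(y, r):
    P, M = y
    if P <= 0.0:          # outside the star: nothing left to integrate
        return [0.0, 0.0]
    rho = total_energy_density(P)
    dPdr = -(rho + P)*(M + 4.0*np.pi*r**3*P)/(r*(r - 2.0*M))
    dMdr = 4.0*np.pi*r**2*rho
    return [dPdr, dMdr]

rho_b_central = 0.129285                      # same central baryon density as used below
P_central = K*rho_b_central**Gamma
r = np.linspace(1e-6, 2.0, 200000)
P, M = odeint(tov_rhs, [P_central, 0.0], r).T
i_surf = np.argmax(P <= 1e-10*P_central)      # first gridpoint at/outside the stellar surface
print("Rough M and R_Schw from this sketch:", M[i_surf], r[i_surf])
```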
###Code
##########################
# Polytropic EOS example #
##########################
import TOV.Polytropic_EOSs as ppeos
if EOS_type == "SimplePolytrope":
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
rho_baryon_central = 0.129285
elif EOS_type == "PiecewisePolytrope":
eos = ppeos.set_up_EOS_parameters__Read_et_al_input_variables(EOS_name)
rho_baryon_central=2.0
else:
print("""Error: unknown EOS_type. Valid types are 'SimplePolytrope' and 'PiecewisePolytrope' """)
sys.exit(1)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=rho_baryon_central,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
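Since both the 4-metric and $T^\mu_{\ \nu}$ are diagonal here, the raising operation above is easy to double-check symbolically. A minimal SymPy sketch (independent of the NRPy+ code in the next cell):

```python
# Verify T^{mu nu} = T^mu_sigma g^{sigma nu} for the diagonal metric and a perfect fluid at rest.
import sympy as sp
rho, P, expnu, exp4phi, rbar, theta = sp.symbols('rho P expnu exp4phi rbar theta', positive=True)
g4inv = sp.diag(-1/expnu, 1/exp4phi, 1/(exp4phi*rbar**2), 1/(exp4phi*rbar**2*sp.sin(theta)**2))
T4UD  = sp.diag(-rho, P, P, P)     # T^mu_nu
print(sp.simplify(T4UD*g4inv))     # diag(rho/expnu, P/exp4phi, P/(exp4phi*rbar**2), P/(exp4phi*rbar**2*sin(theta)**2))
```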
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).These steps make use of the Jacobian$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$computed via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$obtained using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
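The transformation law above is implemented in the next cell with explicit loops over NRPy+ indexed expressions. As a standalone illustration of the same operation (a sketch only, assuming the Jacobian is supplied as a SymPy `Matrix`):

```python
# Sketch: apply T_rfm^{mu nu} = (dx_rfm^mu/dx_Sph^delta)(dx_rfm^nu/dx_Sph^sigma) T_Sph^{delta sigma}
import sympy as sp

def transform_rank2_contravariant(Jac_dUrfm_dDSph, T_SphUU):
    DIM = Jac_dUrfm_dDSph.shape[0]
    T_rfmUU = sp.zeros(DIM, DIM)
    for mu in range(DIM):
        for nu in range(DIM):
            for delta in range(DIM):
                for sigma in range(DIM):
                    T_rfmUU[mu, nu] += (Jac_dUrfm_dDSph[mu, delta] *
                                        Jac_dUrfm_dDSph[nu, sigma] * T_SphUU[delta, sigma])
    return sp.simplify(T_rfmUU)
# Equivalently, in matrix form: T_rfmUU = J * T_SphUU * J.T, with J = Jac_dUrfm_dDSph.
```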
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
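In brief, the key relations applied by that module are (see the linked tutorial for the full derivation and the tensor rescalings; here $\gamma$, $\bar{\gamma}$, and $\hat{\gamma}$ denote the determinants of the physical, conformal, and reference 3-metrics, respectively)$$\bar{\gamma}_{ij} = \left(\frac{\hat{\gamma}}{\gamma}\right)^{1/3}\gamma_{ij}, \qquad e^{4\phi} = \left(\frac{\gamma}{\hat{\gamma}}\right)^{1/3}, \qquad \bar{\Lambda}^i = \bar{\gamma}^{jk}\left(\bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}\right),$$with $\bar{\Lambda}^i$ evaluated via finite-difference derivatives of $\bar{\gamma}_{ij}$ (hence the *Numerical* in the module name), and all tensors subsequently rescaled with respect to the reference metric.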
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
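For reference, the quantity output here is the sourced Hamiltonian constraint, which vanishes analytically for any solution of Einstein's equations; in its familiar ADM form,$$\mathcal{H} = R + K^2 - K_{ij}K^{ij} - 16\pi\rho = 0,$$where $R$ is the Ricci scalar of the spatial metric, $K_{ij}$ is the extrinsic curvature with trace $K$, and $\rho = n_\mu n_\nu T^{\mu\nu}$ is the energy density measured by normal observers. The BSSN module below evaluates the equivalent expression in the rescaled curvilinear variables, with the matter source supplied by `BSSN_stress_energy_source_terms`.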
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0 )
AuxEvol parity: ( T4UU00:0, T4UU01:1, T4UU02:2, T4UU03:3, T4UU11:4,
T4UU12:5, T4UU13:6, T4UU22:7, T4UU23:8, T4UU33:9 )
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
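Concretely, this enforcement amounts (schematically) to the algebraic rescaling$$\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3}\bar{\gamma}_{ij},$$where $\hat{\gamma}$ and $\bar{\gamma}$ are the determinants of the reference and conformal 3-metrics, which restores $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ pointwise. The next cell generates the C routine that applies this rescaling: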
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define initial_data() for TOV initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
// Free all data arrays allocated for the 1D TOV solution (including r_Schw_arr
// and exp4phi_arr, which would otherwise leak):
free(r_Schw_arr);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(exp4phi_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
(BENCH): Finished executing in 1.614800214767456 seconds.
Finished compilation.
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 96 96 2`...
(BENCH): Finished executing in 0.21306824684143066 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopts $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: log10( max(1e-6,Energy Density) )")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation to increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
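An equivalent, pointwise way to quantify this: from the constraint violation measured at the two resolutions at the same physical point, the observed convergence order is $p = \log\left(|H_{48}|/|H_{96}|\right)/\log(96/48)$, which should approach 4 away from the stellar surface. A small standalone helper (assuming the interpolated $\log_{10}|H|$ arrays constructed in the next cell, *before* the $(48/96)^4$ shift is applied):

```python
# Sketch: estimate the observed convergence order from log10|H| at two resolutions.
import numpy as np
def observed_order(log10H_coarse, log10H_fine, refinement_ratio=96./48.):
    # p = log(|H_coarse|/|H_fine|) / log(ratio); expect p ~ 4 for 4th-order finite differences
    return (np.asarray(log10H_coarse) - np.asarray(log10H_fine)) * np.log(10.0) / np.log(refinement_ratio)
```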
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 48 48 2`...
(BENCH): Finished executing in 0.21373486518859863 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.tex, and compiled LaTeX file to PDF file
Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne, Phil Chang, and Leo Werneck Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outputC # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 7: Polytropic EOS setup
# For EOS_type, choose either "SimplePolytrope" or "PiecewisePolytrope"
EOS_type = "SimplePolytrope"
# If "PiecewisePolytrope" is chosen as EOS_type, you
# must also choose the name of the EOS, which can
# be any of the following:
# 'PAL6', 'SLy', 'APR1', 'APR2', 'APR3', 'APR4',
# 'FPS', 'WFF1', 'WFF2', 'WFF3', 'BBB2', 'BPAL12',
# 'ENG', 'MPA1', 'MS1', 'MS2', 'MS1b', 'PS', 'GS1',
# 'GS2', 'BGN1H1', 'GNH3', 'H1', 'H2', 'H3', 'H4',
# 'H5', 'H6', 'H7', 'PCL2', 'ALF1', 'ALF2', 'ALF3',
# 'ALF4'
EOS_name = 'SLy' # <-- IGNORED IF EOS_type is not PiecewisePolytrope.
###Output
_____no_output_____
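###Markdown
As a quick, optional readback check (not part of the original workflow; it assumes the cell above has been run), we can confirm that the coordinate system and finite-difference order were registered with NRPy+'s parameter interface as intended:
###Code
# Optional sanity check: read back the parameters set above via NRPy+'s parameter interface.
print("reference_metric::CoordSystem =", par.parval_from_str("reference_metric::CoordSystem"))
print("finite_difference::FD_CENTDERIVS_ORDER =", par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER"))
###Output
_____no_output_____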
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
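###Markdown
(Optional check, not in the original notebook:) confirm that scipy is importable and report its version before calling the TOV solver below.
###Code
import scipy  # SciPy: provides the ODE integration routine used by the TOV solver, as noted above
print("scipy version:", scipy.__version__)
###Output
_____no_output_____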
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
##########################
# Polytropic EOS example #
##########################
import TOV.Polytropic_EOSs as ppeos
if EOS_type == "SimplePolytrope":
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
rho_baryon_central = 0.129285
elif EOS_type == "PiecewisePolytrope":
eos = ppeos.set_up_EOS_parameters__Read_et_al_input_variables(EOS_name)
rho_baryon_central=2.0
else:
print("""Error: unknown EOS_type. Valid types are 'SimplePolytrope' and 'PiecewisePolytrope' """)
sys.exit(1)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=rho_baryon_central,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
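###Markdown
As another quick, optional check (assuming the cell above has been run; this cell is not part of the original workflow), we can load the densely sampled solution that `TOV.TOV_Solver()` just wrote to `outputTOVpolytrope.txt`, report its size, and recompute the compactness from the returned Python variables. No assumption is made here about the file's column ordering.
###Code
import numpy as np  # NumPy: used only for this optional check
tov_data = np.loadtxt("outputTOVpolytrope.txt")  # densely sampled TOV solution written above
print("outputTOVpolytrope.txt shape (rows, columns):", tov_data.shape)
print("Compactness M/R_Schw = %.6e" % (M_TOV/R_Schw_TOV))
###Output
_____no_output_____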
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
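###Markdown
Before moving on, a minimal symbolic sanity check (an optional cell, assuming the cells above have been run): for the diagonal spherical 3-metric defined above, $\det\gamma_{ij}$ should reduce to $e^{12\phi}\bar{r}^4\sin^2\theta$, and $\alpha^2$ should reduce to $e^{\nu}$. Both differences below should print 0.
###Code
import sympy as sp  # SymPy is already imported above; repeated so this optional cell reads standalone
# Determinant of the diagonal 3-metric gammaSphDD constructed above:
detgamma = sp.simplify(gammaSphDD[0][0]*gammaSphDD[1][1]*gammaSphDD[2][2])
print("det(gammaSphDD) - exp4phi^3 * rbar^4 * sin^2(theta) =",
      sp.simplify(detgamma - exp4phi**3*rbar**4*sp.sin(theta)**2))
print("alpha^2 - expnu =", sp.simplify(IDalpha**2 - expnu))
###Output
_____no_output_____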
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
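###Markdown
The Jacobian and inverse Jacobian constructed above should contract to the $4\times4$ identity; the following optional cell (assuming the cells above have been run) verifies this symbolically.
###Code
import sympy as sp  # already imported above
# Check that Jac4_dUrfm_dDSphorCartUD is indeed the matrix inverse of Jac4_dUSphorCart_dDrfmUD:
for mu in range(4):
    for nu in range(4):
        contraction = sp.sympify(0)
        for alp in range(4):
            contraction += Jac4_dUrfm_dDSphorCartUD[mu][alp]*Jac4_dUSphorCart_dDrfmUD[alp][nu]
        expected = sp.sympify(1) if mu == nu else sp.sympify(0)
        assert sp.simplify(contraction - expected) == 0, "Jacobian check failed at (%d,%d)" % (mu, nu)
print("Jacobian * inverse-Jacobian = 4x4 identity: check passed.")
###Output
_____no_output_____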
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
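###Markdown
For reference (an optional, purely informational cell assuming the cells above have been run), the size of the symbolic Hamiltonian-constraint expression that was just converted to C can be reported as follows; it has no effect on the generated code.
###Code
import sympy as sp  # already imported above
print("Hamiltonian constraint expression: %d symbolic operations, %d free symbols"
      % (sp.count_ops(bssncon.H), len(bssncon.H.free_symbols)))
###Output
_____no_output_____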
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0 )
AuxEvol parity: ( T4UU00:0, T4UU01:1, T4UU02:2, T4UU03:3, T4UU11:4,
T4UU12:5, T4UU13:6, T4UU22:7, T4UU23:8, T4UU33:9 )
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
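###Markdown
At this point all C-code kernels and parameter files needed by the main routine have been written to the C-code output directory. The optional cell below (assuming the cells above have been run) simply lists that directory's contents, which can be a useful check before compiling.
###Code
import os  # standard library; already imported above
for f in sorted(os.listdir(Ccodesdir)):
    print(f)
###Output
_____no_output_____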
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
  // Free all 1D arrays allocated above for the TOV input data:
  free(rbar_arr);
  free(rho_arr);
  free(rho_baryon_arr);
  free(P_arr);
  free(M_arr);
  free(expnu_arr);
  free(exp4phi_arr);
  free(r_Schw_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
(BENCH): Finished executing in 1.6148104667663574 seconds.
Finished compilation.
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 96 96 2`...
(BENCH): Finished executing in 0.21198439598083496 seconds.
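###Markdown
(Optional check, assuming the run above completed successfully:) the output file `out96.txt` should contain one row per interior grid point, i.e. $96\times96\times2 = 18432$ rows, since the C code above loops over interior points only.
###Code
import numpy as np  # NumPy: used only for this optional check
n_rows = np.loadtxt("out96.txt").shape[0]
print("out96.txt rows: %d (expected 96*96*2 = %d)" % (n_rows, 96*96*2))
###Output
_____no_output_____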
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: log10( max(1e-6,Energy Density) )")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 48 48 2`...
(BENCH): Finished executing in 0.21283507347106934 seconds.
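###Markdown
As an optional numerical cross-check of the convergence plot above (assuming the cells above have been run): away from the stellar surface, $\log_{10}|H_{48}| - \log_{10}|H_{96}|$ should approach $4\log_{10}(96/48)\approx 1.2$, i.e., an observed convergence order near 4. A robust, median-based estimate over the interpolated grids:
###Code
import numpy as np  # already imported above
# grid48 and grid96 hold log10|H| for the two resolutions, interpolated onto the same (x,y) grid above.
diff_log10 = grid48 - grid96
est_order = np.nanmedian(diff_log10) / np.log10(96.0/48.0)
print("Median estimated convergence order: %.2f (expected ~4 away from the stellar surface)" % est_order)
###Output
_____no_output_____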
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.tex, and compiled LaTeX file to PDF file
Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.pdf
###Markdown
Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne & Phil Chang Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1.
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
############################
# Single polytrope example #
############################
import TOV.Polytropic_EOSs as ppeos
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=0.129285,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
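###Markdown
For orientation (an optional illustrative cell, assuming the cells above have been run): for the simple polytrope used here, $P = K\rho_B^\Gamma$, so the central pressure follows directly from the central baryon density passed to the TOV solver above.
###Code
# Simple-polytrope EOS check: P = K * rho_baryon^Gamma, with K and Gamma as set above.
rho_baryon_central = 0.129285  # same central baryon density passed to TOV.TOV_Solver() above
P_central = K_poly_tab0 * rho_baryon_central**Gamma_poly_tab[0]
print("Central pressure for this simple polytrope: P_c = %.6e" % P_central)
###Output
_____no_output_____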
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
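(For reference: in ADM variables with $G=c=1$, the Hamiltonian constraint reads $$\mathcal{H}\equiv R + K^2 - K_{ij}K^{ij} - 16\pi\rho = 0,$$ and the gridfunction $H$ constructed below is the equivalent expression written in BSSN variables, so it should converge toward zero as the numerical resolution is increased.)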
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
AuxEvol gridfunction "T4UU00" has parity type 0.
AuxEvol gridfunction "T4UU01" has parity type 1.
AuxEvol gridfunction "T4UU02" has parity type 2.
AuxEvol gridfunction "T4UU03" has parity type 3.
AuxEvol gridfunction "T4UU11" has parity type 4.
AuxEvol gridfunction "T4UU12" has parity type 5.
AuxEvol gridfunction "T4UU13" has parity type 6.
AuxEvol gridfunction "T4UU22" has parity type 7.
AuxEvol gridfunction "T4UU23" has parity type 8.
AuxEvol gridfunction "T4UU33" has parity type 9.
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
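(For reference: the enforcement is the algebraic rescaling $$\bar{\gamma}_{ij}\to\left(\frac{\det\hat{\gamma}_{ij}}{\det\bar{\gamma}_{ij}}\right)^{1/3}\bar{\gamma}_{ij},$$ which restores $\det\bar{\gamma}_{ij}=\det\hat{\gamma}_{ij}$ exactly at every gridpoint.)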
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Set up TOV initial data (no exact BSSN_ID() routine is needed here;
//             data are interpolated from the TOV solver output via the routines above)
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
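  // The loop below locates the stellar surface: it scans outward for the first point
  //   where the total energy density rho drops to zero, and stores the isotropic radius
  //   just inside that point as Rbar (with array index Rbar_idx) for the 1D interpolator.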
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
  // Free all eight 1D TOV arrays allocated above (including r_Schw_arr and exp4phi_arr).
  free(r_Schw_arr);
  free(rbar_arr);
  free(rho_arr);
  free(rho_baryon_arr);
  free(P_arr);
  free(M_arr);
  free(expnu_arr);
  free(exp4phi_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
Finished executing in 2.015650510787964 seconds.
Finished compilation.
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 96 96 2`...
Finished executing in 0.2118375301361084 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
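(Note, for clarity: with the default `ConformalFactor = "W"` choice mentioned earlier in this notebook, the plotted gridfunction `cf` is $W=e^{-2\phi}$, so smaller values of `cf` (darker colors) correspond to a larger conformal factor $e^{4\phi}$, i.e. stronger gravitational fields near the stellar center.)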
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: log10( max(1e-6,Energy Density) )")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
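(Worked arithmetic, for reference: a fourth-order error grows by $(96/48)^4=16$ when the resolution is halved, i.e. $\log_{10}|H|$ shifts upward by $\log_{10}16\approx 1.2$. Accordingly, the plotting code below shifts the $48\times 48\times 2$ curve by $\log_{10}\left[(48/96)^4\right]\approx -1.2$, so the two curves should lie on top of one another wherever fourth-order convergence holds.)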
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 48 48 2`...
Finished executing in 0.21308159828186035 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne, Phil Chang, and Leo Werneck Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outputC # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 7: Polytropic EOS setup
# For EOS_type, choose either "SimplePolytrope" or "PiecewisePolytrope"
EOS_type = "SimplePolytrope"
# If "PiecewisePolytrope" is chosen as EOS_type, you
# must also choose the name of the EOS, which can
# be any of the following:
# 'PAL6', 'SLy', 'APR1', 'APR2', 'APR3', 'APR4',
# 'FPS', 'WFF1', 'WFF2', 'WFF3', 'BBB2', 'BPAL12',
# 'ENG', 'MPA1', 'MS1', 'MS2', 'MS1b', 'PS', 'GS1',
# 'GS2', 'BGN1H1', 'GNH3', 'H1', 'H2', 'H3', 'H4',
# 'H5', 'H6', 'H7', 'PCL2', 'ALF1', 'ALF2', 'ALF3',
# 'ALF4'
EOS_name = 'SLy' # <-- IGNORED IF EOS_type is not PiecewisePolytrope.
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
##########################
# Polytropic EOS example #
##########################
import TOV.Polytropic_EOSs as ppeos
if EOS_type == "SimplePolytrope":
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
rho_baryon_central = 0.129285
elif EOS_type == "PiecewisePolytrope":
eos = ppeos.set_up_EOS_parameters__Read_et_al_input_variables(EOS_name)
rho_baryon_central=2.0
else:
print("""Error: unknown EOS_type. Valid types are 'SimplePolytrope' and 'PiecewisePolytrope' """)
sys.exit(1)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=rho_baryon_central,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
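# (Optional, illustrative sanity check; not part of the original tutorial.)
# Peek at the densely-sampled TOV solution just written to disk. We only inspect the
# array shape and the outermost sampled row here; the exact column ordering is assumed
# to match what read_datafile__set_arrays() in TOV/tov_interp.h expects, but nothing
# below relies on that ordering.
import numpy as np
tov_soln = np.loadtxt("outputTOVpolytrope.txt")
print("TOV data file shape (radial points, columns):", tov_soln.shape)
print("Last row of the file (outermost sampled point):", tov_soln[-1])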
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
enableCparameters=False)
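# (Optional, illustrative consistency check; not part of the original tutorial.)
# Verify symbolically that the diagonal T^{mu nu} defined above agrees with raising the
# mixed stress-energy tensor T^mu_nu = diag(-rho, P, P, P) with the diagonal inverse
# 4-metric g^{mu nu} = diag(-1/alpha^2, gamma^{ij}) derived in the Markdown cell above.
g4SphUU_diag = [-1/IDalpha**2, 1/gammaSphDD[0][0], 1/gammaSphDD[1][1], 1/gammaSphDD[2][2]]
T4SphUD_diag = [-rho, P, P, P]  # mixed components T^mu_nu for a perfect fluid at rest
for mu in range(4):
    assert sp.simplify(g4SphUU_diag[mu]*T4SphUD_diag[mu] - T4SphUU[mu][mu]) == 0
print("Diagonal T^{mu nu} consistency check passed.")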
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.

1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.
1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis.
1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis.
1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).

The Jacobian for this basis transformation,
$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$
is computed via exact differentiation (courtesy SymPy), and the inverse Jacobian,
$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$
is obtained using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:
$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xx_to_Cart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammahat_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammahat_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False"),
loopopts = "InteriorPoints,enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0 )
AuxEvol parity: ( T4UU00:0, T4UU01:1, T4UU02:2, T4UU03:3, T4UU11:4,
T4UU12:5, T4UU13:6, T4UU22:7, T4UU23:8, T4UU33:9 )
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
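Concretely, the enforcement amounts to the pointwise rescaling $$\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{ij},$$ where $\bar{\gamma}$ and $\hat{\gamma}$ denote the determinants of the conformal and reference 3-metrics, respectively (this is the content of Eq. 53 cited above).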
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammahat_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammahat_constraint() to file TOVID_Ccodes/enforce_detgammahat_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammahat_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define the initial_data() driver for TOV initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammahat_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammahat_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
// Free the 1D TOV solution arrays now that the gridfunctions have been set:
free(r_Schw_arr);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(exp4phi_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammahat_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xx_to_Cart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
(EXEC): Executing `gcc -std=gnu99 -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
(BENCH): Finished executing in 1.6104352474212646 seconds.
Finished compilation.
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 96 96 2`...
(BENCH): Finished executing in 0.20788025856018066 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: log10( max(1e-6,Energy Density) )")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
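In the code below this expectation is applied by adding $\log_{10}\left[(48/96)^4\right] = 4\log_{10}(1/2) \approx -1.20$ to the $\log_{10}|H|$ data from the lower-resolution run, i.e., shifting that curve down by about 1.2 decades; overlap of the shifted $48\times 48\times 2$ curve with the $96\times 96\times 2$ curve then confirms fourth-order convergence.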
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 48 48 2`...
(BENCH): Finished executing in 0.209364652633667 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.tex, and compiled LaTeX file to PDF file
Tutorial-Start_to_Finish-BSSNCurvilinear-
Setting_up_TOV_initial_data.pdf
###Markdown
Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne & Phil Chang Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$.** NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py) [\[**tutorial**\]](Tutorial-ADM_Initial_Data-TOV.ipynb): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to generate initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.1. Set gridfunction values to initial data (**This module**).1. Evolve the system forward in time using RK4 time integration. At each RK4 substep, do the following: 1. Evaluate BSSN RHS expressions. 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) 1. Apply constraints on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$1. At the end of each iteration in time, output the Hamiltonian constraint violation.1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This module is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. [Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. 
[Step 5](mainc): $\rm{TOV\_Playground.c}$: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# First we import needed core NRPy+ modules
from outputC import *
import NRPy_param_funcs as par
import grid as gri
import loop as lp
import indexedexp as ixp
import finite_difference as fin
import reference_metric as rfm
thismodule = "TOV_ID_setup"
# Set spatial dimension (must be 3 for BSSN)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Then we set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf","phi")
#################
# Next output C headers related to the numerical grids we just set up:
#################
# First output the coordinate bounds xxmin[] and xxmax[]:
with open("BSSN/xxminmax.h", "w") as file:
file.write("const REAL xxmin[3] = {"+str(rfm.xxmin[0])+","+str(rfm.xxmin[1])+","+str(rfm.xxmin[2])+"};\n")
file.write("const REAL xxmax[3] = {"+str(rfm.xxmax[0])+","+str(rfm.xxmax[1])+","+str(rfm.xxmax[2])+"};\n")
# Generic coordinate NRPy+ file output, Part 2: output the conversion from (x0,x1,x2) to Cartesian (x,y,z)
outputC([rfm.xxCart[0],rfm.xxCart[1],rfm.xxCart[2]],["xCart[0]","xCart[1]","xCart[2]"],
"BSSN/xxCart.h")
# Register the Hamiltonian as a gridfunction, to be used later.
H = gri.register_gridfunctions("AUX","H")
###Output
Wrote to file "BSSN/xxCart.h"
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
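For context, the system integrated by the solver is the standard TOV system in Schwarzschild coordinates with $G=c=1$,$$\frac{dP}{dr} = -\frac{(\rho + P)\left(M + 4\pi r^3 P\right)}{r\left(r - 2M\right)},\qquad\frac{dM}{dr} = 4\pi r^2 \rho,\qquad\frac{d\nu}{dr} = \frac{2\left(M + 4\pi r^3 P\right)}{r\left(r - 2M\right)},$$closed by a polytropic equation of state $P = K\rho_{\rm baryon}^{\Gamma}$ (see the linked TOV tutorial module for the precise parameter choices).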
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [TOV.TOV_Solver() function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
import TOV.TOV_Solver as TOV
TOV.TOV_Solver()
###Output
Just generated a TOV star with R_Schw = 0.9565681425227097 , M = 0.14050303285288188 , M/R_Schw = 0.1468824086931645 .
###Markdown
Step 2.a: Interpolating the TOV data file as needed \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system and units we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired (xx0,xx1,xx2) basis1. 
Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
with open("BSSN/ID_TOV_ADM_quantities.h", "w") as file:
file.write("""
// This function takes as input either (x,y,z) or (r,th,ph) and outputs
// all ADM quantities in the Cartesian or Spherical basis, respectively.
void ID_TOV_ADM_quantities(
const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2) {
const REAL r = xyz_or_rthph[0];
const REAL th = xyz_or_rthph[1];
const REAL ph = xyz_or_rthph[2];
REAL rho,P,M,expnu,exp4phi;
TOV_interpolate_1D(r,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&P,&M,&expnu,&exp4phi);
*alpha = sqrt(expnu);
// \gamma_{rbar rbar} = exp(4 phi)
*gammaDD00 = exp4phi;
// \gamma_{thth} = r^2 * exp(4 phi)
*gammaDD11 = r*r * exp4phi;
// \gamma_{phph} = r^2 sin^2(th) * exp(4 phi)
*gammaDD22 = r*r*sin(th)*sin(th) * exp4phi;
// All other quantities ARE ZERO:
*gammaDD01 = 0.0;
*gammaDD02 = 0.0;
*gammaDD12 = 0.0;
*KDD00 = 0.0;
*KDD01 = 0.0;
*KDD02 = 0.0;
*KDD11 = 0.0;
*KDD12 = 0.0;
*KDD22 = 0.0;
*betaU0 = 0.0;
*betaU1 = 0.0;
*betaU2 = 0.0;
*BU0 = 0.0;
*BU1 = 0.0;
*BU2 = 0.0;
}\n""")
with open("BSSN/ID_TOV_TUPMUNU.h", "w") as file:
file.write("""
// This function takes as input either (x,y,z) or (r,th,ph) and outputs
// all ADM quantities in the Cartesian or Spherical basis, respectively.
void ID_TOV_TUPMUNU(const REAL xyz_or_rthph[3], const ID_inputs other_inputs,
REAL *T4UU00,REAL *T4UU01,REAL *T4UU02,REAL *T4UU03,
/**/ REAL *T4UU11,REAL *T4UU12,REAL *T4UU13,
/**/ REAL *T4UU22,REAL *T4UU23,
/**/ REAL *T4UU33) {
const REAL r = xyz_or_rthph[0];
const REAL th = xyz_or_rthph[1];
const REAL ph = xyz_or_rthph[2];
REAL rho,P,M,expnu,exp4phi;
TOV_interpolate_1D(r,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&P,&M,&expnu,&exp4phi);
//T^tt = e^(-nu) * rho
*T4UU00 = rho / expnu;
//T^rr = P / exp4phi
*T4UU11 = P / exp4phi;
//T^thth = P / (r^2 * exp4phi)
*T4UU22 = P / (r*r * exp4phi);
//T^phph = P / (r^2 * sin^2(th) * exp4phi)
*T4UU33 = P / (r*r * sin(th)*sin(th) * exp4phi);
// All other components ARE ZERO:
*T4UU01 = 0; *T4UU02 = 0; *T4UU03 = 0;
/**/ *T4UU12 = 0; *T4UU13 = 0;
/**/ *T4UU23 = 0;
}\n""")
###Output
_____no_output_____
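###Markdown
As a quick symbolic sanity check (this cell is purely illustrative and nothing downstream depends on it), we can verify with SymPy that raising the mixed stress-energy tensor $T^\mu_{\ \nu}=\mathrm{diag}(-\rho,P,P,P)$ with the inverse of the diagonal isotropic metric reproduces the $T^{\mu\nu}$ components hard-coded in `ID_TOV_TUPMUNU()` above:
###Code
import sympy as sp
# Local symbols for this check only (suffixed with _chk to avoid clobbering notebook variables):
r_chk, th_chk, rho_chk, P_chk, expnu_chk, exp4phi_chk = sp.symbols("r th rho P expnu exp4phi", positive=True)
# 4-metric of the isotropic TOV line element, with beta^i = 0 and alpha^2 = e^nu:
g4DD_chk = sp.diag(-expnu_chk, exp4phi_chk, exp4phi_chk*r_chk**2, exp4phi_chk*r_chk**2*sp.sin(th_chk)**2)
# Raise the second index: T^{mu nu} = T^mu_alpha g^{alpha nu}:
T4UU_chk = sp.simplify(sp.diag(-rho_chk, P_chk, P_chk, P_chk) * g4DD_chk.inv())
# Expected: rho/expnu, P/exp4phi, P/(r^2 exp4phi), P/(r^2 sin^2(th) exp4phi):
for mu in range(4):
    print("T4UU"+str(mu)+str(mu)+" = "+str(T4UU_chk[mu, mu]))
###Output
_____no_output_____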
###Markdown
Step 2.b: Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ \[Back to [top](toc)\]$$\label{source}$$Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$, via Eqs. 10 of [Baumgarte, Montero, Cordero-Carrión, and Müller](https://arxiv.org/pdf/1211.6632.pdf):\begin{array}\ S_{ij} &= \gamma_{i \mu} \gamma_{j \nu} T^{\mu \nu} \\S_{i} &= -\gamma_{i\mu} n_\nu T^{\mu\nu} \\S &= \gamma^{ij} S_{ij} \\\rho &= n_\mu n_\nu T^{\mu\nu},\end{array}ID_TOV_TUPMUNU() provides numerical values for $T^{\mu\nu}$, but we do not have $\gamma_{\mu \nu}$ or $n_\mu$ directly. So here we will construct the latter quantities.First, B&S Eq. 2.27 defines $\gamma_{\mu \nu}$ as:$$\gamma_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu$$where $$n_\mu = \{-\alpha,0,0,0\}.$$So we will first need to construct the 4-metric based on ADM quantities. This is provided by Eq 4.47 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf):$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$
###Code
gammaDD = ixp.declarerank2("gammaDD", "sym01",DIM=3)
betaU = ixp.declarerank1("betaU",DIM=3)
alpha = sp.symbols("alpha")
# To get \gamma_{\mu \nu} = gammabar4DD[mu][nu], we'll need to construct the 4-metric, using Eq. 2.122 in B&S:
# Eq. 2.121 in B&S
betaD = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
betaD[i] += gammaDD[i][j]*betaU[j]
# Now compute the beta contraction.
beta2 = sp.sympify(0)
for i in range(DIM):
beta2 += betaU[i]*betaD[i]
# Eq. 2.122 in B&S
g4DD = ixp.zerorank2(DIM=4)
g4DD[0][0] = -alpha**2 + beta2
for i in range(DIM):
g4DD[i+1][0] = g4DD[0][i+1] = betaD[i]
for i in range(DIM):
for j in range(DIM):
g4DD[i+1][j+1] = gammaDD[i][j]
###Output
_____no_output_____
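###Markdown
As a minimal sanity check of the construction above (illustrative only; variable names are suffixed with `_flat` so the symbolic `g4DD`, `gammaDD`, etc. used by the following cells are untouched), setting $\alpha=1$, $\beta^i=0$, $\gamma_{ij}=\delta_{ij}$ should reduce $g_{\mu\nu}$ to the Minkowski metric $\mathrm{diag}(-1,1,1,1)$:
###Code
import sympy as sp
# Flat-space limit of the 4-metric formula (Eq. 4.47 in Gourgoulhon):
alpha_flat = sp.Integer(1)
betaU_flat = [sp.Integer(0)]*3
gammaDD_flat = sp.eye(3)
betaD_flat = [sum(gammaDD_flat[i, j]*betaU_flat[j] for j in range(3)) for i in range(3)]
beta2_flat = sum(betaU_flat[i]*betaD_flat[i] for i in range(3))
g4DD_flat = sp.zeros(4, 4)
g4DD_flat[0, 0] = -alpha_flat**2 + beta2_flat
for i in range(3):
    g4DD_flat[i+1, 0] = g4DD_flat[0, i+1] = betaD_flat[i]
    for j in range(3):
        g4DD_flat[i+1, j+1] = gammaDD_flat[i, j]
print(g4DD_flat == sp.diag(-1, 1, 1, 1))  # expect: True
###Output
_____no_output_____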
###Markdown
Now let's construct $\gamma_{\mu\nu}$=`gamma4DD[mu][nu]` via $\gamma_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu$:
###Code
n4D = ixp.zerorank1(DIM=4)
n4D[0] = -alpha
gamma4DD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
gamma4DD[mu][nu] = g4DD[mu][nu] + n4D[mu]*n4D[nu]
###Output
_____no_output_____
###Markdown
We now have all we need to construct the BSSN source terms in the current basis (for TOV, the Spherical basis):\begin{array}\ S_{ij} &= \gamma_{i \mu} \gamma_{j \nu} T^{\mu \nu} \\S_{i} &= -\gamma_{i\mu} n_\nu T^{\mu\nu} \\S &= \gamma^{ij} S_{ij} \\\rho &= n_\mu n_\nu T^{\mu\nu},\end{array}
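A useful consistency check on these expressions for the TOV case: since $\beta^i=0$, $n_\mu = (-\alpha,0,0,0)$ and therefore $\rho = n_\mu n_\nu T^{\mu\nu} = \alpha^2 T^{tt}$; with $T^{tt} = e^{-\nu}\rho_{\rm TOV}$ and $\alpha^2 = e^{\nu}$, the BSSN source $\rho$ reduces to the interpolated TOV mass-energy density, as it should.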
###Code
T4UU = ixp.declarerank2("T4UU", "sym01",DIM=4)
SDD = ixp.zerorank2()
SD = ixp.zerorank1()
S = sp.sympify(0)
rho = sp.sympify(0)
for i in range(DIM):
for j in range(DIM):
for mu in range(4):
for nu in range(4):
SDD[i][j] += gamma4DD[i+1][mu]*gamma4DD[j+1][nu] * T4UU[mu][nu]
for i in range(DIM):
for mu in range(4):
for nu in range(4):
SD[i] += -gamma4DD[i+1][mu]*n4D[nu] * T4UU[mu][nu]
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
for i in range(DIM):
for j in range(DIM):
S += gammaUU[i][j]*SDD[i][j]
for mu in range(4):
for nu in range(4):
rho += n4D[mu]*n4D[nu] * T4UU[mu][nu]
###Output
_____no_output_____
###Markdown
Step 2.c: Jacobian transformation on the ADM/BSSN source terms \[Back to [top](toc)\]$$\label{jacobian}$$The following discussion holds for either Spherical or Cartesian input data, so for simplicity let's just assume the data are given in Spherical coordinates.All ADM tensors and vectors are in the Spherical coordinate basis $x^i_{\rm Sph} = (r,\theta,\phi)$, but we need them in the curvilinear coordinate basis $x^i_{\rm rfm}= ({\rm xx0},{\rm xx1},{\rm xx2})$ set by the "reference_metric::CoordSystem" variable. Empirically speaking, it is far easier to write $(x({\rm xx0},{\rm xx1},{\rm xx2}),y({\rm xx0},{\rm xx1},{\rm xx2}),z({\rm xx0},{\rm xx1},{\rm xx2}))$ than the inverse, so we will compute the Jacobian matrix$${\rm Jac\_dUSph\_dDrfmUD[i][j]} = \frac{\partial x^i_{\rm Sph}}{\partial x^j_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[i][j]} = \frac{\partial x^i_{\rm rfm}}{\partial x^j_{\rm Sph}},$$using NRPy+'s ${\rm generic\_matrix\_inverter3x3()}$ function. In terms of these, the transformation of ADM/BSSN source terms from Spherical to "reference_metric::CoordSystem" coordinates may be written:\begin{align}S^{\rm rfm}_{ij} &= \frac{\partial x^\ell_{\rm Sph}}{\partial x^i_{\rm rfm}}\frac{\partial x^m_{\rm Sph}}{\partial x^j_{\rm rfm}} S^{\rm Sph}_{\ell m}\\S^{\rm rfm}_{i} &= \frac{\partial x^\ell_{\rm Sph}}{\partial x^i_{\rm rfm}}S^{\rm Sph}_{\ell}\end{align}
###Code
# TOV initial data are given in Spherical coordinates.
CoordType_in = "Spherical"
SSphorCartDD = ixp.zerorank2()
SSphorCartD = ixp.zerorank1()
# Copy what we had written above, which was in Spherical coordinates, to the new tensors SSphorCartDD / SSphorCartD:
for i in range(3):
SSphorCartD[i] = SD[i]
for j in range(3):
SSphorCartDD[i][j] = SDD[i][j]
# Zero out the original tensors; we're going to store the result to SD and SDD:
SDD = ixp.zerorank2()
SD = ixp.zerorank1()
# Make sure that rfm.reference_metric() has been called.
# We'll need the variables it defines throughout this module.
if rfm.have_already_called_reference_metric_function == False:
print("Error. Called Convert_Spherical_ADM_to_BSSN_curvilinear() without")
print(" first setting up reference metric, by calling rfm.reference_metric().")
exit(1)
r_th_ph_or_Cart_xyz_oID_xx = []
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac_dUSphorCart_dDrfmUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
Jac_dUSphorCart_dDrfmUD[i][j] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter3x3(Jac_dUSphorCart_dDrfmUD)
for i in range(DIM):
for j in range(DIM):
SD[i] += Jac_dUSphorCart_dDrfmUD[j][i] * SSphorCartD[j]
for k in range(DIM):
for l in range(DIM):
SDD[i][j] += Jac_dUSphorCart_dDrfmUD[k][i]*Jac_dUSphorCart_dDrfmUD[l][j] * SSphorCartDD[k][l]
###Output
_____no_output_____
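###Markdown
An illustrative aside (safe to skip): with the grid's `CoordSystem` set to plain `Spherical` in Step 1, `rfm.xxSph` is simply `(xx0,xx1,xx2)`, so the Jacobian computed above should reduce to the identity matrix and the source terms are unchanged by this transformation; the machinery only becomes nontrivial for choices such as `SinhSpherical`. A one-line check (reusing the `rfm` module already set up above):
###Code
import sympy as sp
# Expect the 3x3 identity matrix, since xxSph = (xx0,xx1,xx2) for CoordSystem="Spherical":
print(sp.Matrix([[sp.diff(rfm.xxSph[i], rfm.xx[j]) for j in range(3)] for i in range(3)]))
###Output
_____no_output_____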
###Markdown
Step 2.d: Rescale tensorial quantities \[Back to [top](toc)\]$$\label{tensor}$$We rescale tensorial quantities according to the prescription described in the [BSSN in curvilinear coordinates tutorial module](Tutorial-BSSNCurvilinear.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):Since $\text{ReD[i]}=1/\text{ReU[i]}$, we have\begin{align}s_{ij} &= S_{ij} /\text{ReDD[i][j]}\\s_{i} &= S_i \text{ReU[i]}\end{align}
###Code
# Finally rescale the tensorial quantities:
sD = ixp.zerorank1()
sDD = ixp.zerorank2()
for i in range(DIM):
sD[i] = SD[i] * rfm.ReU[i] # ReD[i] = 1/ReU[i]
for j in range(DIM):
sDD[i][j] = SDD[i][j] / rfm.ReDD[i][j]
###Output
_____no_output_____
###Markdown
Next we use NRPy+ to write a C function that reads in $T^{\mu\nu}$ in the given (Spherical) basis and outputs the source terms $\{S_{ij},S_{i},S,\rho\}$ in the (xx0,xx1,xx2) basis:
###Code
with open("BSSN/ID_TOV_BSSN_Source_Terms.h", "w") as file:
file.write("""void ID_TOV_BSSN_Source_Terms(
REAL xx0xx1xx2[3],
const REAL gammaDD00,const REAL gammaDD01,const REAL gammaDD02,
/**/ const REAL gammaDD11,const REAL gammaDD12,
/**/ const REAL gammaDD22,
const REAL betaU0,const REAL betaU1,const REAL betaU2,
const REAL alpha,
const REAL T4UU00,const REAL T4UU01,const REAL T4UU02,const REAL T4UU03,
/**/ const REAL T4UU11,const REAL T4UU12,const REAL T4UU13,
/**/ const REAL T4UU22,const REAL T4UU23,
/**/ const REAL T4UU33,
REAL *sDD00,REAL *sDD01,REAL *sDD02,
/**/ REAL *sDD11,REAL *sDD12,
/**/ REAL *sDD22,
REAL *sD0, REAL *sD1, REAL *sD2,
REAL *S, REAL *rho) {
const REAL xx0 = xx0xx1xx2[0];
const REAL xx1 = xx0xx1xx2[1];
const REAL xx2 = xx0xx1xx2[2];\n""")
outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False"
outputC([sDD[0][0],sDD[0][1],sDD[0][2],sDD[1][1],sDD[1][2],sDD[2][2],
sD[0],sD[1],sD[2], S, rho],
["*sDD00","*sDD01","*sDD02","*sDD11","*sDD12","*sDD22",
"*sD0","*sD1","*sD2","*S","*rho"], "BSSN/ID_TOV_BSSN_Source_Terms.h",outCparams)
with open("BSSN/ID_TOV_BSSN_Source_Terms.h", "a") as file:
file.write("}\n")
###Output
Appended to file "BSSN/ID_TOV_BSSN_Source_Terms.h"
###Markdown
Step 3: Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$We convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates [as documented in the corresponding tutorial module](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb)
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities")
###Output
Appended to file "BSSN/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
Appended to file "BSSN/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial module](Tutorial-BSSN_constraints.ipynb)
###Code
import BSSN.BSSN_constraints as bssncon
bssncon.output_C__Hamiltonian_h(add_T4UUmunu_source_terms=True)
###Output
initialize_param() minor warning: Did nothing; already initialized parameter reference_metric::M_PI
initialize_param() minor warning: Did nothing; already initialized parameter reference_metric::RMAX
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Finished in 14.390434265136719 seconds.
Output C implementation of Hamiltonian constraint to BSSN/Hamiltonian.h
###Markdown
Step 4.b: Apply singular, curvilinear coordinate boundary conditions \[Back to [top](toc)\]$$\label{apply_bcs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial module](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb).
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions()
###Output
Wrote to file "CurviBoundaryConditions/gridfunction_defines.h"
Wrote to file "CurviBoundaryConditions/set_parity_conditions.h"
Wrote to file "CurviBoundaryConditions/xxCart.h"
Wrote to file "CurviBoundaryConditions/xxminmax.h"
Wrote to file "CurviBoundaryConditions/Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial module](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb).Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
import BSSN.Enforce_Detgammabar_Constraint as EGC
EGC.output_Enforce_Detgammabar_Constraint_Ccode()
###Output
initialize_param() minor warning: Did nothing; already initialized parameter reference_metric::M_PI
initialize_param() minor warning: Did nothing; already initialized parameter reference_metric::RMAX
Output C implementation of det(gammabar) constraint to file BSSN/enforce_detgammabar_constraint.h
###Markdown
Step 5: $\rm{TOV\_Playground.c}$: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
with open("BSSN/NGHOSTS.h", "w") as file:
file.write("// Part P0: Set the number of ghost zones, from NRPy+'s FD_CENTDERIVS_ORDER\n")
# Upwinding in BSSN requires that NGHOSTS = FD_CENTDERIVS_ORDER/2 + 1 <- Notice the +1.
file.write("#define NGHOSTS "+str(int(par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")/2)+1)+"\n")
%%writefile BSSN/TOV_Playground.c
// Part P1: Import needed header files
#include "NGHOSTS.h" // A NRPy+-generated file, which is set based on FD_CENTDERIVS_ORDER.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
// Part P2: Add needed #define's to set data type, the IDX4() macro, and the gridfunctions
// Part P2a: set REAL=double, so that all floating point numbers are stored to at least ~16 significant digits.
#define REAL double
// Step P3: Set free parameters for the numerical grid
const REAL RMAX = 3.0;
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P4b: Declare the IDX4(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS[0] elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1] in memory, etc.
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
// Part P4c: Set #define's for BSSN gridfunctions. C code generated above
#include "../CurviBoundaryConditions/gridfunction_defines.h"
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
// Step P5: Function for converting uniform grid coord
// (xx[0][i0],xx[1][i1],xx[2][i2]) to
// corresponding Cartesian coordinate.
void xxCart(REAL *xx[3],const int i0,const int i1,const int i2, REAL xCart[3]) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
#include "xxCart.h"
}
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "../CurviBoundaryConditions/curvilinear_parity_and_outer_boundary_conditions.h"
// Step P7: Function for enforcing the gammabar=gammahat constraint:
#include "enforce_detgammabar_constraint.h"
// Part P8: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_TOV_BSSN_Source_Terms.h"
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Part P9: Declare function for computing the Hamiltonian
// constraint violation, which should converge to
// zero with increasing numerical resolution.
void Hamiltonian_constraint(const int Nxx[3],const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3],
REAL *in_gfs, REAL *aux_gfs) {
#include "Hamiltonian.h"
}
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up TOV initial data and the corresponding BSSN source terms
// Step 2: Evaluate the Hamiltonian constraint violation, which should converge to
//         zero with increasing numerical resolution.
// Step 3: Output the source term S and the Hamiltonian constraint violation along the equator.
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
// Step 0a: Read command-line input, error out if nonconformant
if(argc != 4 || atoi(argv[1]) < NGHOSTS) {
fprintf(stderr,"Error: Expected three command-line arguments: ./TOV_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
if(atoi(argv[1])%2 != 0 || atoi(argv[2])%2 != 0 || atoi(argv[3])%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
const int Nxx_plus_2NGHOSTS[3] = { Nxx[0]+2*NGHOSTS, Nxx[1]+2*NGHOSTS, Nxx[2]+2*NGHOSTS };
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2];
#include "xxminmax.h"
/* TOV INPUT ROUTINE */
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 0c: Allocate memory for gridfunctions
REAL *evol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *aux_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0d: Set up space and time coordinates
// Step 0d.i: Set \Delta x^i on uniform grids.
REAL dxx[3];
for(int i=0;i<3;i++) dxx[i] = (xxmax[i] - xxmin[i]) / ((REAL)Nxx[i]);
// Step 0d.ii: Set up uniform coordinate grids
REAL *xx[3];
for(int i=0;i<3;i++) {
xx[i] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS[i]);
for(int j=0;j<Nxx_plus_2NGHOSTS[i];j++) {
xx[i][j] = xxmin[i] + ((REAL)(j-NGHOSTS) + (1.0/2.0))*dxx[i]; // Cell-centered grid.
}
}
// Step 0e: Find ghostzone mappings and parities:
gz_map *bc_gz_map = (gz_map *)malloc(sizeof(gz_map)*Nxx_plus_2NGHOSTS_tot);
parity_condition *bc_parity_conditions = (parity_condition *)malloc(sizeof(parity_condition)*Nxx_plus_2NGHOSTS_tot);
set_up_bc_gz_map_and_parity_conditions(Nxx_plus_2NGHOSTS,xx,dxx,xxmin,xxmax, bc_gz_map, bc_parity_conditions);
// Step 1: Set up initial data to an exact solution at time=0:
ID_BSSN__ALL_BUT_LAMBDAs(Nxx_plus_2NGHOSTS,xx,TOV_in, evol_gfs);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
ID_BSSN_lambdas(Nxx, Nxx_plus_2NGHOSTS, xx,dxx, evol_gfs);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
{
#pragma omp parallel for
for(int i2=0; i2<Nxx_plus_2NGHOSTS[2]; i2++) {
const REAL xx2 = xx[2][i2];
for(int i1=0; i1<Nxx_plus_2NGHOSTS[1]; i1++) {
const REAL xx1 = xx[1][i1];
for(int i0=0; i0<Nxx_plus_2NGHOSTS[0]; i0++) {
const REAL xx0 = xx[0][i0];
REAL xx0xx1xx2[3] = {xx0,xx1,xx2};
REAL gammaDD00,gammaDD01,gammaDD02,gammaDD11,gammaDD12,gammaDD22;
REAL KDD00,KDD01,KDD02,KDD11,KDD12,KDD22;
REAL alpha,betaU0,betaU1,betaU2;
REAL BU0,BU1,BU2;
REAL xyz_or_rthph[3] = {xx0,xx1,xx2}; //FIXME
// FIRST INTERPOLATE TO SET THE ADM AND TMUNU QUANTITIES
ID_TOV_ADM_quantities(xyz_or_rthph, TOV_in,
&gammaDD00,&gammaDD01,&gammaDD02,&gammaDD11,&gammaDD12,&gammaDD22,
&KDD00,&KDD01,&KDD02,&KDD11,&KDD12,&KDD22,
&alpha, &betaU0,&betaU1,&betaU2, &BU0,&BU1,&BU2);
REAL T4UU00,T4UU01,T4UU02,T4UU03,
/**/ T4UU11,T4UU12,T4UU13,
/**/ T4UU22,T4UU23,
/**/ T4UU33;
ID_TOV_TUPMUNU(xyz_or_rthph, TOV_in,
&T4UU00,&T4UU01,&T4UU02,&T4UU03,
/**/ &T4UU11,&T4UU12,&T4UU13,
/**/ &T4UU22,&T4UU23,
/**/ &T4UU33);
const int idx = IDX3(i0,i1,i2);
// THEN EVALUATE THE BSSN SOURCE TERMS
ID_TOV_BSSN_Source_Terms(xx0xx1xx2,
gammaDD00, gammaDD01, gammaDD02,
/**/ gammaDD11, gammaDD12,
/**/ gammaDD22,
betaU0,betaU1,betaU2, alpha,
T4UU00, T4UU01, T4UU02, T4UU03,
/**/ T4UU11, T4UU12, T4UU13,
/**/ T4UU22, T4UU23,
/**/ T4UU33,
&aux_gfs[IDX4pt(SDD00GF,idx)], &aux_gfs[IDX4pt(SDD01GF,idx)], &aux_gfs[IDX4pt(SDD02GF,idx)],
/**/ &aux_gfs[IDX4pt(SDD11GF,idx)], &aux_gfs[IDX4pt(SDD12GF,idx)],
/**/ &aux_gfs[IDX4pt(SDD22GF,idx)],
&aux_gfs[IDX4pt(SD0GF,idx)], &aux_gfs[IDX4pt(SD1GF,idx)], &aux_gfs[IDX4pt(SD2GF,idx)],
&aux_gfs[IDX4pt(SGF,idx)], &aux_gfs[IDX4pt(RHOGF,idx)]);
//aux_gfs[IDX4pt(SGF,idx)] = T4UU11*gammaDD00;
}
}
}
}
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
// apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
// enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
// Step 2: Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, evol_gfs, aux_gfs);
/* Step 3: Output relative error between numerical and exact solution, */
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS[1]/2;
const int i2mid=Nxx_plus_2NGHOSTS[2]/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, i1mid,i1mid+1, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
int idx = IDX3(i0,i1,i2);
printf("%e %e %e %e\n",xCart[0],xCart[1], aux_gfs[IDX4pt(SGF,idx)],log10(fabs(aux_gfs[IDX4pt(HGF,idx)])));
}
/* Step 4: Free all allocated memory */
free(bc_gz_map);
free(bc_parity_conditions);
free(rbar_arr);
free(rho_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(aux_gfs);
free(evol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
!rm -f TOV_Playground out96.txt
!gcc -Ofast -march=native -ftree-parallelize-loops=2 -fopenmp BSSN/TOV_Playground.c -o TOV_Playground -lm
!taskset -c 0 ./TOV_Playground 96 96 96 > out96.txt
# Alternate Code
# !taskset -c 0,1 ./TOV_Playground 96 96 96 > out96.txt
# !gcc -fopenmp -O BSSN/TOV_Playground.c -o TOV_Playground -lm
# !gcc -Ofast -march=native -ftree-parallelize-loops=2 -fopenmp BSSN/TOV_Playground.c -o TOV_Playground -lm
# Windows Code
# import os
# !gcc -Ofast -march=native -fopenmp BSSN/TOV_Playground.c -o TOV_Playground -lm
# N_physical_cores = 4
# for resolution in [96, 48]:
# script = ""
# check_for_taskset = !which taskset >/dev/null && echo $?
# if check_for_taskset == ['0']:
# script += "taskset -c 0"
# for i in range(N_physical_cores-1):
# script += ","+str(i+1)
# script += " "
# exec_string = os.path.join(".", "TOV")
# script += exec_string + "_Playground "+str(str(resolution)+" ")*3+" > out"+str(resolution)+".txt"
# print("Executing `"+script+"`...")
# os.system(script)
# print("Finished this code cell.")
###Output
_____no_output_____
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 2.0
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("TOV Polytrope Initial Data: log10(Density)")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 48$ grid. Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 96$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
###Code
!rm -f out48.txt
!taskset -c 0 ./TOV_Playground 48 48 48 > out48.txt
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
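            # Shift the N=48 curve down by log10((48/96)^4) ~ -1.20: with clean 4th-order convergence, the shifted curve should lie on top of the N=96 curve.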
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
_____no_output_____
###Markdown
Step 8: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb to latex
[NbConvertApp] Support files will be in Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files/
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Writing 137736 bytes to Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne & Phil Chang Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
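For orientation, the following minimal, self-contained sketch (an illustration only: it is *not* the `TOV.TOV_Solver` implementation, and every name in it is our own) shows how the TOV equations for a single polytrope could be integrated directly with `scipy.integrate.solve_ivp`. With the same $K=1$, $\Gamma=2$, and central baryon density used below, it should recover a mass and Schwarzschild radius close to the values printed by the NRPy+ solver in the next cell, which additionally computes the isotropic radius $\bar{r}$ and writes the densely sampled solution to file.

    import numpy as np
    from scipy.integrate import solve_ivp

    K, Gamma = 1.0, 2.0                     # single-polytrope EOS: P = K*rho_baryon^Gamma
    rho_baryon_central = 0.129285

    def tov_rhs(r, y):                      # y = [P, M], with G = c = 1
        P, M = y
        rho_baryon = (max(P, 0.0)/K)**(1.0/Gamma)
        rho = rho_baryon + P/(Gamma - 1.0)  # total energy density for a Gamma-law EOS
        dP_dr = -(rho + P)*(M + 4.0*np.pi*r**3*P)/(r*(r - 2.0*M))
        dM_dr = 4.0*np.pi*r**2*rho
        return [dP_dr, dM_dr]

    r_start = 1e-6                          # start slightly off r=0 to avoid the coordinate singularity
    P_c = K*rho_baryon_central**Gamma
    rho_c = rho_baryon_central + P_c/(Gamma - 1.0)
    y_start = [P_c, 4.0/3.0*np.pi*r_start**3*rho_c]

    def surface(r, y): return y[0] - 1e-12*P_c   # stop once the pressure has dropped to ~zero
    surface.terminal, surface.direction = True, -1

    sol = solve_ivp(tov_rhs, [r_start, 100.0], y_start, events=surface, rtol=1e-10, atol=1e-14)
    print("Sketch: R_Schw ~ %.4f, M ~ %.4f" % (sol.t[-1], sol.y[1][-1]))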
###Code
############################
# Single polytrope example #
############################
import TOV.Polytropic_EOSs as ppeos
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=0.129285,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
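# Optional symbolic sanity check (an aside; not required by, or part of, the NRPy+ pipeline):
# raising the mixed stress-energy tensor T^mu_nu = diag(-rho, P, P, P) with the inverse of the
# diagonal 4-metric g_{mu nu} = diag(-expnu, gammaSphDD) should reproduce T4SphUU above.
g4DD = sp.diag(-expnu, gammaSphDD[0][0], gammaSphDD[1][1], gammaSphDD[2][2])
T4mixedUD = sp.diag(-rho, P, P, P)   # T^mu_nu for a perfect fluid at rest
T4UU_check = (g4DD.inv() * T4mixedUD).applyfunc(sp.simplify)
for mu in range(4):
    assert sp.simplify(T4UU_check[mu, mu] - T4SphUU[mu][mu]) == 0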
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
AuxEvol gridfunction "T4UU00" has parity type 0.
AuxEvol gridfunction "T4UU01" has parity type 1.
AuxEvol gridfunction "T4UU02" has parity type 2.
AuxEvol gridfunction "T4UU03" has parity type 3.
AuxEvol gridfunction "T4UU11" has parity type 4.
AuxEvol gridfunction "T4UU12" has parity type 5.
AuxEvol gridfunction "T4UU13" has parity type 6.
AuxEvol gridfunction "T4UU22" has parity type 7.
AuxEvol gridfunction "T4UU23" has parity type 8.
AuxEvol gridfunction "T4UU33" has parity type 9.
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
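Concretely, the enforcement amounts to a pointwise algebraic rescaling of the conformal 3-metric (cf. Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)),$$\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3}\bar{\gamma}_{ij}, \qquad \bar{\gamma}\equiv\det\bar{\gamma}_{ij},\quad \hat{\gamma}\equiv\det\hat{\gamma}_{ij},$$so that the determinant condition holds exactly at every grid point; the next cell generates the C routine that applies this rescaling.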
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
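// Example: gfs[IDX4S(gf,i0,i1,i2)] and gfs[IDX4ptS(gf,IDX3S(i0,i1,i2))] address the same memory
// location, namely gridfunction gf evaluated at grid point (i0,i1,i2).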
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
Finished executing in 2.015650510787964 seconds.
Finished compilation.
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 96 96 2`...
Finished executing in 0.2118375301361084 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: log10( max(1e-6,Energy Density) )")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96x96x2 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
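To make the expected offset concrete before running the lower-resolution case: the vertical shift applied below to the $N_r=48$ curve is simply $\log_{10}\left[(48/96)^4\right]\approx -1.2$. The following one-liner (a minimal sketch, not part of the original pipeline; it assumes only NumPy) computes it explicitly:
###Code
import numpy as np
# Fourth-order convergence => halving the resolution multiplies the Hamiltonian
# constraint violation by (96/48)^4 = 16; equivalently, shifting the Nr=48 curve
# down by |log10((48/96)^4)| ~ 1.204 should lay it on top of the Nr=96 curve
# (away from the stellar surface).
expected_log10_shift = np.log10((48./96.)**4)  # = 4*log10(1/2) ~ -1.204
###Output
_____no_output_____
###Markdown
With that expectation in hand, we rerun the playground at half the resolution and overlay the shifted curve: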
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
    points48[i][0] = x48[i]
    points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
    for j in range(100):
        griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
        griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
        if j==49:
            gridx_1darray_yeq0[i] = grid_x[i][j]
            grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
            grid96_1darray_yeq0[i] = grid96[i][j]
        count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./TOV_Playground 48 48 2`...
Finished executing in 0.21308159828186035 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne & Phil Chang Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$. NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to set up initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).1. Evaluate the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence of Hamiltonian constraint violation to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. 
[Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. [Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("TOVID_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial finite difference derivatives;
# and the core data type.
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
# Step 3: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 4: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
# Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc.
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
###Output
_____no_output_____
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
_____no_output_____
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
############################
# Single polytrope example #
############################
import TOV.Polytropic_EOSs as ppeos
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=0.129285,
return_M_RSchw_and_Riso = True,
verbose = True)
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 2.0 * R_iso_TOV
###Output
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
###Markdown
Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}
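As a quick symbolic cross-check of the index raising above (a minimal standalone sketch, independent of the NRPy+ machinery; it assumes only SymPy, and the symbol names are simply chosen to mirror those registered in the next cell), one can verify that $T^\mu{}_\nu={\rm diag}(-\rho,P,P,P)$ contracted with the diagonal inverse 4-metric reproduces the diagonal $T^{\mu\nu}$ quoted here:
###Code
import sympy as sp
# Symbols mirroring the quantities used below (assumed positive, to aid simplification):
rbar, theta, rho, P, expnu, exp4phi = sp.symbols("rbar theta rho P expnu exp4phi", positive=True)
# Diagonal 4-metric in isotropic spherical coordinates (-+++ signature, beta^i = 0):
g4DD = sp.diag(-expnu, exp4phi, exp4phi*rbar**2, exp4phi*rbar**2*sp.sin(theta)**2)
g4UU = g4DD.inv()
# Mixed stress-energy tensor T^mu_nu = diag(-rho, P, P, P):
T4UD = sp.diag(-rho, P, P, P)
# Raise the lower index: T^{mu nu} = T^mu_sigma g^{sigma nu} (both matrices are diagonal):
T4UU = sp.simplify(T4UD*g4UU)
# T4UU = diag( rho/expnu, P/exp4phi, P/(exp4phi*rbar**2), P/(exp4phi*rbar**2*sin(theta)**2) )
###Output
_____no_output_____
###Markdown
We now register these symbols as NRPy+ C parameters, construct $\gamma_{ij}$ and $T^{\mu\nu}$ symbolically, and output the `ID_TOV_ADM_quantities()` C function: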
###Code
thismodule = "TOVID"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
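# ADM lapse alpha = e^{nu/2} and the diagonal spatial metric gamma_{ij} in the
# spherical, isotropic basis, exactly as read off in the markdown cell above: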
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
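# Diagonal stress-energy tensor: T^{tt} = rho/e^{nu} and T^{ii} = P/gamma_{ii} (no sum):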
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
    for j in range(i,3):
        expr_list.append(gammaSphDD[i][j])
        name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
###Output
Output C function ID_TOV_ADM_quantities() to file TOVID_Ccodes/ID_TOV_ADM_quantities.h
###Markdown
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).$${\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter4x4()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:$$T^{\mu\nu}_{\rm rfm} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}$$
###Code
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
    r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
    r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
    print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
    exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
    for j in range(DIM):
        Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
    for nu in range(4):
        for delta in range(4):
            for sigma in range(4):
                IDT4UU[mu][nu] += \
                    Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
    for nu in range(mu,4):
        lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
###Output
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file TOVID_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
###Markdown
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
###Output
Output C function ID_BSSN_lambdas() to file TOVID_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file TOVID_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_constraints.ipynb)
###Code
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
import BSSN.BSSN_stress_energy_source_terms as Bsest
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
###Output
Output C function Hamiltonian_constraint() to file TOVID_Ccodes/Hamiltonian_constraint.h
###Markdown
Step 4.b: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "TOVID_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
AuxEvol gridfunction "T4UU00" has parity type 0.
AuxEvol gridfunction "T4UU01" has parity type 1.
AuxEvol gridfunction "T4UU02" has parity type 2.
AuxEvol gridfunction "T4UU03" has parity type 3.
AuxEvol gridfunction "T4UU11" has parity type 4.
AuxEvol gridfunction "T4UU12" has parity type 5.
AuxEvol gridfunction "T4UU13" has parity type 6.
AuxEvol gridfunction "T4UU22" has parity type 7.
AuxEvol gridfunction "T4UU23" has parity type 8.
AuxEvol gridfunction "T4UU33" has parity type 9.
Wrote to file "TOVID_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,
exprs=enforce_detg_constraint_symb_expressions)
###Output
Output C function enforce_detgammabar_constraint() to file TOVID_Ccodes/enforce_detgammabar_constraint.h
###Markdown
Step 4.d: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.d.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.d.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER)
with open(os.path.join(Ccodesdir,"TOV_Playground_REAL__NGHOSTS.h"), "w") as file:
    file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
%%writefile $Ccodesdir/TOV_Playground.c
// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "TOV_Playground_REAL__NGHOSTS.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
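// Illustrative example (comment only): with Nxx_plus_2NGHOSTS0 = Nxx_plus_2NGHOSTS1 = 102
// and Nxx_plus_2NGHOSTS2 = 8 (i.e., the 96x96x2 run used in this tutorial, with NGHOSTS=3),
// gridfunction g=2 at point (i,j,k)=(3,4,5) maps to the flat index
// IDX4S(2,3,4,5) = 3 + 102*(4 + 102*(5 + 8*2)) = 218895.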
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
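// Locate the stellar surface: Rbar is the isotropic radius of the last point
// at which rho > 0 before the density drops to zero.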
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
// Free all TOV data arrays read from file above.
free(r_Schw_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
free(exp4phi_arr);
free(rbar_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./TOV_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"TOV_Playground.c"), "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 2", "out96.txt")
###Output
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops TOVID_Ccodes/TOV_Playground.c -o TOV_Playground -lm`...
Finished executing in 2.6155238151550293 seconds.
Finished compilation.
Executing `taskset -c 0,1,2,3 ./TOV_Playground 96 96 2`...
Finished executing in 0.21461200714111328 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopts $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 7.5
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
    points96[i][0] = x96[i]
    points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("Neutron Star: Conformal Factor")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96x96x2 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48\times 48\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96\times 96\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
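As a quick aside (an illustrative sketch only, assuming nothing beyond NumPy), the size of that shift is easy to quantify: the $N_r=48$ curve below is moved down by $\left|\log_{10}\left[(48/96)^4\right]\right|\approx 1.2$ decades.
###Code
import numpy as np
# Expected log10 offset between the Nr=48 and Nr=96 Hamiltonian-constraint curves
# under fourth-order convergence:
expected_log10_shift = np.log10((48./96.)**4)  # ~ -1.204
###Output
_____no_output_____
###Markdown
We now rerun TOV_Playground at half the resolution and check this prediction: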
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 2", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
    points48[i][0] = x48[i]
    points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
    for j in range(100):
        griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
        griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
        if j==49:
            gridx_1darray_yeq0[i] = grid_x[i][j]
            grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
            grid96_1darray_yeq0[i] = grid96[i][j]
        count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
Executing `taskset -c 0,1,2,3 ./TOV_Playground 48 48 2`...
Finished executing in 0.2114863395690918 seconds.
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Setting up Polytropic [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data, in Curvilinear Coordinates Authors: Zach Etienne & Phil Chang Formatting improvements courtesy Brandon Clark This module sets up initial data for a TOV star in *spherical, isotropic coordinates*, using the *Numerical* ADM Spherical to BSSN Curvilinear initial data module (numerical = BSSN $\lambda^i$'s are computed using finite-difference derivatives instead of exact expressions).**Module Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plots](convergence) at bottom). Note that convergence at the surface of the star will be lower order due to the sharp drop to zero in $T^{\mu\nu}$.** NRPy+ Source Code for this module: * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py) [\[**tutorial**\]](Tutorial-ADM_Initial_Data-TOV.ipynb): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates Introduction:Here we use NRPy+ to generate initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation).The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.1. Set gridfunction values to initial data (**This module**).1. Evolve the system forward in time using RK4 time integration. At each RK4 substep, do the following: 1. Evaluate BSSN RHS expressions. 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) 1. Apply constraints on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$1. At the end of each iteration in time, output the Hamiltonian constraint violation.1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This module is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id_tov): Set up ADM initial data for polytropic TOV Star 1. [Step 2.a](tov_interp): Interpolating the TOV data file as needed 1. [Step 2.b](source): Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ 1. [Step 2.c](jacobian): Jacobian transformation on the ADM/BSSN source terms 1. [Step 2.d](tensor): Rescale tensorial quantities1. [Step 3](adm_id_spacetime): Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates1. [Step 4](validate): Validating that the TOV initial data satisfy the Hamiltonian constraint 1. [Step 4.a](ham_const_output): Output the Hamiltonian Constraint 1. 
[Step 4.b](apply_bcs): Apply singular, curvilinear coordinate boundary conditions 1. [Step 4.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ 1. [Step 5](mainc): `TOV_Playground.c`: The Main C Code1. [Step 6](plot): Plotting the single-neutron-star initial data1. [Step 7](convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero1. [Step 8](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# First we import needed core NRPy+ modules
from outputC import *
import NRPy_param_funcs as par
import grid as gri
import loop as lp
import indexedexp as ixp
import finite_difference as fin
import reference_metric as rfm
thismodule = "TOV_ID_setup"
# Set spatial dimension (must be 3 for BSSN)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Then we set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf","phi")
#################
# Next output C headers related to the numerical grids we just set up:
#################
# First output the coordinate bounds xxmin[] and xxmax[]:
with open("BSSN/xxminmax.h", "w") as file:
file.write("const REAL xxmin[3] = {"+str(rfm.xxmin[0])+","+str(rfm.xxmin[1])+","+str(rfm.xxmin[2])+"};\n")
file.write("const REAL xxmax[3] = {"+str(rfm.xxmax[0])+","+str(rfm.xxmax[1])+","+str(rfm.xxmax[2])+"};\n")
# Generic coordinate NRPy+ file output, Part 2: output the conversion from (x0,x1,x2) to Cartesian (x,y,z)
outputC([rfm.xxCart[0],rfm.xxCart[1],rfm.xxCart[2]],["xCart[0]","xCart[1]","xCart[2]"],
"BSSN/xxCart.h")
# Register the Hamiltonian as a gridfunction, to be used later.
H = gri.register_gridfunctions("AUX","H")
###Output
Wrote to file "BSSN/xxCart.h"
###Markdown
Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](toc)\]$$\label{adm_id_tov}$$As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
###Code
!pip install scipy > /dev/null
###Output
DEPRECATION: A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
WARNING: You are using pip version 19.2.2, however version 19.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
###Code
import TOV.TOV_Solver as TOV
TOV.TOV_Solver()
###Output
Just generated a TOV star with R_Schw = 0.956568142523 , M = 0.14050303285288188 , M/R_Schw = 0.1468824086931645 .
###Markdown
Step 2.a: Interpolating the TOV data file as needed \[Back to [top](toc)\]$$\label{tov_interp}$$The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).**METRIC DATA IN TERMS OF ADM QUANTITIES**The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):$$ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.$$In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system and units we'd prefer to use), the ($-+++$ form) line element is written:$$ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),$$where $\phi$ here is the *conformal factor*.The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:$$ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,$$from which we can immediately read off the ADM quantities:\begin{align}\alpha &= e^{\nu(\bar{r})/2} \\\beta^k &= 0 \\\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\\end{align}**STRESS-ENERGY TENSOR $T^{\mu\nu}$**We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:\begin{align}T^t_t &= -\rho \\T^i_j &= P \delta^i_j \\\text{All other components of }T^\mu_\nu &= 0.\end{align}Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix} =\begin{pmatrix} -\frac{1}{\alpha^2} & 0 \\0 & \gamma^{ij}\end{pmatrix},$$and since the 3-metric is diagonal we get\begin{align}\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.\end{align}Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$\begin{align}T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P \end{align}As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired (xx0,xx1,xx2) basis1. 
Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
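As an optional, standalone cross-check of the raised stress-energy components above, the following SymPy sketch reproduces the diagonal $T^{\mu\nu}$ from $T^\mu_{\ \nu}={\rm diag}(-\rho,P,P,P)$ and the diagonal isotropic metric. It is illustrative only (it is not part of the generated C code), and the symbols `rbar, th, rho, P, nu, phi` are simply stand-ins for the quantities appearing in the line element above:
```python
import sympy as sp

rbar, th, rho, P, nu, phi = sp.symbols('rbar th rho P nu phi', positive=True)
exp4phi = sp.exp(4*phi)

# Diagonal 4-metric in isotropic coordinates (beta^i = 0, alpha^2 = e^nu):
g4DD = sp.diag(-sp.exp(nu), exp4phi, exp4phi*rbar**2, exp4phi*rbar**2*sp.sin(th)**2)
g4UU = g4DD.inv()

# Mixed stress-energy tensor T^mu_nu = diag(-rho, P, P, P); raise the second index:
T4UD = sp.diag(-rho, P, P, P)
T4UU = (T4UD*g4UU).applyfunc(sp.simplify)

# Expect diag(rho*e^{-nu}, P*e^{-4phi}, P*e^{-4phi}/rbar^2, P*e^{-4phi}/(rbar^2*sin(th)**2))
print(T4UU)
```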
###Code
with open("BSSN/ID_TOV_ADM_quantities.h", "w") as file:
file.write("""
// This function takes as input either (x,y,z) or (r,th,ph) and outputs
// all ADM quantities in the Cartesian or Spherical basis, respectively.
void ID_TOV_ADM_quantities(
const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2) {
const REAL r = xyz_or_rthph[0];
const REAL th = xyz_or_rthph[1];
const REAL ph = xyz_or_rthph[2];
REAL rho,P,M,expnu,exp4phi;
TOV_interpolate_1D(r,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&P,&M,&expnu,&exp4phi);
*alpha = sqrt(expnu);
// \gamma_{rbar rbar} = exp(4 phi)
*gammaDD00 = exp4phi;
// \gamma_{thth} = r^2 * exp(4 phi)
*gammaDD11 = r*r * exp4phi;
// \gamma_{phph} = r^2 sin^2(th) * exp(4 phi)
*gammaDD22 = r*r*sin(th)*sin(th) * exp4phi;
// All other quantities ARE ZERO:
*gammaDD01 = 0.0;
*gammaDD02 = 0.0;
*gammaDD12 = 0.0;
*KDD00 = 0.0;
*KDD01 = 0.0;
*KDD02 = 0.0;
*KDD11 = 0.0;
*KDD12 = 0.0;
*KDD22 = 0.0;
*betaU0 = 0.0;
*betaU1 = 0.0;
*betaU2 = 0.0;
*BU0 = 0.0;
*BU1 = 0.0;
*BU2 = 0.0;
}\n""")
with open("BSSN/ID_TOV_TUPMUNU.h", "w") as file:
file.write("""
// This function takes as input either (x,y,z) or (r,th,ph) and outputs
// the nonzero T^{mu nu} components in the Cartesian or Spherical basis, respectively.
void ID_TOV_TUPMUNU(const REAL xyz_or_rthph[3], const ID_inputs other_inputs,
REAL *T4UU00,REAL *T4UU01,REAL *T4UU02,REAL *T4UU03,
/**/ REAL *T4UU11,REAL *T4UU12,REAL *T4UU13,
/**/ REAL *T4UU22,REAL *T4UU23,
/**/ REAL *T4UU33) {
const REAL r = xyz_or_rthph[0];
const REAL th = xyz_or_rthph[1];
const REAL ph = xyz_or_rthph[2];
REAL rho,P,M,expnu,exp4phi;
TOV_interpolate_1D(r,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&P,&M,&expnu,&exp4phi);
//T^tt = e^(-nu) * rho
*T4UU00 = rho / expnu;
//T^rr = P / exp4phi
*T4UU11 = P / exp4phi;
//T^thth = P / (r^2 * exp4phi)
*T4UU22 = P / (r*r * exp4phi);
//T^phph = P / (r^2 * sin^2(th) * exp4phi)
*T4UU33 = P / (r*r * sin(th)*sin(th) * exp4phi);
// All other components ARE ZERO:
*T4UU01 = 0; *T4UU02 = 0; *T4UU03 = 0;
/**/ *T4UU12 = 0; *T4UU13 = 0;
/**/ *T4UU23 = 0;
}\n""")
###Output
_____no_output_____
###Markdown
Step 2.b: Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$ \[Back to [top](toc)\]$$\label{source}$$Compute source terms $S_{ij}$, $S_{i}$, $S$, and $\rho$, via Eqs. 10 of [Baumgarte, Montero, Cordero-Carrión, and Müller](https://arxiv.org/pdf/1211.6632.pdf):\begin{array}\ S_{ij} &= \gamma_{i \mu} \gamma_{j \nu} T^{\mu \nu} \\S_{i} &= -\gamma_{i\mu} n_\nu T^{\mu\nu} \\S &= \gamma^{ij} S_{ij} \\\rho &= n_\mu n_\nu T^{\mu\nu},\end{array}`ID_TOV_TUPMUNU()` provides numerical values for $T^{\mu\nu}$, but we do not have $\gamma_{\mu \nu}$ or $n_\mu$ directly. So here we will construct the latter quantities.First, B&S Eq. 2.27 defines $\gamma_{\mu \nu}$ as:$$\gamma_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu$$where $$n_\mu = \{-\alpha,0,0,0\}.$$So we will first need to construct the 4-metric based on ADM quantities. This is provided by Eq 4.47 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf):$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$
###Code
gammaDD = ixp.declarerank2("gammaDD", "sym01",DIM=3)
betaU = ixp.declarerank1("betaU",DIM=3)
alpha = sp.symbols("alpha")
# To get \gamma_{\mu \nu} = gammabar4DD[mu][nu], we'll need to construct the 4-metric, using Eq. 2.122 in B&S:
# Eq. 2.121 in B&S
betaD = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
betaD[i] += gammaDD[i][j]*betaU[j]
# Now compute the beta contraction.
beta2 = sp.sympify(0)
for i in range(DIM):
beta2 += betaU[i]*betaD[i]
# Eq. 2.122 in B&S
g4DD = ixp.zerorank2(DIM=4)
g4DD[0][0] = -alpha**2 + beta2
for i in range(DIM):
g4DD[i+1][0] = g4DD[0][i+1] = betaD[i]
for i in range(DIM):
for j in range(DIM):
g4DD[i+1][j+1] = gammaDD[i][j]
###Output
_____no_output_____
###Markdown
Now let's construct $\gamma_{\mu\nu}$=`gamma4DD[mu][nu]` via $\gamma_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu$:
###Code
n4D = ixp.zerorank1(DIM=4)
n4D[0] = -alpha
gamma4DD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
gamma4DD[mu][nu] = g4DD[mu][nu] + n4D[mu]*n4D[nu]
###Output
_____no_output_____
###Markdown
We now have all we need to construct the BSSN source terms in the current basis (for TOV, the Spherical basis):\begin{array}\ S_{ij} &= \gamma_{i \mu} \gamma_{j \nu} T^{\mu \nu} \\S_{i} &= -\gamma_{i\mu} n_\nu T^{\mu\nu} \\S &= \gamma^{ij} S_{ij} \\\rho &= n_\mu n_\nu T^{\mu\nu},\end{array}
###Code
T4UU = ixp.declarerank2("T4UU", "sym01",DIM=4)
SDD = ixp.zerorank2()
SD = ixp.zerorank1()
S = sp.sympify(0)
rho = sp.sympify(0)
for i in range(DIM):
for j in range(DIM):
for mu in range(4):
for nu in range(4):
SDD[i][j] += gamma4DD[i+1][mu]*gamma4DD[j+1][nu] * T4UU[mu][nu]
for i in range(DIM):
for mu in range(4):
for nu in range(4):
SD[i] += -gamma4DD[i+1][mu]*n4D[nu] * T4UU[mu][nu]
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
for i in range(DIM):
for j in range(DIM):
S += gammaUU[i][j]*SDD[i][j]
for mu in range(4):
for nu in range(4):
rho += n4D[mu]*n4D[nu] * T4UU[mu][nu]
###Output
_____no_output_____
###Markdown
Step 2.c: Jacobian transformation on the ADM/BSSN source terms \[Back to [top](toc)\]$$\label{jacobian}$$The following discussion holds for either Spherical or Cartesian input data, so for simplicity let's just assume the data are given in Spherical coordinates.All ADM tensors and vectors are in the Spherical coordinate basis $x^i_{\rm Sph} = (r,\theta,\phi)$, but we need them in the curvilinear coordinate basis $x^i_{\rm rfm}=$`(xx0,xx1,xx2)` set by the `"reference_metric::CoordSystem"` variable. Empirically speaking, it is far easier to write `(x(xx0,xx1,xx2),y(xx0,xx1, xx2),z(xx0,xx1,xx2))` than the inverse, so we will compute the Jacobian matrix$${\rm Jac\_dUSph\_dDrfmUD[i][j]} = \frac{\partial x^i_{\rm Sph}}{\partial x^j_{\rm rfm}},$$via exact differentiation (courtesy SymPy), and the inverse Jacobian$${\rm Jac\_dUrfm\_dDSphUD[i][j]} = \frac{\partial x^i_{\rm rfm}}{\partial x^j_{\rm Sph}},$$using NRPy+'s `generic_matrix_inverter3x3()` function. In terms of these, the transformation of BSSN tensors from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:\begin{align}S^{\rm rfm}_{ij} &= \frac{\partial x^\ell_{\rm Sph}}{\partial x^i_{\rm rfm}}\frac{\partial x^m_{\rm Sph}}{\partial x^j_{\rm rfm}} S^{\rm Sph}_{\ell m}\\S^{\rm rfm}_{i} &= \frac{\partial x^\ell_{\rm Sph}}{\partial x^i_{\rm rfm}}S^{\rm Sph}_{\ell}\end{align}
###Code
# TOV initial data are given in Spherical coordinates.
CoordType_in = "Spherical"
SSphorCartDD = ixp.zerorank2()
SSphorCartD = ixp.zerorank1()
# Copy what we had written above, which was in Spherical coordinates, to the new tensors SSphorCartDD / SSphorCartD:
for i in range(3):
SSphorCartD[i] = SD[i]
for j in range(3):
SSphorCartDD[i][j] = SDD[i][j]
# Zero out the original tensors; we're going to store the result to SD and SDD:
SDD = ixp.zerorank2()
SD = ixp.zerorank1()
# Make sure that rfm.reference_metric() has been called.
# We'll need the variables it defines throughout this module.
if rfm.have_already_called_reference_metric_function == False:
print("Error. Called Convert_Spherical_ADM_to_BSSN_curvilinear() without")
print(" first setting up reference metric, by calling rfm.reference_metric().")
exit(1)
r_th_ph_or_Cart_xyz_oID_xx = []
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac_dUSphorCart_dDrfmUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
Jac_dUSphorCart_dDrfmUD[i][j] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter3x3(Jac_dUSphorCart_dDrfmUD)
for i in range(DIM):
for j in range(DIM):
SD[i] += Jac_dUSphorCart_dDrfmUD[j][i] * SSphorCartD[j]
for k in range(DIM):
for l in range(DIM):
SDD[i][j] += Jac_dUSphorCart_dDrfmUD[k][i]*Jac_dUSphorCart_dDrfmUD[l][j] * SSphorCartDD[k][l]
###Output
_____no_output_____
###Markdown
Step 2.d: Rescale tensorial quantities \[Back to [top](toc)\]$$\label{tensor}$$We rescale tensorial quantities according to the prescription described in the [BSSN in curvilinear coordinates tutorial module](Tutorial-BSSNCurvilinear.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):Since `ReD[i]=1/ReU[i]`, we have\begin{align}s_{ij} &= S_{ij} /\text{ReDD[i][j]}\\s_{i} &= S_i \text{ReU[i]}\end{align}
###Code
# Finally rescale the tensorial quantities:
sD = ixp.zerorank1()
sDD = ixp.zerorank2()
for i in range(DIM):
sD[i] = SD[i] * rfm.ReU[i] # ReD[i] = 1/ReU[i]
for j in range(DIM):
sDD[i][j] = SDD[i][j] / rfm.ReDD[i][j]
###Output
_____no_output_____
###Markdown
Next we use NRPy+ to write a C function that reads in $T^{\mu\nu}$ in the given (Spherical) basis and outputs the source terms $\{S_{ij},S_{i},S,\rho\}$ in the `(xx0,xx1,xx2)` basis:
###Code
with open("BSSN/ID_TOV_BSSN_Source_Terms.h", "w") as file:
file.write("""void ID_TOV_BSSN_Source_Terms(
REAL xx0xx1xx2[3],
const REAL gammaDD00,const REAL gammaDD01,const REAL gammaDD02,
/**/ const REAL gammaDD11,const REAL gammaDD12,
/**/ const REAL gammaDD22,
const REAL betaU0,const REAL betaU1,const REAL betaU2,
const REAL alpha,
const REAL T4UU00,const REAL T4UU01,const REAL T4UU02,const REAL T4UU03,
/**/ const REAL T4UU11,const REAL T4UU12,const REAL T4UU13,
/**/ const REAL T4UU22,const REAL T4UU23,
/**/ const REAL T4UU33,
REAL *sDD00,REAL *sDD01,REAL *sDD02,
/**/ REAL *sDD11,REAL *sDD12,
/**/ REAL *sDD22,
REAL *sD0, REAL *sD1, REAL *sD2,
REAL *S, REAL *rho) {
const REAL xx0 = xx0xx1xx2[0];
const REAL xx1 = xx0xx1xx2[1];
const REAL xx2 = xx0xx1xx2[2];\n""")
outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False"
outputC([sDD[0][0],sDD[0][1],sDD[0][2],sDD[1][1],sDD[1][2],sDD[2][2],
sD[0],sD[1],sD[2], S, rho],
["*sDD00","*sDD01","*sDD02","*sDD11","*sDD12","*sDD22",
"*sD0","*sD1","*sD2","*S","*rho"], "BSSN/ID_TOV_BSSN_Source_Terms.h",outCparams)
with open("BSSN/ID_TOV_BSSN_Source_Terms.h", "a") as file:
file.write("}\n")
###Output
Appended to file "BSSN/ID_TOV_BSSN_Source_Terms.h"
###Markdown
Step 3: Convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$We convert ADM spacetime quantity initial data from Spherical to BSSN Curvilinear coordinates [as documented in the corresponding tutorial module](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb)
###Code
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities")
###Output
Appended to file "BSSN/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
Appended to file "BSSN/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
###Markdown
Step 4: Validating that the TOV initial data satisfy the Hamiltonian constraint \[Back to [top](toc)\]$$\label{validate}$$We will validate that the TOV initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error Step 4.a: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{ham_const_output}$$First output the Hamiltonian constraint [as documented in the corresponding NRPy+ tutorial module](Tutorial-BSSN_constraints.ipynb)
###Code
import BSSN.BSSN_constraints as bssncon
bssncon.output_C__Hamiltonian_h(add_T4UUmunu_source_terms=True)
###Output
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Finished in 10.2082049847 seconds.
Output C implementation of Hamiltonian constraint to BSSN/Hamiltonian.h
###Markdown
Step 4.b: Apply singular, curvilinear coordinate boundary conditions \[Back to [top](toc)\]$$\label{apply_bcs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial module](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb).
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions()
###Output
Wrote to file "CurviBoundaryConditions/gridfunction_defines.h"
Wrote to file "CurviBoundaryConditions/set_parity_conditions.h"
Wrote to file "CurviBoundaryConditions/xxCart.h"
Wrote to file "CurviBoundaryConditions/xxminmax.h"
Wrote to file "CurviBoundaryConditions/Cart_to_xx.h"
###Markdown
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial module](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb).Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
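For reference, the enforcement amounts to the algebraic rescaling (our transcription of Eq. 53 of Ruchlin, Etienne, and Baumgarte; see the linked tutorial module for the authoritative form)$$\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{ij},$$where $\bar{\gamma} \equiv \det \bar{\gamma}_{ij}$ and $\hat{\gamma} \equiv \det \hat{\gamma}_{ij}$, so that after the rescaling $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ holds pointwise.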
###Code
import BSSN.Enforce_Detgammabar_Constraint as EGC
EGC.output_Enforce_Detgammabar_Constraint_Ccode()
###Output
Output C implementation of det(gammabar) constraint to file BSSN/enforce_detgammabar_constraint.h
###Markdown
Step 5: `TOV_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
with open("BSSN/NGHOSTS.h", "w") as file:
file.write("// Part P0: Set the number of ghost zones, from NRPy+'s FD_CENTDERIVS_ORDER\n")
# Upwinding in BSSN requires that NGHOSTS = FD_CENTDERIVS_ORDER/2 + 1 <- Notice the +1.
file.write("#define NGHOSTS "+str(int(par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")/2)+1)+"\n")
%%writefile BSSN/TOV_Playground.c
// Part P1: Import needed header files
#include "NGHOSTS.h" // A NRPy+-generated file, which is set based on FD_CENTDERIVS_ORDER.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
// Part P2: Add needed #define's to set data type, the IDX4() macro, and the gridfunctions
// Part P2a: set REAL=double, so that all floating point numbers are stored to at least ~16 significant digits.
#define REAL double
// Step P3: Set free parameters for the numerical grid
const REAL RMAX = 3.0;
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P4b: Declare the IDX4(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS[0] elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1] in memory, etc.
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
// Part P4c: Set #define's for BSSN gridfunctions. C code generated above
#include "../CurviBoundaryConditions/gridfunction_defines.h"
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
// Step P5: Function for converting uniform grid coord
// (xx[0][i0],xx[1][i1],xx[2][i2]) to
// corresponding Cartesian coordinate.
void xxCart(REAL *xx[3],const int i0,const int i1,const int i2, REAL xCart[3]) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
#include "xxCart.h"
}
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "../CurviBoundaryConditions/curvilinear_parity_and_outer_boundary_conditions.h"
// Step P7: Function for enforcing the gammabar=gammahat constraint:
#include "enforce_detgammabar_constraint.h"
// Part P8: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_TOV_BSSN_Source_Terms.h"
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Part P9: Declare function for computing the Hamiltonian
// constraint violation, which should converge to
// zero with increasing numerical resolution.
void Hamiltonian_constraint(const int Nxx[3],const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3],
REAL *in_gfs, REAL *aux_gfs) {
#include "Hamiltonian.h"
}
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up scalar wave initial data
// Step 2: Evolve scalar wave initial data forward in time using Method of Lines with RK4 algorithm,
// applying quadratic extrapolation outer boundary conditions.
// Step 3: Output relative error between numerical and exact solution.
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
// Step 0a: Read command-line input, error out if nonconformant
if(argc != 4 || atoi(argv[1]) < NGHOSTS) {
fprintf(stderr,"Error: Expected three command-line arguments: ./TOV_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
  if(atoi(argv[1])%2 != 0 || atoi(argv[2])%2 != 0 || atoi(argv[3])%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
const int Nxx_plus_2NGHOSTS[3] = { Nxx[0]+2*NGHOSTS, Nxx[1]+2*NGHOSTS, Nxx[2]+2*NGHOSTS };
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2];
#include "xxminmax.h"
/* TOV INPUT ROUTINE */
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 0c: Allocate memory for gridfunctions
REAL *evol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *aux_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0d: Set up space and time coordinates
// Step 0d.i: Set \Delta x^i on uniform grids.
REAL dxx[3];
for(int i=0;i<3;i++) dxx[i] = (xxmax[i] - xxmin[i]) / ((REAL)Nxx[i]);
// Step 0d.ii: Set up uniform coordinate grids
REAL *xx[3];
for(int i=0;i<3;i++) {
xx[i] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS[i]);
for(int j=0;j<Nxx_plus_2NGHOSTS[i];j++) {
xx[i][j] = xxmin[i] + ((REAL)(j-NGHOSTS) + (1.0/2.0))*dxx[i]; // Cell-centered grid.
}
}
// Step 0e: Find ghostzone mappings and parities:
gz_map *bc_gz_map = (gz_map *)malloc(sizeof(gz_map)*Nxx_plus_2NGHOSTS_tot);
parity_condition *bc_parity_conditions = (parity_condition *)malloc(sizeof(parity_condition)*Nxx_plus_2NGHOSTS_tot);
set_up_bc_gz_map_and_parity_conditions(Nxx_plus_2NGHOSTS,xx,dxx,xxmin,xxmax, bc_gz_map, bc_parity_conditions);
// Step 1: Set up initial data to an exact solution at time=0:
ID_BSSN__ALL_BUT_LAMBDAs(Nxx_plus_2NGHOSTS,xx,TOV_in, evol_gfs);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
ID_BSSN_lambdas(Nxx, Nxx_plus_2NGHOSTS, xx,dxx, evol_gfs);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
{
#pragma omp parallel for
for(int i2=0; i2<Nxx_plus_2NGHOSTS[2]; i2++) {
const REAL xx2 = xx[2][i2];
for(int i1=0; i1<Nxx_plus_2NGHOSTS[1]; i1++) {
const REAL xx1 = xx[1][i1];
for(int i0=0; i0<Nxx_plus_2NGHOSTS[0]; i0++) {
const REAL xx0 = xx[0][i0];
REAL xx0xx1xx2[3] = {xx0,xx1,xx2};
REAL gammaDD00,gammaDD01,gammaDD02,gammaDD11,gammaDD12,gammaDD22;
REAL KDD00,KDD01,KDD02,KDD11,KDD12,KDD22;
REAL alpha,betaU0,betaU1,betaU2;
REAL BU0,BU1,BU2;
REAL xyz_or_rthph[3] = {xx0,xx1,xx2}; //FIXME
// FIRST INTERPOLATE TO SET THE ADM AND TMUNU QUANTITIES
ID_TOV_ADM_quantities(xyz_or_rthph, TOV_in,
&gammaDD00,&gammaDD01,&gammaDD02,&gammaDD11,&gammaDD12,&gammaDD22,
&KDD00,&KDD01,&KDD02,&KDD11,&KDD12,&KDD22,
&alpha, &betaU0,&betaU1,&betaU2, &BU0,&BU1,&BU2);
REAL T4UU00,T4UU01,T4UU02,T4UU03,
/**/ T4UU11,T4UU12,T4UU13,
/**/ T4UU22,T4UU23,
/**/ T4UU33;
ID_TOV_TUPMUNU(xyz_or_rthph, TOV_in,
&T4UU00,&T4UU01,&T4UU02,&T4UU03,
/**/ &T4UU11,&T4UU12,&T4UU13,
/**/ &T4UU22,&T4UU23,
/**/ &T4UU33);
const int idx = IDX3(i0,i1,i2);
// THEN EVALUATE THE BSSN SOURCE TERMS
ID_TOV_BSSN_Source_Terms(xx0xx1xx2,
gammaDD00, gammaDD01, gammaDD02,
/**/ gammaDD11, gammaDD12,
/**/ gammaDD22,
betaU0,betaU1,betaU2, alpha,
T4UU00, T4UU01, T4UU02, T4UU03,
/**/ T4UU11, T4UU12, T4UU13,
/**/ T4UU22, T4UU23,
/**/ T4UU33,
&aux_gfs[IDX4pt(SDD00GF,idx)], &aux_gfs[IDX4pt(SDD01GF,idx)], &aux_gfs[IDX4pt(SDD02GF,idx)],
/**/ &aux_gfs[IDX4pt(SDD11GF,idx)], &aux_gfs[IDX4pt(SDD12GF,idx)],
/**/ &aux_gfs[IDX4pt(SDD22GF,idx)],
&aux_gfs[IDX4pt(SD0GF,idx)], &aux_gfs[IDX4pt(SD1GF,idx)], &aux_gfs[IDX4pt(SD2GF,idx)],
&aux_gfs[IDX4pt(SGF,idx)], &aux_gfs[IDX4pt(RHOGF,idx)]);
//aux_gfs[IDX4pt(SGF,idx)] = T4UU11*gammaDD00;
}
}
}
}
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
// apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
// enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
// Step 2: Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, evol_gfs, aux_gfs);
  /* Step 3: Output data on the xy-plane, including the Hamiltonian constraint violation */
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS[1]/2;
const int i2mid=Nxx_plus_2NGHOSTS[2]/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, i1mid,i1mid+1, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
int idx = IDX3(i0,i1,i2);
printf("%e %e %e %e\n",xCart[0],xCart[1], aux_gfs[IDX4pt(SGF,idx)],log10(fabs(aux_gfs[IDX4pt(HGF,idx)])));
}
/* Step 4: Free all allocated memory */
free(bc_gz_map);
free(bc_parity_conditions);
  free(r_Schw_arr);
  free(rbar_arr);
  free(rho_arr);
  free(P_arr);
  free(M_arr);
  free(expnu_arr);
  free(exp4phi_arr);
free(aux_gfs);
free(evol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
import cmdline_helper as cmd
cmd.C_compile("BSSN/TOV_Playground.c", "TOV_Playground")
cmd.delete_existing_files("out96.txt")
cmd.Execute("TOV_Playground", "96 96 96", "out96.txt")
###Output
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN/TOV_Playground.c -o TOV_Playground -lm`...
Finished executing in 3.83846592903 seconds.
Finished compilation.
Executing `taskset -c 0,1,2,3 ./TOV_Playground 96 96 96`...
Finished executing in 1.03296709061 seconds.
###Markdown
Step 6: Plotting the single-neutron-star initial data \[Back to [top](toc)\]$$\label{plot}$$Here we plot the conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the single neutron star centered at the origin: $x/M=y/M=z/M=0$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626).
###Code
import numpy as np
from scipy.interpolate import griddata
from pylab import savefig
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.display import Image
x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
bounds = 2.0
pl_xmin = -bounds
pl_xmax = +bounds
pl_ymin = -bounds
pl_ymax = +bounds
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points96 = np.zeros((len(x96), 2))
for i in range(len(x96)):
points96[i][0] = x96[i]
points96[i][1] = y96[i]
grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
plt.clf()
plt.title("TOV Polytrope Initial Data: log10(Density)")
plt.xlabel("x/M")
plt.ylabel("y/M")
# fig, ax = plt.subplots()
# ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# plt.close(fig)
fig96cf = plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cf)
savefig("BHB.png")
from IPython.display import Image
Image("BHB.png")
# # interpolation='nearest', cmap=cm.gist_rainbow)
###Output
_____no_output_____
###Markdown
Step 7: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](toc)\]$$\label{convergence}$$The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence.First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
###Code
grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
###Output
_____no_output_____
###Markdown
Next, we set up the same initial data but on a lower-resolution, $48^3$ grid. Since the constraint violation (numerical error associated with the fourth-order-accurate, finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$, we expect the constraint violation will increase (relative to the $96^3$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that indeed this order of convergence is observed as expected, *except* at the star's surface where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
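As a quick arithmetic sketch of the offset applied in the comparison code below (this snippet is illustrative only):
```python
import numpy as np
# Going from 96 to 48 points per dimension doubles the grid spacing, so a
# fourth-order error term should grow by (96/48)**4 = 16, i.e. about 1.2 decades
# in log10 -- hence the log10((48./96.)**4) shift applied to the 48^3 curve below.
expected_ratio = (96.0/48.0)**4
print(expected_ratio, np.log10(expected_ratio))
```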
###Code
# Now rerun TOV_Playground with twice lower resolution.
cmd.delete_existing_files("out48.txt")
cmd.Execute("TOV_Playground", "48 48 48", "out48.txt")
x48,y48,valuesCF48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((100,100))
griddiff_48_minus_96_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid48_1darray_yeq0 = np.zeros(100)
grid96_1darray_yeq0 = np.zeros(100)
count = 0
outarray = []
for i in range(100):
for j in range(100):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-12.5,1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
###Output
Executing `taskset -c 0,1,2,3 ./TOV_Playground 48 48 48`...
Finished executing in 0.223258972168 seconds.
###Markdown
Step 8: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb to latex
[NbConvertApp] Support files will be in Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files/
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data_files
[NbConvertApp] Writing 113436 bytes to Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
HP34401ASansVISA.ipynb | ###Markdown
\title{HP 34401 Controlled with Python via RS-232 without VISA}\author{Steven K Armour}\maketitle The goal of this program/notebook is to develop a program that can control and log a circa-1990 benchtop/rack multimeter from HP (Agilent) without the need for any sort of VISA library. While a VISA library and a program built on top of it (see `pyvisa`, for example) is convenient, the goal of this code is to show that it is not necessary. Even though `pyvisa-py` brings VISA to platforms such as the Raspberry Pi, there are two inconveniences with using VISA. 1. The instrument's commands must be in the VISA database that is being used, which is not always the case. 2. With `pyvisa-py`, VISA can be used on Linux computers, but with direct command programming an instrument can (in theory) interact with the `MicroPython` microcontroller platform. While this code cannot be run as-is on `MicroPython`, it could be with some refactoring. Needed hardware: + HP or Agilent 34401 multimeter + USB to DB9 RS-232 converter, or a USB to DB9 null-modem RS-232 converter + RS-232 null modem if a dedicated USB to RS-232 null-modem converter is not used Libraries used
###Code
import serial                    # implements RS-232 communication
import pandas as pd              # data collection
import numpy as np
import xarray as xr
import threading                 # used for multi-threading
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Setting up the RS-232 communication ports+ for a Windows machine, go to the Device Manager settings and determine what COM port the USB to RS-232 cable is connected to+ for a Mac machine: ????????+ for a Linux (Ubuntu) machine, open a terminal and review the list of connections to the computer by typing in the terminal```ls -l /dev/tty*```at the end of the list one should see ```/dev/ttyUSB``` with the ending number of the USB port. After selecting the appropriate port one will typically need to unlock the port via ``` sudo chmod 666 /dev/ttyUSB```
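If `pyserial` is installed (it is imported as `serial` above), the candidate ports can also be listed from Python itself. This is a minimal sketch only; the printed device names will differ from machine to machine:
```python
# List available serial ports so the USB-to-RS-232 adapter can be identified
# without hunting through /dev or Device Manager by hand.
from serial.tools import list_ports

for port in list_ports.comports():
    print(port.device, '-', port.description)
```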
###Code
UnLockedPort='/dev/ttyUSB0'
###Output
_____no_output_____
###Markdown
Program
###Code
class MultiMeter34401A():
"""
    Class to initiate a connection to, control, and record from an HP multimeter
    in Python without VISA.
The recorded values and recorded errors are stored in the following
dataframes:
self.Errors
self.Data
"""
def __init__(self, Name):
"""
Initiate some helpers and the data storage dataframes
Args:
            Name (str): a convenient name for the multimeter
"""
self.Name=Name
#Dataframe to store errors
self.Errors=pd.DataFrame(columns=['Last Command', 'Time', 'Error'])
#Dataframe to store recordings
self.Data=pd.DataFrame(columns=['Time', 'Value', 'Mode'])
def MakeConnection(self, ULSerPort):
"""
        Method to start up the RS-232 serial connection via the Python
        serial library
Args:
ULSerPort (str)
"""
#model number as a check of correct connection
MODEL='34401A'
try:
self.MMSer=serial.Serial(ULSerPort, 9600, timeout=1)
self.MMSer.write(b'*IDN?\n')
self.IDN=self.MMSer.readline().decode("utf-8")[:-2]
except Exception as ex:
print(ex)
if self.IDN.find(MODEL)==-1:
            raise ValueError(f'{self.IDN} not supported; this class only supports {MODEL}')
def RemoteSetup(self):
"""
Method to run a command sequence to put the instrument into remote
        mode. The multimeter can be taken off remote by pressing Shift on the
        front-panel controls, but can be brought back to remote mode by rerunning
        this method.
"""
        #command ; expected response
#Reset
C1=b'*RST\n'; GR1=5
#Clear
C2=b'*CLS\n'; GR2=5
C3=b'*ESE 1\n'; GR3=7
C4=b'*SRE 32\n'; GR4=8
#Go to remote mode
C5=b'SYST:REM\n'; GR5=9
#Vain attempt to turn off the beeper
C6=b'SYST:BEEP:STAT OFF\n'; GR6=19
ComandSeq=[C1, C2, C3, C4, C5, C6]
ResponceSeq=[GR1, GR2, GR3, GR4, GR5, GR6]
        #run the command sequence and verify it went okay
for Comd, Res in zip(ComandSeq, ResponceSeq):
if self.MMSer.write(Comd)==Res:
pass
else:
                raise ValueError(f'Remote setup error on command {Comd}')
print('Remote Connection Made')
self.Mode='DC_V'
def ModeSet(self, Mode='DC_V'):
"""
Method to set the measurement mode of the 34401A using shorthand
Args:
Mode (str; Def. 'DC_V): Sets the mode of 34401A the available
modes are:
'DC_V': DC Voltage Reading,
'DC_I': DC Current Reading,
'AC_V': AC Voltage Reading,
'AC_I': AC Current Reading,
'Res2W': 2 Wire Resistance Reading,
'Res4W': 4 Wire Resistance Reading,
'Freq': AC Main Frequency Measurement,
'Period':AC Main Period Measurement
"""
C={'DC_V':b'CONF:VOLT:DC\n',
'DC_I':b'CONF:CURR:DC\n',
'AC_V':b'CONF:VOLT:AC\n',
'AC_I':b'CONF:CURR:AC\n',
'Res2W':b'CONF:RES\n',
'Res4W':b'CONF:FRES\n',
'Freq':b'CONF:FREQ\n',
'Period':b'CONF:PER\n'}
try:
self.MMSer.write(C[Mode])
self.Mode=Mode
except KeyError:
            print(f'Mode {Mode} is not a measurement mode of this instrument')
def ErrorReadAct(self):
"""
        Method to read errors from the 34401A and record them. Also
        clears the `-410,"Query INTERRUPTED"` error that occurs because the
        reading-to-output transfer is slow compared to the speed of modern
        computers.
        Return:
            returns the error string if a non-410 error occurs
"""
#ask for the error
if self.MMSer.write(b'SYST:ERR?\n')!=10:
raise ValueError('Error State Not Readable!!!')
#read the error
ErrRes=self.MMSer.readline().decode("utf-8")[:-2]
if ErrRes=='+0,"No error"':
return None
# the called before ready error that is ignored and cleared
elif ErrRes=='-410,"Query INTERRUPTED"':
#clear the buffer and ignore
self.MMSer.write(b'*CLS\n')
# for any other errors record them
else:
#'Last Command', 'Time', 'Error'
self.Errors.loc[self.Errors.shape[0],:]=['', pd.Timestamp.now(), ErrRes]
try:
self.EndAutoRun()
except:
pass
return ErrRes
def QurreyProgress(self, debug=False):
"""
        Method to check the progress of getting the reading to the output
        register on the RS-232 line, using repetitive calls to the SCPI OPC
        routine
        Arg:
            debug (bool; Def. False): if True, prints whether the query is
            still in progress transferring to the RS-232 output register
        Return:
            '1' once the information is ready on the RS-232 register.
            If the call counter times out, any ongoing autorun is stopped
            and an error is raised
"""
        #counter to act like a timeout
        Counter=0
        while True:
            #ask for the OPC
            if self.MMSer.write(b'*OPC?\n')==6:
                pass
            else:
                raise ValueError('Operation Complete command not accepted!!!')
            #if complete, return '1'
            CompState=self.MMSer.readline().decode("utf-8")[:-2]
            if CompState=='':
                if debug:
                    print('Query in progress')
            elif CompState=='1':
                if debug:
                    print('Query complete')
                return CompState
            else:
                return CompState
            # The timeout action: if triggered, stop any ongoing autorun and raise
            Counter+=1
            if Counter==10:
                try:
                    self.EndAutoRun()
                except:
                    pass
                raise ValueError('Operation is not completing after 10 cycles!!!')
def MakeMeasurment(self):
"""
        Method to make a measurement on the 34401A using the
        SCPI INIT/OPC/FETCH method, where the OPC check is done via
        `self.QurreyProgress`. If the reading is successful then the reading
        is recorded to `self.Data`
"""
        #acquire the reading on the 34401A
if self.MMSer.write(b'INIT\n')==5:
pass
else:
            raise ValueError('Get-measurement (INIT) command not accepted!!!')
#perform the OPC ready check
self.QurreyProgress()
# Read the value from the 34401A
if self.MMSer.write(b'FETCH?\n')==7:
pass
else:
            raise ValueError('Fetch measurement command not accepted!')
M=self.MMSer.readline(); T=pd.Timestamp.now()
try:
M=float(M.decode("utf-8")[:-2])
#'time', 'value', 'Mode'
self.Data.loc[self.Data.shape[0], :]=[T, M, self.Mode]
except Exception as ex:
print(ex)
self.ErrorReadAct()
self.ErrorReadAct()
def SetupLivePlot(self):
"""
        Internal method for creating the fig, ax for live plotting
"""
%matplotlib notebook
%matplotlib notebook
self.fig, self.ax=plt.subplots(nrows=1, ncols=1)
def LivePlotData(self):
"""
        Method that performs the live plotting of the data from the
        instrument
"""
        # if everything is set up, run this
try:
self.ax.clear()
# plot the values recorded in self.Data where the Mode
#of the recorded data equals the current mode of the multimeter
# TODO could be made into subplots for each mode
# TODO should have splits in mode reflect in graph
            self.Data.where(self.Data['Mode']==self.Mode).plot(x='Time', y='Value',
title=self.Mode, ax=self.ax, legend=False)
self.fig.canvas.draw()
# if everything is not setup perform the setup and rerun
except AttributeError:
self.SetupLivePlot()
self.LivePlotData()
def RunAction(self):
"""
        Method for the actions that should happen during an autorun event
"""
self.MakeMeasurment()
self.LivePlotData()
def AutoRun(self, IntervalStep=10, IntervalUnit='Min', ReGenPlot=False):
"""
        External method called by the user to initiate auto-running of
        the instrument on a separate thread so that Python can continue to
        perform other tasks
        Args:
            IntervalStep (int): the time interval at which the autorun's
            actions are performed
            IntervalUnit (str; Default 'Min' {Hour, Sec}): the unit of the
            auto-running interval.
            For example, IntervalStep=10, IntervalUnit='Min' will perform the
            autorun action every 10 minutes
            ReGenPlot (bool; Default False): True will create a new instance
            of the live plot, whereas False will reuse the old live plot
"""
#recreate the plot in a new instance of plot
if ReGenPlot:
self.SetupLivePlot()
# convert timer to sec
        IntervalUnits={'Hour':60.0**2, 'Min':60.0, 'Sec':1.0}
        self.IntervalStep=float(IntervalStep*IntervalUnits[IntervalUnit])
#call the internal method to do instantiate the autorunning
self.AutoRunRun()
def AutoRunRun(self):
"""
        Internal method to initiate the thread
        Note:
            This method may be redundant, but it works; it will most likely
            be removed in the future. For now it gets the thread working,
            given the seemingly recursive way the timer re-arms itself
"""
#call the run action
self.RunAction()
#create a thread timer and bind this method to the thread
self.t=threading.Timer(self.IntervalStep, self.AutoRunRun)
#start the thread
self.t.start()
def EndAutoRun(self):
"""
        Method to end the autorun via termination of the thread timer and
        joining the thread back to the main thread
        """
        # terminate the timer
        self.t.cancel()
        # terminate the thread and rejoin it to the main thread
self.t.join()
self.LivePlotData()
###Output
_____no_output_____
###Markdown
TestingThe testing was done with the front voltage/2-wire-resistance input terminals connected directly to the outputs of a generic benchtop DC power supply, whose make will go unsaid for very good reasons. As there was no load in the circuit, the current function was not tested here, but it has been tested in additional tests.
###Code
Meter1=MultiMeter34401A('MM1')
Meter1.MakeConnection(UnLockedPort)
Meter1.RemoteSetup()
R=Meter1.ErrorReadAct(); print(R)
print(Meter1.Mode)
Meter1.ModeSet('DC_V')
print(Meter1.Mode)
Meter1.QurreyProgress(debug=True)
Meter1.MakeMeasurment()
Meter1.Errors
Meter1.Data
Meter1.AutoRun(5, 'Sec')
Meter1.ModeSet('Res4W')
Meter1.EndAutoRun()
Meter1.ModeSet('DC_V')
Meter1.AutoRun(1.5, 'Sec', ReGenPlot=True)
Meter1.EndAutoRun()
Meter1.Data
###Output
_____no_output_____ |
Datasets/jigsaw-dataset-bias-bert-cased-pt2.ipynb | ###Markdown
Dependencies
###Code
import gc
import numpy as np
import pandas as pd
from tokenizers import BertWordPieceTokenizer
###Output
_____no_output_____
###Markdown
Parameters
###Code
MAX_LEN = 512
base_path = '/kaggle/input/bert-base-ml-cased-huggingface/bert_base_cased/'
config_path = base_path + 'bert-base-multilingual-cased-config.json'
vocab_path = base_path + 'bert-base-multilingual-cased-vocab.txt'
# File paths
x_train_bias_path = 'x_train_bias'
y_train_bias_path = 'y_train_bias'
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
tokenizer = BertWordPieceTokenizer(vocab_path, lowercase=False)
tokenizer.enable_truncation(max_length=MAX_LEN)
tokenizer.enable_padding(max_length=MAX_LEN)
###Output
_____no_output_____
###Markdown
Train set (bias)
###Code
data_bias_size = 1902194
chuncksize = 100000
for i in range((data_bias_size // chuncksize // 2 ), (data_bias_size // chuncksize)):
print((i * chuncksize), '--------------------------------------------')
train_bias = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-unintended-bias-train.csv",
usecols=['comment_text', 'toxic'], nrows=chuncksize, skiprows=range(1, i * chuncksize))
print('Train samples %d' % len(train_bias))
display(train_bias.head())
x_train_bias = [x.ids for x in tokenizer.encode_batch(train_bias['comment_text'].tolist())]
y_train_bias = train_bias['toxic'].astype(np.float32).values.reshape(len(train_bias), 1)
# Save
np.save(x_train_bias_path + '_pt%d' % i, x_train_bias)
np.save(y_train_bias_path + '_pt%d' % i, y_train_bias)
print('x_train samples %d' % len(x_train_bias))
print('y_train samples %d' % len(y_train_bias))
###Output
900000 --------------------------------------------
Train samples 100000
|
pages/workshop/Bonus/netCDF Writing.ipynb | ###Markdown
Writing netCDF dataUnidata Python Workshop**Important Note**: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error. Instead, choose "Run All" from the Cell menu after you modify a cell.
###Code
from netCDF4 import Dataset # Note: python is case-sensitive!
import numpy as np
###Output
_____no_output_____
###Markdown
Opening a file, creating a new DatasetLet's create a new, empty netCDF file named 'new.nc' in our project root `data` directory, opened for writing.Be careful, opening a file with 'w' will clobber any existing data (unless `clobber=False` is used, in which case an exception is raised if the file already exists).- `mode='r'` is the default.- `mode='a'` opens an existing file and allows for appending (does not clobber existing data)- `format` can be one of `NETCDF3_CLASSIC`, `NETCDF3_64BIT`, `NETCDF4_CLASSIC` or `NETCDF4` (default). `NETCDF4_CLASSIC` uses HDF5 for the underlying storage layer (as does `NETCDF4`) but enforces the classic netCDF 3 data model so data can be read with older clients.
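Append mode is mentioned above but never used later in this notebook; as a minimal, hypothetical sketch (run it only after the file created below has been written and closed):
```python
# Reopen the existing file in append mode: dimensions, variables, and data
# already in the file are preserved, and more can be added.
from netCDF4 import Dataset
nc_append = Dataset('../../../data/new.nc', mode='a')
print(nc_append.dimensions.keys())
nc_append.close()
```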
###Code
try: ncfile.close() # just to be safe, make sure dataset is not already open.
except: pass
ncfile = Dataset('../../../data/new.nc',mode='w',format='NETCDF4_CLASSIC')
print(ncfile)
###Output
_____no_output_____
###Markdown
Creating dimensionsThe **ncfile** object we created is a container for _dimensions_, _variables_, and _attributes_. First, let's create some dimensions using the [`createDimension`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateDimension) method. - Every dimension has a name and a length. - The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the `ncfile.dimensions` dictionary.Setting the dimension length to `0` or `None` makes it unlimited, so it can grow. - For `NETCDF4` files, any variable's dimension can be unlimited. - For `NETCDF4_CLASSIC` and `NETCDF3*` files, only one per variable can be unlimited, and it must be the leftmost (slowest varying) dimension.
###Code
lat_dim = ncfile.createDimension('lat', 73) # latitude axis
lon_dim = ncfile.createDimension('lon', 144) # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
for dim in ncfile.dimensions.items():
print(dim)
###Output
_____no_output_____
###Markdown
Creating attributesnetCDF attributes can be created just like you would for any python object. - Best to adhere to established conventions (like the [CF](http://cfconventions.org/) conventions)- We won't try to adhere to any specific convention here though.
###Code
ncfile.title='My model data'
print(ncfile.title)
ncfile.subtitle="My model data subtitle"
print(ncfile.subtitle)
print(ncfile)
###Output
_____no_output_____
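###Markdown
Global attributes can also be managed by name; a minimal sketch of netCDF4's attribute helpers (the 'history' attribute here is only an example value).

```python
# Set, list, read and delete global attributes programmatically.
ncfile.setncattr('history', 'created for the workshop example')
print(ncfile.ncattrs())            # names of all global attributes
print(ncfile.getncattr('title'))   # equivalent to ncfile.title
ncfile.delncattr('history')
```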
###Markdown
Try adding some more attributes... Creating variablesNow let's add some variables and store some data in them. - A variable has a name, a type, a shape, and some data values. - The shape of a variable is specified by a tuple of dimension names. - A variable should also have some named attributes, such as 'units', that describe the data.The [`createVariable`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateVariable) method takes 3 mandatory args.- the 1st argument is the variable name (a string). This is used as the key to access the variable object from the `variables` dictionary.- the 2nd argument is the datatype (most numpy datatypes supported). - the third argument is a tuple containing the dimension names (the dimensions must be created first). Unless this is a `NETCDF4` file, any unlimited dimension must be the leftmost one.- there are lots of optional arguments (many of which are only relevant when `format='NETCDF4'`) to control compression, chunking, fill_value, etc.
###Code
# Define two variables with the same names as dimensions,
# a conventional way to define "coordinate variables".
lat = ncfile.createVariable('lat', np.float32, ('lat',))
lat.units = 'degrees_north'
lat.long_name = 'latitude'
lon = ncfile.createVariable('lon', np.float32, ('lon',))
lon.units = 'degrees_east'
lon.long_name = 'longitude'
time = ncfile.createVariable('time', np.float64, ('time',))
time.units = 'hours since 1800-01-01'
time.long_name = 'time'
# Define a 3D variable to hold the data
temp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost
temp.units = 'K' # degrees Kelvin
temp.standard_name = 'air_temperature' # this is a CF standard name
print(temp)
###Output
_____no_output_____
###Markdown
Pre-defined variable attributes (read only)The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. Note: since no data has been written yet, the length of the 'time' dimension is 0.
###Code
print("-- Some pre-defined attributes for variable temp:")
print("temp.dimensions:", temp.dimensions)
print("temp.shape:", temp.shape)
print("temp.dtype:", temp.dtype)
print("temp.ndim:", temp.ndim)
###Output
_____no_output_____
###Markdown
Writing data. To write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice.
###Code
nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3
# Write latitudes, longitudes.
# Note: the ":" is necessary in these "write" statements
lat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole
lon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward
# create a 3D array of random numbers
data_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))
# Write the data. This writes the whole 3D netCDF variable all at once.
temp[:,:,:] = data_arr # Appends data along unlimited dimension
print("-- Wrote data, temp.shape is now ", temp.shape)
# read data back from variable (by slicing it), print min and max
print("-- Min/Max values:", temp[:,:,:].min(), temp[:,:,:].max())
###Output
_____no_output_____
###Markdown
- You can just treat a netCDF Variable object like a numpy array and assign values to it.- Variables automatically grow along unlimited dimensions (unlike numpy arrays)- The above writes the whole 3D variable all at once, but you can write it a slice at a time instead.Let's add another time slice....
###Code
# create a 2D array of random numbers
data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))
temp[3,:,:] = data_slice # Appends the 4th time slice
print("-- Wrote more data, temp.shape is now ", temp.shape)
###Output
_____no_output_____
###Markdown
Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable `temp`, but the data is missing.
###Code
print(time)
times_arr = time[:]
print(type(times_arr),times_arr) # dashes indicate masked values (where data has not yet been written)
###Output
_____no_output_____
###Markdown
Let's add/write some data into the time variable. - Given a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable.
###Code
import datetime as dt
from netCDF4 import date2num,num2date
# 1st 4 days of October.
dates = [dt.datetime(2014,10,1,0),dt.datetime(2014,10,2,0),dt.datetime(2014,10,3,0),dt.datetime(2014,10,4,0)]
print(dates)
times = date2num(dates, time.units)
print(times, time.units) # numeric values
time[:] = times
# read time data back, convert to datetime instances, check values.
print(time[:])
print(time.units)
print(num2date(time[:],time.units))
###Output
_____no_output_____
###Markdown
Closing a netCDF file. It's **important** to close a netCDF file you opened for writing:- flushes buffers to make sure all data gets written- releases memory resources used by open netCDF files
###Code
# first print the Dataset object to see what we've got
print(ncfile)
# close the Dataset.
ncfile.close(); print('Dataset is closed!')
!ncdump -h ../../../data/new.nc
###Output
_____no_output_____
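###Markdown
An alternative to calling `close()` explicitly, not used in this workshop, is to open the `Dataset` as a context manager so the file is closed even if an error occurs; a short sketch re-reading the file just written.

```python
# The with-block closes the file automatically when it exits.
with Dataset('../../../data/new.nc', mode='r') as nc:
    print(nc.variables['temp'].shape)  # expected (4, 73, 144)
```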
###Markdown
Advanced features. So far we've only exercised features associated with the old netCDF version 3 data model. netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer. Let's create a new file with `format='NETCDF4'` so we can try out some of these features.
###Code
ncfile = Dataset('../../../data/new2.nc','w',format='NETCDF4')
print(ncfile)
###Output
_____no_output_____
###Markdown
Creating Groups. netCDF version 4 added support for organizing data in hierarchical groups.- analogous to directories in a filesystem. - Groups serve as containers for variables, dimensions and attributes, as well as other groups. - A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. - groups are created using the [`createGroup`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateGroup) method.- takes a single argument (a string, which is the name of the Group instance). This string is used as a key to access the group instances in the `groups` dictionary. Here we create two groups to hold data for two different model runs.
###Code
grp1 = ncfile.createGroup('model_run1')
grp2 = ncfile.createGroup('model_run2')
for grp in ncfile.groups.items():
print(grp)
###Output
_____no_output_____
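###Markdown
Groups behave much like Datasets themselves; a small sketch (an addition to the workshop text) showing two ways to look a group up again.

```python
# Look groups up by name in the groups dictionary, or by a unix-style path on the root group.
print(ncfile.groups['model_run1'])
print(ncfile['/model_run2'])
```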
###Markdown
Create some dimensions in the root group.
###Code
lat_dim = ncfile.createDimension('lat', 73) # latitude axis
lon_dim = ncfile.createDimension('lon', 144) # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
###Output
_____no_output_____
###Markdown
Now create a variable in grp1 and grp2. The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).- These variables are created with **zlib compression**, another nifty feature of netCDF 4. - The data are automatically compressed when data is written to the file, and uncompressed when the data is read. - This can really save disk space, especially when used in conjunction with the [**least_significant_digit**](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateVariable) keyword argument, which causes the data to be quantized (truncated) before compression. This makes the compression lossy, but more efficient.
###Code
temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)
temp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)
for grp in ncfile.groups.items(): # shows that each group now contains 1 variable
print(grp)
###Output
_____no_output_____
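###Markdown
To illustrate the compression options mentioned above, here is a hypothetical extra variable (not part of the original workshop) that combines a higher deflate level with `least_significant_digit`, plus `filters()` to inspect what was stored.

```python
# Hypothetical variable: deflate level 6 plus lossy quantization to ~2 decimal places.
press = grp1.createVariable('press', np.float64, ('time', 'lat', 'lon'),
                            zlib=True, complevel=6, least_significant_digit=2)
print(press.filters())  # reports the compression settings recorded for this variable
```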
###Markdown
Creating a variable with a compound data type- Compound data types map directly to numpy structured (a.k.a 'record' arrays). - Structured arrays are akin to C structs, or derived types in Fortran. - They allow for the construction of table-like structures composed of combinations of other data types, including other compound types. - Might be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data. Here we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF). - The compound data type is created with the [`createCompoundType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateCompoundType) method.
###Code
# create complex128 numpy structured data type
complex128 = np.dtype([('real',np.float64),('imag',np.float64)])
# using this numpy dtype, create a netCDF compound data type object
# the string name can be used as a key to access the datatype from the cmptypes dictionary.
complex128_t = ncfile.createCompoundType(complex128,'complex128')
# create a variable with this data type, write some data to it.
cmplxvar = grp1.createVariable('cmplx_var',complex128_t,('time','lat','lon'))
# write some data to this variable
# first create some complex random data
nlats = len(lat_dim); nlons = len(lon_dim)
data_arr_cmplx = np.random.uniform(size=(nlats,nlons))+1.j*np.random.uniform(size=(nlats,nlons))
# write this complex data to a numpy complex128 structured array
data_arr = np.empty((nlats,nlons),complex128)
data_arr['real'] = data_arr_cmplx.real; data_arr['imag'] = data_arr_cmplx.imag
cmplxvar[0] = data_arr # write the data to the variable (appending to time dimension)
print(cmplxvar)
data_out = cmplxvar[0] # read one value of data back from variable
print(data_out.dtype, data_out.shape, data_out[0,0])
###Output
_____no_output_____
###Markdown
Creating a variable with a variable-length (vlen) data typenetCDF 4 has support for variable-length or "ragged" arrays. These are arrays of variable length sequences having the same type. - To create a variable-length data type, use the [`createVLType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateVLType) method.- The numpy datatype of the variable-length sequences and the name of the new datatype must be specified.
###Code
vlen_t = ncfile.createVLType(np.int64, 'phony_vlen')
###Output
_____no_output_____
###Markdown
A new variable can then be created using this datatype.
###Code
vlvar = grp2.createVariable('phony_vlen_var', vlen_t, ('time','lat','lon'))
###Output
_____no_output_____
###Markdown
Since there is no native vlen datatype in numpy, vlen arrays are represented in python as object arrays (arrays of dtype `object`). - These are arrays whose elements are Python object pointers, and can contain any type of python object. - For this application, they must contain 1-D numpy arrays all of the same type but of varying length. - Fill with 1-D random numpy int64 arrays of random length between 1 and 10.
###Code
vlen_data = np.empty((nlats,nlons),object)
for i in range(nlons):
for j in range(nlats):
size = np.random.randint(1,10,size=1) # random length of sequence
vlen_data[j,i] = np.random.randint(0,10,size=size).astype(vlen_t.dtype)# generate random sequence
vlvar[0] = vlen_data # append along unlimited dimension (time)
print(vlvar)
print('data =\n',vlvar[:])
###Output
_____no_output_____
###Markdown
Close the Dataset and examine the contents with ncdump.
###Code
ncfile.close()
!ncdump -h ../../../data/new2.nc
###Output
_____no_output_____
###Markdown
Writing netCDF data - Unidata Python Workshop. **Important Note**: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error. Instead, choose "Run All" from the Cell menu after you modify a cell.
###Code
from netCDF4 import Dataset # Note: python is case-sensitive!
import numpy as np
###Output
_____no_output_____
###Markdown
Opening a file, creating a new DatasetLet's create a new, empty netCDF file named '../data/new.nc', opened for writing.Be careful, opening a file with 'w' will clobber any existing data (unless `clobber=False` is used, in which case an exception is raised if the file already exists).- `mode='r'` is the default.- `mode='a'` opens an existing file and allows for appending (does not clobber existing data)- `format` can be one of `NETCDF3_CLASSIC`, `NETCDF3_64BIT`, `NETCDF4_CLASSIC` or `NETCDF4` (default). `NETCDF4_CLASSIC` uses HDF5 for the underlying storage layer (as does `NETCDF4`) but enforces the classic netCDF 3 data model so data can be read with older clients.
###Code
try: ncfile.close() # just to be safe, make sure dataset is not already open.
except: pass
ncfile = Dataset('../../data/new.nc',mode='w',format='NETCDF4_CLASSIC')
print(ncfile)
###Output
_____no_output_____
###Markdown
Creating dimensionsThe **ncfile** object we created is a container for _dimensions_, _variables_, and _attributes_. First, let's create some dimensions using the [`createDimension`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateDimension) method. - Every dimension has a name and a length. - The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the `ncfile.dimensions` dictionary.Setting the dimension length to `0` or `None` makes it unlimited, so it can grow. - For `NETCDF4` files, any variable's dimension can be unlimited. - For `NETCDF4_CLASSIC` and `NETCDF3*` files, only one per variable can be unlimited, and it must be the leftmost (slowest varying) dimension.
###Code
lat_dim = ncfile.createDimension('lat', 73) # latitude axis
lon_dim = ncfile.createDimension('lon', 144) # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
for dim in ncfile.dimensions.items():
print(dim)
###Output
_____no_output_____
###Markdown
Creating attributesnetCDF attributes can be created just like you would for any python object. - Best to adhere to established conventions (like the [CF](http://cfconventions.org/) conventions)- We won't try to adhere to any specific convention here though.
###Code
ncfile.title='My model data'
print(ncfile.title)
ncfile.subtitle="My model data subtitle"
print(ncfile.subtitle)
print(ncfile)
###Output
_____no_output_____
###Markdown
Try adding some more attributes... Creating variablesNow let's add some variables and store some data in them. - A variable has a name, a type, a shape, and some data values. - The shape of a variable is specified by a tuple of dimension names. - A variable should also have some named attributes, such as 'units', that describe the data.The [`createVariable`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateVariable) method takes 3 mandatory args.- the 1st argument is the variable name (a string). This is used as the key to access the variable object from the `variables` dictionary.- the 2nd argument is the datatype (most numpy datatypes supported). - the third argument is a tuple containing the dimension names (the dimensions must be created first). Unless this is a `NETCDF4` file, any unlimited dimension must be the leftmost one.- there are lots of optional arguments (many of which are only relevant when `format='NETCDF4'`) to control compression, chunking, fill_value, etc.
###Code
# Define two variables with the same names as dimensions,
# a conventional way to define "coordinate variables".
lat = ncfile.createVariable('lat', np.float32, ('lat',))
lat.units = 'degrees_north'
lat.long_name = 'latitude'
lon = ncfile.createVariable('lon', np.float32, ('lon',))
lon.units = 'degrees_east'
lon.long_name = 'longitude'
time = ncfile.createVariable('time', np.float64, ('time',))
time.units = 'hours since 1800-01-01'
time.long_name = 'time'
# Define a 3D variable to hold the data
temp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost
temp.units = 'K' # degrees Kelvin
temp.standard_name = 'air_temperature' # this is a CF standard name
print(temp)
###Output
_____no_output_____
###Markdown
Pre-defined variable attributes (read only)The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. Note: since no data has been written yet, the length of the 'time' dimension is 0.
###Code
print("-- Some pre-defined attributes for variable temp:")
print("temp.dimensions:", temp.dimensions)
print("temp.shape:", temp.shape)
print("temp.dtype:", temp.dtype)
print("temp.ndim:", temp.ndim)
###Output
_____no_output_____
###Markdown
Writing data. To write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice.
###Code
nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3
# Write latitudes, longitudes.
# Note: the ":" is necessary in these "write" statements
lat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole
lon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward
# create a 3D array of random numbers
data_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))
# Write the data. This writes the whole 3D netCDF variable all at once.
temp[:,:,:] = data_arr # Appends data along unlimited dimension
print("-- Wrote data, temp.shape is now ", temp.shape)
# read data back from variable (by slicing it), print min and max
print("-- Min/Max values:", temp[:,:,:].min(), temp[:,:,:].max())
###Output
_____no_output_____
###Markdown
- You can just treat a netCDF Variable object like a numpy array and assign values to it.- Variables automatically grow along unlimited dimensions (unlike numpy arrays)- The above writes the whole 3D variable all at once, but you can write it a slice at a time instead.Let's add another time slice....
###Code
# create a 2D array of random numbers
data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))
temp[3,:,:] = data_slice # Appends the 4th time slice
print("-- Wrote more data, temp.shape is now ", temp.shape)
###Output
_____no_output_____
###Markdown
Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable `temp`, but the data is missing.
###Code
print(time)
times_arr = time[:]
print(type(times_arr),times_arr) # dashes indicate masked values (where data has not yet been written)
###Output
_____no_output_____
###Markdown
Let's add/write some data into the time variable. - Given a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable.
###Code
import datetime as dt
from netCDF4 import date2num,num2date
# 1st 4 days of October.
dates = [dt.datetime(2014,10,1,0),dt.datetime(2014,10,2,0),dt.datetime(2014,10,3,0),dt.datetime(2014,10,4,0)]
print(dates)
times = date2num(dates, time.units)
print(times, time.units) # numeric values
time[:] = times
# read time data back, convert to datetime instances, check values.
print(time[:])
print(time.units)
print(num2date(time[:],time.units))
###Output
_____no_output_____
###Markdown
Closing a netCDF file. It's **important** to close a netCDF file you opened for writing:- flushes buffers to make sure all data gets written- releases memory resources used by open netCDF files
###Code
# first print the Dataset object to see what we've got
print(ncfile)
# close the Dataset.
ncfile.close(); print('Dataset is closed!')
!ncdump -h ../data/new.nc
###Output
_____no_output_____
###Markdown
Advanced features. So far we've only exercised features associated with the old netCDF version 3 data model. netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer. Let's create a new file with `format='NETCDF4'` so we can try out some of these features.
###Code
ncfile = Dataset('../../data/new2.nc','w',format='NETCDF4')
print(ncfile)
###Output
_____no_output_____
###Markdown
Creating Groups. netCDF version 4 added support for organizing data in hierarchical groups.- analogous to directories in a filesystem. - Groups serve as containers for variables, dimensions and attributes, as well as other groups. - A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. - groups are created using the [`createGroup`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateGroup) method.- takes a single argument (a string, which is the name of the Group instance). This string is used as a key to access the group instances in the `groups` dictionary. Here we create two groups to hold data for two different model runs.
###Code
grp1 = ncfile.createGroup('model_run1')
grp2 = ncfile.createGroup('model_run2')
for grp in ncfile.groups.items():
print(grp)
###Output
_____no_output_____
###Markdown
Create some dimensions in the root group.
###Code
lat_dim = ncfile.createDimension('lat', 73) # latitude axis
lon_dim = ncfile.createDimension('lon', 144) # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
###Output
_____no_output_____
###Markdown
Now create a variable in grp1 and grp2. The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).- These variables are created with **zlib compression**, another nifty feature of netCDF 4. - The data are automatically compressed when data is written to the file, and uncompressed when the data is read. - This can really save disk space, especially when used in conjunction with the [**least_significant_digit**](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateVariable) keyword argument, which causes the data to be quantized (truncated) before compression. This makes the compression lossy, but more efficient.
###Code
temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)
temp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)
for grp in ncfile.groups.items(): # shows that each group now contains 1 variable
print(grp)
###Output
_____no_output_____
###Markdown
Creating a variable with a compound data type- Compound data types map directly to numpy structured (a.k.a 'record' arrays). - Structured arrays are akin to C structs, or derived types in Fortran. - They allow for the construction of table-like structures composed of combinations of other data types, including other compound types. - Might be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data. Here we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF). - The compound data type is created with the [`createCompoundType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateCompoundType) method.
###Code
# create complex128 numpy structured data type
complex128 = np.dtype([('real',np.float64),('imag',np.float64)])
# using this numpy dtype, create a netCDF compound data type object
# the string name can be used as a key to access the datatype from the cmptypes dictionary.
complex128_t = ncfile.createCompoundType(complex128,'complex128')
# create a variable with this data type, write some data to it.
cmplxvar = grp1.createVariable('cmplx_var',complex128_t,('time','lat','lon'))
# write some data to this variable
# first create some complex random data
nlats = len(lat_dim); nlons = len(lon_dim)
data_arr_cmplx = np.random.uniform(size=(nlats,nlons))+1.j*np.random.uniform(size=(nlats,nlons))
# write this complex data to a numpy complex128 structured array
data_arr = np.empty((nlats,nlons),complex128)
data_arr['real'] = data_arr_cmplx.real; data_arr['imag'] = data_arr_cmplx.imag
cmplxvar[0] = data_arr # write the data to the variable (appending to time dimension)
print(cmplxvar)
data_out = cmplxvar[0] # read one value of data back from variable
print(data_out.dtype, data_out.shape, data_out[0,0])
###Output
_____no_output_____
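###Markdown
Since the compound fields come back as a plain structured array, the complex values can be reassembled on the Python side; a small sketch (an addition to the workshop text).

```python
# Rebuild a genuine numpy complex array from the 'real' and 'imag' fields read back above.
data_back = data_out['real'] + 1j * data_out['imag']
print(data_back.dtype, data_back.shape)
```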
###Markdown
Creating a variable with a variable-length (vlen) data typenetCDF 4 has support for variable-length or "ragged" arrays. These are arrays of variable length sequences having the same type. - To create a variable-length data type, use the [`createVLType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.htmlcreateVLType) method.- The numpy datatype of the variable-length sequences and the name of the new datatype must be specified.
###Code
vlen_t = ncfile.createVLType(np.int64, 'phony_vlen')
###Output
_____no_output_____
###Markdown
A new variable can then be created using this datatype.
###Code
vlvar = grp2.createVariable('phony_vlen_var', vlen_t, ('time','lat','lon'))
###Output
_____no_output_____
###Markdown
Since there is no native vlen datatype in numpy, vlen arrays are represented in python as object arrays (arrays of dtype `object`). - These are arrays whose elements are Python object pointers, and can contain any type of python object. - For this application, they must contain 1-D numpy arrays all of the same type but of varying length. - Fill with 1-D random numpy int64 arrays of random length between 1 and 10.
###Code
vlen_data = np.empty((nlats,nlons),object)
for i in range(nlons):
for j in range(nlats):
size = np.random.randint(1,10,size=1) # random length of sequence
vlen_data[j,i] = np.random.randint(0,10,size=size).astype(vlen_t.dtype)# generate random sequence
vlvar[0] = vlen_data # append along unlimited dimension (time)
print(vlvar)
print('data =\n',vlvar[:])
###Output
_____no_output_____
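###Markdown
Reading a vlen variable returns an object array of 1-D arrays; a short sketch (not in the original notebook) confirming the ragged lengths.

```python
# Each element is a 1-D int64 array; np.vectorize(len) gives the length of every sequence.
seqs = vlvar[0]
lengths = np.vectorize(len)(seqs)
print(lengths.min(), lengths.max())  # should fall between 1 and 9
```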
###Markdown
Close the Dataset and examine the contents with ncdump.
###Code
ncfile.close()
!ncdump -h ../data/new2.nc
###Output
_____no_output_____ |
docs/source/examples/util/timescale.ipynb | ###Markdown
Geological Timescale: pyrolite includes a simple geological timescale, based on a recent version of the International Chronostratigraphic Chart [ICS]_. The :class:`~pyrolite.util.time.Timescale` class can be used to look up names for specific geological ages, to look up times for known geological age names and to access a reference table for all of these... [ICS] Cohen, K.M., Finney, S.C., Gibbard, P.L., Fan, J.-X., 2013. `The ICS International Chronostratigraphic Chart `__. Episodes 36, 199–204. First we'll create a timescale:
###Code
from pyrolite.util.time import Timescale
ts = Timescale()
###Output
_____no_output_____
###Markdown
From this we can look up the names of ages (in million years, or Ma):
###Code
ts.named_age(1212.1)
###Output
_____no_output_____
###Markdown
As geological age names are hierarchical, the name you give an age depends on what level you're looking at. By default, the timescale will return the most specific non-null level. The levels accessible within the timescale are listed as an attribute:
###Code
ts.levels
###Output
_____no_output_____
###Markdown
These can be used to refine the output names to your desired level of specificity (noting that for some ages, the levels which are accessible can differ; see the chart):
###Code
ts.named_age(1212.1, level="Epoch")
###Output
_____no_output_____
###Markdown
The timescale can also do the inverse for you, and return the timing information for agiven named age:
###Code
ts.text2age("Holocene")
###Output
_____no_output_____
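###Markdown
Assuming `text2age` returns the (start, end) pair of the interval in Ma (an assumption here, not stated explicitly above), the two lookups can be chained; a small sketch.

```python
# Hypothetical round-trip: the midpoint of an interval should map back to the same name.
start, end = ts.text2age("Jurassic")  # assumed to be (start, end) in Ma
midpoint = (start + end) / 2
print(ts.named_age(midpoint, level="Period"))  # expected: 'Jurassic'
```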
###Markdown
We can use this to create a simple template to visualise the geological timescale:
###Code
import pandas as pd
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, figsize=(5, 10))
for ix, level in enumerate(ts.levels):
ldf = ts.data.loc[ts.data.Level == level, :]
for pix, period in ldf.iterrows():
ax.bar(
ix,
period.Start - period.End,
facecolor=period.Color,
bottom=period.End,
width=1,
edgecolor="k",
)
ax.set_xticks(range(len(ts.levels)))
ax.set_xticklabels(ts.levels, rotation=60)
ax.xaxis.set_ticks_position("top")
ax.set_ylabel("Age (Ma)")
ax.invert_yaxis()
###Output
_____no_output_____
###Markdown
This doesn't quite look like the geological timescale you may be used to. We can improve on the output somewhat with a bit of customisation for the positioning. Notably, this is less readable, but produces something closer to what we're after. Some of this may soon be integrated as a :class:`~pyrolite.util.time.Timescale` method, if there's interest.
###Code
import numpy as np
from matplotlib.patches import Rectangle
# first let's set up some x-limits for the different timescale levels
xlims = {
"Eon": (0, 1),
"Era": (1, 2),
"Period": (2, 3),
"Superepoch": (3, 4),
"Epoch": (3, 5),
"Age": (5, 7),
}
fig, ax = plt.subplots(1, figsize=(4, 10))
for ix, level in enumerate(ts.levels[::-1]):
ldf = ts.data.loc[ts.data.Level == level, :]
for pix, period in ldf.iterrows():
left, right = xlims[level]
if ix != len(ts.levels) - 1:
time = np.mean(ts.text2age(period.Name))
general = None
_ix = ix
while general is None:
try:
general = ts.named_age(time, level=ts.levels[::-1][_ix + 1])
except:
pass
_ix += 1
_l, _r = xlims[ts.levels[::-1][_ix]]
if _r > left:
left = _r
rect = Rectangle(
(left, period.End),
right - left,
period.Start - period.End,
facecolor=period.Color,
edgecolor="k",
)
ax.add_artist(rect)
ax.set_xticks([np.mean(xlims[lvl]) for lvl in ts.levels])
ax.set_xticklabels(ts.levels, rotation=60)
ax.xaxis.set_ticks_position("top")
ax.set_xlim(0, 7)
ax.set_ylabel("Age (Ma)")
ax.set_ylim(500, 0)
###Output
_____no_output_____
###Markdown
Geological Timescale: pyrolite includes a simple geological timescale, based on a recent version of the International Chronostratigraphic Chart [ICS]_. The :class:`~pyrolite.util.time.Timescale` class can be used to look up names for specific geological ages, to look up times for known geological age names and to access a reference table for all of these... [ICS] Cohen, K.M., Finney, S.C., Gibbard, P.L., Fan, J.-X., 2013. `The ICS International Chronostratigraphic Chart `__. Episodes 36, 199–204. First we'll create a timescale:
###Code
from pyrolite.util.time import Timescale
ts = Timescale()
###Output
_____no_output_____
###Markdown
From this we can look up the names of ages (in million years, or Ma):
###Code
ts.named_age(1212.1)
###Output
_____no_output_____
###Markdown
As geological age names are hierarchical, the name you give an age depends on what level you're looking at. By default, the timescale will return the most specific non-null level. The levels accessible within the timescale are listed as an attribute:
###Code
ts.levels
###Output
_____no_output_____
###Markdown
These can be used to refine the output names to your desired level of specificity (noting that for some ages, the levels which are accessible can differ; see the chart):
###Code
ts.named_age(1212.1, level="Epoch")
###Output
_____no_output_____
###Markdown
The timescale can also do the inverse for you, and return the timing information for agiven named age:
###Code
ts.text2age("Holocene")
###Output
_____no_output_____
###Markdown
We can use this to create a simple template to visualise the geological timescale:
###Code
import pandas as pd
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, figsize=(5, 10))
for ix, level in enumerate(ts.levels):
ldf = ts.data.loc[ts.data.Level == level, :]
for pix, period in ldf.iterrows():
ax.bar(
ix,
period.Start - period.End,
facecolor=period.Color,
bottom=period.End,
width=1,
edgecolor="k",
)
ax.set_xticks(range(len(ts.levels)))
ax.set_xticklabels(ts.levels, rotation=60)
ax.xaxis.set_ticks_position("top")
ax.set_ylabel("Age (Ma)")
ax.invert_yaxis()
###Output
_____no_output_____
###Markdown
This doesn't quite look like the geological timescale you may be used to. We can improve on the output somewhat with a bit of customisation for the positioning. Notably, this is less readable, but produces something closer to what we're after. Some of this may soon be integrated as a :class:`~pyrolite.util.time.Timescale` method, if there's interest.
###Code
import numpy as np
from matplotlib.patches import Rectangle
# first let's set up some x-limits for the different timescale levels
xlims = {
"Eon": (0, 1),
"Era": (1, 2),
"Period": (2, 3),
"Superepoch": (3, 4),
"Epoch": (3, 5),
"Age": (5, 7),
}
fig, ax = plt.subplots(1, figsize=(4, 10))
for ix, level in enumerate(ts.levels[::-1]):
ldf = ts.data.loc[ts.data.Level == level, :]
for pix, period in ldf.iterrows():
left, right = xlims[level]
if ix != len(ts.levels) - 1:
time = np.mean(ts.text2age(period.Name))
general = None
_ix = ix
while general is None:
try:
general = ts.named_age(time, level=ts.levels[::-1][_ix + 1])
except:
pass
_ix += 1
_l, _r = xlims[ts.levels[::-1][_ix]]
if _r > left:
left = _r
rect = Rectangle(
(left, period.End),
right - left,
period.Start - period.End,
facecolor=period.Color,
edgecolor="k",
)
ax.add_artist(rect)
ax.set_xticks([np.mean(xlims[lvl]) for lvl in ts.levels])
ax.set_xticklabels(ts.levels, rotation=60)
ax.xaxis.set_ticks_position("top")
ax.set_xlim(0, 7)
ax.set_ylabel("Age (Ma)")
ax.set_ylim(500, 0)
###Output
_____no_output_____ |
docs/load-relevancy.ipynb | ###Markdown
Relevancy Analysis This tutorial is available as an IPython notebook at [Malaya/example/relevancy](https://github.com/huseinzol05/Malaya/tree/master/example/relevancy). This module is only trained on standard language structure, so it is not safe to use it for local language structure.
###Code
%%time
import malaya
###Output
CPU times: user 6.06 s, sys: 1.31 s, total: 7.37 s
Wall time: 8.62 s
###Markdown
Explanation. Positive relevancy: the article or piece of text is relevant, so it is unlikely to be fake news; it can carry either positive or negative sentiment. Negative relevancy: the article or piece of text is not relevant, so it is likely to be fake news; it can also carry either positive or negative sentiment. **Right now the relevancy module only supports deep learning models**.
###Code
negative_text = 'Roti Massimo Mengandungi DNA Babi. Roti produk Massimo keluaran Syarikat The Italian Baker mengandungi DNA babi. Para pengguna dinasihatkan supaya tidak memakan produk massimo. Terdapat pelbagai produk roti keluaran syarikat lain yang boleh dimakan dan halal. Mari kita sebarkan berita ini supaya semua rakyat Malaysia sedar dengan apa yang mereka makna setiap hari. Roti tidak halal ada DNA babi jangan makan ok.'
positive_text = 'Jabatan Kemajuan Islam Malaysia memperjelaskan dakwaan sebuah mesej yang dikitar semula, yang mendakwa kononnya kod E dikaitkan dengan kandungan lemak babi sepertimana yang tular di media sosial. . Tular: November 2017 . Tular: Mei 2014 JAKIM ingin memaklumkan kepada masyarakat berhubung maklumat yang telah disebarkan secara meluas khasnya melalui media sosial berhubung kod E yang dikaitkan mempunyai lemak babi. Untuk makluman, KOD E ialah kod untuk bahan tambah (aditif) dan ianya selalu digunakan pada label makanan di negara Kesatuan Eropah. Menurut JAKIM, tidak semua nombor E yang digunakan untuk membuat sesuatu produk makanan berasaskan dari sumber yang haram. Sehubungan itu, sekiranya sesuatu produk merupakan produk tempatan dan mendapat sijil Pengesahan Halal Malaysia, maka ia boleh digunakan tanpa was-was sekalipun mempunyai kod E-kod. Tetapi sekiranya produk tersebut bukan produk tempatan serta tidak mendapat sijil pengesahan halal Malaysia walaupun menggunakan e-kod yang sama, pengguna dinasihatkan agar berhati-hati dalam memilih produk tersebut.'
###Output
_____no_output_____
###Markdown
List available Transformer models
###Code
malaya.relevancy.available_transformer()
###Output
INFO:root:tested on 20% test set.
###Markdown
Relevancy Analysis This tutorial is available as an IPython notebook at [Malaya/example/relevancy](https://github.com/huseinzol05/Malaya/tree/master/example/relevancy). This module is only trained on standard language structure, so it is not safe to use it for local language structure.
###Code
%%time
import malaya
###Output
CPU times: user 6.09 s, sys: 1.18 s, total: 7.27 s
Wall time: 8.62 s
###Markdown
Models accuracyWe use `sklearn.metrics.classification_report` for accuracy reporting, check at https://malaya.readthedocs.io/en/latest/models-accuracy.htmlrelevancy-analysis labels supportedDefault labels for relevancy module.
###Code
malaya.relevancy.label
###Output
_____no_output_____
###Markdown
Explanation. Positive relevancy: the article or piece of text is relevant, so it is unlikely to be fake news; it can carry either positive or negative sentiment. Negative relevancy: the article or piece of text is not relevant, so it is likely to be fake news; it can also carry either positive or negative sentiment. **Right now the relevancy module only supports deep learning models**.
###Code
negative_text = 'Roti Massimo Mengandungi DNA Babi. Roti produk Massimo keluaran Syarikat The Italian Baker mengandungi DNA babi. Para pengguna dinasihatkan supaya tidak memakan produk massimo. Terdapat pelbagai produk roti keluaran syarikat lain yang boleh dimakan dan halal. Mari kita sebarkan berita ini supaya semua rakyat Malaysia sedar dengan apa yang mereka makna setiap hari. Roti tidak halal ada DNA babi jangan makan ok.'
positive_text = 'Jabatan Kemajuan Islam Malaysia memperjelaskan dakwaan sebuah mesej yang dikitar semula, yang mendakwa kononnya kod E dikaitkan dengan kandungan lemak babi sepertimana yang tular di media sosial. . Tular: November 2017 . Tular: Mei 2014 JAKIM ingin memaklumkan kepada masyarakat berhubung maklumat yang telah disebarkan secara meluas khasnya melalui media sosial berhubung kod E yang dikaitkan mempunyai lemak babi. Untuk makluman, KOD E ialah kod untuk bahan tambah (aditif) dan ianya selalu digunakan pada label makanan di negara Kesatuan Eropah. Menurut JAKIM, tidak semua nombor E yang digunakan untuk membuat sesuatu produk makanan berasaskan dari sumber yang haram. Sehubungan itu, sekiranya sesuatu produk merupakan produk tempatan dan mendapat sijil Pengesahan Halal Malaysia, maka ia boleh digunakan tanpa was-was sekalipun mempunyai kod E-kod. Tetapi sekiranya produk tersebut bukan produk tempatan serta tidak mendapat sijil pengesahan halal Malaysia walaupun menggunakan e-kod yang sama, pengguna dinasihatkan agar berhati-hati dalam memilih produk tersebut.'
###Output
_____no_output_____
###Markdown
List available Transformer models
###Code
malaya.relevancy.available_transformer()
###Output
_____no_output_____
###Markdown
Load Transformer model```pythondef transformer(model: str = 'xlnet', quantized: bool = False, **kwargs): """ Load Transformer relevancy model. Parameters ---------- model : str, optional (default='bert') Model architecture supported. Allowed values: * ``'bert'`` - Google BERT BASE parameters. * ``'tiny-bert'`` - Google BERT TINY parameters. * ``'albert'`` - Google ALBERT BASE parameters. * ``'tiny-albert'`` - Google ALBERT TINY parameters. * ``'xlnet'`` - Google XLNET BASE parameters. * ``'alxlnet'`` - Malaya ALXLNET BASE parameters. * ``'bigbird'`` - Google BigBird BASE parameters. * ``'tiny-bigbird'`` - Malaya BigBird BASE parameters. * ``'fastformer'`` - FastFormer BASE parameters. * ``'tiny-fastformer'`` - FastFormer TINY parameters. quantized : bool, optional (default=False) if True, will load 8-bit quantized model. Quantized model not necessary faster, totally depends on the machine. Returns ------- result: model List of model classes: * if `bert` in model, will return `malaya.model.bert.MulticlassBERT`. * if `xlnet` in model, will return `malaya.model.xlnet.MulticlassXLNET`. * if `bigbird` in model, will return `malaya.model.xlnet.MulticlassBigBird`. * if `fastformer` in model, will return `malaya.model.fastformer.MulticlassFastFormer`. """```
###Code
model = malaya.relevancy.transformer(model = 'tiny-bigbird')
###Output
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/function/__init__.py:112: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
###Markdown
Load Quantized modelTo load 8-bit quantized model, simply pass `quantized = True`, default is `False`.We can expect slightly accuracy drop from quantized model, and not necessary faster than normal 32-bit float model, totally depends on machine.
###Code
quantized_model = malaya.relevancy.transformer(model = 'alxlnet', quantized = True)
###Output
_____no_output_____
###Markdown
Predict batch of strings```pythondef predict(self, strings: List[str]): """ classify list of strings. Parameters ---------- strings: List[str] Returns ------- result: List[str] """```
###Code
%%time
model.predict([negative_text, positive_text])
%%time
quantized_model.predict([negative_text, positive_text])
###Output
CPU times: user 5.08 s, sys: 823 ms, total: 5.91 s
Wall time: 2.96 s
###Markdown
Predict batch of strings with probability```pythondef predict_proba(self, strings: List[str]): """ classify list of strings and return probability. Parameters ---------- strings : List[str] Returns ------- result: List[dict[str, float]] """```
###Code
%%time
model.predict_proba([negative_text, positive_text])
%%time
quantized_model.predict_proba([negative_text, positive_text])
###Output
CPU times: user 2.98 s, sys: 386 ms, total: 3.37 s
Wall time: 583 ms
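###Markdown
Given the `List[dict[str, float]]` return type described in the docstring above, hard labels can be recovered by taking the highest-probability key; a minimal sketch.

```python
# Pick the label with the highest probability for each input string.
probs = quantized_model.predict_proba([negative_text, positive_text])
labels = [max(p, key=p.get) for p in probs]
print(labels)
```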
###Markdown
Open relevancy visualization dashboardDefault when you call `predict_words` it will open a browser with visualization dashboard, you can disable by `visualization=False`.```pythondef predict_words( self, string: str, method: str = 'last', bins_size: float = 0.05, visualization: bool = True,): """ classify words. Parameters ---------- string : str method : str, optional (default='last') Attention layer supported. Allowed values: * ``'last'`` - attention from last layer. * ``'first'`` - attention from first layer. * ``'mean'`` - average attentions from all layers. bins_size: float, optional (default=0.05) default bins size for word distribution histogram. visualization: bool, optional (default=True) If True, it will open the visualization dashboard. Returns ------- dictionary: results """```**This method not available for BigBird models**.
###Code
quantized_model.predict_words(negative_text)
###Output
_____no_output_____
###Markdown
VectorizeLet say you want to visualize sentence / word level in lower dimension, you can use `model.vectorize`,```pythondef vectorize(self, strings: List[str], method: str = 'first'): """ vectorize list of strings. Parameters ---------- strings: List[str] method : str, optional (default='first') Vectorization layer supported. Allowed values: * ``'last'`` - vector from last sequence. * ``'first'`` - vector from first sequence. * ``'mean'`` - average vectors from all sequences. * ``'word'`` - average vectors based on tokens. Returns ------- result: np.array """``` Sentence level
###Code
texts = [negative_text, positive_text]
r = model.vectorize(texts, method = 'first')
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(r)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = texts
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
###Output
_____no_output_____
###Markdown
Word level
###Code
r = quantized_model.vectorize(texts, method = 'word')
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
###Output
_____no_output_____
###Markdown
Pretty good, the model is able to identify the bottom-left cluster as positive relevancy. Stacking models: more information is available at [https://malaya.readthedocs.io/en/latest/Stack.html](https://malaya.readthedocs.io/en/latest/Stack.html)
###Code
albert = malaya.relevancy.transformer(model = 'albert')
malaya.stack.predict_stack([albert, model], [positive_text, negative_text])
###Output
_____no_output_____
###Markdown
Make sure you can check accuracy chart from here first before select a model, https://malaya.readthedocs.io/en/latest/Accuracy.htmlrelevancy**You might want to use Alxlnet, a very small size, 46.8MB, but the accuracy is still on the top notch.** Load Transformer model```pythondef transformer(model: str = 'xlnet', quantized: bool = False, **kwargs): """ Load Transformer relevancy model. Parameters ---------- model : str, optional (default='bert') Model architecture supported. Allowed values: * ``'bert'`` - Google BERT BASE parameters. * ``'tiny-bert'`` - Google BERT TINY parameters. * ``'albert'`` - Google ALBERT BASE parameters. * ``'tiny-albert'`` - Google ALBERT TINY parameters. * ``'xlnet'`` - Google XLNET BASE parameters. * ``'alxlnet'`` - Malaya ALXLNET BASE parameters. * ``'bigbird'`` - Google BigBird BASE parameters. * ``'tiny-bigbird'`` - Malaya BigBird BASE parameters. * ``'fastformer'`` - FastFormer BASE parameters. * ``'tiny-fastformer'`` - FastFormer TINY parameters. quantized : bool, optional (default=False) if True, will load 8-bit quantized model. Quantized model not necessary faster, totally depends on the machine. Returns ------- result: model List of model classes: * if `bert` in model, will return `malaya.model.bert.MulticlassBERT`. * if `xlnet` in model, will return `malaya.model.xlnet.MulticlassXLNET`. * if `bigbird` in model, will return `malaya.model.xlnet.MulticlassBigBird`. * if `fastformer` in model, will return `malaya.model.fastformer.MulticlassFastFormer`. """```
###Code
model = malaya.relevancy.transformer(model = 'tiny-bigbird')
###Output
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/function/__init__.py:112: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
###Markdown
Load Quantized modelTo load 8-bit quantized model, simply pass `quantized = True`, default is `False`.We can expect slightly accuracy drop from quantized model, and not necessary faster than normal 32-bit float model, totally depends on machine.
###Code
quantized_model = malaya.relevancy.transformer(model = 'alxlnet', quantized = True)
###Output
WARNING:root:Load quantized model will cause accuracy drop.
###Markdown
Predict batch of strings```pythondef predict(self, strings: List[str]): """ classify list of strings. Parameters ---------- strings: List[str] Returns ------- result: List[str] """```
###Code
%%time
model.predict([negative_text, positive_text])
%%time
quantized_model.predict([negative_text, positive_text])
###Output
CPU times: user 5.08 s, sys: 823 ms, total: 5.91 s
Wall time: 2.96 s
###Markdown
Predict batch of strings with probability```pythondef predict_proba(self, strings: List[str]): """ classify list of strings and return probability. Parameters ---------- strings : List[str] Returns ------- result: List[dict[str, float]] """```
###Code
%%time
model.predict_proba([negative_text, positive_text])
%%time
quantized_model.predict_proba([negative_text, positive_text])
###Output
CPU times: user 2.98 s, sys: 386 ms, total: 3.37 s
Wall time: 583 ms
###Markdown
Open relevancy visualization dashboard```pythondef predict_words( self, string: str, method: str = 'last', visualization: bool = True): """ classify words. Parameters ---------- string : str method : str, optional (default='last') Attention layer supported. Allowed values: * ``'last'`` - attention from last layer. * ``'first'`` - attention from first layer. * ``'mean'`` - average attentions from all layers. visualization: bool, optional (default=True) If True, it will open the visualization dashboard. Returns ------- result: dict """```Default when you call `predict_words` it will open a browser with visualization dashboard, you can disable by `visualization=False`.**This method not available for BigBird models**.
###Code
model.predict_words(negative_text)
quantized_model.predict_words(negative_text)
from IPython.core.display import Image, display
display(Image('relevancy-dashboard.png', width=800))
###Output
_____no_output_____
###Markdown
VectorizeLet say you want to visualize sentence / word level in lower dimension, you can use `model.vectorize`,```pythondef vectorize(self, strings: List[str], method: str = 'first'): """ vectorize list of strings. Parameters ---------- strings: List[str] method : str, optional (default='first') Vectorization layer supported. Allowed values: * ``'last'`` - vector from last sequence. * ``'first'`` - vector from first sequence. * ``'mean'`` - average vectors from all sequences. * ``'word'`` - average vectors based on tokens. Returns ------- result: np.array """``` Sentence level
###Code
texts = [negative_text, positive_text]
r = model.vectorize(texts, method = 'first')
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(r)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = texts
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
###Output
_____no_output_____
###Markdown
Word level
###Code
r = quantized_model.vectorize(texts, method = 'word')
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
###Output
_____no_output_____
###Markdown
Pretty good, the model able to know cluster bottom left as positive relevancy. Stacking modelsMore information, you can read at [https://malaya.readthedocs.io/en/latest/Stack.html](https://malaya.readthedocs.io/en/latest/Stack.html)
###Code
albert = malaya.relevancy.transformer(model = 'albert')
malaya.stack.predict_stack([albert, model], [positive_text, negative_text])
###Output
_____no_output_____
###Markdown
Relevancy Analysis This tutorial is available as an IPython notebook at [Malaya/example/relevancy](https://github.com/huseinzol05/Malaya/tree/master/example/relevancy). This module is only trained on standard language structure, so it is not safe to use it for local language structure.
###Code
%%time
import malaya
###Output
CPU times: user 5.51 s, sys: 1.08 s, total: 6.59 s
Wall time: 7.61 s
###Markdown
Explanation. Positive relevancy: the article or piece of text is relevant, so it is unlikely to be fake news; it can carry either positive or negative sentiment. Negative relevancy: the article or piece of text is not relevant, so it is likely to be fake news; it can also carry either positive or negative sentiment. Right now the relevancy module only supports deep learning models.
###Code
negative_text = 'Roti Massimo Mengandungi DNA Babi. Roti produk Massimo keluaran Syarikat The Italian Baker mengandungi DNA babi. Para pengguna dinasihatkan supaya tidak memakan produk massimo. Terdapat pelbagai produk roti keluaran syarikat lain yang boleh dimakan dan halal. Mari kita sebarkan berita ini supaya semua rakyat Malaysia sedar dengan apa yang mereka makna setiap hari. Roti tidak halal ada DNA babi jangan makan ok.'
positive_text = 'Jabatan Kemajuan Islam Malaysia memperjelaskan dakwaan sebuah mesej yang dikitar semula, yang mendakwa kononnya kod E dikaitkan dengan kandungan lemak babi sepertimana yang tular di media sosial. . Tular: November 2017 . Tular: Mei 2014 JAKIM ingin memaklumkan kepada masyarakat berhubung maklumat yang telah disebarkan secara meluas khasnya melalui media sosial berhubung kod E yang dikaitkan mempunyai lemak babi. Untuk makluman, KOD E ialah kod untuk bahan tambah (aditif) dan ianya selalu digunakan pada label makanan di negara Kesatuan Eropah. Menurut JAKIM, tidak semua nombor E yang digunakan untuk membuat sesuatu produk makanan berasaskan dari sumber yang haram. Sehubungan itu, sekiranya sesuatu produk merupakan produk tempatan dan mendapat sijil Pengesahan Halal Malaysia, maka ia boleh digunakan tanpa was-was sekalipun mempunyai kod E-kod. Tetapi sekiranya produk tersebut bukan produk tempatan serta tidak mendapat sijil pengesahan halal Malaysia walaupun menggunakan e-kod yang sama, pengguna dinasihatkan agar berhati-hati dalam memilih produk tersebut.'
###Output
_____no_output_____
###Markdown
List available Transformer models
###Code
malaya.relevancy.available_transformer()
###Output
INFO:root:tested on 20% test set.
###Markdown
Make sure you can check accuracy chart from here first before select a model, https://malaya.readthedocs.io/en/latest/Accuracy.htmlrelevancy**You might want to use Alxlnet, a very small size, 46.8MB, but the accuracy is still on the top notch.** Load ALXLNET modelAll model interface will follow sklearn interface started v3.4,```pythonmodel.predict(List[str])model.predict_proba(List[str])```
###Code
model = malaya.relevancy.transformer(model = 'alxlnet')
###Output
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/function/__init__.py:74: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
###Markdown
Load Quantized modelTo load 8-bit quantized model, simply pass `quantized = True`, default is `False`.We can expect slightly accuracy drop from quantized model, and not necessary faster than normal 32-bit float model, totally depends on machine.
###Code
quantized_model = malaya.relevancy.transformer(model = 'alxlnet', quantized = True)
###Output
WARNING:root:Load quantized model will cause accuracy drop.
###Markdown
Predict batch of strings
###Code
%%time
model.predict_proba([negative_text, positive_text])
%%time
quantized_model.predict_proba([negative_text, positive_text])
###Output
CPU times: user 5.06 s, sys: 796 ms, total: 5.85 s
Wall time: 3.07 s
###Markdown
Open relevancy visualization dashboardDefault when you call `predict_words` it will open a browser with visualization dashboard, you can disable by `visualization=False`.
###Code
model.predict_words(negative_text)
from IPython.core.display import Image, display
display(Image('relevancy-dashboard.png', width=800))
###Output
_____no_output_____
###Markdown
VectorizeLet say you want to visualize sentence / word level in lower dimension, you can use `model.vectorize`,```pythondef vectorize(self, strings: List[str], method: str = 'first'): """ vectorize list of strings. Parameters ---------- strings: List[str] method : str, optional (default='first') Vectorization layer supported. Allowed values: * ``'last'`` - vector from last sequence. * ``'first'`` - vector from first sequence. * ``'mean'`` - average vectors from all sequences. * ``'word'`` - average vectors based on tokens. Returns ------- result: np.array """``` Sentence level
###Code
texts = [negative_text, positive_text]
r = quantized_model.vectorize(texts, method = 'first')
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(r)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = texts
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
###Output
_____no_output_____
###Markdown
Word level
###Code
r = quantized_model.vectorize(texts, method = 'word')
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
###Output
_____no_output_____
###Markdown
Pretty good, the model is able to identify the bottom-left cluster as positive relevancy. Stacking models: more information is available at [https://malaya.readthedocs.io/en/latest/Stack.html](https://malaya.readthedocs.io/en/latest/Stack.html)
###Code
albert = malaya.relevancy.transformer(model = 'albert')
malaya.stack.predict_stack([albert, model], [positive_text, negative_text])
###Output
_____no_output_____ |
Jena_climate.ipynb | ###Markdown
Normalize the data to (0, 1)
###Code
sc = MinMaxScaler(feature_range = (0, 1))
dt_nor = sc.fit_transform(dt)
dt_nor
###Output
_____no_output_____
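###Markdown
The scaling is invertible, which is what allows the predictions further below to be mapped back to the original units with `sc.inverse_transform`; a quick sanity-check sketch, assuming `dt` is the numeric series loaded earlier.

```python
import numpy as np

# Round-tripping through the scaler should reproduce the original values up to float error.
roundtrip = sc.inverse_transform(dt_nor)
print(np.allclose(roundtrip, np.asarray(dt)))
```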
###Markdown
Find the best width
###Code
# take three days of data to predict one day
timestep
# size of training data - 500 days
training_num = 12000
epoch = 10
batch_size = 200
def width(timestep,model_kind):
xTrainSet = dt_nor[:training_num]
yTrainSet = dt_nor[1:training_num+1]
xTrain = []
for i in range(timestep, training_num):
xTrain.append(xTrainSet[i-timestep : i])
xTrain = np.array(xTrain)
xTrain = np.reshape(xTrain, (xTrain.shape[0], xTrain.shape[1], 1))
yTrain = []
for i in range(timestep, training_num):
yTrain.append(yTrainSet[i])
yTrain = np.array(yTrain)
if model_kind == 'model_rnn':
model = Sequential()
model.add(LSTM(128, return_sequences = True, input_shape = (xTrain.shape[1],1)))
model.add(GRU(64))
model.add(Dense(1))
if model_kind == 'model_dense':
model = Sequential()
model.add(Input(shape = (xTrain.shape[1])))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
if model_kind == 'model_cnn':
conv_width = 3
model = Sequential()
model.add(Conv1D(64, kernel_size=(conv_width), input_shape = (xTrain.shape[1],1), activation='relu'))
# model.add(MaxPooling1D(pool_size=(8)))
model.add(Conv1D(32, kernel_size=(conv_width), activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1))
model.compile(optimizer = 'adam',
loss = 'mean_squared_error',
metrics = [tf.metrics.MeanAbsoluteError()])
model.fit(x = xTrain, y = yTrain, epochs = epoch, batch_size = batch_size, verbose=0)
xTestSet = dt_nor[training_num : 40800-2]
xTestSet = np.array(xTestSet)
yTestSet = dt_nor[training_num+1 : 40800-1]
yTestSet = np.array(yTestSet)
xTest = []
for i in range(timestep, len(xTestSet)):
xTest.append(xTestSet[i-timestep : i])
xTest = np.array(xTest)
yTest = []
for i in range(timestep, len(xTestSet)):
yTest.append(yTestSet[i])
yTest = np.array(yTest)
yTest = sc.inverse_transform(yTest)
yPredictes = model.predict(x=xTest)
yPredictes = sc.inverse_transform(yPredictes)
r2 = r2_score(yTest, yPredictes)
return r2
rnn_width_dict = {}
for step in range(5,51):
rnn_width_dict[step] = width(step,'model_rnn')
print(step,end="-")
if step%10 == 0:
print()
rnn_width = max(rnn_width_dict,key=rnn_width_dict.get)
rnn_width
dense_width_dict = {}
for step in range(5,51):
dense_width_dict[step] = width(step,'model_dense')
print(step,end="-")
if step%10 == 0:
print()
dense_width = max(dense_width_dict,key=dense_width_dict.get)
dense_width
cnn_width_dict = {}
for step in range(5,51):
cnn_width_dict[step] = width(step,'model_cnn')
print(step,end="-")
if step%10 == 0:
print()
cnn_width = max(cnn_width_dict,key=cnn_width_dict.get)
cnn_width
###Output
5-6-7-8-9-10-
11-12-13-14-15-16-17-18-19-20-
21-22-23-24-25-26-27-28-29-30-
31-32-33-34-35-36-37-38-39-40-
41-42-43-44-45-46-47-48-49-50-
###Markdown
Rnn
###Code
timestep = rnn_width
xTrainSet = dt_nor[:training_num]
yTrainSet = dt_nor[1:training_num+1]
xTrain = []
for i in range(timestep, training_num):
xTrain.append(xTrainSet[i-timestep : i])
xTrain = np.array(xTrain)
xTrain = np.reshape(xTrain, (xTrain.shape[0], xTrain.shape[1], 1))
print(xTrain.shape)
yTrain = []
for i in range(timestep, training_num):
yTrain.append(yTrainSet[i])
yTrain = np.array(yTrain)
print(yTrain.shape)
model_rnn = Sequential()
model_rnn.add(LSTM(128, return_sequences = True, input_shape = (xTrain.shape[1],1)))
model_rnn.add(GRU(64))
model_rnn.add(Dense(1))
model_rnn.summary()
model_rnn.compile(optimizer = 'adam',
loss = 'mean_squared_error',
metrics = [tf.metrics.MeanAbsoluteError()])
model_rnn.fit(x = xTrain, y = yTrain, epochs = epoch, batch_size = batch_size)
###Output
Epoch 1/10
60/60 [==============================] - 3s 10ms/step - loss: 0.0232 - mean_absolute_error: 0.1070
Epoch 2/10
60/60 [==============================] - 0s 8ms/step - loss: 0.0031 - mean_absolute_error: 0.0433
Epoch 3/10
60/60 [==============================] - 0s 7ms/step - loss: 0.0027 - mean_absolute_error: 0.0398
Epoch 4/10
60/60 [==============================] - 0s 8ms/step - loss: 0.0022 - mean_absolute_error: 0.0360
Epoch 5/10
60/60 [==============================] - 0s 7ms/step - loss: 0.0011 - mean_absolute_error: 0.0257
Epoch 6/10
60/60 [==============================] - 0s 8ms/step - loss: 8.8987e-04 - mean_absolute_error: 0.0224
Epoch 7/10
60/60 [==============================] - 0s 7ms/step - loss: 8.7927e-04 - mean_absolute_error: 0.0224
Epoch 8/10
60/60 [==============================] - 0s 7ms/step - loss: 8.2940e-04 - mean_absolute_error: 0.0215
Epoch 9/10
60/60 [==============================] - 0s 7ms/step - loss: 8.3635e-04 - mean_absolute_error: 0.0217
Epoch 10/10
60/60 [==============================] - 0s 7ms/step - loss: 8.0309e-04 - mean_absolute_error: 0.0211
###Markdown
Test model's accuracy by r2_score (1200 days)
###Code
xTestSet = dt_nor[training_num : 40800-2]
xTestSet = np.array(xTestSet)
yTestSet = dt_nor[training_num+1 : 40800-1]
yTestSet = np.array(yTestSet)
xTest = []
for i in range(timestep, len(xTestSet)):
xTest.append(xTestSet[i-timestep : i])
xTest = np.array(xTest)
print(len(xTest))
yTest = []
for i in range(timestep, len(xTestSet)):
yTest.append(yTestSet[i])
yTest = np.array(yTest)
yTest = sc.inverse_transform(yTest)
len(yTest)
yPredictes = model_rnn.predict(x=xTest)
yPredictes = sc.inverse_transform(yPredictes)
yPredictes
r2_value = {}
plt.plot(yTest, 'c-', label='Real')
plt.plot(yPredictes, 'm-', label='Predict')
# plt.plot(data_original, color='red', label='Real')
# plt.plot(range(len(y_train)),yPredicts, color='blue', label='Predict')
plt.title(label='Prediction')
plt.xlabel(xlabel='Time')
plt.ylabel(ylabel='T')
plt.legend()
plt.show()
r2 = r2_score(yTest, yPredictes)
r2_value['RNN'] = r2
print(r2)
###Output
_____no_output_____
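###Markdown
A brief aside (not part of the original notebook): `r2_score` computes the coefficient of determination, $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$. A value of 1 means a perfect fit, 0 means the model does no better than always predicting the mean of the test targets, and negative values mean it does worse than that baseline, which is why it is a convenient single number for comparing the RNN, Dense and CNN models below.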
###Markdown
Dense
###Code
timestep = dense_width
xTrainSet = dt_nor[:training_num]
yTrainSet = dt_nor[1:training_num+1]
xTrain = []
for i in range(timestep, training_num):
xTrain.append(xTrainSet[i-timestep : i])
xTrain = np.array(xTrain)
#xTrain = np.squeeze(xTrain)
xTrain = np.reshape(xTrain, (xTrain.shape[0], xTrain.shape[1], 1))
print(xTrain.shape)
yTrain = []
for i in range(timestep, training_num):
yTrain.append(yTrainSet[i])
yTrain = np.array(yTrain)
#yTrain = np.reshape(yTrain, (yTrain.shape[0], 1))
print(yTrain.shape)
model_dense = Sequential()
model_dense.add(Input(shape = (xTrain.shape[1])))
model_dense.add(Flatten())
model_dense.add(Dense(128, activation='relu'))
model_dense.add(Dense(64, activation='relu'))
model_dense.add(Dense(1))
model_dense.summary()
model_dense.compile(optimizer = 'adam',
loss = 'mean_squared_error',
metrics = [tf.metrics.MeanAbsoluteError()])
model_dense.fit(x = xTrain, y = yTrain, epochs = epoch, batch_size = batch_size)
###Output
Epoch 1/10
60/60 [==============================] - 0s 2ms/step - loss: 0.0367 - mean_absolute_error: 0.1341
Epoch 2/10
60/60 [==============================] - 0s 2ms/step - loss: 0.0011 - mean_absolute_error: 0.0255
Epoch 3/10
60/60 [==============================] - 0s 2ms/step - loss: 8.4854e-04 - mean_absolute_error: 0.0222
Epoch 4/10
60/60 [==============================] - 0s 2ms/step - loss: 7.3950e-04 - mean_absolute_error: 0.0204
Epoch 5/10
60/60 [==============================] - 0s 2ms/step - loss: 6.8657e-04 - mean_absolute_error: 0.0195
Epoch 6/10
60/60 [==============================] - 0s 2ms/step - loss: 6.4759e-04 - mean_absolute_error: 0.0188
Epoch 7/10
60/60 [==============================] - 0s 2ms/step - loss: 5.9562e-04 - mean_absolute_error: 0.0181
Epoch 8/10
60/60 [==============================] - 0s 2ms/step - loss: 5.7174e-04 - mean_absolute_error: 0.0177
Epoch 9/10
60/60 [==============================] - 0s 2ms/step - loss: 5.9664e-04 - mean_absolute_error: 0.0181
Epoch 10/10
60/60 [==============================] - 0s 2ms/step - loss: 5.8784e-04 - mean_absolute_error: 0.0180
###Markdown
Test model's accuracy by r2_score (1200 days)
###Code
xTestSet = dt_nor[training_num : 40800-2]
xTestSet = np.array(xTestSet)
yTestSet = dt_nor[training_num+1 : 40800-1]
yTestSet = np.array(yTestSet)
xTest = []
for i in range(timestep, len(xTestSet)):
xTest.append(xTestSet[i-timestep : i])
xTest = np.array(xTest)
#xTest = np.squeeze(xTest)
yTest = []
for i in range(timestep, len(xTestSet)):
yTest.append(yTestSet[i])
yTest = np.array(yTest)
yTest = sc.inverse_transform(yTest)
len(xTest)
yTest.shape
yPredictes = model_dense.predict(x=xTest)
yPredictes = sc.inverse_transform(yPredictes)
yPredictes
plt.plot(yTest, 'c-', label='Real')
plt.plot(yPredictes, 'm-', label='Predict')
# plt.plot(data_original, color='red', label='Real')
# plt.plot(range(len(y_train)),yPredicts, color='blue', label='Predict')
plt.title(label='Prediction')
plt.xlabel(xlabel='Time')
plt.ylabel(ylabel='T')
plt.legend()
plt.show()
r2 = r2_score(yTest, yPredictes)
r2_value['Dense'] = r2
print(r2)
###Output
_____no_output_____
###Markdown
Cnn
###Code
timestep = cnn_width
xTrainSet = dt_nor[:training_num]
yTrainSet = dt_nor[1:training_num+1]
xTrain = []
for i in range(timestep, training_num):
xTrain.append(xTrainSet[i-timestep : i])
xTrain = np.array(xTrain)
# xTrain = np.squeeze(xTrain)
xTrain = np.reshape(xTrain, (xTrain.shape[0], xTrain.shape[1], 1))
print(xTrain.shape)
yTrain = []
for i in range(timestep, training_num):
yTrain.append(yTrainSet[i])
yTrain = np.array(yTrain)
#yTrain = np.reshape(yTrain, (yTrain.shape[0], 1))
print(yTrain.shape)
conv_width = 3
model_cnn = Sequential()
model_cnn.add(Conv1D(64, kernel_size=(conv_width), input_shape = (xTrain.shape[1],1), activation='relu'))
model_cnn.add(Conv1D(32, kernel_size=(conv_width), activation='relu'))
model_cnn.add(Flatten())
model_cnn.add(Dense(32, activation='relu'))
model_cnn.add(Dense(1))
model_cnn.summary()
model_cnn.compile(optimizer = 'adam',
loss = 'mean_squared_error',
metrics = [tf.metrics.MeanAbsoluteError()])
model_cnn.fit(x = xTrain, y = yTrain, epochs = epoch, batch_size = batch_size)
###Output
Epoch 1/10
60/60 [==============================] - 1s 3ms/step - loss: 0.0913 - mean_absolute_error: 0.2289
Epoch 2/10
60/60 [==============================] - 0s 3ms/step - loss: 0.0034 - mean_absolute_error: 0.0453
Epoch 3/10
60/60 [==============================] - 0s 2ms/step - loss: 0.0016 - mean_absolute_error: 0.0305
Epoch 4/10
60/60 [==============================] - 0s 2ms/step - loss: 0.0012 - mean_absolute_error: 0.0267
Epoch 5/10
60/60 [==============================] - 0s 2ms/step - loss: 0.0010 - mean_absolute_error: 0.0244
Epoch 6/10
60/60 [==============================] - 0s 2ms/step - loss: 9.7746e-04 - mean_absolute_error: 0.0235
Epoch 7/10
60/60 [==============================] - 0s 2ms/step - loss: 8.5505e-04 - mean_absolute_error: 0.0219
Epoch 8/10
60/60 [==============================] - 0s 3ms/step - loss: 7.4004e-04 - mean_absolute_error: 0.0205
Epoch 9/10
60/60 [==============================] - 0s 2ms/step - loss: 6.1827e-04 - mean_absolute_error: 0.0185
Epoch 10/10
60/60 [==============================] - 0s 2ms/step - loss: 5.8741e-04 - mean_absolute_error: 0.0180
###Markdown
Test model's accuracy by r2_score (1200 days)
###Code
xTestSet = dt_nor[training_num : 40800-2]
xTestSet = np.array(xTestSet)
yTestSet = dt_nor[training_num+1 : 40800-1]
yTestSet = np.array(yTestSet)
xTest = []
for i in range(timestep, len(xTestSet)):
xTest.append(xTestSet[i-timestep : i])
xTest = np.array(xTest)
# xTest = np.squeeze(xTest)
yTest = []
for i in range(timestep, len(xTestSet)):
yTest.append(yTestSet[i])
yTest = np.array(yTest)
yTest = sc.inverse_transform(yTest)
len(xTest)
yPredictes = model_cnn.predict(x=xTest)
yPredictes = sc.inverse_transform(yPredictes)
yPredictes.shape
plt.plot(yTest, 'c-', label='Real')
plt.plot(yPredictes, 'm-', label='Predict')
# plt.plot(data_original, color='red', label='Real')
# plt.plot(range(len(y_train)),yPredicts, color='blue', label='Predict')
plt.title(label='Prediction')
plt.xlabel(xlabel='Time')
plt.ylabel(ylabel='T')
plt.legend()
plt.show()
r2 = r2_score(yTest, yPredictes)
r2_value['CNN'] = r2
print(r2)
###Output
_____no_output_____
###Markdown
Compare
###Code
r2_value
x = np.arange(3)
width = 0.2
val_r2 = r2_value.values()
plt.figure(figsize=(4,7))
plt.ylabel('r2_score [T (degC)]')
plt.bar(x , val_r2, 0.4, label='Test')
plt.xticks(ticks=x, labels=r2_value.keys(), rotation=45)
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Use predict data to predict future Rnn
###Code
# Take last 24 hours to predict
Predict_hours = 24
xPred = dt_nor[-Predict_hours-rnn_width:-Predict_hours]
xPred = np.array(xPred)
xPred_in = sc.inverse_transform(xPred)
xPred = np.reshape(xPred, (xPred.shape[1], xPred.shape[0], 1))
yFutureTest = dt_nor[-Predict_hours:]
yFutureTest = np.array(yFutureTest)
yFutureTest = sc.inverse_transform(yFutureTest)
real = []
real = np.append(xPred_in, yFutureTest, axis = 0)
xPred.shape
def PredFuture_rnn(xPred):
    # predict the next time step from the current window of rnn_width values
    yPred = model_rnn.predict(x=xPred)
    yPred = np.reshape(yPred, (1, 1, 1))
    # append the prediction to the window and drop the oldest value,
    # so the model keeps feeding on its own predictions
    data = np.append(xPred, yPred, axis = 1)
    data = data[:, -(rnn_width):, :]
    return data
yModelPred = []
for i in range (Predict_hours):
xPred = PredFuture_rnn(xPred)
yModelPred.append(xPred[0][-1])
yModelPred = np.array(yModelPred)
yModelPred = sc.inverse_transform(yModelPred)
plt.plot(real, 'y-', label='train')
plt.plot(range(rnn_width, Predict_hours + rnn_width), yFutureTest, 'c-', label='Real')
plt.plot(range(rnn_width, Predict_hours + rnn_width), yModelPred, 'm-', label='Predict')
plt.title(label='Prediction')
plt.xlabel(xlabel='Time')
plt.ylabel(ylabel='T')
plt.legend()
plt.show()
r2_future = {}
r2 = r2_score(yFutureTest, yModelPred)
r2_future['RNN'] = r2
print(r2)
###Output
_____no_output_____
###Markdown
Cnn
###Code
# Take last 24 hours to predict
Predict_hours = 24
xPred = dt_nor[-Predict_hours-cnn_width:-Predict_hours]
xPred = np.array(xPred)
xPred_in = sc.inverse_transform(xPred)
# #xPred = np.reshape(xPred, (1,xPred.shape[0],xPred.shape[1]))
xPred = np.reshape(xPred, (xPred.shape[1], xPred.shape[0], 1))
yFutureTest = dt_nor[-Predict_hours:]
yFutureTest = np.array(yFutureTest)
yFutureTest = sc.inverse_transform(yFutureTest)
real = []
real = np.append(xPred_in, yFutureTest, axis = 0)
def PredFuture_cnn(xPred):
yPred = model_cnn.predict(xPred)
yPred = np.reshape(yPred, (1, 1, 1))
data = np.append(xPred, yPred, axis = 1)
data = data[:, -(cnn_width):, :]
return data
yModelPred = []
for i in range (Predict_hours):
xPred = PredFuture_cnn(xPred)
yModelPred.append(xPred[0][-1])
yModelPred = np.array(yModelPred)
yModelPred = sc.inverse_transform(yModelPred)
plt.plot(real, 'y-', label='train')
plt.plot(range(cnn_width, Predict_hours + cnn_width), yFutureTest, 'c-', label='Real')
plt.plot(range(cnn_width, Predict_hours + cnn_width), yModelPred, 'm-', label='Predict')
plt.title(label='Prediction')
plt.xlabel(xlabel='Time')
plt.ylabel(ylabel='T')
plt.legend()
plt.show()
r2 = r2_score(yFutureTest, yModelPred)
r2_future['CNN'] = r2
print(r2)
###Output
_____no_output_____
###Markdown
Dense
###Code
# Take last 24 hours to predict
Predict_hours = 24
xPred = dt_nor[-Predict_hours-dense_width:-Predict_hours]
xPred = np.array(xPred)
xPred_in = sc.inverse_transform(xPred)
# #xPred = np.reshape(xPred, (1,xPred.shape[0],xPred.shape[1]))
xPred = np.reshape(xPred, (xPred.shape[1], xPred.shape[0], 1))
yFutureTest = dt_nor[-Predict_hours:]
yFutureTest = np.array(yFutureTest)
yFutureTest = sc.inverse_transform(yFutureTest)
real = []
real = np.append(xPred_in, yFutureTest, axis = 0)
def PredFuture_dense(xPred):
yPred = model_dense.predict(xPred)
yPred = np.reshape(yPred, (1, 1, 1))
data = np.append(xPred, yPred, axis = 1)
data = data[:, -(dense_width):, :]
return data
yModelPred = []
for i in range (Predict_hours):
xPred = PredFuture_dense(xPred)
yModelPred.append(xPred[0][-1])
yModelPred = np.array(yModelPred)
yModelPred = sc.inverse_transform(yModelPred)
plt.plot(real, 'y-', label='train')
plt.plot(range(dense_width, Predict_hours + dense_width), yFutureTest, 'c-', label='Real')
plt.plot(range(dense_width, Predict_hours + dense_width), yModelPred, 'm-', label='Predict')
plt.title(label='Prediction')
plt.xlabel(xlabel='Time')
plt.ylabel(ylabel='T')
plt.legend()
plt.show()
r2 = r2_score(yFutureTest, yModelPred)
r2_future['Dense'] = r2
print(r2)
###Output
_____no_output_____
###Markdown
Compare
###Code
r2_future
x = np.arange(3)
width = 0.2
val_r2 = r2_future.values()
plt.figure(figsize=(4,7))
plt.ylabel('r2_score [T (degC)]')
plt.axhline(0, color= 'r')
plt.bar(x , val_r2, 0.4, label='Test')
plt.xticks(ticks=x, labels=r2_future.keys(), rotation=45)
_ = plt.legend()
###Output
_____no_output_____ |
IBM_AI_Engineering/Course-4-deep-neural-networks-with-pytorch/Week-5-Deep-Networks/8.3.3.He_Initialization_v2.ipynb | ###Markdown
Test Uniform, Default and He Initialization on MNIST Dataset with Relu Activation. In this lab, you will test the Uniform Initialization, Default Initialization and He Initialization on the MNIST dataset with Relu Activation. Table of Contents: Neural Network Module and Training Function; Make Some Data; Define Several Neural Network, Criterion function, Optimizer; Test Uniform, Default and He Initialization; Analyze Results. Estimated Time Needed: 25 min. Preparation: We'll need the following libraries:
###Code
# Import the libraries we need to use in this lab
# Using the following line code to install the torchvision library
# !conda install -y torchvision
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import torch.nn.functional as F
import matplotlib.pylab as plt
import numpy as np
torch.manual_seed(0)
###Output
_____no_output_____
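###Markdown
Quick sanity check (added for illustration, not part of the original lab): He (Kaiming) initialization scales the weight range by the layer's fan-in, which is what keeps the variance of ReLU activations roughly constant from layer to layer. The small cell below, which assumes only the imports above, compares the spread of weights produced by He initialization, the PyTorch default, and Uniform(0, 1) for a single Linear layer of the same size used in the networks later on.
###Code
# Compare weight statistics for three initialization schemes on one Linear layer
layer_he = nn.Linear(784, 100)
torch.nn.init.kaiming_uniform_(layer_he.weight, nonlinearity='relu')
layer_default = nn.Linear(784, 100)  # PyTorch default initialization
layer_uniform = nn.Linear(784, 100)
layer_uniform.weight.data.uniform_(0, 1)
for name, layer in [('He', layer_he), ('Default', layer_default), ('Uniform', layer_uniform)]:
    w = layer.weight.data
    print('%s init: mean = %.4f, std = %.4f' % (name, w.mean().item(), w.std().item()))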
###Markdown
Neural Network Module and Training Function Define the neural network module or class with He Initialization
###Code
# Define the class for neural network model with He Initialization
class Net_He(nn.Module):
# Constructor
def __init__(self, Layers):
super(Net_He, self).__init__()
self.hidden = nn.ModuleList()
for input_size, output_size in zip(Layers, Layers[1:]):
linear = nn.Linear(input_size, output_size)
torch.nn.init.kaiming_uniform_(linear.weight, nonlinearity='relu')
self.hidden.append(linear)
# Prediction
def forward(self, x):
L = len(self.hidden)
for (l, linear_transform) in zip(range(L), self.hidden):
if l < L - 1:
x = F.relu(linear_transform(x))
else:
x = linear_transform(x)
return x
###Output
_____no_output_____
###Markdown
Define the class or neural network with Uniform Initialization
###Code
# Define the class for neural network model with Uniform Initialization
class Net_Uniform(nn.Module):
# Constructor
def __init__(self, Layers):
super(Net_Uniform, self).__init__()
self.hidden = nn.ModuleList()
for input_size, output_size in zip(Layers, Layers[1:]):
linear = nn.Linear(input_size,output_size)
linear.weight.data.uniform_(0, 1)
self.hidden.append(linear)
# Prediction
def forward(self, x):
L = len(self.hidden)
for (l, linear_transform) in zip(range(L), self.hidden):
if l < L - 1:
x = F.relu(linear_transform(x))
else:
x = linear_transform(x)
return x
###Output
_____no_output_____
###Markdown
Class or Neural Network with PyTorch Default Initialization
###Code
# Define the class for neural network model with PyTorch Default Initialization
class Net(nn.Module):
# Constructor
def __init__(self, Layers):
super(Net, self).__init__()
self.hidden = nn.ModuleList()
for input_size, output_size in zip(Layers, Layers[1:]):
linear = nn.Linear(input_size, output_size)
self.hidden.append(linear)
def forward(self, x):
L=len(self.hidden)
for (l, linear_transform) in zip(range(L), self.hidden):
if l < L - 1:
x = F.relu(linear_transform(x))
else:
x = linear_transform(x)
return x
###Output
_____no_output_____
###Markdown
Define a function to train the model; in this case the function returns a Python dictionary that stores the training loss and the accuracy on the validation data
###Code
# Define function to train model
def train(model, criterion, train_loader, validation_loader, optimizer, epochs = 100):
i = 0
loss_accuracy = {'training_loss': [], 'validation_accuracy': []}
#n_epochs
for epoch in range(epochs):
for i, (x, y) in enumerate(train_loader):
optimizer.zero_grad()
z = model(x.view(-1, 28 * 28))
loss = criterion(z, y)
loss.backward()
optimizer.step()
loss_accuracy['training_loss'].append(loss.data.item())
correct = 0
for x, y in validation_loader:
yhat = model(x.view(-1, 28 * 28))
_, label = torch.max(yhat, 1)
correct += (label == y).sum().item()
accuracy = 100 * (correct / len(validation_dataset))
loss_accuracy['validation_accuracy'].append(accuracy)
print('epoch: '+ str(epoch) +'/'+str(epochs) + ' training_loss: '+ str(loss.data.item()))
return loss_accuracy
###Output
_____no_output_____
###Markdown
Make some Data Load the training dataset by setting the parameter train to True and convert it to a tensor by placing a transform object in the argument transform
###Code
# Create the training dataset
train_dataset = dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
###Output
_____no_output_____
###Markdown
Load the testing dataset by setting the parameter train to False and convert it to a tensor by placing a transform object in the argument transform
###Code
# Create the validation dataset
validation_dataset = dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
###Output
_____no_output_____
###Markdown
Create the training-data loader and the validation-data loader object
###Code
# Create the data loader for training and validation
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=2000, shuffle=True)
validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=5000, shuffle=False)
###Output
_____no_output_____
###Markdown
Define Neural Network, Criterion function, Optimizer and Train the Model Create the criterion function
###Code
# Create the criterion function
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Create a list that contains layer size
###Code
# Create the parameters
input_dim = 28 * 28
output_dim = 10
layers = [input_dim, 100, 200, 100, output_dim]
###Output
_____no_output_____
###Markdown
Test PyTorch Default Initialization, He Initialization and Uniform Initialization Train the network using PyTorch Default Initialization
###Code
# Train the model with the default initialization
model = Net(layers)
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
training_results = train(model, criterion, train_loader,validation_loader, optimizer, epochs=30)
###Output
_____no_output_____
###Markdown
Train the network using He Initialization function
###Code
# Train the model with the He initialization
model_He = Net_He(layers)
optimizer = torch.optim.SGD(model_He.parameters(), lr=learning_rate)
training_results_He = train(model_He, criterion, train_loader, validation_loader, optimizer, epochs=10)
###Output
epoch: 0/10 training_loss: 2.1573190689086914
epoch: 1/10 training_loss: 1.947043776512146
epoch: 2/10 training_loss: 1.6625694036483765
epoch: 3/10 training_loss: 1.3565359115600586
epoch: 4/10 training_loss: 1.1427257061004639
epoch: 5/10 training_loss: 0.9391725659370422
epoch: 6/10 training_loss: 0.8488386869430542
epoch: 7/10 training_loss: 0.7278976440429688
epoch: 8/10 training_loss: 0.6312912702560425
epoch: 9/10 training_loss: 0.6252492666244507
###Markdown
Train the network using Uniform Initialization function
###Code
# Train the model with the Uniform initialization
model_Uniform = Net_Uniform(layers)
optimizer = torch.optim.SGD(model_Uniform.parameters(), lr=learning_rate)
training_results_Uniform = train(model_Uniform, criterion, train_loader, validation_loader, optimizer, epochs=10)
###Output
epoch: 0/10 training_loss: 2.3057801723480225
epoch: 1/10 training_loss: 2.3049240112304688
epoch: 2/10 training_loss: 2.3051462173461914
epoch: 3/10 training_loss: 2.3054966926574707
epoch: 4/10 training_loss: 2.304229259490967
epoch: 5/10 training_loss: 2.3040878772735596
epoch: 6/10 training_loss: 2.3046741485595703
epoch: 7/10 training_loss: 2.3043158054351807
epoch: 8/10 training_loss: 2.3024120330810547
epoch: 9/10 training_loss: 2.3031904697418213
###Markdown
Analyze Results Compare the training loss for each initialization
###Code
# Plot the loss
plt.plot(training_results_He['training_loss'], label='He')
plt.plot(training_results['training_loss'], label='Default')
plt.plot(training_results_Uniform['training_loss'], label='Uniform')
plt.ylabel('loss')
plt.xlabel('iteration ')
plt.title('training loss iterations')
plt.legend()
###Output
_____no_output_____
###Markdown
Compare the validation accuracy for each model
###Code
# Plot the accuracy
plt.plot(training_results_He['validation_accuracy'], label='He')
plt.plot(training_results['validation_accuracy'], label='Default')
plt.plot(training_results_Uniform['validation_accuracy'], label='Uniform')
plt.ylabel('validation accuracy')
plt.xlabel('epochs ')
plt.legend()
plt.show()
###Output
_____no_output_____ |
experiments/basic/trapz_loglog_test.ipynb | ###Markdown
my implementation of the trapezoidal rule in log-log space
###Code
np.seterr(all='print')
warnings.filterwarnings('error')
def log(x):
# smallest positive float (before 0)
float_tiny = np.finfo(np.float64).tiny
# largest positive float
float_max = np.finfo(np.float64).max
values = np.clip(x, float_tiny, float_max)
return np.log(values)
def power(x):
try:
x ** m
except Warning:
print("too big power!")
def trapz_loglog(y, x, axis=0):
"""
Integrate along the given axis using the composite trapezoidal rule in
loglog space.
Integrate `y` (`x`) along given axis in loglog space.
Parameters
----------
y : array_like
Input array to integrate.
x : array_like, optional
Independent variable to integrate over.
axis : int, optional
Specify the axis.
Returns
-------
trapz : float
Definite integral as approximated by trapezoidal rule in loglog space.
"""
try:
y_unit = y.unit
y = y.value
except AttributeError:
y_unit = 1.0
try:
x_unit = x.unit
x = x.value
except AttributeError:
x_unit = 1.0
slice_low = [slice(None)] * y.ndim
slice_up = [slice(None)] * y.ndim
# multi-dimensional equivalent of x_low = x[:-1]
slice_low[axis] = slice(None, -1)
# multi-dimensional equivalent of x_up = x[1:]
slice_up[axis] = slice(1, None)
slice_low = tuple(slice_low)
slice_up = tuple(slice_up)
# reshape x to be broadcasted with y
if x.ndim == 1:
shape = [1] * y.ndim
shape[axis] = x.shape[0]
x = x.reshape(shape)
x_low = x[slice_low]
x_up = x[slice_up]
y_low = y[slice_low]
y_up = y[slice_up]
log_x_low = log(x_low)
log_x_up = log(x_up)
log_y_low = log(y_low)
log_y_up = log(y_up)
# index in the bin
m = (log_y_low - log_y_up) / (log_x_low - log_x_up)
vals = y_low / (m + 1) * (x_up * (x_up / x_low) ** m - x_low)
# value of y very close to zero will make m large and explode the exponential
tozero = (
np.isclose(y_low, 0, atol=0, rtol=1e-10) +
np.isclose(y_up, 0, atol=0, rtol=1e-10) +
np.isclose(x_low, x_up, atol=0, rtol=1e-10)
)
vals[tozero] = 0.0
return np.add.reduce(vals, axis) * x_unit * y_unit
###Output
_____no_output_____
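###Markdown
Note on the formula above (added for clarity): on each interval $[x_{low}, x_{up}]$ the integrand is treated as a local power law $y(x) = y_{low} (x / x_{low})^m$ with index $m = \frac{\ln y_{up} - \ln y_{low}}{\ln x_{up} - \ln x_{low}}$, which is exactly the `m` computed from the logarithms. Integrating that power law exactly gives $\int_{x_{low}}^{x_{up}} y \, dx = \frac{y_{low}}{m + 1} \left( x_{up} \left( \frac{x_{up}}{x_{low}} \right)^m - x_{low} \right)$, the expression stored in `vals`. The special case $m = -1$, where the integral is logarithmic rather than a power, is not handled separately here.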
###Markdown
a simple test with a straight line in log-log scale
###Code
def line_loglog(x, m, n):
"""a straight line in loglog-space"""
return x ** m * np.e ** n
def integral_line_loglog(x_min, x_max, m, n):
"""analytical integral of the line in log-log space"""
f_low = line_loglog(x_min, m + 1, n) / (m + 1)
f_up = line_loglog(x_max, m + 1, n) / (m + 1)
return f_up - f_low
m = 1.5
n = -2.0
x = np.logspace(2, 5)
y = line_loglog(x, m, n)
y = np.asarray([y, y])
trapz_loglog(y.T, x, axis=0)
integral_line_loglog(x[0], x[-1], m, n)
np.trapz(y.T, x, axis=0)
1 - trapz_loglog(y.T, x, axis=0) / integral_line_loglog(x[0], x[-1], m, n)
1 - np.trapz(y.T, x, axis=0) / integral_line_loglog(x[0], x[-1], m, n)
###Output
_____no_output_____
###Markdown
a test with synchrotron radiation
###Code
blob = Blob()
nu = np.logspace(9, 20, 20) * u.Hz
# check the blob
print(blob)
def sed_synch(nu, integration):
"""compute the synchrotron SED"""
epsilon = nu.to("", equivalencies=epsilon_equivalency)
# correct epsilon to the jet comoving frame
epsilon_prime = (1 + blob.z) * epsilon / blob.delta_D
# electrond distribution lorentz factor
gamma = blob.gamma
N_e = blob.N_e(gamma)
prefactor = np.sqrt(3) * epsilon * np.power(e, 3) * blob.B_cgs / h
# for multidimensional integration
# axis 0: electrons gamma
# axis 1: photons epsilon
# arrays starting with _ are multidimensional and used for integration
_gamma = np.reshape(gamma, (gamma.size, 1))
_N_e = np.reshape(N_e, (N_e.size, 1))
_epsilon = np.reshape(epsilon, (1, epsilon.size))
x_num = 4 * np.pi * _epsilon * np.power(m_e, 2) * np.power(c, 3)
x_denom = 3 * e * blob.B_cgs * h * np.power(_gamma, 2)
x = (x_num / x_denom).to_value("")
integrand = _N_e * R(x)
integral = integration(integrand, gamma, axis=0)
emissivity = (prefactor * integral).to("erg s-1")
sed_conversion = np.power(blob.delta_D, 4) / (
4 * np.pi * np.power(blob.d_L, 2)
)
return (sed_conversion * emissivity).to("erg cm-2 s-1")
%%timeit
sed_synch(nu, np.trapz)
%%timeit
sed_synch(nu, trapz_loglog)
sed_trapz = sed_synch(nu, np.trapz)
sed_trapz_loglog = sed_synch(nu, trapz_loglog)
plt.loglog(nu, sed_trapz, marker="o")
plt.loglog(nu, sed_trapz_loglog, ls="--", marker=".")
plt.show()
###Output
_____no_output_____
###Markdown
a test with inverse Compton radiation EC on point-like source
###Code
def sed_flux_point_source(nu, target, r, integrate):
"""SED flux for EC on a point like source behind the jet
Parameters
----------
nu : `~astropy.units.Quantity`
array of frequencies, in Hz, to compute the sed, **note** these are
observed frequencies (observer frame).
"""
# define the dimensionless energy
epsilon_s = nu.to("", equivalencies=epsilon_equivalency)
# transform to BH frame
epsilon_s *= 1 + blob.z
# for multidimensional integration
# axis 0: gamma
# axis 1: epsilon_s
# arrays starting with _ are multidimensional and used for integration
gamma = blob.gamma_to_integrate
transformed_N_e = blob.N_e(gamma / blob.delta_D).value
_gamma = np.reshape(gamma, (gamma.size, 1))
_N_e = np.reshape(transformed_N_e, (transformed_N_e.size, 1))
_epsilon_s = np.reshape(epsilon_s, (1, epsilon_s.size))
# define integrating function
# notice once the value of mu = 1, phi can assume any value, we put 0
# convenience
_kernel = compton_kernel(
_gamma, _epsilon_s, target.epsilon_0, blob.mu_s, 1, 0
)
_integrand = np.power(_gamma, -2) * _N_e * _kernel
integral_gamma = integrate(_integrand, gamma, axis=0)
prefactor_num = (
3
* sigma_T
* target.L_0
* np.power(epsilon_s, 2)
* np.power(blob.delta_D, 3)
)
prefactor_denom = (
np.power(2, 7)
* np.power(np.pi, 2)
* np.power(blob.d_L, 2)
* np.power(r, 2)
* np.power(target.epsilon_0, 2)
)
sed = prefactor_num / prefactor_denom * integral_gamma
return sed.to("erg cm-2 s-1")
# target and distance
r = 1e16 * u.cm
L_0 = 2e46 * u.Unit("erg s-1")
epsilon_0 = 1e-3
ps = PointSourceBehindJet(L_0, epsilon_0)
nu = np.logspace(20, 30) * u.Hz
# increase the size of the gamma grid
blob.set_gamma_size(500)
%%timeit
sed_flux_point_source(nu, ps, r, np.trapz)
%%timeit
sed_flux_point_source(nu, ps, r, trapz_loglog)
sed_trapz = sed_flux_point_source(nu, ps, r, np.trapz)
sed_trapz_loglog = sed_flux_point_source(nu, ps, r, trapz_loglog)
plt.loglog(nu, sed_trapz, marker="o")
plt.loglog(nu, sed_trapz_loglog, ls="--", marker=".")
plt.show()
###Output
_____no_output_____ |
02_tutorial.ipynb | ###Markdown
Tutorial> Tutorial details
###Code
from nbdev.export import notebook2script; notebook2script()
###Output
Converted 00_core.ipynb.
Converted 01_game.ipynb.
Converted index.ipynb.
|
docs/guide_random.ipynb | ###Markdown
Randomness and reproducibility Random numbers and [stochastic processes](http://www2.econ.iastate.edu/tesfatsi/ace.htm#Stochasticity) are essential to most agent-based models. [Pseudo-random number generators](https://en.wikipedia.org/wiki/Pseudorandom_number_generator) can be used to create numbers in a sequence that appears random but is actually a deterministic sequence based on an initial seed value. In other words, the generator will produce the same pseudo-random sequence over multiple runs if it is given the same seed at the beginning. Note that it is possible that the generators will draw the same number repeatedly, as illustrated in this [comic strip](https://dilbert.com/strip/2001-10-25) from Scott Adams:
###Code
import agentpy as ap
import numpy as np
import random
###Output
_____no_output_____
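###Markdown
As a minimal illustration (not part of the original guide), two generators constructed with the same seed return identical sequences, while generators without a fixed seed generally do not. This is the property that the `seed` parameter discussed below relies on.
###Code
r1 = random.Random(42)
r2 = random.Random(42)
print(r1.random() == r2.random())  # True: same seed, same sequence

g1 = np.random.default_rng(42)
g2 = np.random.default_rng(42)
print(g1.integers(99) == g2.integers(99))  # True as well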
###Markdown
Random number generators Agentpy models contain two internal pseudo-random number generators with different features:- `Model.random` is an instance of `random.Random` (more info [here](https://realpython.com/python-random/))- `Model.nprandom` is an instance of `numpy.random.Generator` (more info [here](https://numpy.org/devdocs/reference/random/index.html)) To illustrate, let us define a model that uses both generators to draw a random integer:
###Code
class RandomModel(ap.Model):
def setup(self):
self.x = self.random.randint(0, 99)
self.y = self.nprandom.integers(99)
self.report(['x', 'y'])
self.stop()
###Output
_____no_output_____
###Markdown
If we run this model multiple times, we will likely get a different series of numbers in each iteration:
###Code
exp = ap.Experiment(RandomModel, iterations=5)
results = exp.run()
results.reporters
###Output
_____no_output_____
###Markdown
Defining custom seeds If we want the results to be reproducible, we can define a parameter `seed` that will be used automatically at the beginning of a simulationto initialize both generators.
###Code
parameters = {'seed': 42}
exp = ap.Experiment(RandomModel, parameters, iterations=5)
results = exp.run()
###Output
Scheduled runs: 5
Completed: 5, estimated time remaining: 0:00:00
Experiment finished
Run time: 0:00:00.024486
###Markdown
By default, the experiment will use this seed to generate different random seeds for each iteration:
###Code
results.reporters
###Output
_____no_output_____
###Markdown
Repeating this experiment will yield the same results:
###Code
exp2 = ap.Experiment(RandomModel, parameters, iterations=5)
results2 = exp2.run()
results2.reporters
###Output
_____no_output_____
###Markdown
Alternatively, we can set the argument `randomize=False` so that the experiment will use the same seed for each iteration:
###Code
exp3 = ap.Experiment(RandomModel, parameters, iterations=5, randomize=False)
results3 = exp3.run()
###Output
Scheduled runs: 5
Completed: 5, estimated time remaining: 0:00:00
Experiment finished
Run time: 0:00:00.017925
###Markdown
Now, each iteration yields the same results:
###Code
results3.reporters
###Output
_____no_output_____
###Markdown
Sampling seeds For a sample with multiple parameter combinations, we can treat the seed like any other parameter.The following example will use the same seed for each parameter combination:
###Code
parameters = {'p': ap.Values(0, 1), 'seed': 0}
sample1 = ap.Sample(parameters, randomize=False)
list(sample1)
###Output
_____no_output_____
###Markdown
If we run an experiment with this sample,the same iteration of each parameter combination will have the same seed (remember that the experiment will generate different seeds for each iteration by default):
###Code
exp = ap.Experiment(RandomModel, sample1, iterations=2)
results = exp.run()
results.reporters
###Output
_____no_output_____
###Markdown
Alternatively, we can use `Sample` with `randomize=True` (default)to generate random seeds for each parameter combination in the sample.
###Code
sample3 = ap.Sample(parameters, randomize=True)
list(sample3)
###Output
_____no_output_____
###Markdown
This will always generate the same set of random seeds:
###Code
sample3 = ap.Sample(parameters)
list(sample3)
###Output
_____no_output_____
###Markdown
An experiment will now have different results for every parameter combination and iteration:
###Code
exp = ap.Experiment(RandomModel, sample3, iterations=2)
results = exp.run()
results.reporters
###Output
_____no_output_____
###Markdown
Repeating this experiment will yield the same results:
###Code
exp = ap.Experiment(RandomModel, sample3, iterations=2)
results = exp.run()
results.reporters
###Output
_____no_output_____
###Markdown
Stochastic methods of AgentList Let us now look at some stochastic operations that are often used in agent-based models. To start, we create a list of five agents:
###Code
model = ap.Model()
agents = ap.AgentList(model, 5)
agents
###Output
_____no_output_____
###Markdown
If we look at the agent's ids, we see that they have been created in order:
###Code
agents.id
###Output
_____no_output_____
###Markdown
To shuffle this list, we can use `AgentList.shuffle`:
###Code
agents.shuffle().id
###Output
_____no_output_____
###Markdown
To create a random subset, we can use `AgentList.random`:
###Code
agents.random(3).id
###Output
_____no_output_____
###Markdown
And if we want it to be possible to select the same agent more than once:
###Code
agents.random(6, replace=True).id
###Output
_____no_output_____
###Markdown
Agent-specific generators For more advanced applications, we can create separate generators for each object.We can ensure that the seeds of each object follow a controlled pseudo-random sequence by using the models' main generator to generate the seeds.
###Code
class RandomAgent(ap.Agent):
def setup(self):
seed = self.model.random.getrandbits(128) # Seed from model
self.random = random.Random(seed) # Create agent generator
self.x = self.random.random() # Create a random number
class MultiRandomModel(ap.Model):
def setup(self):
self.agents = ap.AgentList(self, 2, RandomAgent)
self.agents.record('x')
self.stop()
parameters = {'seed': 42}
exp = ap.Experiment(
MultiRandomModel, parameters, iterations=2,
record=True, randomize=False)
results = exp.run()
results.variables.RandomAgent
###Output
_____no_output_____
###Markdown
Alternatively, we can also have each agent start from the same seed:
###Code
class RandomAgent2(ap.Agent):
def setup(self):
self.random = random.Random(self.p.agent_seed) # Create agent generator
self.x = self.random.random() # Create a random number
class MultiRandomModel2(ap.Model):
def setup(self):
self.agents = ap.AgentList(self, 2, RandomAgent2)
self.agents.record('x')
self.stop()
parameters = {'agent_seed': 42}
exp = ap.Experiment(
MultiRandomModel2, parameters, iterations=2,
record=True, randomize=False)
results = exp.run()
results.variables.RandomAgent2
###Output
_____no_output_____ |
uci-pharmsci/assignments/MD/MD.ipynb | ###Markdown
Molecular Dynamics (MD) Objective:Perform some basic molecular dynamics (MD) simulations on a simple polymer model, perform some initial tests, and then after equilibrating the system, compute the self-diffusion coefficient for several different chain lengths. __Due date__: As assigned in class Overview:One simple model of a polymer is just a chain of Lennard-Jones atoms. Here we will simulate such chains, interacting according to the potential (in dimensionless form): \begin{equation}U^* = \sum \limits_{i<j \mathrm{,\ ij\ not\ bonded}} 4\left( r_{ij}^{-12} - r_{ij}^{-6}\right) + \sum \limits_{i<j\mathrm{,\ ij\ bonded}} \frac{k}{2} \left( r_{ij} - r_0\right)^2\end{equation}(Note that, as in the Energy Minimization assignment, we are using the dimensionless form, so that all of the constants are hidden in the units.)Here, atoms have Lennard-Jones attractions and repulsions, and bonds between atoms along the polymer chain(s) are represented by simple harmonic springs. There are no torsional or angle potentials, and no electrostatic interactions. However, this simple model does share basic elements with the models we still use today for proteins and small molecules -- specifically, our classical MD models today begin with the potential above and add additional terms.Simple systems like this polymer model have been thoroughly studied as models of polyatomic molecules, and as models of short polymers. It is relatively easy to derive or determine scaling laws for various physical properties as a function of polymer length in such systems. One such study, by Reis et al., [ Fluid Phase Equilibria 221: 25 (2004) ](https://doi.org/10.1016/j.fluid.2004.04.007) evaluated the self-diffusion coefficient for chains of different lengths. (The self-diffusion coefficient measures diffusive motion of something in a solution consisting of itself, for example the self-diffusion coefficient of water in water describes how mobile a water molecule is in pure water.)Here, you will use some Python and Fortran libraries to set up some initial test simulations and make a plot relating to equilibration. Following that, you will compute the self-diffusion coefficient as directed below, making contact with the data of Reis et al. Most of the functions you will need have already been written for you and are provided here. Most of this assignment will involve using them to conduct a simulation. In addition to the paper mentioned, you will need `mdlib.f90` and `MD_functions.py`. As in the Energy Minimization assignment you did previously, you will need to compile `mdlib.f90` into a .so file suitable for use within Python. Background/settings: Introduction of our variables:Here, the potential energy will be as given above. Again, note that we are working in dimensionless form. We will simulate a system with a total of N monomers, some of which will be linked to form polymers. Each polymer will consist of M monomers, so that if $N_{poly}$ is the number of polymers, $N = M\times N_{poly}$. That is to say, we have $N_{poly}$ polymers each consisting of $M$ linked monomers in a chain, for a total of $N$ particles. As usual, our system will have a density, $\rho$, which is N/V. We will work with a particular temperature, $T$, and cutoff distance, $R_c$, beyond which Lennard-Jones interactions will not be included. Additionally, we need to specify a bond strength and equilibrium separation, $k$ and $r_0$, respectively. And we will take timesteps $\Delta t$ using the velocity Verlet integrator. 
Settings to use (unless otherwise noted)Unless otherwise noted, here you should use the following settings:* $k = 3000$ (spring constant)* $r_0 = 1$ (preferred bond length)* $N = 240$ (number of particles)* $\rho = N/V = 0.8$ so that $L$, the box size, is $(N/\rho)^{1/3}$ * Use $L$ as the box size in your code* $\Delta t = 0.001$ (timestep)* $T = 1.0$ (temperature)* $R_c = 2.5$ (call this Cut in your code)Our use of the dimensionless form here includes setting all particle masses to 1. Because of this, forces and accelerations are the same thing. Additionally, units can be dropped, and the Boltzmann constant is equal to 1 in these units. What's providedIn this case, mdlib provides almost the same CalcEnergy and CalcEnergyForces routines you used in the previous assignment (for energy minimizations). Additionally, it provides a VVIntegrate function to actually use the integrator (Velocity Verlet) to take a timestep. You should look through the Fortran code to make sure you understand what is being done and see how it connects to what we covered in lecture.The Python syntax for using VVintegrate looks like: `Pos, Vel, Accel, KEnergy, PEnergy = mdlib.vvintegrate( Pos, Vel, Accel, M, L, Cut, dt )` This takes a single timestep (covering time dt) and returns the position, velocity, acceleration, kinetic energy, and potential energy. Likewise, mdlib provides functions for calculating the potential energy, or the potential energy and forces, as:`PEnergy = mdlib.calcenergy(Pos, M, L, Cut)`and`PE, Forces = mdlib.calcenergyforces(Pos, M, L, Cut, Forces)` Your assignmentAll, or almost all, of the functions you will need to complete this assignment are described below. But before getting to the description, I want to explain the assignment. Part A: Develop a simple molecular dynamics code and examine potential energy versus time for several values of M Edit the supplied code below (or MD.py if you prefer to work with the plain text; note that I have also provided MD_functions.py which is utilized by this notebook which provides only the functions you need and not a template for the code you need to write, since this is below) to develop a simple molecular dynamics code. Most of the functions you need are already provided (see documentation below, or in MD_functions.py). But, you do need to fill in the core of two functions:* InitVelocities(N,T): Should take N, a number of particles and a target temperature and return a velocity array (‘Vel’) which is Nx3 with mean velocity corresponding to the correct average temperature. You may wish to do this by assigning random velocities and rescaling (see below).* RescaleVelocities(Vel, T): Re-center the velocities to zero net momentum to remove any overall translational motion (i.e., subtract off the average velocity from all particles) and then re-scale the velocities to maintain the correct temperature. This can be done by noting that the average kinetic energy of the system is related in a simple way to the effective temperature: \begin{equation}\frac{1}{2}\sum \limits_i m_i v_i^2 = \frac{3}{2} N k_B T\end{equation}The left-hand term is the kinetic energy, and here can be simplified by noting all of the masses in the system are defined to be 1. 
The right hand term involves the Boltzmann constant, the number of particles in the system, and the instantaneous temperature. So, you can compute the effective temperature of your system, and translate this into a scaling factor which you can use to multiply (scale) all velocities in the system to ensure you get the correct average temperature (see http://www.pages.drexel.edu/~cfa22/msim/node33.html). **Specifically, following the Drexel page (eq. 177), compute a (scalar) constant by which you will multiply all of the velocities to ensure that the effective temperature is at the correct target value after rescaling.** To do this calculation you will need to compute the kinetic energy, which involves the sum above. Remove translational motion, rescale the velocities, and have your function return the updated velocity array. Once the above two functions are written, finish writing a simple MD code using the available functions to:* Initially place atoms on a cubic lattice with the correct box size* Energy-minimize the initial configuration using the conjugate-gradient energy minimizer; this will help ensure the simulation doesn’t “explode” (a highly technical term meaning “crash”) when you begin MD* Assign initial velocities and compute forces (accelerations)* Use the velocity Verlet integrator to perform a molecular dynamics run. Rescale atomic velocities every **RescaleFreq** integration steps to achieve the target temperature T. (You can test whether you should rescale the velocities using the modulo (remainder) operator, for example $i % RescaleFreq == 0$) * You might want to use RescaleFreq = 100 (for extra credit, you can try several values of RescaleFreq and explain the differences in fluctuations in the potential energy versus time that you see). Use the settings given above for $N$, $\rho$, $T$, the timestep, and the cutoff distance. Perform simulations for $M = 2, 4, 6, 8, 12,$ and $16$ and store the total energies versus time out to 2,000 timesteps. (Remember, $M$ controls the number of particles per polymer; you are keeping the same total number of particles in the system and changing the size of the polymers). On a single graph, plot the potential energy versus time for each of these cases (each in a different color). Turn in this graph. Note also you can visualize, if desired, using the Python module for writing to PDB files which you saw in the Energy Minimization exercise. Part B: Extend your code to compute the self-diffusion coefficient as a function of chain length. Modify your MD code from above to perform a series of steps that will allow you to compute the self-diffusion coefficient as a function of chain length and determine how diffusion of polymers depends on the size of the polymer. To compute the self-diffusion coefficient, you will simply need to monitor the motion of each polymer in time. Here, you will first perform two equilibrations at constant temperature using velocity rescaling. The first will allow the system to reach the desired temperature and forget about its initial configuration (remember, it was started on a lattice). The second will allow you to compute the average total energy of the system. Then, you will fix the total energy at this value and perform a production simulation. 
Here’s what you should do:* Following initial preparation (like above), first perform equilibration for NStepsEquil1 using velocity rescaling to target temperature T every RescaleFreq steps, using whatever value of RescaleFreq you used previously (NOT every step!)* Perform a second equilibration for `NStepsEquil2` timesteps using velocity rescaling at the same frequency, storing energies while you do this. * Compute the average total energy over this second equilibration and rescale the velocities to start the final phase with the correct total energy. In other words, the total energy at the end of equilibration will be slightly above or below the average; you should find the kinetic energy you need to have to get the correct total energy, and rescale the velocities to get this kinetic energy. After this you will be doing no more velocity rescaling. (Hint: You can do this final rescaling easily by computing a scaling factor, and you will probably not be using the rescaling code you use to maintain the temperature during equilibration.)* Copy the initial positions of the particles into a reference array for computing the mean squared displacement, for example using `Pos0 = Pos.copy()`* Perform a production run for `NStepsProd` integration steps with constant energy (NVE) rather than velocity rescaling. Periodically record the time and mean squared displacement of the atoms from their initial positions. (You will need to write a small bit of code to compute mean squared displacements, but it shouldn’t take more than a couple of lines; you may send it to me to check if you are concerned about it). The mean squared displacement is given by \begin{equation}\left\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \right\rangle\end{equation} where the $\mathbf{r}$'s are the current and initial positions of the object in question, so the mean squared displacement measures the square of the distance traveled by the object. * Compute the self-diffusion coefficient, $D$. The mean squared displacement relates to the self-diffusion coefficient, $D$, in this way:\begin{equation}\left\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \right\rangle = 6 D t\end{equation}Here $D$ is the self-diffusion coefficient and $t$ is the elapsed time. That is, the expected squared distance traveled (mean squared displacement) grows linearly with the elapsed time. For settings, use NStepsEquil1 = 10,000 = NStepsEquil2 and NStepsProd = 100,000. (Note: You should probably do a “dry run” first with shorter simulations to ensure everything is working, as 100,000 steps might take an hour or more to run).* Perform these runs for $M = 2, 4, 6, 8, 12,$ and $16$, storing results for each. * Plot the mean-squared displacement versus time for each M on the same graph. * Compute the diffusion coefficient for each $M$ from the slope of each graph and plot these fits on the same graph. You can do a linear least-squares fit easily in Numpy. `Slope, Intercept = np.polyfit( xvals, yvals, 1)` Plot the diffusion coefficient versus $M$ and try and see if it follows any obvious scaling law. It should decrease with increasing $M$, but with what power? (You may want to refer to the Reis et al. paper). What to turn in:* Your plot of the potential energy versus time for each $M$ in Part A* Mean-squared displacement versus time, and fit, for each $M$ in Part B, all on one plot* The diffusion coefficient versus $M$ in Part B* Your code for at least Part B* Any comments you have - do you think you got it right? Why or why not? What was confusing/helpful? What would you do if you had more time? 
* Clearly label axes and curves on your plots!You can send your comments/discussion as an e-mail, and the rest of the items as attachments. What’s provided for you: In this case, most of what you need is provided in the importable module `MD_functions.py` (which you can view with your favorite text editor, like `vi` or Atom), except for the functions for initial velocities and velocity rescaling -- in those cases, the shells are present below and you need to write the core (which will be very brief!). **However, you will also need to insert the code for the `ConjugateGradient` function in `MD_functions.py`** from your work you did in the Energy Minimization assignment. If you did not do this, or did not get it correct (or if you are not certain if you did), you will need to e-mail David Mobley for solutions.From the Fortran library `mdlib` (which you will compile as usual via `f2py -c -m mdlib mdlib.f90`), the only new function you need is Velocity Verlet. In `MD_functions.py`, the following tools are available (this shows their documentation, not the details of the code, but you should only need to read the documentation in order to be able to use them. NOTE: No modification of these functions is needed; you only need to use them. You will only need to write `InitVelocities` and `RescaleVelocities` as described above, plus provide your previous code for `ConjugateGradient`: Help on module MD:NAME MD - MD exercise template for PharmSci 175/275FUNCTIONSFUNCTIONS ConjugateGradient(Pos, dx, EFracTolLS, EFracTolCG, M, L, Cut) Performs a conjugate gradient search. Input: Pos: starting positions, (N,3) array dx: initial step amount EFracTolLS: fractional energy tolerance for line search EFracTolCG: fractional energy tolerance for conjugate gradient M: Monomers per polymer L: Box size Cut: Cutoff Output: PEnergy: value of potential energy at minimum Pos: minimum energy (N,3) position array InitPositions(N, L) Returns an array of initial positions of each atom, placed on a cubic lattice for convenience. Input: N: number of atoms L: box length Output: Pos: (N,3) array of positions InitVelocities(N, T) Returns an initial random velocity set. Input: N: number of atoms T: target temperature Output: Vel: (N,3) array of atomic velocities InstTemp(Vel) Returns the instantaneous temperature. Input: Vel: (N,3) array of atomic velocities Output: Tinst: float InstTemp(Vel): Returns the instantaneous temperature. Input: Vel: (N,3) array of atomic velocities Output: Tinst: float RescaleVelocities(Vel, T) Rescales velocities in the system to the target temperature. Input: Vel: (N,3) array of atomic velocities T: target temperature Output: Vel: same as above LineSearch(Pos, Dir, dx, EFracTol, M, L, Cut, Accel=1.5, MaxInc=10.0, MaxIter=10000) Performs a line search along direction Dir. Input: Pos: starting positions, (N,3) array Dir: (N,3) array of gradient direction dx: initial step amount EFracTol: fractional energy tolerance M: Monomers per polymer L: Box size Cut: Cutoff Accel: acceleration factor MaxInc: the maximum increase in energy for bracketing MaxIter: maximum number of iteration steps Output: PEnergy: value of potential energy at minimum along Dir Pos: minimum energy (N,3) position array along Dir Here, you should actually write your functions:
###Code
import mdlib
from MD_functions import *
def InitVelocities(N, T):
"""Returns an initial random velocity set.
Input:
N: number of atoms
T: target temperature
Output:
Vel: (N,3) array of atomic velocities
"""
#WRITE THIS CODE
#THEN RETURN THE NEW VELOCITIES
return Vel
def RescaleVelocities(Vel, T):
"""Rescales velocities in the system to the target temperature.
Input:
Vel: (N,3) numpy array of atomic velocities
T: target temperature
Output:
Vel: same as above
"""
#WRITE THIS CODE
#recenter to zero net momentum (assuming all masses same)
#find the total kinetic energy
#find velocity scale factor from ratios of kinetic energy
#Update velocities
#NOW RETURN THE NEW VELOCITIES
return Vel
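#For reference, an illustrative sketch of the rescaling idea described above (a sketch only,
#not necessarily the form your solution must take), using unit masses and k_B = 1 as stated
#in the settings:
#    Vel = Vel - Vel.mean(axis=0)            # zero net momentum (remove translational motion)
#    KE = 0.5 * np.sum(Vel**2)               # kinetic energy with all m_i = 1
#    Tinst = 2.0 * KE / (3.0 * len(Vel))     # instantaneous temperature from (1/2)sum m v^2 = (3/2) N k_B T
#    Vel = Vel * np.sqrt(T / Tinst)          # scale factor that puts the system at the target T
#InitVelocities can, for example, draw Vel = np.random.normal(0., 1., (N, 3)) and then rescale similarly.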
###Output
_____no_output_____
###Markdown
Now use your functions, coupled with those provided, to code up your assignment:
###Code
#PART A:
#Define box size and other settings
k=3000
r0=1
N=240
rho=0.8 #Solve to find L
#Set L
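#(sketch) e.g. L = (N/rho)**(1./3.), from the relation L = (N/rho)^(1/3) given in the settings above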
dt=0.001
T=1.0
Cut=2.5
RescaleFreq = 100 #See note above - may want to try several values
#Define your M value(s)
#Initially place atoms on a cubic lattice
#Energy-minimize the initial configuration using the conjugate-gradient energy minimizer
#Assign initial velocities and compute forces (accelerations)
#Use the velocity Verlet integrator to perform a molecular dynamics run, rescaling velocities when appropriate
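#An illustrative loop shape (a sketch only -- adapt the names and bookkeeping to your own code),
#using the vvintegrate call signature documented above:
#    for i in range(NSteps):
#        Pos, Vel, Accel, KEnergy, PEnergy = mdlib.vvintegrate(Pos, Vel, Accel, M, L, Cut, dt)
#        if i % RescaleFreq == 0:
#            Vel = RescaleVelocities(Vel, T)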
#PART B:
#Additionally:
NStepsEquil1 = 10000
NStepsEquil2 = 10000
NStepsProd = 100000
#Set up as in A
#Equilibrate for NStepsEquil1 with velocity rescaling every RescaleFreq steps, discarding energies
#Equilibrate for NStepsEquil2 with velocity rescaling, storing energies
#Stop and average the energy. Rescale the velocities so the current (end-of-equilibration) energy matches the average
#Store the particle positions so you can later compute the mean squared displacement
#Run for NStepsProd at constant energy (NVE) recording time and mean squared displacement periodically
#Compute diffusion coefficient for each M using a fit to the mean squared displacement
#Plot mean squared displacement for each M
#Plot diffusion coefficient as a function of M
###Output
_____no_output_____
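###Markdown
For reference, here is a minimal, self-contained sketch of the mean-squared-displacement bookkeeping and the linear fit described in Part B. The `Times`/`MSDs` values below are made-up numbers purely to show the mechanics of `np.polyfit`; in your code they would come from the production run, and `Pos`/`Pos0` would be your current and reference (N,3) position arrays.
###Code
import numpy as np

def MeanSquaredDisplacement(Pos, Pos0):
    """Mean squared displacement of an (N,3) configuration from its reference positions."""
    return np.mean(np.sum((Pos - Pos0)**2, axis=1))

#toy illustration of extracting D from the slope of MSD versus time
Times = np.array([0., 1., 2., 3., 4.])
MSDs = np.array([0., 0.6, 1.2, 1.8, 2.4])
Slope, Intercept = np.polyfit(Times, MSDs, 1)
D = Slope / 6.0   #from <|r(t)-r(0)|^2> = 6 D t
###Output
_____no_output_____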
###Markdown
[](https://colab.research.google.com/github/MobleyLab/drug-computing/blob/master/uci-pharmsci/assignments/MD/MD.ipynb) Molecular Dynamics (MD) Objective:Perform some basic molecular dynamics (MD) simulations on a simple polymer model, perform some initial tests, and then after equilibrating the system, compute the self-diffusion coefficient for several different chain lengths. __Due date__: As assigned in class Overview:One simple model of a polymer is just a chain of Lennard-Jones atoms. Here we will simulate such chains, interacting according to the potential (in dimensionless form): \begin{equation}U^* = \sum \limits_{i<j \mathrm{,\ ij\ not\ bonded}} 4\left( r_{ij}^{-12} - r_{ij}^{-6}\right) + \sum \limits_{i<j\mathrm{,\ ij\ bonded}} \frac{k}{2} \left( r_{ij} - r_0\right)^2\end{equation}(Note that, as in the Energy Minimization assignment, we are using the dimensionless form, so that all of the constants are hidden in the units.)Here, atoms have Lennard-Jones attractions and repulsions, and bonds between atoms along the polymer chain(s) are represented by simple harmonic springs. There are no torsional or angle potentials, and no electrostatic interactions. However, this simple model does share basic elements with the models we still use today for proteins and small molecules -- specifically, our classical MD models today begin with the potential above and add additional terms.Simple systems like this polymer model have been thoroughly studied as models of polyatomic molecules, and as models of short polymers. It is relatively easy to derive or determine scaling laws for various physical properties as a function of polymer length in such systems. One such study, by Reis et al., [ Fluid Phase Equilibria 221: 25 (2004) ](https://doi.org/10.1016/j.fluid.2004.04.007) evaluated the self-diffusion coefficient for chains of different lengths. (The self-diffusion coefficient measures diffusive motion of something in a solution consisting of itself, for example the self-diffusion coefficient of water in water describes how mobile a water molecule is in pure water.)Here, you will use some Python and Fortran libraries to set up some initial test simulations and make a plot relating to equilibration. Following that, you will compute the self-diffusion coefficient as directed below, making contact with the data of Reis et al. Most of the functions you will need have already been written for you and are provided here. Most of this assignment will involve using them to conduct a simulation. In addition to the paper mentioned, you will need `mdlib.f90` and `MD_functions.py`. As in the Energy Minimization assignment you did previously, you will need to compile `mdlib.f90` into a .so file suitable for use within Python. Background/settings: Introduction of our variables:Here, the potential energy will be as given above. Again, note that we are working in dimensionless form. We will simulate a system with a total of N monomers, some of which will be linked to form polymers. Each polymer will consist of M monomers, so that if $N_{poly}$ is the number of polymers, $N = M\times N_{poly}$. That is to say, we have $N_{poly}$ polymers each consisting of $M$ linked monomers in a chain, for a total of $N$ particles. As usual, our system will have a density, $\rho$, which is N/V. We will work with a particular temperature, $T$, and cutoff distance, $R_c$, beyond which Lennard-Jones interactions will not be included. 
Additionally, we need to specify a bond strength and equilibrium separation, $k$ and $r_0$, respectively. And we will take timesteps $\Delta t$ using the velocity Verlet integrator. Settings to use (unless otherwise noted)Unless otherwise noted, here you should use the following settings:* $k = 3000$ (spring constant)* $r_0 = 1$ (preferred bond length)* $N = 240$ (number of particles)* $\rho = N/V = 0.8$ so that $L$, the box size, is $(N/\rho)^{1/3}$ * Use $L$ as the box size in your code* $\Delta t = 0.001$ (timestep)* $T = 1.0$ (temperature)* $R_c = 2.5$ (call this Cut in your code)Our use of the dimensionless form here includes setting all particle masses to 1. Because of this, forces and accelerations are the same thing. Additionally, units can be dropped, and the Boltzmann constant is equal to 1 in these units. What's providedIn this case, mdlib provides almost the same CalcEnergy and CalcEnergyForces routines you used in the previous assignment (for energy minimizations). Additionally, it provides a VVIntegrate function to actually use the integrator (Velocity Verlet) to take a timestep. You should look through the Fortran code to make sure you understand what is being done and see how it connects to what we covered in lecture.The Python syntax for using VVintegrate looks like: `Pos, Vel, Accel, KEnergy, PEnergy = mdlib.vvintegrate( Pos, Vel, Accel, M, L, Cut, dt )` This takes a single timestep (covering time dt) and returns the position, velocity, acceleration, kinetic energy, and potential energy. Likewise, mdlib provides functions for calculating the potential energy, or the potential energy and forces, as:`PEnergy = mdlib.calcenergy(Pos, M, L, Cut)`and`PE, Forces = mdlib.calcenergyforces(Pos, M, L, Cut, Forces)` By way of additional background/tips, Scott Shell has some very useful [simulation best practices](https://sites.engineering.ucsb.edu/~shell/che210d/Simulation_best_practices.pdf) tips which can help with thinking through how to code up and conduct effective simulations. Your assignmentAll, or almost all, of the functions you will need to complete this assignment are described below. But before getting to the description, I want to explain the assignment. Part A: Develop a simple molecular dynamics code and examine potential energy versus time for several values of M Edit the supplied code below (or MD.py if you prefer to work with the plain text; note that I have also provided MD_functions.py which is utilized by this notebook which provides only the functions you need and not a template for the code you need to write, since this is below) to develop a simple molecular dynamics code. Most of the functions you need are already provided (see documentation below, or in MD_functions.py). But, you do need to fill in the core of two functions:* InitVelocities(N,T): Should take N, a number of particles and a target temperature and return a velocity array (‘Vel’) which is Nx3 with mean velocity corresponding to the correct average temperature. You may wish to do this by assigning random velocities and rescaling (see below).* RescaleVelocities(Vel, T): Re-center the velocities to zero net momentum to remove any overall translational motion (i.e., subtract off the average velocity from all particles) and then re-scale the velocities to maintain the correct temperature. 
This can be done by noting that the average kinetic energy of the system is related in a simple way to the effective temperature: \begin{equation}\frac{1}{2}\sum \limits_i m_i v_i^2 = \frac{3}{2} N k_B T\end{equation}The left-hand term is the kinetic energy, and here can be simplified by noting all of the masses in the system are defined to be 1. The right hand term involves the Boltzmann constant, the number of particles in the system, and the instantaneous temperature.So, you can compute the effective temperature of your system, and translate this into a scaling factor which you can use to multiply (scale) all velocities in the system to ensure you get the correct average temperature (see http://www.pages.drexel.edu/~cfa22/msim/node33.html). **Specifically, following the Drexel page (eq. 177), compute a (scalar) constant by which you will multiply all of the velocities to ensure that the effective temperature is at the correct target value after rescaling.** To do this calculation you will need to compute the kinetic energy, which involves the sum above. Remove translational motion, rescale the velocities, and have your function return the updated velocity array.Once the above two functions are written, finish writing a simple MD code using the available functions to:* Initially place atoms on a cubic lattice with the correct box size* Energy-minimize the initial configuration using the conjugate-gradient energy minimizer; this will help ensure the simulation doesn’t “explode” (a highly technical term meaning “crash”) when you begin MD* Assign initial velocities and compute forces (accelerations)* Use the velocity Verlet integrator to perform a molecular dynamics run. Rescale atomic velocities every **RescaleFreq** integration steps to achieve the target temperature T. (You can test whether you should rescale the velocities using the modulo (remainder) operator, for example `i % RescaleFreq == 0`) * You might want to use RescaleFreq = 100 (for extra credit, you can try several values of RescaleFreq and explain the differences in fluctuations in the potential energy versus time that you see)Use the settings given above for $N$, $\rho$, $T$, the timestep, and the cutoff distance. Perform simulations for $M = 2, 4, 6, 8, 12,$ and $16$ and store the total energies versus time out to 2,000 timesteps. (Remember, $M$ controls the number of particles per polymer; you are keeping the same total number of particles in the system and changing the size of the polymers). On a single graph, plot the potential energy versus time for each of these cases (each in a different color). Turn in this graph. Note also you can visualize, if desired, using the Python module for writing to PDB files which you saw in the Energy Minimization exercise. Part B: Extend your code to compute the self-diffusion coefficient as a function of chain lengthModify your MD code from above to perform a series of steps that will allow you to compute the self-diffusion coefficient as a function of chain length and determine how diffusion of polymers depends on the size of the polymer. To compute the self-diffusion coefficient, you will simply need to monitor the motion of each polymer in time. Here, you will first perform two equilibrations at constant temperature using velocity rescaling. The first will allow the system to reach the desired temperature and forget about its initial configuration (remember, it was started on a lattice). The second will allow you to compute the average total energy of the system. 
Then, you will fix the total energy at this value and perform a production simulation. Here’s what you should do:* Following initial preparation (like above), first perform equilibration for NStepsEquil1 using velocity rescaling to target temperature T every RescaleFreq steps, using whatever value of RescaleFreq you used previously (NOT every step!)* Perform a second equilibration for `NStepsEquil2` timesteps using velocity rescaling at the same frequency, storing energies while you do this. * Compute the average total energy over this second equilibration and rescale the velocities to start the final phase with the correct total energy. In other words, the total energy at the end of equilibration will be slightly above or below the average; you should find the kinetic energy you need to have to get the correct total energy, and rescale the velocities to get this kinetic energy. After this you will be doing no more velocity rescaling. (Hint: You can do this final rescaling easily by computing a scaling factor, and you will probably not be using the rescaling code you use to maintain the temperature during equilibration.)* Copy the initial positions of the particles into a reference array for computing the mean squared displacement, for example using `Pos0 = Pos.copy()`* Perform a production run for `NStepsProd` integration steps with constant energy (NVE) rather than velocity rescaling. Periodically record the time and mean squared displacement of the atoms from their initial positions. (You will need to write a small bit of code to compute mean squared displacements, but it shouldn’t take more than a couple of lines; you may send it to me to check if you are concerned about it). The mean squared displacement is given by \begin{equation}\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \rangle\end{equation} where the $\mathbf{r}$'s are the current and initial positions of the object in question so the mean squared displacement measures the square of the distance traveled for the object. * Compute the self-diffusion coefficient, $D$. The mean squared displacement relates to the self-diffusion coefficient, $D$, in this way:\begin{equation}\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \rangle = 6 D t\end{equation}Here $D$ is the self-diffusion coefficient and t is the elapsed time. That is, the expected squared distance traveled (mean squared displacement) grows linearly with the elapsed time.For settings, use NStepsEquil1 = 10,000 = NStepsEquil2 and NStepsProd = 100,000. (Note: You should probably do a “dry run” first with shorter simulations to ensure everything is working, as 100,000 steps might take an hour or more to run).* Perform these runs for $M = 2, 4, 6, 8, 12,$ and $16$, storing results for each. * Plot the mean-squared displacement versus time for each M on the same graph. * Compute the diffusion coefficient for each $M$ from the slope of each graph and plot these fits on the same graph. You can do a linear least-squares fit easily in Numpy. `Slope, Intercept = np.polyfit( xvals, yvals, 1)`Plot the diffusion coefficient versus $M$ and try and see if it follows any obvious scaling law. It should decrease with increasing $M$, but with what power? (You may want to refer to the Reis et al. paper). What to turn in:* Your plot of the potential energy versus time for each $M$ in Part A* Mean-squared displacement versus time, and fit, for each $M$ in Part B, all on one plot* The diffusion coefficient versus $M$ in Part B* Your code for at least Part B* Any comments you have - do you think you got it right? Why or why not? What was confusing/helpful? 
What would you do if you had more time? * Clearly label axes and curves on your plots!You can send your comments/discussion as an e-mail, and the rest of the items as attachments. What’s provided for you: In this case, most of what you need is provided in the importable module `MD_functions.py` (which you can view with your favorite text editor, like `vi` or Atom), except for the functions for initial velocities and velocity rescaling -- in those cases, the shells are present below and you need to write the core (which will be very brief!). **However, you will also need to insert the code for the `ConjugateGradient` function in `MD_functions.py`** from your work you did in the Energy Minimization assignment. If you did not do this, or did not get it correct (or if you are not certain if you did), you will need to e-mail David Mobley for solutions.From the Fortran library `mdlib` (which you will compile as usual via `f2py3 -c -m mdlib mdlib.f90` or similar), the only new function you need is Velocity Verlet. In `MD_functions.py`, the following tools are available (this shows their documentation, not the details of the code, but you should only need to read the documentation in order to be able to use them. NOTE: No modification of these functions is needed; you only need to use them. You will only need to write `InitVelocities` and `RescaleVelocities` as described above, plus provide your previous code for `ConjugateGradient`: Installing Packages***If you are running this on Google Colab, please add the installation blocks from the [getting started notebook](https://github.com/MobleyLab/drug-computing/blob/master/uci-pharmsci/Getting_Started.ipynb) or [condacolab](https://github.com/aakankschit/drug-computing/blob/master/uci-pharmsci/Getting_Started_condacolab.ipynb) here and then execute the code below*** Help on module MD:NAME MD - MD exercise template for PharmSci 175/275FUNCTIONSFUNCTIONS ConjugateGradient(Pos, dx, EFracTolLS, EFracTolCG, M, L, Cut) Performs a conjugate gradient search. Input: Pos: starting positions, (N,3) array dx: initial step amount EFracTolLS: fractional energy tolerance for line search EFracTolCG: fractional energy tolerance for conjugate gradient M: Monomers per polymer L: Box size Cut: Cutoff Output: PEnergy: value of potential energy at minimum Pos: minimum energy (N,3) position array InitPositions(N, L) Returns an array of initial positions of each atom, placed on a cubic lattice for convenience. Input: N: number of atoms L: box length Output: Pos: (N,3) array of positions InitVelocities(N, T) Returns an initial random velocity set. Input: N: number of atoms T: target temperature Output: Vel: (N,3) array of atomic velocities InstTemp(Vel) Returns the instantaneous temperature. Input: Vel: (N,3) array of atomic velocities Output: Tinst: float InstTemp(Vel): Returns the instantaneous temperature. Input: Vel: (N,3) array of atomic velocities Output: Tinst: float RescaleVelocities(Vel, T) Rescales velocities in the system to the target temperature. Input: Vel: (N,3) array of atomic velocities T: target temperature Output: Vel: same as above LineSearch(Pos, Dir, dx, EFracTol, M, L, Cut, Accel=1.5, MaxInc=10.0, MaxIter=10000) Performs a line search along direction Dir. 
Input: Pos: starting positions, (N,3) array Dir: (N,3) array of gradient direction dx: initial step amount EFracTol: fractional energy tolerance M: Monomers per polymer L: Box size Cut: Cutoff Accel: acceleration factor MaxInc: the maximum increase in energy for bracketing MaxIter: maximum number of iteration steps Output: PEnergy: value of potential energy at minimum along Dir Pos: minimum energy (N,3) position array along Dir Here, you should actually write your functions:
###Code
import mdlib
from MD_functions import *
def InitVelocities(N, T):
"""Returns an initial random velocity set.
Input:
N: number of atoms
T: target temperature
Output:
Vel: (N,3) array of atomic velocities
"""
#WRITE THIS CODE
#THEN RETURN THE NEW VELOCITIES
return Vel
def RescaleVelocities(Vel, T):
"""Rescales velocities in the system to the target temperature.
Input:
Vel: (N,3) numpy array of atomic velocities
T: target temperature
Output:
Vel: same as above
"""
#WRITE THIS CODE
#recenter to zero net momentum (assuming all masses same)
#find the total kinetic energy
#find velocity scale factor from ratios of kinetic energy
#Update velocities
#NOW RETURN THE NEW VELOCITIES
return Vel
###Output
_____no_output_____
###Markdown
Now use your functions, coupled with those provided, to code up your assignment:
###Code
#PART A:
#Define box size and other settings
k=3000
r0=1
N=240
rho=0.8 #Solve to find L
#Set L
dt=0.001
T=1.0
Cut=2.5
RescaleFreq = 100 #See note above - may want to try several values
#Define your M value(s)
#Initially place atoms on a cubic lattice
#Energy-minimize the initial configuration using the conjugate-gradient energy minimizer
#Assign initial velocities and compute forces (accelerations)
#Use the velocity Verlet integrator to perform a molecular dynamics run, rescaling velocities when appropriate
#PART B:
#Additionally:
NStepsEquil1 = 10000
NStepsEquil2 = 10000
NStepsProd = 100000
#Set up as in A
#Equilibrate for NStepsEquil1 with velocity rescaling every RescaleFreq steps, discarding energies
#Equilibrate for NStepsEquil2 with velocity rescaling, storing energies
#Stop and average the energy. Rescale the velocities so the current (end-of-equilibration) energy matches the average
#Store the particle positions so you can later compute the mean squared displacement
#Run for NStepsProd at constant energy (NVE) recording time and mean squared displacement periodically
#Compute diffusion coefficient for each M using a fit to the mean squared displacement
#Plot mean squared displacement for each M
#Plot diffusion coefficient as a function of M
###Output
_____no_output_____
###Markdown
Molecular Dynamics (MD) Objective:Perform some basic molecular dynamics (MD) simulations on a simple polymer model, perform some initial tests, and then after equilibrating the system, compute the self-diffusion coefficient for several different chain lengths. __Due date__: As assigned in class Overview:One simple model of a polymer is just a chain of Lennard-Jones atoms. Here we will simulate such chains, interacting according to the potential (in dimensionless form): \begin{equation}U^* = \sum \limits_{i<j \mathrm{,\ ij\ not\ bonded}} 4\left( r_{ij}^{-12} - r_{ij}^{-6}\right) + \sum \limits_{i<j\mathrm{,\ ij\ bonded}} \frac{k}{2} \left( r_{ij} - r_0\right)^2\end{equation}(Note that, as in the Energy Minimization assignment, we are using the dimensionless form, so that all of the constants are hidden in the units.)Here, atoms have Lennard-Jones attractions and repulsions, and bonds between atoms along the polymer chain(s) are represented by simple harmonic springs. There are no torsional or angle potentials, and no electrostatic interactions. However, this simple model does share basic elements with the models we still use today for proteins and small molecules -- specifically, our classical MD models today begin with the potential above and add additional terms.Simple systems like this polymer model have been thoroughly studied as models of polyatomic molecules, and as models of short polymers. It is relatively easy to derive or determine scaling laws for various physical properties as a function of polymer length in such systems. One such study, by Reis et al., [ Fluid Phase Equilibria 221: 25 (2004) ](https://doi.org/10.1016/j.fluid.2004.04.007) evaluated the self-diffusion coefficient for chains of different lengths. (The self-diffusion coefficient measures diffusive motion of something in a solution consisting of itself, for example the self-diffusion coefficient of water in water describes how mobile a water molecule is in pure water.)Here, you will use some Python and Fortran libraries to set up some initial test simulations and make a plot relating to equilibration. Following that, you will compute the self-diffusion coefficient as directed below, making contact with the data of Reis et al. Most of the functions you will need have already been written for you and are provided here. Most of this assignment will involve using them to conduct a simulation. In addition to the paper mentioned, you will need `mdlib.f90` and `MD_functions.py`. As in the Energy Minimization assignment you did previously, you will need to compile `mdlib.f90` into a .so file suitable for use within Python. Background/settings: Introduction of our variables:Here, the potential energy will be as given above. Again, note that we are working in dimensionless form. We will simulate a system with a total of N monomers, some of which will be linked to form polymers. Each polymer will consist of M monomers, so that if $N_{poly}$ is the number of polymers, $N = M\times N_{poly}$. That is to say, we have $N_{poly}$ polymers each consisting of $M$ linked monomers in a chain, for a total of $N$ particles. As usual, our system will have a density, $\rho$, which is N/V. We will work with a particular temperature, $T$, and cutoff distance, $R_c$, beyond which Lennard-Jones interactions will not be included. Additionally, we need to specify a bond strength and equilibrium separation, $k$ and $r_0$, respectively. And we will take timesteps $\Delta t$ using the velocity Verlet integrator. 
Settings to use (unless otherwise noted)Unless otherwise noted, here you should use the following settings:* $k = 3000$ (spring constant)* $r_0 = 1$ (preferred bond length)* $N = 240$ (number of particles)* $\rho = N/V = 0.8$ so that $L$, the box size, is $(N/\rho)^{1/3}$ * Use $L$ as the box size in your code* $\Delta t = 0.001$ (timestep)* $T = 1.0$ (temperature)* $R_c = 2.5$ (call this Cut in your code)Our use of the dimensionless form here includes setting all particle masses to 1. Because of this, forces and accelerations are the same thing. Additionally, units can be dropped, and the Boltzmann constant is equal to 1 in these units. What's providedIn this case, mdlib provides almost the same CalcEnergy and CalcEnergyForces routines you used in the previous assignment (for energy minimizations). Additionally, it provides a VVIntegrate function to actually use the integrator (Velocity Verlet) to take a timestep. You should look through the Fortran code to make sure you understand what is being done and see how it connects to what we covered in lecture.The Python syntax for using VVintegrate looks like: `Pos, Vel, Accel, KEnergy, PEnergy = mdlib.vvintegrate( Pos, Vel, Accel, M, L, Cut, dt )` This takes a single timestep (covering time dt) and returns the position, velocity, acceleration, kinetic energy, and potential energy. Likewise, mdlib provides functions for calculating the potential energy, or the potential energy and forces, as:`PEnergy = mdlib.calcenergy(Pos, M, L, Cut)`and`PE, Forces = mdlib.calcenergyforces(Pos, M, L, Cut, Forces)` Your assignmentAll, or almost all, of the functions you will need to complete this assignment are described below. But before getting to the description, I want to explain the assignment. Part A: Develop a simple molecular dynamics code and examine potential energy versus time for several values of M Edit the supplied code below (or MD.py if you prefer to work with the plain text; note that I have also provided MD_functions.py which is utilized by this notebook which provides only the functions you need and not a template for the code you need to write, since this is below) to develop a simple molecular dynamics code. Most of the functions you need are already provided (see documentation below, or in MD_functions.py). But, you do need to fill in the core of two functions:* InitVelocities(N,T): Should take N, a number of particles and a target temperature and return a velocity array (‘Vel’) which is Nx3 with mean velocity corresponding to the correct average temperature. You may wish to do this by assigning random velocities and rescaling (see below).* RescaleVelocities(Vel, T): Re-center the velocities to zero net momentum to remove any overall translational motion (i.e., subtract off the average velocity from all particles) and then re-scale the velocities to maintain the correct temperature. This can be done by noting that the average kinetic energy of the system is related in a simple way to the effective temperature: \begin{equation}\frac{1}{2}\sum \limits_i m_i v_i^2 = \frac{3}{2} N k_B T\end{equation}The left-hand term is the kinetic energy, and here can be simplified by noting all of the masses in the system are defined to be 1. 
The right hand term involves the Boltzmann constant, the number of particles in the system, and the instantaneous temperature.So, you can compute the effective temperature of your system, and translate this into a scaling factor which you can use to multiply (scale) all velocities in the system to ensure you get the correct average temperature (see http://www.pages.drexel.edu/~cfa22/msim/node33.html). **Specifically, following the Drexel page (eq. 177), compute a (scalar) constant by which you will multiply all of the velocities to ensure that the effective temperature is at the correct target value after rescaling.** To do this calculation you will need to compute the kinetic energy, which involves the sum above. Remove translational motion, rescale the velocities, and have your function return the updated velocity array.Once the above two functions are written, finish writing a simple MD code using the available functions to:* Initially place atoms on a cubic lattice with the correct box size* Energy-minimize the initial configuration using the conjugate-gradient energy minimizer; this will help ensure the simulation doesn’t “explode” (a highly technical term meaning “crash”) when you begin MD* Assign initial velocities and compute forces (accelerations)* Use the velocity Verlet integrator to perform a molecular dynamics run. Rescale atomic velocities every **RescaleFreq** integration steps to achieve the target temperature T. (You can test whether you should rescale the velocities using the modulo (remainder) operator, for example `i % RescaleFreq == 0`) * You might want to use RescaleFreq = 100 (for extra credit, you can try several values of RescaleFreq and explain the differences in fluctuations in the potential energy versus time that you see)Use the settings given above for $N$, $\rho$, $T$, the timestep, and the cutoff distance. Perform simulations for $M = 2, 4, 6, 8, 12,$ and $16$ and store the total energies versus time out to 2,000 timesteps. (Remember, $M$ controls the number of particles per polymer; you are keeping the same total number of particles in the system and changing the size of the polymers). On a single graph, plot the potential energy versus time for each of these cases (each in a different color). Turn in this graph. Note also you can visualize, if desired, using the Python module for writing to PDB files which you saw in the Energy Minimization exercise. Part B: Extend your code to compute the self-diffusion coefficient as a function of chain lengthModify your MD code from above to perform a series of steps that will allow you to compute the self-diffusion coefficient as a function of chain length and determine how diffusion of polymers depends on the size of the polymer. To compute the self-diffusion coefficient, you will simply need to monitor the motion of each polymer in time. Here, you will first perform two equilibrations at constant temperature using velocity rescaling. The first will allow the system to reach the desired temperature and forget about its initial configuration (remember, it was started on a lattice). The second will allow you to compute the average total energy of the system. Then, you will fix the total energy at this value and perform a production simulation. 
Here’s what you should do:* Following initial preparation (like above), first perform equilibration for NStepsEquil1 using velocity rescaling to target temperature T every RescaleFreq steps, using whatever value of RescaleFreq you used previously (NOT every step!)* Perform a second equilibration for `NStepsEquil2` timesteps using velocity rescaling at the same frequency, storing energies while you do this. * Compute the average total energy over this second equilibration and rescale the velocities to start the final phase with the correct total energy. In other words, the total energy at the end of equilibration will be slightly above or below the average; you should find the kinetic energy you need to have to get the correct total energy, and rescale the velocities to get this kinetic energy. After this you will be doing no more velocity rescaling. (Hint: You can do this final rescaling easily by computing a scaling factor, and you will probably not be using the rescaling code you use to maintain the temperature during equilibration.)* Copy the initial positions of the particles into a reference array for computing the mean squared displacement, for example using `Pos0 = Pos.copy()`* Perform a production run for `NStepsProd` integration steps with constant energy (NVE) rather than velocity rescaling. Periodically record the time and mean squared displacement of the atoms from their initial positions. (You will need to write a small bit of code to compute mean squared displacements, but it shouldn’t take more than a couple of lines; you may send it to me to check if you are concerned about it). The mean squared displacement is given by \begin{equation}\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \rangle\end{equation} where the $\mathbf{r}$'s are the current and initial positions of the object in question so the mean squared displacement measures the square of the distance traveled for the object. * Compute the self-diffusion coefficient, $D$. The mean squared displacement relates to the self-diffusion coefficient, $D$, in this way:\begin{equation}\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \rangle = 6 D t\end{equation}Here $D$ is the self-diffusion coefficient and t is the elapsed time. That is, the expected squared distance traveled (mean squared displacement) grows linearly with the elapsed time.For settings, use NStepsEquil1 = 10,000 = NStepsEquil2 and NStepsProd = 100,000. (Note: You should probably do a “dry run” first with shorter simulations to ensure everything is working, as 100,000 steps might take an hour or more to run).* Perform these runs for $M = 2, 4, 6, 8, 12,$ and $16$, storing results for each. * Plot the mean-squared displacement versus time for each M on the same graph. * Compute the diffusion coefficient for each $M$ from the slope of each graph and plot these fits on the same graph. You can do a linear least-squares fit easily in Numpy. `Slope, Intercept = np.polyfit( xvals, yvals, 1)`Plot the diffusion coefficient versus $M$ and try and see if it follows any obvious scaling law. It should decrease with increasing $M$, but with what power? (You may want to refer to the Reis et al. paper). What to turn in:* Your plot of the potential energy versus time for each $M$ in Part A* Mean-squared displacement versus time, and fit, for each $M$ in Part B, all on one plot* The diffusion coefficient versus $M$ in Part B* Your code for at least Part B* Any comments you have - do you think you got it right? Why or why not? What was confusing/helpful? What would you do if you had more time? 
* Clearly label axes and curves on your plots!You can send your comments/discussion as an e-mail, and the rest of the items as attachments. What’s provided for you: In this case, most of what you need is provided in the importable module `MD_functions.py` (which you can view with your favorite text editor, like `vi` or Atom), except for the functions for initial velocities and velocity rescaling -- in those cases, the shells are present below and you need to write the core (which will be very brief!). From the Fortran library `mdlib` (which you will compile as usual via `f2py -c -m mdlib mdlib.f90`), the only new function you need is Velocity Verlet. In `MD_functions.py`, the following tools are available (this shows their documentation, not the details of the code, but you should only need to read the documentation in order to be able to use them. NOTE: No modification of these functions is needed; you only need to use them. You will only need to write `InitVelocities` and `RescaleVelocities` as described above: Help on module MD:NAME MD - MD exercise template for PharmSci 175/275FUNCTIONSFUNCTIONS ConjugateGradient(Pos, dx, EFracTolLS, EFracTolCG, M, L, Cut) Performs a conjugate gradient search. Input: Pos: starting positions, (N,3) array dx: initial step amount EFracTolLS: fractional energy tolerance for line search EFracTolCG: fractional energy tolerance for conjugate gradient M: Monomers per polymer L: Box size Cut: Cutoff Output: PEnergy: value of potential energy at minimum Pos: minimum energy (N,3) position array InitPositions(N, L) Returns an array of initial positions of each atom, placed on a cubic lattice for convenience. Input: N: number of atoms L: box length Output: Pos: (N,3) array of positions InitVelocities(N, T) Returns an initial random velocity set. Input: N: number of atoms T: target temperature Output: Vel: (N,3) array of atomic velocities InstTemp(Vel) Returns the instantaneous temperature. Input: Vel: (N,3) array of atomic velocities Output: Tinst: float InstTemp(Vel): Returns the instantaneous temperature. Input: Vel: (N,3) array of atomic velocities Output: Tinst: float RescaleVelocities(Vel, T) Rescales velocities in the system to the target temperature. Input: Vel: (N,3) array of atomic velocities T: target temperature Output: Vel: same as above LineSearch(Pos, Dir, dx, EFracTol, M, L, Cut, Accel=1.5, MaxInc=10.0, MaxIter=10000) Performs a line search along direction Dir. Input: Pos: starting positions, (N,3) array Dir: (N,3) array of gradient direction dx: initial step amount EFracTol: fractional energy tolerance M: Monomers per polymer L: Box size Cut: Cutoff Accel: acceleration factor MaxInc: the maximum increase in energy for bracketing MaxIter: maximum number of iteration steps Output: PEnergy: value of potential energy at minimum along Dir Pos: minimum energy (N,3) position array along Dir Here, you should actually write your functions:
###Code
import mdlib
from MD_functions import *
def InitVelocities(N, T):
"""Returns an initial random velocity set.
Input:
N: number of atoms
T: target temperature
Output:
Vel: (N,3) array of atomic velocities
"""
#WRITE THIS CODE
#THEN RETURN THE NEW VELOCITIES
return Vel
def RescaleVelocities(Vel, T):
"""Rescales velocities in the system to the target temperature.
Input:
Vel: (N,3) numpy array of atomic velocities
T: target temperature
Output:
Vel: same as above
"""
#WRITE THIS CODE
#recenter to zero net momentum (assuming all masses same)
#find the total kinetic energy
#find velocity scale factor from ratios of kinetic energy
#Update velocities
#NOW RETURN THE NEW VELOCITIES
return Vel
###Output
_____no_output_____
###Markdown
Now use your functions, coupled with those provided, to code up your assignment:
###Code
#PART A:
#Define box size and other settings
k=3000
r0=1
N=240
rho=0.8 #Solve to find L
#Set L
dt=0.001
T=1.0
Cut=2.5
RescaleFreq = 100 #See note above - may want to try several values
#Define your M value(s)
#Initially place atoms on a cubic lattice
#Energy-minimize the initial configuration using the conjugate-gradient energy minimizer
#Assign initial velocities and compute forces (accelerations)
#Use the velocity Verlet integrator to perform a molecular dynamics run, rescaling velocities when appropriate
#PART B:
#Additionally:
NStepsEquil1 = 10000
NStepsEquil2 = 10000
NStepsProd = 100000
#Set up as in A
#Equilibrate for NStepsEquil1 with velocity rescaling every RescaleFreq steps, discarding energies
#Equilibrate for NStepsEquil2 with velocity rescaling, storing energies
#Stop and average the energy. Rescale the velocities so the current (end-of-equilibration) energy matches the average
#Store the particle positions so you can later compute the mean squared displacement
#Run for NStepsProd at constant energy (NVE) recording time and mean squared displacement periodically
#Compute diffusion coefficient for each M using a fit to the mean squared displacement
#Plot mean squared displacement for each M
#Plot diffusion coefficient as a function of M
###Output
_____no_output_____ |
pandas/hierarchical_indexing.ipynb | ###Markdown
Hierarchical IndexingUp to this point we've been focused primarily on one-dimensional and two-dimensional data, stored in Pandas `Series` and `DataFrame` objects, respectively. Often it is useful to go beyond this and store higher-dimensional data - that is, data indexed by more than one or two keys. While Pandas does provide `Panel` and `Panel4D` objects that natively handle three-dimensional and four-dimensional data, a far more common pattern in practice is to make use of *hierarchical indexing* (also known as *multi-indexing*) to incorporate multiple index *levels* within a single index. In this way, higher-dimensional data can be compactly represented within the familiar one-dimensional `Series` and two-dimensional `DataFrame` objects.In this section, we'll explore the direct creation of `MultiIndex` objects, considerations when indexing, slicing, and computing statistics across multiply indexed data, and useful routines for converting between simple and hierarchically indexed representations of your data.We begin with the standard imports:
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
A Multiply Indexed SeriesLet's start by considering how we might represent two-dimensional data within a one-dimensional `Series`. For concreteness, we will consider a series of data where each point has a character and numerical key. The bad waySuppose you would like to track data about states from two different years. Using the Pandas tools we've already covered, you might be tempted to simply use Python tuples as keys:
###Code
index = [('California', 2000), ('California', 2010),
('New York', 2000), ('New York', 2010),
('Texas', 2000), ('Texas', 2010)]
populations = [33871648, 37253956,
18976457, 19378102,
20851820, 25145561]
pop = pd.Series(populations, index=index)
pop
###Output
_____no_output_____
###Markdown
With this indexing scheme, you can straightforwardly index or slice the series based on this multiple index:
###Code
pop[('California', 2010):('Texas', 2000)]
###Output
_____no_output_____
###Markdown
But the convenience ends there. For example, if you need to select all values from 2010, you'll need to do some messy (and potentially slow) munging to make it happen:
###Code
pop[[i for i in pop.index if i[1] == 2010]]
###Output
_____no_output_____
###Markdown
This produces the desired result, but is not as clean (or as efficient for large datasets) as the slicing syntax we've grown to love in Pandas. The Better Way: Pandas MultiIndexFortunately, Pandas provides a better way. Our tuple-based indexing is essentially a rudimentary multi-index, and the Pandas `MultiIndex` type gives us the type of operations we wish to have. We can create a multi-index from the tuples as follows:
###Code
index = pd.MultiIndex.from_tuples(index)
index
###Output
_____no_output_____
###Markdown
Notice that the `MultiIndex` contains multiple `levels` of indexing-in this case, the state names and the years, as well as multiple *labels* for each data point which encode these levels.If we re-index our series with this `MultiIndex`, we see the hierarchical representation of the data:
###Code
pop = pop.reindex(index)
pop
###Output
_____no_output_____
###Markdown
Here the first two columns of the `Series` representation show the multiple index values, while the third column shows the data. Notice that some entries are missing in the first column: in this multi-index representation, any blank entry indicates the same value as the line above it.Now to access all data for which the second index is 2010, we can simply use the Pandas slicing notation:
###Code
pop[:, 2010]
###Output
_____no_output_____
###Markdown
The result is a singly indexed array with just the keys we're interested in. This syntax is much more convenient (and the operation is much more efficient) than the home-spun tuple-based multi-indexing solution that we started with. We'll now further discuss this sort of indexing operation on hierarchically indexed data. MultiIndex as extra dimensionYou might notice something else here: we could easily have stored the same data using a simple `DataFrame` with index and column labels. In fact, Pandas is built with this equivalence in mind. The `unstack()` method will quickly convert a multiply indexed `Series` into a conventionally indexed `DataFrame`:
###Code
pop_df = pop.unstack()
pop_df
###Output
_____no_output_____
###Markdown
Naturally, the `stack()` method provides the opposite operation:
###Code
pop_df.stack()
###Output
_____no_output_____
###Markdown
Seeing this, you might wonder why we would bother with hierarchical indexing at all. The reason is simple: just as we were able to use multi-indexing to represent two-dimensional data within a one-dimensional `Series`, we can also use it to represent data of three or more dimensions in a `Series` or `DataFrame`. Each extra level in a multi-index represents an extra dimension of data; taking advantage of this property gives us much more flexibility in the types of data we can represent. Concretely, we might want to add another column of demographic data for each state at each year (say, population under 18); with a `MultiIndex` this is as easy as adding another column to the `DataFrame`:
###Code
pop_df = pd.DataFrame({'total': pop, 'under18': [9267089, 9284094,
4687374, 4318033,
5906301, 6879014]})
pop_df
###Output
_____no_output_____
###Markdown
In addition, all the ufuncs and other functionality work with hierarchical indices as well. Here we compute the fraction of people under 18 by year, given the above data:
###Code
f_u18 = pop_df['under18'] / pop_df['total']
f_u18.unstack()
###Output
_____no_output_____
###Markdown
This allows us to easily and quickly manipulate and explore even high-dimensional data. Methods of MultiIndex CreationThe most straightforward way to construct a multiply indexed `Series` or `DataFrame` is to simply pass a list of two or more index arrays to the constructor. For example:
###Code
df = pd.DataFrame(np.random.rand(4, 2),
index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
columns=['data1', 'data2'])
df
###Output
_____no_output_____
###Markdown
The work of creating the `MultiIndex` is done in the background.Similarly, if you pass a dictionary with appropriate tuples as keys, Pandas will automatically recognize this and use a `MultiIndex` by default:
###Code
data = {('California', 2000): 33871648,
('California', 2010): 3725956,
('Texas', 2000): 20851820,
('Texas', 2010): 25145561,
('New York', 2000): 18976457,
('New York', 2010): 19378102}
pd.Series(data)
###Output
_____no_output_____
###Markdown
Nevertheless, it is sometimes useful to explicitly create a `MultiIndex`; we'll see a couple of these methods here. Explicit MultiIndex constructorsFor more flexibility in how the index is constructed, you can instead use the class method constructors available in the `pd.MultiIndex`. For example, as we did before, you can construct the `MultiIndex` from a simple list of arrays giving the index values within each level:
###Code
pd.MultiIndex.from_arrays([['a', 'a', 'b', 'b'], [1, 2, 1, 2]])
###Output
_____no_output_____
###Markdown
You can construct it from a list of tuples giving the multiple index values of each point:
###Code
pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)])
###Output
_____no_output_____
###Markdown
You can even construct it from a Cartesian product of single indices:
###Code
pd.MultiIndex.from_product([['a', 'b'], [1, 2]])
###Output
_____no_output_____
###Markdown
Similarly, you can construct the `MultiIndex` directly using its internal encoding by passing `levels` (a list of lists containing available index values for each level) and `labels` (a list of lists that reference these labels):
###Code
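# note: in more recent pandas versions this constructor's `labels` keyword has been renamed to `codes`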
pd.MultiIndex(levels=[['a', 'b'], [1, 2]], labels=[[0, 0, 1, 1], [0, 1, 0, 1]])
###Output
_____no_output_____
###Markdown
Any of these objects can be passed as the `index` argument when creating a `Series` or `DataFrame`, or be passed to the `reindex` method of an existing `Series` or `DataFrame`. MultiIndex level namesSometimes it is convenient to name the levels of the `MultiIndex`. This can be accomplished by passing the `names` argument to any of the above `MultiIndex` constructors, or by setting the `names` attribute of the index after the fact:
###Code
pop.index.names = ['state', 'year']
pop
###Output
_____no_output_____
###Markdown
With more involved datasets, this can be a useful way to keep track of the meaning of various index values. MultiIndex for columnsIn a `DataFrame`, the rows and columns are completely symmetric, and just as the rows can have multiple levels of indices, the columns can have multiple levels as well. Consider the following, which is a mock-up of some(somewhat realistic) medical data:
###Code
# hierarchical indices and columns
index = pd.MultiIndex.from_product([[2013,2014], [1, 2]],
names=['year', 'visit'])
columns = pd.MultiIndex.from_product([['Bob', 'Guido', 'Sue'], ['HR', 'Temp']],
names=['subject', 'type'])
# mock some data
data = np.round(np.random.rand(4, 6), 1)
data[:, ::2] *= 10
data += 37
# create the DataFrame
health_data = pd.DataFrame(data, index=index, columns=columns)
health_data
###Output
_____no_output_____
###Markdown
Here we see where the multi-indexing for both rows and columns can come in *very* handy. This is fundamentally four-dimensional data, where the dimensions are the subject, the measurement type, the year, and the visit number. With this in place we can, for example, index the top-level column by the person's name and get a full `DataFrame` containing just that person's information:
###Code
health_data['Guido']
###Output
_____no_output_____
###Markdown
For more complicated records containing multiple labeled measurements across multiple times for many subjects (people, countries, cities, etc.), use of hierarchical rows and columns can be extremely convenient! Indexing and Slicing a MultiIndexIndexing and slicing on a `MultiIndex` is designed to be intuitive, and it helps if you think about the indices as added dimensions. We'll first look at indexing multiply indexed `Series`, and then multiply indexed `DataFrame`s. Multiply indexed SeriesConsider the multiply indexed `Series` of state populations we saw earlier:
###Code
pop
###Output
_____no_output_____
###Markdown
We can access single elements by indexing with multiple terms:
###Code
pop['California', 2000]
###Output
_____no_output_____
###Markdown
The `MultiIndex` also supports *partial indexing*, or indexing just one of the levels in the index. The result is another `Series`, with the lower-level indices maintained:
###Code
pop['California']
###Output
_____no_output_____
###Markdown
Partial slicing is available as well, as long as the `MultiIndex` is sorted:
###Code
pop.loc['California':'New York']
###Output
_____no_output_____
###Markdown
With sorted indices, partial indexing can be performed on lower levels by passing an empty slice in the first index:
###Code
pop[:, 2000]
###Output
_____no_output_____
###Markdown
Other types of indexing and selection work as well; for example, selection based on Boolean mask:
###Code
pop[pop > 22000000]
###Output
_____no_output_____
###Markdown
Selection based on fancy indexing also works:
###Code
pop[['California', 'Texas']]
###Output
_____no_output_____
###Markdown
Multiply indexed DataFramesA multiply indexed `DataFrame` behaves in a similar manner. Consider our toy medical `DataFrame` from before:
###Code
health_data
###Output
_____no_output_____
###Markdown
Remember that columns are primary in a `DataFrame`, and the syntax used for multiply indexed `Series` applies to the columns. For example, we can recover Guido's heart rate data with a simple operation:
###Code
health_data['Guido', 'HR']
###Output
_____no_output_____
###Markdown
Also, as with the single-index case, we can use the `loc`, `iloc`, and `ix` indexers. For example:
###Code
health_data.iloc[:2, :2]
###Output
_____no_output_____
###Markdown
These indexers provide an array-like view of the underlying two-dimensional data, but each individual index in `loc` or `iloc` can be passed a tuple of multiple indices.For example:
###Code
health_data.loc[:, ('Bob', 'HR')]
###Output
_____no_output_____
###Markdown
Working with slices within these index tuples is not especially convenient; trying to create a slice within a tuple will lead to a syntax error:
###Code
health_data.loc[(:, 1), (:, 'HR')]
###Output
_____no_output_____
###Markdown
You could get around this by building the desired slice explicitly using Python's built-in `slice()` function, but a better way in this context is to use an `IndexSlice` object, which Pandas provides for precisely this situation. For example:
###Code
idx = pd.IndexSlice
health_data.loc[idx[:, 1], idx[:, 'HR']]
###Output
_____no_output_____
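###Markdown
For comparison, the same selection can be built with Python's built-in `slice` objects (the explicit workaround mentioned above); `slice(None)` plays the role of the bare `:`:
###Code
# equivalent to the IndexSlice version above: all years with visit 1, all subjects' 'HR' columns
health_data.loc[(slice(None), 1), (slice(None), 'HR')]
###Output
_____no_output_____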
###Markdown
There are so many ways to interact with data in multiply indexed `Series` and `DataFrame`s, and as with many tools in this book the best way to become familiar with them is to try them out. Rearranging Multi-IndicesOne of the keys to working with multiply indexed data is knowing how to effectively transform the data. There are a number of operations that will preserve all the information in the dataset, but rearrange it for the purposes of various computations. We saw a brief example of this in the `stack()` and `unstack()` methods, but there are many more ways to finely control the rearrangement of data between hierarchical indices and columns, and we'll explore them here. Sorted and unsorted indicesEarlier, we briefly mentioned a caveat, but we should emphasize it more here. *Many of the `MultiIndex` slicing operations will fail if the index is not sorted.* Let's take a look at this here.We'll start by creating some simple multiply indexed data where the indices are *not lexicographically sorted:*
###Code
index = pd.MultiIndex.from_product([['a', 'c', 'b'], [1, 2]])
data = pd.Series(np.random.rand(6), index=index)
data.index.names = ['char', 'int']
data
###Output
_____no_output_____
###Markdown
If we try to take a partial slice of this index, it will result in an error:
###Code
try:
data['a':'b']
except KeyError as e:
print(type(e))
print(e)
###Output
<class 'pandas.errors.UnsortedIndexError'>
'Key length (1) was greater than MultiIndex lexsort depth (0)'
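###Markdown
An added aside (not in the original text): you can also check programmatically whether a `MultiIndex` is sorted before attempting a partial slice, for example with the `is_monotonic_increasing` attribute shown below.
###Code
# Returns False here because the 'char' level is in the order a, c, b
data.index.is_monotonic_increasing
###Output
_____no_output_____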
###Markdown
Although it is not entirely clear from the error message, this is the result of the MultiIndex not being sorted. For various reasons, partial slices and other similar operations require the levels in the `MultiIndex` to be in sorted (i.e., lexicographical) order. Pandas provides a number of convenience routines to perform this type of sorting; examples are the `sort_index()` and `sortlevel()` methods of the `DataFrame`. We'll use the simplest, `sort_index()`, here:
###Code
data = data.sort_index()
data
###Output
_____no_output_____
###Markdown
With the index sorted in this way, partial slicing will work as expected:
###Code
data['a':'b']
###Output
_____no_output_____
###Markdown
Stacking and unstacking indicesAs we saw briefly before, it is possible to convert a dataset from a stacked multi-index to a simple two-dimensional representation, optionally specifying the level to use:
###Code
pop
pop.unstack(level=0)
pop.unstack(level=1)
###Output
_____no_output_____
###Markdown
The opposite of `unstack()` is `stack()`, which here can be used to recover the original series:
###Code
pop.unstack().stack()
###Output
_____no_output_____
###Markdown
Index setting and resettingAnother way to rearrange hierarchical data is to turn the index labels into columns; this can be accomplished with the `reset_index` method. Calling this on the population dictionary will result in a `DataFrame` with a *state* and *year* column holding the information that was formerly in the index. For clarity, we can optionally specify the name of the data for the column representation:
###Code
pop_flat = pop.reset_index(name='population')
pop_flat
###Output
_____no_output_____
###Markdown
Often when working with data in the real world, the raw input data looks like this and it's useful to build a `MultiIndex` from the column values. This can be done with the `set_index` method of the `DataFrame`, which returns a multiply indexed `DataFrame`:
###Code
pop_flat.set_index(['state', 'year'])
###Output
_____no_output_____
###Markdown
In practice I find this type of reindexing to be one of the more useful patterns when encountering real-world datasets. Data aggregations on Multi-IndicesWe've previously seen that Pandas has built-in data aggregation methods, such as `mean()`, `sum()`, and `max()`. For hierarchically indexed data, these can be passed a `level` parameter that controls which subset of the data the aggregate is computed on.For example, let's return to our health data:
###Code
health_data
###Output
_____no_output_____
###Markdown
Perhaps we'd like to average-out the measurements in the two visits each year. We can do this by naming the index level we'd like to explore, in this case the year:
###Code
data_mean = health_data.mean(level='year')
data_mean
###Output
_____no_output_____
###Markdown
By further making use of the `axis` keyword, we can take the mean among levels on the columns as well:
###Code
data_mean.mean(axis=1, level='type')
###Output
_____no_output_____ |
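###Markdown
A side note not in the original text: more recent versions of Pandas deprecate the `level` argument of aggregation methods such as `mean()` in favour of an explicit `groupby`. The cell below is an equivalent formulation of the per-year average, reusing the `health_data` frame defined earlier; treat it as a sketch that may need adjustment for your Pandas version.
###Code
# Equivalent of health_data.mean(level='year') using groupby,
# which newer Pandas versions prefer over the deprecated level= argument
health_data.groupby(level='year').mean()
###Output
_____no_output_____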
docs/tutorials/analysis/3D/mcmc_sampling.ipynb | ###Markdown
MCMC sampling using the emcee package IntroductionThe goal of Markov Chain Monte Carlo (MCMC) algorithms is to approximate the posterior distribution of your model parameters by random sampling in a probabilistic space. For most readers this sentence was probably not very helpful, so here we'll start straight away with an example, but you should read the more detailed mathematical approaches of the method [here](https://www.pas.rochester.edu/~sybenzvi/courses/phy403/2015s/p403_17_mcmc.pdf) and [here](https://github.com/jakevdp/BayesianAstronomy/blob/master/03-Bayesian-Modeling-With-MCMC.ipynb). How does it work ?The idea is that we use a number of walkers that will sample the posterior distribution (i.e. sample the Likelihood profile).The goal is to produce a "chain", i.e. a list of $\theta$ values, where each $\theta$ is a vector of parameters for your model.If you start far away from the truth value, the chain will take some time to converge until it reaches a stationary state. Once it has reached this stage, each successive element of the chain is a sample of the target posterior distribution.This means that, once we have obtained the chain of samples, we have everything we need. We can compute the distribution of each parameter by simply approximating it with the histogram of the samples projected into the parameter space. This will provide the errors and correlations between parameters.Now let's try to put a picture on the ideas described above. With this notebook, we have simulated and carried out an MCMC analysis for a source with the following parameters:$Index=2.0$, $Norm=5\times10^{-12}$ cm$^{-2}$ s$^{-1}$ TeV$^{-1}$, $Lambda =(1/Ecut) = 0.02$ TeV$^{-1}$ (50 TeV) for 20 hours.The results that you can get from an MCMC analysis will look like this :On the first two top panels, we show the pseudo-random walk of one walker from an offset starting value to see it evolve to a better solution.In the bottom right panel, we show the trace of each of the 16 walkers for 500 runs (the chain described previously). For the first 100 runs, the parameters evolve towards a solution (this can be viewed as a fitting step). Then they explore the local minimum for 400 runs, which will be used to estimate the parameter correlations and errors.The choice of the Nburn value (when walkers have reached a stationary stage) can be done by eye, but you can also look at the autocorrelation time. Why should I use it ?When it comes to evaluating errors and investigating parameter correlations, one typically estimates the Likelihood in a gridded search (2D Likelihood profiles). Each point of the grid implies a new model fitting. If we use 10 steps for each parameter, we will need to carry out 100 fitting procedures. Now let's say that I have a model with $N$ parameters: we need to carry out that gridded analysis $N*(N-1)$ times. So for 5 free parameters you need 20 gridded searches, resulting in 2000 individual fits. Clearly this strategy doesn't scale well to high-dimensional models.Just for fun: if each fit procedure takes 10s, we're talking about 5h of computing time to estimate the correlation plots. There are many MCMC packages in the Python ecosystem, but here we will focus on [emcee](https://emcee.readthedocs.io), a lightweight Python package. A description is provided here : [Foreman-Mackey, Hogg, Lang & Goodman (2012)](https://arxiv.org/abs/1202.3665).
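###Markdown
Before diving into the Gammapy-specific machinery, the toy cell below (an illustrative addition, not part of the original tutorial) shows the walker / chain / burn-in vocabulary on a trivial 1D Gaussian target using emcee directly; it assumes emcee version 3 or later is installed.
###Code
# Minimal illustration of walkers, chain and burn-in with emcee alone
# (toy 1D Gaussian target; independent of the gammapy analysis below)
import numpy as np
import emcee

def log_prob(theta):
    # log-probability of a standard normal distribution (up to a constant)
    return -0.5 * np.sum(theta ** 2)

ndim, nwalkers, nsteps = 1, 8, 500
p0 = np.random.randn(nwalkers, ndim)  # random starting positions for the walkers
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps)
# discard the first 100 steps as burn-in and flatten the remaining chain
samples = sampler.get_chain(discard=100, flat=True)
print("mean ~ 0:", samples.mean(), " std ~ 1:", samples.std())
###Output
_____no_output_____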
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
ExpCutoffPowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.datasets import MapDataset
from gammapy.makers import MapDatasetMaker
from gammapy.data import Observation
from gammapy.modeling.sampling import (
run_mcmc,
par_to_model,
plot_corner,
plot_trace,
)
from gammapy.modeling import Fit
import logging
logging.basicConfig(level=logging.INFO)
###Output
_____no_output_____
###Markdown
Simulate an observationHere we will start by simulating an observation and filling the corresponding `MapDataset` with faked counts, using `MapDatasetMaker` and the dataset's `fake()` method.
###Code
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
observation = Observation.create(
pointing=SkyCoord(0 * u.deg, 0 * u.deg, frame="galactic"),
livetime=20 * u.h,
irfs=irfs,
)
# Define map geometry
axis = MapAxis.from_edges(
np.logspace(-1, 2, 15), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0), binsz=0.05, width=(2, 2), frame="galactic", axes=[axis]
)
empty_dataset = MapDataset.create(geom=geom, name="dataset-mcmc")
maker = MapDatasetMaker(selection=["background", "edisp", "psf", "exposure"])
dataset = maker.run(empty_dataset, observation)
# Define sky model to simulate the data
spatial_model = GaussianSpatialModel(
lon_0="0 deg", lat_0="0 deg", sigma="0.2 deg", frame="galactic"
)
spectral_model = ExpCutoffPowerLawSpectralModel(
index=2,
amplitude="3e-12 cm-2 s-1 TeV-1",
reference="1 TeV",
lambda_="0.05 TeV-1",
)
sky_model_simu = SkyModel(
spatial_model=spatial_model, spectral_model=spectral_model, name="source"
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-mcmc")
models = Models([sky_model_simu, bkg_model])
print(models)
dataset.models = models
dataset.fake()
dataset.counts.sum_over_axes().plot(add_cbar=True);
# If you want to fit the data for comparison with MCMC later
# fit = Fit(dataset, optimize_opts={"print_level": 1})
# result = fit.run()
###Output
_____no_output_____
###Markdown
Estimate parameter correlations with MCMCNow let's analyse the simulated data.Here we just fit it again with the same model we had before as a starting point.The data that would be needed are the following: - counts cube, psf cube, exposure cube and background modelLuckily all those maps are already in the Dataset object.We will need to define a Likelihood function and define priors on parameters.Here we will assume a uniform prior reading the min, max parameters from the sky model. Define priorsThis step is a bit manual for the moment until we find a better API to define priors.Note that you **need** to define priors for each parameter, otherwise your walkers can explore uncharted territories (e.g. negative norms).
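###Markdown
For intuition only, the next cell sketches what a flat (uniform) log-prior looks like. This is an illustrative stand-in, not the actual function `run_mcmc` uses internally; the real machinery reads the min/max values set in the following cell.
###Code
# Illustrative sketch of a flat (uniform) log-prior; not gammapy's internal code
import numpy as np

def uniform_lnprior(value, par_min, par_max):
    """Return 0 inside [par_min, par_max] and -inf outside (log of a flat prior)."""
    if par_min <= value <= par_max:
        return 0.0
    return -np.inf

# a walker proposing index = 7 while index is bounded to [1, 5] is rejected outright
print(uniform_lnprior(7, 1, 5))
###Output
_____no_output_____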
###Code
print(dataset)
# Define the free parameters and min, max values
parameters = dataset.models.parameters
parameters["sigma"].frozen = True
parameters["lon_0"].frozen = True
parameters["lat_0"].frozen = True
parameters["amplitude"].frozen = False
parameters["index"].frozen = False
parameters["lambda_"].frozen = False
parameters["norm"].frozen = True
parameters["tilt"].frozen = True
parameters["norm"].min = 0.5
parameters["norm"].max = 2
parameters["index"].min = 1
parameters["index"].max = 5
parameters["lambda_"].min = 1e-3
parameters["lambda_"].max = 1
parameters["amplitude"].min = 0.01 * parameters["amplitude"].value
parameters["amplitude"].max = 100 * parameters["amplitude"].value
parameters["sigma"].min = 0.05
parameters["sigma"].max = 1
# Setting amplitude init values a bit offset to see evolution
# Here starting close to the real value
parameters["index"].value = 2.0
parameters["amplitude"].value = 3.2e-12
parameters["lambda_"].value = 0.05
print(dataset.models)
print("stat =", dataset.stat_sum())
%%time
# Now let's define a function to init parameters and run the MCMC with emcee
# Depending on your number of walkers, Nrun and dimensionality, this can take a while (> minutes)
sampler = run_mcmc(dataset, nwalkers=6, nrun=150) # to speedup the notebook
# sampler=run_mcmc(dataset,nwalkers=12,nrun=1000) # more accurate contours
###Output
_____no_output_____
###Markdown
Plot the resultsThe MCMC will return a sampler object containing the trace of all walkers.The most important part is the chain attribute, which is an array of shape:_(nwalkers, nrun, nfreeparam)_The chain is then used to plot the trace of the walkers and estimate the burn-in period (the time it takes for the walkers to reach a stationary stage).
###Code
plot_trace(sampler, dataset)
plot_corner(sampler, dataset, nburn=50)
###Output
_____no_output_____
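###Markdown
As an extra illustration (not in the original notebook), you can also flatten the chain yourself after discarding the burn-in, for instance to compute a summary statistic per free parameter:
###Code
# Flatten the chain after discarding the burn-in phase and
# compute a median value for each free parameter (illustration only)
nburn = 50
chain = sampler.chain  # shape: (nwalkers, nrun, nfreeparam)
flat_samples = chain[:, nburn:, :].reshape(-1, chain.shape[-1])
print(np.median(flat_samples, axis=0))
###Output
_____no_output_____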
###Markdown
Plot the model dispersionUsing the samples from the chain after the burn-in period, we can plot the different models compared to the truth model. To do this we need to compute the spectral model for each parameter state in the sample.
###Code
emin, emax = [0.1, 100] * u.TeV
nburn = 50
fig, ax = plt.subplots(1, 1, figsize=(12, 6))
for nwalk in range(0, 6):
for n in range(nburn, nburn + 100):
pars = sampler.chain[nwalk, n, :]
# set model parameters
par_to_model(dataset, pars)
spectral_model = dataset.models["source"].spectral_model
spectral_model.plot(
energy_bounds=(emin, emax),
ax=ax,
energy_power=2,
alpha=0.02,
color="grey",
)
sky_model_simu.spectral_model.plot(
energy_bounds=(emin, emax), energy_power=2, ax=ax, color="red"
);
###Output
_____no_output_____
###Markdown
Fun ZoneNow that you have the sampler chain, you have in your hands the entire history of each walker in the N-dimensional parameter space. You can, for example, trace the steps of each walker in any parameter space.
###Code
# Here we plot the trace of one walker in a given parameter space
parx, pary = 0, 1
plt.plot(sampler.chain[0, :, parx], sampler.chain[0, :, pary], "ko", ms=1)
plt.plot(
sampler.chain[0, :, parx],
sampler.chain[0, :, pary],
ls=":",
color="grey",
alpha=0.5,
)
plt.xlabel("Index")
plt.ylabel("Amplitude");
###Output
_____no_output_____
###Markdown
PeVatrons in CTA ?Now it's your turn to play with this MCMC notebook. For example, to test the CTA performance to measure a cutoff at very high energies (100 TeV ?).After defining your `SkyModel` it can be as simple as this :
###Code
# dataset = simulate_dataset(model, geom, pointing, irfs)
# sampler = run_mcmc(dataset)
# plot_trace(sampler, dataset)
# plot_corner(sampler, dataset, nburn=200)
###Output
_____no_output_____
day7/Script7.ipynb | ###Markdown
Puzzle 1: Calculate the highest possible amp signal Each amp gets a unique phase setting between 0 and 4 --> what is the highest signal?
###Code
import numpy as np
from copy import deepcopy
###Output
_____no_output_____
###Markdown
Load input
###Code
with open('./input7.txt', 'r') as file:
software = file.readlines()
###Output
_____no_output_____
###Markdown
Convert data to list of integers
###Code
software = list(map(int, software[0].split(',')))
###Output
_____no_output_____
###Markdown
Calculation
###Code
def intcode_computer(inputs, data):
i = 0
run = True
while run is True:
if len(str(data[i])) < 5:
add = (5-len(str(data[i])))*'0'
data[i] = '{0}{1}'.format(add, str(data[i]))
optcode = data[i][-2:]
mode1 = data[i][-3]
mode2 = data[i][-4]
mode3 = data[i][-5]
if mode1 == '0' and optcode != '99':
param1 = int(data[data[i+1]])
else:
param1 = int(data[i+1])
if optcode == '01':
if mode2 == '0':
param2 = int(data[data[i+2]])
else:
param2 = int(data[i+2])
data[data[i+3]] = param1 + param2
i += 4
if optcode == '02':
if mode2 == '0':
param2 = int(data[data[i+2]])
else:
param2 = int(data[i+2])
data[data[i+3]] = param1 * param2
i += 4
if optcode == '03':
data[data[i+1]] = inputs[0]
del inputs[0]
i += 2
if optcode == '04':
if mode1 == '0':
out = data[data[i+1]]
# print(data[data[i+1]])
else:
                out = data[i+1]
                # immediate mode: the parameter itself is the output value
i += 2
if optcode == '05':
if mode2 == '0':
param2 = int(data[data[i+2]])
else:
param2 = int(data[i+2])
if param1 != 0:
i = param2
else:
i += 3
if optcode == '06':
if mode2 == '0':
param2 = int(data[data[i+2]])
else:
param2 = int(data[i+2])
if param1 == 0:
i = param2
else:
i += 3
if optcode == '07':
if mode2 == '0':
param2 = int(data[data[i+2]])
else:
param2 = int(data[i+2])
if param1 < param2:
data[data[i+3]] = 1
else:
data[data[i+3]] = 0
i += 4
if optcode == '08':
if mode2 == '0':
param2 = int(data[data[i+2]])
else:
param2 = int(data[i+2])
if param1 == param2:
data[data[i+3]] = 1
else:
data[data[i+3]] = 0
i += 4
if optcode == '99':
run = False
return out
best_output = 0
for pA in range(0, 5):
outA = intcode_computer([pA, 0], deepcopy(software))
for pB in range(0, 5):
if pA == pB:
outB = False
else:
outB = intcode_computer([pB, outA], deepcopy(software))
for pC in range(0, 5):
if pC in [pA, pB] or outB == False:
outC = False
else:
outC = intcode_computer([pC, outB], deepcopy(software))
for pD in range(0, 5):
if pD in [pA, pB, pC] or outC == False or outB == False:
outD = False
else:
outD = intcode_computer([pD, outC], deepcopy(software))
for pE in range(0, 5):
if pE in [pA, pB, pC, pD] or outD == False or outC == False or outB == False:
outE = 0
else:
outE = intcode_computer([pE, outD], deepcopy(software))
if outE > best_output:
best_output = outE
best_combination = [pA, pB, pC, pD, pE]
print('The highest signal is {0}.'.format(best_output))
###Output
The highest signal is 77500.
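###Markdown
The nested loops above can also be written more compactly with `itertools.permutations` (the same tool used in part 2 below). The sketch in the next cell is an equivalent search that reuses the `intcode_computer` function defined above.
###Code
# Equivalent, more compact search over all phase orderings using permutations
from itertools import permutations
best = 0
for phases in permutations(range(5)):
    signal = 0
    for phase in phases:
        signal = intcode_computer([phase, signal], deepcopy(software))
    best = max(best, signal)
print('The highest signal is {0}.'.format(best))
###Output
_____no_output_____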
###Markdown
Puzzle 2: Calculate the highest possible amp signal using a feedback loop (phase settings from 5 to 9)
###Code
class intcode_computer(object):
def __init__(self, phase, data):
self.phase = phase
self.data = data
self.inputs = [phase]
self.i = 0
def step(self, amp_in):
self.inputs.append(amp_in)
self.run = True
self.done = False
self.out = 0
while self.run is True:
if len(str(self.data[self.i])) < 5:
add = (5-len(str(self.data[self.i])))*'0'
self.data[self.i] = '{0}{1}'.format(add, str(self.data[self.i]))
optcode = self.data[self.i][-2:]
mode1 = self.data[self.i][-3]
mode2 = self.data[self.i][-4]
mode3 = self.data[self.i][-5]
if mode1 == '0' and optcode != '99':
param1 = int(self.data[self.data[self.i+1]])
else:
if optcode != '99':
param1 = int(self.data[self.i+1])
if optcode == '01':
if mode2 == '0':
param2 = int(self.data[self.data[self.i+2]])
else:
param2 = int(self.data[self.i+2])
self.data[self.data[self.i+3]] = param1 + param2
self.i += 4
if optcode == '02':
if mode2 == '0':
param2 = int(self.data[self.data[self.i+2]])
else:
param2 = int(self.data[self.i+2])
self.data[self.data[self.i+3]] = param1 * param2
self.i += 4
if optcode == '03':
if len(self.inputs) > 0:
self.data[self.data[self.i+1]] = self.inputs[0]
del self.inputs[0]
self.i += 2
else:
self.run = False
# print('Amp waits for input')
break
if optcode == '04':
if mode1 == '0':
self.out = self.data[self.data[self.i+1]]
else:
                    self.out = self.data[self.i+1]  # immediate mode: the value itself
self.i += 2
if optcode == '05':
if mode2 == '0':
param2 = int(self.data[self.data[self.i+2]])
else:
param2 = int(self.data[self.i+2])
if param1 != 0:
self.i = param2
else:
self.i += 3
if optcode == '06':
if mode2 == '0':
param2 = int(self.data[self.data[self.i+2]])
else:
param2 = int(self.data[self.i+2])
if param1 == 0:
self.i = param2
else:
self.i += 3
if optcode == '07':
if mode2 == '0':
param2 = int(self.data[self.data[self.i+2]])
else:
param2 = int(self.data[self.i+2])
if param1 < param2:
self.data[self.data[self.i+3]] = 1
else:
self.data[self.data[self.i+3]] = 0
self.i += 4
if optcode == '08':
if mode2 == '0':
param2 = int(self.data[self.data[self.i+2]])
else:
param2 = int(self.data[self.i+2])
if param1 == param2:
self.data[self.data[self.i+3]] = 1
else:
self.data[self.data[self.i+3]] = 0
self.i += 4
if optcode == '99':
self.run = False
self.i += 1
self.done = True
return self.out, self.done
###Output
_____no_output_____
###Markdown
generate all possible phase settings
###Code
from itertools import permutations
phase_settings = list(permutations(range(5, 10)))
best_output = 0
for phase in phase_settings:
done = [False]*5
outE = 0
ampA = intcode_computer(phase[0], deepcopy(software))
ampB = intcode_computer(phase[1], deepcopy(software))
ampC = intcode_computer(phase[2], deepcopy(software))
ampD = intcode_computer(phase[3], deepcopy(software))
ampE = intcode_computer(phase[4], deepcopy(software))
while True not in done:
outA, done[0] = ampA.step(outE)
outB, done[1] = ampB.step(outA)
outC, done[2] = ampC.step(outB)
outD, done[3] = ampD.step(outC)
outE, done[4] = ampE.step(outD)
if True in done and outE > best_output:
best_output = outE
print('The value of the highest possible amp signal is {0}.'.format(best_output))
###Output
The value of the highest possible amp signal is 22476942.
|
coding-exercise/week4/part2/Augmentation.ipynb | ###Markdown
Loading dataset and make a directory
###Code
# Downloading dataset
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
# Unzipping zip file and extract
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip' # path to the downloaded zip file
zip_ref = zipfile.ZipFile(local_zip, 'r') # read zipfile
zip_ref.extractall('/tmp/horse-or-human') # extract all data from zipfile
local_zip = '/tmp/validation-horse-or-human.zip' # path to the downloaded zip file
zip_ref = zipfile.ZipFile(local_zip, 'r') # read zipfile
zip_ref.extractall('/tmp/validation-horse-or-human') # extract all data from zipfile
zip_ref.close()
# join path with os module
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')
train_human_dir = os.path.join('/tmp/horse-or-human/humans')
validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')
validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')
###Output
--2020-10-25 00:49:01-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.20.128, 74.125.197.128, 74.125.135.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.20.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 149574867 (143M) [application/zip]
Saving to: ‘/tmp/horse-or-human.zip’
/tmp/horse-or-human 100%[===================>] 142.65M 126MB/s in 1.1s
2020-10-25 00:49:02 (126 MB/s) - ‘/tmp/horse-or-human.zip’ saved [149574867/149574867]
--2020-10-25 00:49:03-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.197.128, 74.125.195.128, 74.125.142.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.197.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11480187 (11M) [application/zip]
Saving to: ‘/tmp/validation-horse-or-human.zip’
/tmp/validation-hor 100%[===================>] 10.95M 46.5MB/s in 0.2s
2020-10-25 00:49:03 (46.5 MB/s) - ‘/tmp/validation-horse-or-human.zip’ saved [11480187/11480187]
###Markdown
Build Model
###Code
import tensorflow as tf
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Optimization
###Code
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['accuracy'])
###Output
_____no_output_____
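###Markdown
Optionally (an addition, not part of the original exercise), `model.summary()` prints the output shape and parameter count of every layer, which is a quick sanity check of the convolution/pooling stack defined above.
###Code
# Inspect the layer shapes and parameter counts of the compiled model
model.summary()
###Output
_____no_output_____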
###Markdown
Data preprocessing with ImageDataGenerator
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tmp/horse-or-human/', # This is the source directory for training images
        target_size=(300, 300),  # All images will be resized to 300x300
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
        '/tmp/validation-horse-or-human/',  # This is the source directory for validation images
        target_size=(300, 300),  # All images will be resized to 300x300
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
###Output
Found 1027 images belonging to 2 classes.
Found 256 images belonging to 2 classes.
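###Markdown
To see what the augmentation actually does, the optional cell below (an illustrative addition) pulls one batch from `train_generator` and displays a few of the randomly transformed images; class index 0 corresponds to 'horses' and 1 to 'humans' because `flow_from_directory` orders classes alphabetically.
###Code
# Optional: preview a few augmented training images from the generator
import matplotlib.pyplot as plt
images, labels = next(train_generator)  # one batch of augmented, rescaled images
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, image, label in zip(axes, images, labels):
    ax.imshow(image)  # pixel values are already rescaled to [0, 1]
    ax.set_title('human' if label == 1 else 'horse')
    ax.axis('off')
plt.show()
###Output
_____no_output_____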
###Markdown
Model training
###Code
history = model.fit(
train_generator,
steps_per_epoch=8,
epochs=100,
verbose=1,
validation_data = validation_generator,
validation_steps=8)
###Output
Epoch 1/100
8/8 [==============================] - 16s 2s/step - loss: 0.6860 - accuracy: 0.5662 - val_loss: 0.6534 - val_accuracy: 0.8906
Epoch 2/100
8/8 [==============================] - 18s 2s/step - loss: 0.6664 - accuracy: 0.6285 - val_loss: 0.6276 - val_accuracy: 0.5508
Epoch 3/100
8/8 [==============================] - 18s 2s/step - loss: 0.6471 - accuracy: 0.6618 - val_loss: 0.5899 - val_accuracy: 0.6602
Epoch 4/100
8/8 [==============================] - 18s 2s/step - loss: 0.6222 - accuracy: 0.7164 - val_loss: 0.5496 - val_accuracy: 0.7148
Epoch 5/100
8/8 [==============================] - 21s 3s/step - loss: 0.5933 - accuracy: 0.6992 - val_loss: 0.5644 - val_accuracy: 0.6367
Epoch 6/100
8/8 [==============================] - 18s 2s/step - loss: 0.5787 - accuracy: 0.7086 - val_loss: 0.4610 - val_accuracy: 0.7969
Epoch 7/100
8/8 [==============================] - 18s 2s/step - loss: 0.5565 - accuracy: 0.7241 - val_loss: 0.5647 - val_accuracy: 0.6719
Epoch 8/100
8/8 [==============================] - 18s 2s/step - loss: 0.5138 - accuracy: 0.7430 - val_loss: 1.4193 - val_accuracy: 0.5000
Epoch 9/100
8/8 [==============================] - 18s 2s/step - loss: 0.5213 - accuracy: 0.7397 - val_loss: 0.5959 - val_accuracy: 0.6953
Epoch 10/100
8/8 [==============================] - 18s 2s/step - loss: 0.4910 - accuracy: 0.7597 - val_loss: 0.6603 - val_accuracy: 0.7031
Epoch 11/100
8/8 [==============================] - 21s 3s/step - loss: 0.4988 - accuracy: 0.7608 - val_loss: 0.9061 - val_accuracy: 0.6250
Epoch 12/100
8/8 [==============================] - 20s 3s/step - loss: 0.4508 - accuracy: 0.7891 - val_loss: 0.8574 - val_accuracy: 0.6484
Epoch 13/100
8/8 [==============================] - 18s 2s/step - loss: 0.4934 - accuracy: 0.7608 - val_loss: 0.8885 - val_accuracy: 0.6484
Epoch 14/100
8/8 [==============================] - 18s 2s/step - loss: 0.4982 - accuracy: 0.7419 - val_loss: 1.0280 - val_accuracy: 0.5938
Epoch 15/100
8/8 [==============================] - 18s 2s/step - loss: 0.4092 - accuracy: 0.8309 - val_loss: 0.8233 - val_accuracy: 0.6836
Epoch 16/100
8/8 [==============================] - 18s 2s/step - loss: 0.4540 - accuracy: 0.7831 - val_loss: 0.7191 - val_accuracy: 0.7344
Epoch 17/100
8/8 [==============================] - 18s 2s/step - loss: 0.5046 - accuracy: 0.7519 - val_loss: 0.8620 - val_accuracy: 0.6719
Epoch 18/100
8/8 [==============================] - 20s 3s/step - loss: 0.4225 - accuracy: 0.8115 - val_loss: 0.9758 - val_accuracy: 0.6641
Epoch 19/100
8/8 [==============================] - 21s 3s/step - loss: 0.4504 - accuracy: 0.7920 - val_loss: 0.9418 - val_accuracy: 0.6602
Epoch 20/100
8/8 [==============================] - 18s 2s/step - loss: 0.4250 - accuracy: 0.8142 - val_loss: 1.0284 - val_accuracy: 0.6562
Epoch 21/100
8/8 [==============================] - 18s 2s/step - loss: 0.3969 - accuracy: 0.8176 - val_loss: 1.9910 - val_accuracy: 0.5234
Epoch 22/100
8/8 [==============================] - 18s 2s/step - loss: 0.4279 - accuracy: 0.7987 - val_loss: 0.7458 - val_accuracy: 0.7461
Epoch 23/100
8/8 [==============================] - 18s 2s/step - loss: 0.4143 - accuracy: 0.8020 - val_loss: 1.2014 - val_accuracy: 0.6523
Epoch 24/100
8/8 [==============================] - 18s 2s/step - loss: 0.3922 - accuracy: 0.8176 - val_loss: 1.4101 - val_accuracy: 0.5977
Epoch 25/100
8/8 [==============================] - 20s 3s/step - loss: 0.3901 - accuracy: 0.8125 - val_loss: 1.1223 - val_accuracy: 0.6484
Epoch 26/100
8/8 [==============================] - 18s 2s/step - loss: 0.4081 - accuracy: 0.8254 - val_loss: 1.5465 - val_accuracy: 0.5820
Epoch 27/100
8/8 [==============================] - 18s 2s/step - loss: 0.3654 - accuracy: 0.8398 - val_loss: 1.1809 - val_accuracy: 0.6758
Epoch 28/100
8/8 [==============================] - 18s 2s/step - loss: 0.4163 - accuracy: 0.8109 - val_loss: 1.0778 - val_accuracy: 0.6797
Epoch 29/100
8/8 [==============================] - 18s 2s/step - loss: 0.3463 - accuracy: 0.8576 - val_loss: 1.6073 - val_accuracy: 0.6016
Epoch 30/100
8/8 [==============================] - 18s 2s/step - loss: 0.4197 - accuracy: 0.8187 - val_loss: 1.0914 - val_accuracy: 0.6758
Epoch 31/100
8/8 [==============================] - 18s 2s/step - loss: 0.3399 - accuracy: 0.8554 - val_loss: 1.3951 - val_accuracy: 0.6289
Epoch 32/100
8/8 [==============================] - 18s 2s/step - loss: 0.3180 - accuracy: 0.8587 - val_loss: 1.3578 - val_accuracy: 0.6719
Epoch 33/100
8/8 [==============================] - 21s 3s/step - loss: 0.3020 - accuracy: 0.8732 - val_loss: 1.5395 - val_accuracy: 0.6406
Epoch 34/100
8/8 [==============================] - 21s 3s/step - loss: 0.3753 - accuracy: 0.8387 - val_loss: 1.3465 - val_accuracy: 0.6836
Epoch 35/100
8/8 [==============================] - 18s 2s/step - loss: 0.3538 - accuracy: 0.8298 - val_loss: 1.6008 - val_accuracy: 0.5859
Epoch 36/100
8/8 [==============================] - 18s 2s/step - loss: 0.3127 - accuracy: 0.8587 - val_loss: 1.6220 - val_accuracy: 0.6406
Epoch 37/100
8/8 [==============================] - 18s 2s/step - loss: 0.3155 - accuracy: 0.8587 - val_loss: 1.4673 - val_accuracy: 0.6602
Epoch 38/100
8/8 [==============================] - 18s 2s/step - loss: 0.3124 - accuracy: 0.8721 - val_loss: 1.2586 - val_accuracy: 0.6836
Epoch 39/100
8/8 [==============================] - 21s 3s/step - loss: 0.3216 - accuracy: 0.8598 - val_loss: 1.9237 - val_accuracy: 0.5820
Epoch 40/100
8/8 [==============================] - 18s 2s/step - loss: 0.3633 - accuracy: 0.8521 - val_loss: 2.3679 - val_accuracy: 0.5273
Epoch 41/100
8/8 [==============================] - 18s 2s/step - loss: 0.2705 - accuracy: 0.8910 - val_loss: 1.6388 - val_accuracy: 0.6328
Epoch 42/100
8/8 [==============================] - 18s 2s/step - loss: 0.3170 - accuracy: 0.8643 - val_loss: 1.9886 - val_accuracy: 0.5859
Epoch 43/100
8/8 [==============================] - 18s 2s/step - loss: 0.3298 - accuracy: 0.8543 - val_loss: 1.5991 - val_accuracy: 0.6641
Epoch 44/100
8/8 [==============================] - 18s 2s/step - loss: 0.3065 - accuracy: 0.8632 - val_loss: 2.1699 - val_accuracy: 0.5938
Epoch 45/100
8/8 [==============================] - 18s 2s/step - loss: 0.3011 - accuracy: 0.8610 - val_loss: 2.0857 - val_accuracy: 0.5781
Epoch 46/100
8/8 [==============================] - 18s 2s/step - loss: 0.2465 - accuracy: 0.9043 - val_loss: 1.7262 - val_accuracy: 0.6758
Epoch 47/100
8/8 [==============================] - 20s 3s/step - loss: 0.2902 - accuracy: 0.8740 - val_loss: 1.6514 - val_accuracy: 0.6602
Epoch 48/100
8/8 [==============================] - 18s 2s/step - loss: 0.3154 - accuracy: 0.8521 - val_loss: 2.8058 - val_accuracy: 0.5547
Epoch 49/100
8/8 [==============================] - 18s 2s/step - loss: 0.2870 - accuracy: 0.8832 - val_loss: 1.7662 - val_accuracy: 0.6250
Epoch 50/100
8/8 [==============================] - 18s 2s/step - loss: 0.2637 - accuracy: 0.8888 - val_loss: 1.7639 - val_accuracy: 0.6328
Epoch 51/100
8/8 [==============================] - 18s 2s/step - loss: 0.2400 - accuracy: 0.9032 - val_loss: 1.8606 - val_accuracy: 0.6328
Epoch 52/100
8/8 [==============================] - 18s 2s/step - loss: 0.2760 - accuracy: 0.8888 - val_loss: 1.9298 - val_accuracy: 0.6133
Epoch 53/100
8/8 [==============================] - 18s 2s/step - loss: 0.2761 - accuracy: 0.8932 - val_loss: 1.7960 - val_accuracy: 0.6250
Epoch 54/100
8/8 [==============================] - 20s 3s/step - loss: 0.2173 - accuracy: 0.9170 - val_loss: 2.1416 - val_accuracy: 0.6055
Epoch 55/100
8/8 [==============================] - 21s 3s/step - loss: 0.2449 - accuracy: 0.9099 - val_loss: 3.5555 - val_accuracy: 0.5078
Epoch 56/100
8/8 [==============================] - 18s 2s/step - loss: 0.2906 - accuracy: 0.8843 - val_loss: 1.6991 - val_accuracy: 0.6484
Epoch 57/100
8/8 [==============================] - 20s 3s/step - loss: 0.2182 - accuracy: 0.9141 - val_loss: 2.2289 - val_accuracy: 0.6133
Epoch 58/100
8/8 [==============================] - 18s 2s/step - loss: 0.2279 - accuracy: 0.9055 - val_loss: 2.2533 - val_accuracy: 0.6250
Epoch 59/100
8/8 [==============================] - 18s 2s/step - loss: 0.2709 - accuracy: 0.8821 - val_loss: 1.9800 - val_accuracy: 0.6328
Epoch 60/100
8/8 [==============================] - 18s 2s/step - loss: 0.2186 - accuracy: 0.9132 - val_loss: 2.3774 - val_accuracy: 0.6094
Epoch 61/100
8/8 [==============================] - 18s 2s/step - loss: 0.2723 - accuracy: 0.8932 - val_loss: 2.0220 - val_accuracy: 0.6328
Epoch 62/100
8/8 [==============================] - 18s 2s/step - loss: 0.2039 - accuracy: 0.9299 - val_loss: 2.1226 - val_accuracy: 0.6328
Epoch 63/100
8/8 [==============================] - 18s 2s/step - loss: 0.1675 - accuracy: 0.9433 - val_loss: 1.9289 - val_accuracy: 0.6758
Epoch 64/100
8/8 [==============================] - 18s 2s/step - loss: 0.2339 - accuracy: 0.9055 - val_loss: 2.2532 - val_accuracy: 0.6250
Epoch 65/100
8/8 [==============================] - 18s 2s/step - loss: 0.2123 - accuracy: 0.9177 - val_loss: 2.4442 - val_accuracy: 0.6250
Epoch 66/100
8/8 [==============================] - 18s 2s/step - loss: 0.2095 - accuracy: 0.9188 - val_loss: 1.7468 - val_accuracy: 0.6836
Epoch 67/100
8/8 [==============================] - 18s 2s/step - loss: 0.2287 - accuracy: 0.9121 - val_loss: 2.0814 - val_accuracy: 0.6289
Epoch 68/100
8/8 [==============================] - 20s 3s/step - loss: 0.1740 - accuracy: 0.9414 - val_loss: 2.1521 - val_accuracy: 0.6562
Epoch 69/100
8/8 [==============================] - 18s 2s/step - loss: 0.2296 - accuracy: 0.9088 - val_loss: 2.1112 - val_accuracy: 0.6367
Epoch 70/100
8/8 [==============================] - 18s 2s/step - loss: 0.2036 - accuracy: 0.9210 - val_loss: 2.0800 - val_accuracy: 0.6680
Epoch 71/100
8/8 [==============================] - 18s 2s/step - loss: 0.1674 - accuracy: 0.9344 - val_loss: 2.1706 - val_accuracy: 0.6719
Epoch 72/100
8/8 [==============================] - 18s 2s/step - loss: 0.2131 - accuracy: 0.9066 - val_loss: 1.3955 - val_accuracy: 0.7227
Epoch 73/100
8/8 [==============================] - 20s 3s/step - loss: 0.1616 - accuracy: 0.9316 - val_loss: 2.1329 - val_accuracy: 0.6758
Epoch 74/100
8/8 [==============================] - 20s 3s/step - loss: 0.1881 - accuracy: 0.9189 - val_loss: 2.8766 - val_accuracy: 0.6094
Epoch 75/100
8/8 [==============================] - 18s 2s/step - loss: 0.1993 - accuracy: 0.9221 - val_loss: 1.7470 - val_accuracy: 0.6992
Epoch 76/100
8/8 [==============================] - 18s 2s/step - loss: 0.2269 - accuracy: 0.9021 - val_loss: 1.6479 - val_accuracy: 0.6875
Epoch 77/100
8/8 [==============================] - 20s 3s/step - loss: 0.1715 - accuracy: 0.9346 - val_loss: 1.9609 - val_accuracy: 0.6797
Epoch 78/100
8/8 [==============================] - 18s 2s/step - loss: 0.1795 - accuracy: 0.9232 - val_loss: 1.9523 - val_accuracy: 0.6875
Epoch 79/100
8/8 [==============================] - 18s 2s/step - loss: 0.1882 - accuracy: 0.9299 - val_loss: 2.0104 - val_accuracy: 0.6797
Epoch 80/100
8/8 [==============================] - 18s 2s/step - loss: 0.2387 - accuracy: 0.9043 - val_loss: 2.1868 - val_accuracy: 0.6484
Epoch 81/100
8/8 [==============================] - 18s 2s/step - loss: 0.1448 - accuracy: 0.9466 - val_loss: 2.5123 - val_accuracy: 0.6445
Epoch 82/100
8/8 [==============================] - 18s 2s/step - loss: 0.2141 - accuracy: 0.9166 - val_loss: 2.4059 - val_accuracy: 0.6484
Epoch 83/100
8/8 [==============================] - 18s 2s/step - loss: 0.1576 - accuracy: 0.9422 - val_loss: 2.8796 - val_accuracy: 0.6406
Epoch 84/100
8/8 [==============================] - 18s 2s/step - loss: 0.1508 - accuracy: 0.9410 - val_loss: 2.4579 - val_accuracy: 0.6445
Epoch 85/100
8/8 [==============================] - 21s 3s/step - loss: 0.1660 - accuracy: 0.9355 - val_loss: 2.4149 - val_accuracy: 0.6523
Epoch 86/100
8/8 [==============================] - 18s 2s/step - loss: 0.1522 - accuracy: 0.9399 - val_loss: 2.5655 - val_accuracy: 0.6328
Epoch 87/100
8/8 [==============================] - 19s 2s/step - loss: 0.1878 - accuracy: 0.9266 - val_loss: 2.2317 - val_accuracy: 0.6602
Epoch 88/100
8/8 [==============================] - 18s 2s/step - loss: 0.1261 - accuracy: 0.9544 - val_loss: 1.3495 - val_accuracy: 0.7617
Epoch 89/100
8/8 [==============================] - 18s 2s/step - loss: 0.2162 - accuracy: 0.9077 - val_loss: 2.3535 - val_accuracy: 0.6562
Epoch 90/100
8/8 [==============================] - 18s 2s/step - loss: 0.1374 - accuracy: 0.9544 - val_loss: 2.2837 - val_accuracy: 0.6719
Epoch 91/100
8/8 [==============================] - 18s 2s/step - loss: 0.1275 - accuracy: 0.9477 - val_loss: 1.9841 - val_accuracy: 0.6992
Epoch 92/100
8/8 [==============================] - 18s 2s/step - loss: 0.2820 - accuracy: 0.9155 - val_loss: 2.3626 - val_accuracy: 0.6562
Epoch 93/100
8/8 [==============================] - 18s 2s/step - loss: 0.1390 - accuracy: 0.9522 - val_loss: 2.4037 - val_accuracy: 0.6562
Epoch 94/100
8/8 [==============================] - 18s 2s/step - loss: 0.1182 - accuracy: 0.9577 - val_loss: 2.4954 - val_accuracy: 0.6641
Epoch 95/100
8/8 [==============================] - 21s 3s/step - loss: 0.1546 - accuracy: 0.9321 - val_loss: 3.3909 - val_accuracy: 0.5820
Epoch 96/100
8/8 [==============================] - 21s 3s/step - loss: 0.1402 - accuracy: 0.9443 - val_loss: 3.2940 - val_accuracy: 0.6328
Epoch 97/100
8/8 [==============================] - 18s 2s/step - loss: 0.1443 - accuracy: 0.9488 - val_loss: 2.3602 - val_accuracy: 0.6602
Epoch 98/100
8/8 [==============================] - 21s 3s/step - loss: 0.1228 - accuracy: 0.9555 - val_loss: 2.1360 - val_accuracy: 0.6914
Epoch 99/100
8/8 [==============================] - 18s 2s/step - loss: 0.1800 - accuracy: 0.9366 - val_loss: 4.0680 - val_accuracy: 0.5586
Epoch 100/100
8/8 [==============================] - 18s 2s/step - loss: 0.1790 - accuracy: 0.9310 - val_loss: 1.6847 - val_accuracy: 0.7266
###Markdown
Checking Model Accuracy
###Code
import matplotlib.pyplot as plt

# pull the per-epoch metrics recorded by model.fit()
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))

# accuracy curves
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()   # show the legend on this figure as well

# loss curves on a separate figure
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
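# (added sketch, not part of the original notebook)
# from the same history object, report the epoch with the best validation accuracy
best_epoch = max(range(len(val_acc)), key=lambda i: val_acc[i])
print("best val_accuracy %.4f at epoch %d" % (val_acc[best_epoch], best_epoch + 1))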
###Output
_____no_output_____
math/Math24_Dot_Product.ipynb
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
$ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}1}}} $ Vectors: Dot (Scalar) Product_prepared by Abuzer Yakaryilmaz_ Dot product is a specific way of defining multiplication between two vectors with the same size. It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
%run math.py
dot_product("example1")
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
%run math.py
dot_product("example2")
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
%run math.py
dot_product("example3")
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
pairwise multiplication of the entries with index 0 is 3
pairwise multiplication of the entries with index 1 is 2
pairwise multiplication of the entries with index 2 is 0
pairwise multiplication of the entries with index 3 is 3
pairwise multiplication of the entries with index 4 is 20
The dot product of [-3, -2, 0, -1, 4] and [-1, -1, 2, -3, 5] is 28
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
v=[-3,4,-5,6]
u=[4,3,6,5]
vu=0
for i in range(len(u)):
    vu=vu+v[i]*u[i]
    print("pairwise multiplication of the entries with index", i," is ",v[i]*u[i])
print() # print an empty line
print("The dot product of",u,'and',v,'is',vu)
###Output
pairwise multiplication of the entries with index 0 is -12
pairwise multiplication of the entries with index 1 is 12
pairwise multiplication of the entries with index 2 is -30
pairwise multiplication of the entries with index 3 is 30
The dot product of [4, 3, 6, 5] and [-3, 4, -5, 6] is 0
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
u=[-3,-4]
u_2=0
for i in range(len(u)):
    u_2=u_2+u[i]**2
    print("pairwise multiplication of the entries with index", i," is ",u[i]**2)
print() # print an empty line
print("The dot product of",u,'is',u_2)
###Output
pairwise multiplication of the entries with index 0 is 9
pairwise multiplication of the entries with index 1 is 16
The dot product of [-3, -4] is 25
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
the dot product of u and v is 0
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
u = [3,4]
u_opposite= [-3,-4]
v = [-4,3]
v_opposite= [4,-3]
result = 0;
def dot(vector_1,vector_2):
    result = 0
    for i in range(len(vector_2)):
        result = result + vector_1[i]*vector_2[i]
    return result
print("the dot product of u and -v ",u," and ",v_opposite," is",dot(u,v_opposite))
print("the dot product of -u and v ",u_opposite," and ",v," is",dot(u_opposite,v))
print("the dot product of -u and -v ",u_opposite," and ",v_opposite," is",dot(u_opposite,v_opposite))
###Output
the dot product of u and -v [3, 4] and [4, -3] is 0
the dot product of -u and v [-3, -4] and [-4, 3] is 0
the dot product of -u and -v [-3, -4] and [4, -3] is 0
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
v1 = [-1,2,-3,4]
u1 = [-2,-1,5,2]
v2 = []
u2 = []
result = 0;
for i in range(len(v1)):
    v2.append(-1*2*v1[i])
    u2.append(3*u1[i])
def dot(vector_1,vector_2):
    result = 0
    for i in range(len(vector_2)):
        result = result + vector_1[i]*vector_2[i]
    return result
print("the dot product of ", v1," and ", u1," is",dot(v1, u1))
print("the dot product of ", v2," and ", u2," is",dot(v2, u2))
###Output
the dot product of [-1, 2, -3, 4] and [-2, -1, 5, 2] is -7
the dot product of [2, -4, 6, -8] and [-6, -3, 15, 6] is 42
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductDot product is a specific way of defining multiplication between two vectors with the same size. It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
$ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}1}}} $ Vectors: Dot (Scalar) Product_prepared by Abuzer Yakaryilmaz_ Dot product is a specific way of defining multiplication between two vectors with the same size. It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
%run math.py
dot_product("example1")
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
%run math.py
dot_product("example2")
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
%run math.py
dot_product("example3")
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
pairwise multiplication of the entries with index 0 is 3
pairwise multiplication of the entries with index 1 is 2
pairwise multiplication of the entries with index 2 is 0
pairwise multiplication of the entries with index 3 is 3
pairwise multiplication of the entries with index 4 is 20
The dot product of [-3, -2, 0, -1, 4] and [-1, -1, 2, -3, 5] is 28
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
the dot product of u and v is 0
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
the dot product of u and v is 0
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
    result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other.Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider to write a function in Python for dot product
#
# your solution is here
#
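# a possible solution sketch (not the notebook's own solution, which is linked below);
# the vectors v and u from the previous cell are redefined here so the cell is self-contained
def dot(a, b):
    # pairwise multiply the entries and sum the results
    result = 0
    for i in range(len(a)):
        result = result + a[i]*b[i]
    return result
v = [-4, 3]
u = [-3, -4]
minus_v = [-entry for entry in v]
minus_u = [-entry for entry in u]
print("the dot product of  u and -v is", dot(u, minus_v))        # expected 0
print("the dot product of -u and  v is", dot(minus_u, v))        # expected 0
print("the dot product of -u and -v is", dot(minus_u, minus_v))  # expected 0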
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python. $$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$ Find the dot product of $ -2v $ and $ 3u $ in Python. Compare both results.
###Code
#
# your solution is here
#
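# a possible solution sketch (not the notebook's own solution, which is linked below)
v = [-1, 2, -3, 4]
u = [-2, -1, 5, 2]
vu = 0
for i in range(len(v)):
    vu = vu + v[i]*u[i]  # pairwise products of v and u
minus_2v = [-2*entry for entry in v]
three_u = [3*entry for entry in u]
scaled = 0
for i in range(len(v)):
    scaled = scaled + minus_2v[i]*three_u[i]  # pairwise products of -2v and 3u
print("the dot product of v and u is", vu)        # expected -7
print("the dot product of -2v and 3u is", scaled) # expected 42
# comparison: scaling the vectors by -2 and 3 scales the dot product by (-2)*3 = -6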
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductDot product is a specific way of defining multiplication between two vectors with the same size. It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
pairwise multiplication of the entries with index 0 is 3
pairwise multiplication of the entries with index 1 is 2
pairwise multiplication of the entries with index 2 is 0
pairwise multiplication of the entries with index 3 is 3
pairwise multiplication of the entries with index 4 is 20
The dot product of [-3, -2, 0, -1, 4] and [-1, -1, 2, -3, 5] is 28
###Markdown
The pairwise multiplications of the entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of the entries is $ 3+2+0+3+20 = 28 $. Note that the dimensions of the given vectors must be the same; otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python: $$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$ Your outcome should be $0$.
###Code
#
# your solution is here
#
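# a possible solution sketch (not the notebook's own solution, which is linked below)
v = [-3, 4, -5, 6]
u = [4, 3, 6, 5]
vu = 0
for i in range(len(v)):
    vu = vu + v[i]*u[i]  # add the pairwise product of the i-th entries
print("the dot product of v and u is", vu)  # expected outcome: 0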
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2-dimensional vector. Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
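# a possible solution sketch (not the notebook's own solution, which is linked below)
u = [-3, -4]
uu = 0
for i in range(len(u)):
    uu = uu + u[i]*u[i]  # square each entry and accumulate
print("the dot product of u with itself is", uu)  # expected outcome: 25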
###Output
_____no_output_____
###Markdown
click for our solution Notes: As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself: $$ \norm{u} = \sqrt{\dot{u}{u}}. $$ $ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors. The following two vectors are perpendicular (orthogonal) to each other. The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
the dot product of u and v is 0
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
the dot product of u and v is 0
###Markdown
The dot product of the new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other. Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider writing a function in Python for the dot product
#
# your solution is here
#
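# a possible solution sketch (not the notebook's own solution, which is linked below);
# the vectors v and u from the previous cell are redefined here so the cell is self-contained
def dot(a, b):
    # pairwise multiply the entries and sum the results
    result = 0
    for i in range(len(a)):
        result = result + a[i]*b[i]
    return result
v = [-4, 3]
u = [-3, -4]
minus_v = [-entry for entry in v]
minus_u = [-entry for entry in u]
print("the dot product of  u and -v is", dot(u, minus_v))        # expected 0
print("the dot product of -u and  v is", dot(minus_u, v))        # expected 0
print("the dot product of -u and -v is", dot(minus_u, minus_v))  # expected 0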
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python. $$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$ Find the dot product of $ -2v $ and $ 3u $ in Python. Compare both results.
###Code
#
# your solution is here
#
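# a possible solution sketch (not the notebook's own solution, which is linked below)
v = [-1, 2, -3, 4]
u = [-2, -1, 5, 2]
vu = 0
for i in range(len(v)):
    vu = vu + v[i]*u[i]  # pairwise products of v and u
minus_2v = [-2*entry for entry in v]
three_u = [3*entry for entry in u]
scaled = 0
for i in range(len(v)):
    scaled = scaled + minus_2v[i]*three_u[i]  # pairwise products of -2v and 3u
print("the dot product of v and u is", vu)        # expected -7
print("the dot product of -2v and 3u is", scaled) # expected 42
# comparison: scaling the vectors by -2 and 3 scales the dot product by (-2)*3 = -6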
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Vectors: Dot (Scalar) ProductTwo vectors can be multiplied with each other in different ways.One of the very basic methods is dot product.It is also called scalar product, because the result is a scalar value, e.g., a real number.Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically.Pairwise multiplication: the values in the same positions are multiplied with each other.Summation of all pairwise multiplications: Then we sum all the results obtained from the pairwise multiplications.We write its Python code below.
###Code
# let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv)
###Output
_____no_output_____
###Markdown
The pairwise multiplications of the entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of the entries is $ 3+2+0+3+20 = 28 $. Note that the dimensions of the given vectors must be the same; otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python: $$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$ Your outcome should be $0$.
###Code
#
# your solution is here
#
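# a possible solution sketch (not the notebook's own solution, which is linked below)
v = [-3, 4, -5, 6]
u = [4, 3, 6, 5]
vu = 0
for i in range(len(v)):
    vu = vu + v[i]*u[i]  # add the pairwise product of the i-th entries
print("the dot product of v and u is", vu)  # expected outcome: 0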
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2-dimensional vector. Find $ \dot{u}{u} $ in Python.
###Code
#
# your solution is here
#
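# a possible solution sketch (not the notebook's own solution, which is linked below)
u = [-3, -4]
uu = 0
for i in range(len(u)):
    uu = uu + u[i]*u[i]  # square each entry and accumulate
print("the dot product of u with itself is", uu)  # expected outcome: 25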
###Output
_____no_output_____
###Markdown
click for our solution Notes: As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself: $$ \norm{u} = \sqrt{\dot{u}{u}}. $$ $ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors. The following two vectors are perpendicular (orthogonal) to each other. The angle between them is $ 90 $ degrees.
###Code
# let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
Now, let's check the dot product of the following two vectors:
###Code
# we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
result = result + v[i]*u[i]
print("the dot product of u and v is",result)
###Output
_____no_output_____
###Markdown
The dot product of the new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other. Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $.
###Code
# you may consider writing a function in Python for the dot product
#
# your solution is here
#
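# a possible solution sketch (not the notebook's own solution, which is linked below);
# the vectors v and u from the previous cell are redefined here so the cell is self-contained
def dot(a, b):
    # pairwise multiply the entries and sum the results
    result = 0
    for i in range(len(a)):
        result = result + a[i]*b[i]
    return result
v = [-4, 3]
u = [-3, -4]
minus_v = [-entry for entry in v]
minus_u = [-entry for entry in u]
print("the dot product of  u and -v is", dot(u, minus_v))        # expected 0
print("the dot product of -u and  v is", dot(minus_u, v))        # expected 0
print("the dot product of -u and -v is", dot(minus_u, minus_v))  # expected 0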
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python. $$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$ Find the dot product of $ -2v $ and $ 3u $ in Python. Compare both results.
###Code
#
# your solution is here
#
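# a possible solution sketch (not the notebook's own solution, which is linked below)
v = [-1, 2, -3, 4]
u = [-2, -1, 5, 2]
vu = 0
for i in range(len(v)):
    vu = vu + v[i]*u[i]  # pairwise products of v and u
minus_2v = [-2*entry for entry in v]
three_u = [3*entry for entry in u]
scaled = 0
for i in range(len(v)):
    scaled = scaled + minus_2v[i]*three_u[i]  # pairwise products of -2v and 3u
print("the dot product of v and u is", vu)        # expected -7
print("the dot product of -2v and 3u is", scaled) # expected 42
# comparison: scaling the vectors by -2 and 3 scales the dot product by (-2)*3 = -6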
###Output
_____no_output_____ |
ipynb/Congo-(Brazzaville).ipynb | ###Markdown
Congo (Brazzaville)
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Congo-(Brazzaville).ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Congo (Brazzaville)", weeks=5);
overview("Congo (Brazzaville)");
compare_plot("Congo (Brazzaville)", normalise=True);
# load the data
cases, deaths = get_country_data("Congo (Brazzaville)")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Congo-(Brazzaville).ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Congo (Brazzaville)
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Congo-(Brazzaville).ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Congo (Brazzaville)", weeks=5);
overview("Congo (Brazzaville)");
compare_plot("Congo (Brazzaville)", normalise=True);
# load the data
cases, deaths = get_country_data("Congo (Brazzaville)")
# get population of the region for future normalisation:
inhabitants = population("Congo (Brazzaville)")
print(f'Population of "Congo (Brazzaville)": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Congo-(Brazzaville).ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Congo (Brazzaville)
* Homepage of project: https://oscovida.github.io
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Congo-(Brazzaville).ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Congo (Brazzaville)");
# load the data
cases, deaths, region_label = get_country_data("Congo (Brazzaville)")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Congo-(Brazzaville).ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |