markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
We see that the ``first`` key in this example ``Series`` data is the tuple (0,0,0), corresponding to an x, y, z coordinate of an original movie. | key | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
The value in this case is a time series of 240 observations, represented as a 1d numpy array. | value.shape | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
We can extract a random subset of records and plot their time series, after converting to `TimeSeries` (which enables time-specific methods), and applying a simple baseline normalization. Here and elsewhere, we'll use the excellent ``seaborn`` package for styling figures, but this is entirely optional. | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("notebook")
examples = data.toTimeSeries().normalize().subset(50, thresh=0.05)
sns.set_style('darkgrid')
plt.plot(examples.T); | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
We can also compute a statistic for each record using one of the built-in methods; here, ``seriesStdev``: | means = data.seriesStdev()
means.first() | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
``means`` is now itself a ``Series``, where the value of each record is the statistic computed across time (here a standard deviation, despite the variable name). For this ``Series``, since the keys correspond to spatial coordinates, we can ``pack`` the results back into a local array. ``pack`` is an operation that converts ``Series`` data, with spatial coordinates as keys, into an n-dimensional numpy array. In this case, the result is 3D, reflecting the original input data. | img = means.pack()
img.shape | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
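To make the behavior of ``pack`` concrete, here is a minimal, plain-numpy sketch of what such an operation does with (coordinate, value) records. This is an illustration only, not the Thunder implementation, and the ``records`` list is a hypothetical toy input.

```python
import numpy as np

# Toy records: (spatial key, scalar value) pairs, analogous to a Series of statistics.
records = [((0, 0), 1.0), ((0, 1), 2.0), ((1, 0), 3.0), ((1, 1), 4.0)]

# Infer the array dimensions from the maximum coordinate along each axis.
dims = tuple(np.max([k for k, _ in records], axis=0) + 1)

# Fill a local numpy array by placing each value at its coordinate key.
packed = np.zeros(dims)
for key, value in records:
    packed[key] = value

print(packed)  # [[1. 2.] [3. 4.]]
```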
``pack`` is an example of a local operation, meaning that all the data involved will be sent to the Spark driver node. For larger data sets, this can be very problematic - it's a good idea to downsample, subselect, or otherwise reduce the size of your data before attempting to ``pack`` large data sets! To look at this array as an image, we'll use `matplotlib` via a helper function included with Thunder. | from thunder import Colorize
image = Colorize.image
image(img[:,:,0]) | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
It's also easy to export the result to a ``numpy`` or ``MAT`` file:
```
tsc.export(img, "directory", "npy")
tsc.export(img, "directory", "mat")
```
This will put an ``npy`` file or ``MAT`` file called ``meanval`` in the folder ``directory`` in your current directory. You can also export to a location on Amazon S3 or Google Storage if the path is specified with an `s3n://` or `gs://` prefix. Thunder includes several other toy data sets; to see the available ones: | tsc.loadExample() | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
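If Thunder's export helper is not available, a local-only fallback is to save the packed array directly with numpy and scipy. This is a hedged sketch of an alternative, not the ``tsc.export`` API; it assumes ``img`` is the in-memory array produced by ``pack`` above.

```python
import os
import numpy as np
from scipy.io import savemat

os.makedirs("directory", exist_ok=True)                               # create the target folder if needed
np.save(os.path.join("directory", "meanval.npy"), img)                # numpy binary format
savemat(os.path.join("directory", "meanval.mat"), {"meanval": img})   # MATLAB format
```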
Some of them are `Series`, some are `Images`, and some are associated `Params` (e.g. covariates). Let's load an `Images` dataset: | images = tsc.loadExample('mouse-images')
images | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
Now every record is a key-value pair, where the key is an identifier and the value is an image. | key, value = images.first() | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
The key is an integer | key | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
And the value is a two-dimensional array | value.shape | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
Although `images` is not an array, some syntactic sugar supports easy indexing: | im = images[0]
image(im) | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
And we can now apply simple parallelized image processing routines | im = images.gaussianFilter(3).subsample(3)[0]
image(im) | _____no_output_____ | Apache-2.0 | python/doc/tutorials/src/basic_usage.ipynb | broxtronix/thunder |
Print Qiskit Circuit and Statevector | # importing Qiskit
from qiskit import Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit.visualization import plot_histogram, plot_bloch_multivector
from qiskit.visualization import plot_state_paulivec, plot_state_hinton, plot_state_city
from qiskit.visualization import plot_state_qsphere
# import basic plot tools
from qiskit.visualization import plot_histogram, plot_bloch_multivector
import matplotlib.pyplot as plt
#get backend simulator
sim = Aer.get_backend('aer_simulator')
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0,1)
qc.h(2)
qc.s(2)
#print circuit
print(qc)
#draw bloch spheres
qc.save_statevector()
statevector = sim.run(qc).result().get_statevector()
print("\n")
print(statevector)
print("\n")
plot_bloch_multivector(statevector)
plot_state_city(statevector)
plot_state_qsphere(statevector) | _____no_output_____ | MIT | Paper Figures/Introspection Code/Introspection Qiskit.ipynb | Lilgabz/Quantum-Algorithm-Implementations |
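As a sanity check, the statevector of this small circuit can also be built by hand with numpy. The sketch below assumes Qiskit's little-endian qubit ordering (qubit 0 is the least significant bit), so the single-qubit state of qubit 2 goes first in the Kronecker product.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
S = np.array([[1, 0], [0, 1j]], dtype=complex)                # phase (S) gate
zero = np.array([1, 0], dtype=complex)                        # |0>

bell_q1q0 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2) from h(0), cx(0,1)
q2_state = S @ H @ zero                                          # (|0> + i|1>)/sqrt(2) from h(2), s(2)

expected = np.kron(q2_state, bell_q1q0)   # qubit 2 is the most significant qubit
print(np.round(expected, 4))              # should match the simulator's statevector
```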
Args | class args:
save_dir = "weights/"
debug = True
# model
routings = 1
# hp
batch_size = 32
lr = 0.001
lr_decay = 1.0
lam_recon = 0.392
# training
epochs = 3
shift_fraction = 0.1
digit = 5 | _____no_output_____ | MIT | run.ipynb | ghetthub/capsnet |
Load data | (x_train, y_train), (x_test, y_test) = capsulenet.load_mnist() | _____no_output_____ | MIT | run.ipynb | ghetthub/capsnet |
Define model | model, eval_model, manipulate_model = capsulenet.CapsNet(input_shape=x_train.shape[1:],
n_class=len(np.unique(np.argmax(y_train, 1))),
routings=args.routings) | _____no_output_____ | MIT | run.ipynb | ghetthub/capsnet |
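A small aside on the `n_class` argument: `len(np.unique(np.argmax(y_train, 1)))` counts the classes actually present in the one-hot labels, not the width of the one-hot vector. The toy array below is a hypothetical example to illustrate the difference.

```python
import numpy as np

# Hypothetical one-hot labels: 3 columns, but only classes 1 and 2 appear.
y_demo = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 1, 0]])

print(y_demo.shape[1])                        # 3 -> width of the one-hot encoding
print(len(np.unique(np.argmax(y_demo, 1))))   # 2 -> classes present in the labels
```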
Training | capsulenet.train(model=model, data=((x_train, y_train), (x_test, y_test)), args=args)
capsulenet.test(eval_model, data=(x_test, y_test), args=args) | ------------------------------Begin: test------------------------------
Test acc: 0.9784
Reconstructed images are saved to weights//real_and_recon.png
------------------------------End: test------------------------------
| MIT | run.ipynb | ghetthub/capsnet |
Remember to open this in a new tab Non-parametric models: K-Nearest Neighbours and Decision Trees Documentation: - KNN for classification: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier - KNN for regression: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor - Trees for classification: https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html - Trees for regression: https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html Library setup | import os
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn as sns
import pandas as pd
import numpy as np
from sklearn.datasets import make_classification, make_blobs, load_breast_cancer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, KFold
# modelos
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
DISPLAY_PRECISION = 4
pd.set_option("display.precision", DISPLAY_PRECISION) | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
1. KNN 1.1 Introduction: decision boundaries To get familiar with this model and visualize its decision boundaries, we will start with a binary classification problem with two features, using a toy dataset that we generate ourselves with the [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function. | # build the dataset for a two-dimensional binary classification problem
X, y = make_classification(n_samples=200, n_features=2, n_informative=2, n_redundant=0, n_classes=2,n_clusters_per_class=1,
random_state=1, class_sep=1.1)
# scatter plot, colores por etiquetas
df = pd.DataFrame(dict(x=X[:,0], y=X[:,1], label=y))
colors = {0:'red', 1:'blue'}
fig, ax = plt.subplots()
grouped = df.groupby('label')
for key, group in grouped:
group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
# instanciemos y entrenemos el modelo
model = KNeighborsClassifier(n_neighbors=10,weights='uniform')
model.fit(X,y)
# visualicemos las predicciones
# elegimos algunos colores de la lista de colores
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# tenemos que armar una grilla y setear un step
h = .02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
# usamos un pcolormesh
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# ploteamos también los puntos de entrenamiento
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("2-Class classification (k = %i, weights = '%s')"
% (10, 'uniform'))
plt.show()
model = KNeighborsClassifier(n_neighbors=200,weights='uniform')
model.fit(X,y)
# visualicemos las predicciones
# elegimos algunos colores de la lista de colores
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# tenemos que armar una grilla y setear un step
h = .02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
# usamos un pcolormesh
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# ploteamos también los puntos de entrenamiento
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("2-Class classification (k = %i, weights = '%s')"
% (200, 'uniform'))
plt.show()
model = KNeighborsClassifier(n_neighbors=200,weights='distance')
model.fit(X,y)
# visualicemos las predicciones
# elegimos algunos colores de la lista de colores
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# tenemos que armar una grilla y setear un step
h = .02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
# usamos un pcolormesh
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# ploteamos también los puntos de entrenamiento
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("2-Class classification (k = %i, weights = '%s')"
% (200, 'distance'))
plt.show() | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
1.2 Breast cancer dataset The labeled dataset comes from the "Breast Cancer Wisconsin (Diagnostic) Database", freely available in Python's sklearn library. For more details, see: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 Number of samples: 569 Number of features: 30 numeric, predictive attributes Number of classes: 2 The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. Ten real-valued features are computed for each cell nucleus. The mean, standard error, and "worst" or largest value (the mean of the three largest values) of these features were computed for each image, resulting in 30 features. For example, the radius measurements are "mean radius", "radius standard error" and "worst radius". All feature values are recorded with four significant digits. The two target classes correspond to negative results (benign) and positive results (malignant). This original dataset will be randomly split into two sets for training and testing purposes. | data = load_breast_cancer()
#print(data.DESCR)
print("Descripción:")
print(data.keys()) # dict_keys(['target_names', 'target', 'feature_names', 'data', 'DESCR'])
print("---")
# Note that we need to reverse the original '0' and '1' mapping in order to end up with this mapping:
# Benign = 0 (negative class)
# Malignant = 1 (positive class)
data_clases = [data.target_names[1], data.target_names[0]]
data_target = [1 if x==0 else 0 for x in list(data.target)]
data_features = list(data.feature_names)
print("Clases Target:")
print("Clases", data_clases)
print("---")
print("Distribución de clases n=%d:" % len(data_target))
print(pd.Series(data_target).value_counts())
print("---")
pd.DataFrame(data.data[:,:], columns=data_features).info()
# separamos un 25% para test/held-out
X = pd.DataFrame(data.data[:,:], columns=data_features)
y = pd.Series(data_target)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0) | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
1.3 Overfitting: number of neighbors and weights | # let's see how the model performs as we vary the number of neighbors and the weighting scheme
valores_k = list(range(1,50,4))
resultados_train_u = []
resultados_test_u = []
resultados_train_w = []
resultados_test_w = []
for k in valores_k:
# instanciamos el modelo uniforme
clf_u = KNeighborsClassifier(n_neighbors=k, weights='uniform')
clf_u.fit(X_train, y_train)
y_train_pred = clf_u.predict(X_train)
y_pred = clf_u.predict(X_test)
resultados_train_u.append(accuracy_score(y_train, y_train_pred))
resultados_test_u.append(accuracy_score(y_test, y_pred))
clf_w = KNeighborsClassifier(n_neighbors=k, weights='distance')
clf_w.fit(X_train, y_train)
y_train_pred = clf_w.predict(X_train)
y_pred = clf_w.predict(X_test)
resultados_train_w.append(accuracy_score(y_train, y_train_pred))
resultados_test_w.append(accuracy_score(y_test, y_pred))
# veamos que paso en cada caso
f, ax = plt.subplots(1,2,figsize=(14,5),sharey=True)
ax[0].plot(valores_k, resultados_train_u, valores_k, resultados_test_u);
ax[0].legend(['pesos uniformes - train', 'pesos uniformes - test']);
ax[0].set(xlabel='k',ylabel='accuracy');
ax[1].plot(valores_k, resultados_train_w, valores_k, resultados_test_w);
ax[1].legend(['pesos distancia - train', 'pesos distancia - test']);
ax[1].set(xlabel='k');
# ahora busquemos nuestro mejor modelo usando validacion cruzada y gridsearchcv pero incluyamos otra distancia!
model = KNeighborsClassifier()
n_neighbors = np.array([1,2,3,5,8,10,15,20,30,50])
param_grid = {'n_neighbors': n_neighbors,
'weights':['uniform', 'distance'],
'metric':['euclidean', 'chebyshev', 'manhattan']}
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid.fit(X_train, y_train)
print(grid.best_params_)
pd.DataFrame(grid.cv_results_).sample(3)
# evaluemos este clasificador usando el classification report
print(classification_report(y_test, grid.best_estimator_.predict(X_test), target_names=data_clases)) | precision recall f1-score support
benign 0.96 0.98 0.97 90
malignant 0.96 0.92 0.94 53
accuracy 0.96 143
macro avg 0.96 0.95 0.95 143
weighted avg 0.96 0.96 0.96 143
| MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
1.4 Scaling effects Since KNN is based on distances, unless we use a distance that accounts for the variance of the variables, such as the Mahalanobis distance, our model will be affected by the scale of the features. https://stats.stackexchange.com/questions/287425/why-do-you-need-to-scale-data-in-knn | XX,yy = make_classification(n_samples=400,n_features=2,n_classes=2,
n_redundant=0,n_informative=2,
n_clusters_per_class=2,random_state=48)
XX[:,0] = XX[:,0]*30 + 150
print('Media x: {}'.format(np.mean(XX[:,0])))
print('SD x: {}'.format(np.std(XX[:,0])))
print('Media y: {}'.format(np.mean(XX[:,1])))
print('SD y: {}'.format(np.std(XX[:,1])))
kf = KFold(n_splits=5)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(XX, yy)
print(cross_val_score(knn, XX, yy, cv=kf).mean())
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
XX_scaled = scaler.fit_transform(XX)
print('Media x: {}'.format(np.mean(XX_scaled[:,0])))
print('SD x: {}'.format(np.std(XX_scaled[:,0])))
print('Media y: {}'.format(np.mean(XX_scaled[:,1])))
print('SD y: {}'.format(np.std(XX_scaled[:,1])))
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(XX, yy)
print(cross_val_score(knn, XX_scaled, yy, cv=kf).mean()) | 0.9675
| MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
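Building on the scaling discussion above, a sketch of a cleaner pattern is to put the scaler inside a `Pipeline`, so the scaler is re-fit on each training fold during cross-validation and test-fold statistics never leak into the scaling step. It reuses the `XX`, `yy` data generated above.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, KFold

# Scaler + KNN chained together: fit/transform happen per fold, avoiding leakage.
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
kf = KFold(n_splits=5)
print(cross_val_score(pipe, XX, yy, cv=kf).mean())
```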
*** 2. Decision Trees We will keep working with the breast cancer dataset to get familiar with decision trees 2.1 My first little tree | # let's instantiate the model and fit it on the training set
arbol = DecisionTreeClassifier(criterion='gini', max_depth=2, min_samples_leaf=1, min_samples_split=2, ccp_alpha=0)
arbol.fit(X_train,y_train)
accuracy_score(y_train, arbol.predict(X_train))
# veamos que tan bien le fue a este modelo
print(classification_report(y_true=y_test,y_pred=arbol.predict(X_test)))
# visualicemos los errores de este árbol en una matriz de confusión
cf_matrix = confusion_matrix(y_test, arbol.predict(X_test))
sns.heatmap(cf_matrix, annot=True); | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
2.2 Feature importance Trees let us define a way to measure the importance of features (*Feature Importances*), based on the information gain obtained each time a feature is used to make a split. For this, once the tree is trained, the attribute we will use is: ``` arbol.feature_importances_``` | # computing the 5 highest feature importances
importances = pd.Series(arbol.feature_importances_).sort_values(ascending=False)[:5]
importances
f5_names = list(pd.Series(data.feature_names)[importances.index.to_list()])
fig, ax = plt.subplots()
importances.plot.barh(ax=ax)
ax.set_yticklabels(f5_names)
ax.invert_yaxis() | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
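As a complementary, model-agnostic check of these importances, here is a sketch using scikit-learn's permutation importance on the held-out test split; it shuffles one feature at a time and measures the drop in score.

```python
from sklearn.inspection import permutation_importance

# Permute each feature on the test set and measure how much the score drops.
result = permutation_importance(arbol, X_test, y_test, n_repeats=10, random_state=0)

top5_perm = pd.Series(result.importances_mean, index=data.feature_names).sort_values(ascending=False)[:5]
print(top5_perm)
```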
2.3 Class imbalance Since this dataset has a class imbalance, you can account for it in the model using the class_weight parameter, which lets us handle the imbalance directly. | arbol = DecisionTreeClassifier(criterion='gini', max_depth=2, min_samples_leaf=1,
min_samples_split=2, ccp_alpha=0, class_weight="balanced")
arbol.fit(X_train, y_train)
accuracy_score(y_train, arbol.predict(X_train))
print(classification_report(y_true=y_test,y_pred=arbol.predict(X_test)))
cf_matrix = confusion_matrix(y_test, arbol.predict(X_test))
sns.heatmap(cf_matrix, annot=True); | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
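For reference, a minimal sketch of what `class_weight="balanced"` computes internally: each class gets the weight `n_samples / (n_classes * bincount(y))`, which can be reproduced with scikit-learn's helper.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
print(dict(zip(classes, weights)))

# Same values computed by hand.
print(len(y_train) / (len(classes) * np.bincount(y_train)))
```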
2.4 Visualization To visualize the tree, sklearn provides the tree.plot_tree function: | plot_tree(arbol); | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
We can get a more stylized representation with the help of the *graphviz* + *dot* libraries. Ref: https://towardsdatascience.com/visualizing-decision-trees-with-python-scikit-learn-graphviz-matplotlib-1c50b4aa68dc | # libraries
from sklearn.externals.six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
import matplotlib.pyplot as plt
dot_data = StringIO()
export_graphviz(arbol, out_file=dot_data,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png()) | /usr/local/lib/python3.7/dist-packages/sklearn/externals/six.py:31: FutureWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).
"(https://pypi.org/project/six/).", FutureWarning)
| MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
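If graphviz is not installed, a text-only sketch of the same tree can be produced with scikit-learn's `export_text`, which prints the learned decision rules with the original feature names.

```python
from sklearn.tree import export_text

# Plain-text rendering of the fitted tree, no external dependencies required.
print(export_text(arbol, feature_names=list(data.feature_names)))
```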
2.5 Overfitting: tree depth and post-pruning Since trees are models that tend to overfit, we have to resort to different techniques to mitigate this problem. Let's first look at the effect of tree depth on the bias-variance trade-off. | profundidad = list(range(1,20))
resultados_train = []
resultados_test = []
for depth in profundidad:
# instanciamos el modelo uniforme
arbol = DecisionTreeClassifier(criterion='gini', max_depth=depth, min_samples_leaf=1, min_samples_split=2, ccp_alpha=0, class_weight="balanced")
arbol.fit(X_train, y_train)
y_train_pred = arbol.predict(X_train)
y_pred = arbol.predict(X_test)
resultados_train.append(accuracy_score(y_train, y_train_pred))
resultados_test.append(accuracy_score(y_test, y_pred))
# veamos que paso en cada caso
f, ax = plt.subplots(1,1,figsize=(14,5),sharey=True)
ax.plot(profundidad, resultados_train, profundidad, resultados_test);
ax.legend(['accuracy train', 'accuracy test']);
ax.set(xlabel='profundidad',ylabel='accuracy');
# veamos que pasa con un árbol sin corte de profundidad
np.random.seed(2021)
arbol = DecisionTreeClassifier(criterion='gini', ccp_alpha=0)
arbol.fit(X_train, y_train)
#print(classification_report(y_true=y_test,y_pred=arbol.predict(X_test)))
print('Accuracy en entrenamiento: %f' % accuracy_score(y_train,arbol.predict(X_train)))
print('Accuracy en test: %f' % accuracy_score(y_test,arbol.predict(X_test)))
# grafiquemos este árbol
dot_data = StringIO()
export_graphviz(arbol, out_file=dot_data,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png()) | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
A technique that lets us mitigate overfitting is what is known as post-pruning. The goal of this technique is to *prune* the trained tree, penalizing more complex trees in some way. The pruning algorithm implemented in Scikit-Learn is [Minimal Cost-Complexity Pruning](https://scikit-learn.org/stable/modules/tree.html#minimal-cost-complexity-pruning). The hyperparameter that controls this penalty is ccp_alpha$\geq 0$: when it is 0 we do no pruning at all, and as we increase it we penalize the number of terminal nodes of the tree more strongly. | arbol = DecisionTreeClassifier(criterion='gini', ccp_alpha=0.01)
arbol.fit(X_train, y_train)
#print(classification_report(y_true=y_test,y_pred=arbol.predict(X_test)))
print('Accuracy en entrenamiento: %f' % accuracy_score(y_train,arbol.predict(X_train)))
print('Accuracy en test: %f' % accuracy_score(y_test,arbol.predict(X_test)))
dot_data = StringIO()
export_graphviz(arbol, out_file=dot_data,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# veamos como afecta el rendimiento y la profundidad del árbol
ccp_alpha_vals = np.arange(0,1,0.05)
resultados_train = []
resultados_test = []
profundidad = []
for ccp in ccp_alpha_vals:
# instanciamos el modelo uniforme
arbol = DecisionTreeClassifier(criterion='gini', ccp_alpha=ccp)
arbol.fit(X_train, y_train)
# guardamos la profundidad del árbol
profundidad.append(arbol.tree_.max_depth)
y_train_pred = arbol.predict(X_train)
y_pred = arbol.predict(X_test)
resultados_train.append(accuracy_score(y_train, y_train_pred))
resultados_test.append(accuracy_score(y_test, y_pred))
f,ax = plt.subplots(2,1,figsize=(12,8),sharex=True)
ax[0].plot(ccp_alpha_vals, resultados_train, ccp_alpha_vals, resultados_test);
ax[0].legend(['accuracy train', 'accuracy test']);
ax[0].set(xlabel='ccp_alpha',ylabel='Accuracy');
ax[1].plot(ccp_alpha_vals, profundidad)
ax[1].set(xlabel='ccp_alpha',ylabel='Profundidad'); | _____no_output_____ | MIT | MachineLearning/5_KNNyArbolesDeDecision/KNN_Arboles.ipynb | guillelencina/cursos-python |
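Instead of sweeping an arbitrary grid of `ccp_alpha` values, scikit-learn can compute the effective alphas produced by Minimal Cost-Complexity Pruning directly; below is a sketch using `cost_complexity_pruning_path` on the training data.

```python
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt

# Candidate alphas and total leaf impurities along the pruning path.
path = DecisionTreeClassifier(criterion='gini').cost_complexity_pruning_path(X_train, y_train)
ccp_alphas, impurities = path.ccp_alphas, path.impurities

plt.figure(figsize=(8, 4))
plt.plot(ccp_alphas[:-1], impurities[:-1], marker='o', drawstyle='steps-post')
plt.xlabel('effective ccp_alpha')
plt.ylabel('total impurity of leaves');
```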
This notebook trains a N2V network in the first step and then finetunes it for segmentation. | # We import all our dependencies.
import warnings
warnings.filterwarnings('ignore')
import sys
sys.path.append('../../')
from voidseg.models import Seg, SegConfig
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from voidseg.utils.misc_utils import combine_train_test_data, shuffle_train_data, augment_data
from voidseg.utils.seg_utils import *
from n2v.utils.n2v_utils import manipulate_val_data
from voidseg.utils.compute_precision_threshold import compute_threshold, precision
from keras.optimizers import Adam
from matplotlib import pyplot as plt
from scipy import ndimage
import tensorflow as tf
import keras.backend as K
import urllib
import os
import zipfile
from tqdm import tqdm, tqdm_notebook | Using TensorFlow backend.
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Download DSB2018 data. From the Kaggle 2018 Data Science Bowl challenge, we take the same subset of data as has been used [here](https://github.com/mpicbg-csbd/stardist), showing a diverse collection of cell nuclei imaged by various fluorescence microscopes. We extracted 4870 image patches of size 128×128 from the training set and added Gaussian noise with mean 0 and sigma = 10 (n10), 20 (n20) and 40 (n40). This notebook shows results for n40 images. | # create a folder for our data
if not os.path.isdir('./data'):
os.mkdir('data')
# check if data has been downloaded already
zipPath="data/DSB.zip"
if not os.path.exists(zipPath):
#download and unzip data
data = urllib.request.urlretrieve('https://owncloud.mpi-cbg.de/index.php/s/LIN4L4R9b2gebDX/download', zipPath)
with zipfile.ZipFile(zipPath, 'r') as zip_ref:
zip_ref.extractall("data") | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
The downloaded data is in `npz` format and the cell below extracts the training, validation and test data as numpy arrays | trainval_data = np.load('data/DSB/train_data/dsb2018_TrainVal40.npz')
test_data = np.load('data/DSB/test_data/dsb2018_Test40.npz', allow_pickle=True)
train_images = trainval_data['X_train']
val_images = trainval_data['X_val']
test_images = test_data['X_test']
train_masks = trainval_data['Y_train']
val_masks = trainval_data['Y_val']
test_masks = test_data['Y_test']
print("Shape of train_images: ", train_images.shape, ", Shape of train_masks: ", train_masks.shape)
print("Shape of val_images: ", val_images.shape, ", Shape of val_masks: ", val_masks.shape)
print("Shape of test_images: ", test_images.shape, ", Shape of test_masks: ", test_masks.shape) | Shape of train_images: (3800, 128, 128) , Shape of train_masks: (3800, 128, 128)
Shape of val_images: (670, 128, 128) , Shape of val_masks: (670, 128, 128)
Shape of test_images: (50,) , Shape of test_masks: (50,)
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Data preparation for training an N2V network Since we can use all the noisy data for training the N2V network, we combine the noisy train_images and test_images and use them as input to the N2V network. | X, Y = combine_train_test_data(X_train=train_images,Y_train=train_masks,X_test=test_images,Y_test=test_masks)
print("Combined Dataset Shape", X.shape)
X_val = val_images
Y_val = val_masks | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Next, we shuffle the training pairs and augment the training and validation data. | random_seed = 1 # Seed to shuffle training data (annotated GT and raw image pairs)
X, Y = shuffle_train_data(X, Y, random_seed = random_seed)
print("Training Data \n..................")
X, Y = augment_data(X, Y)
print("\n")
print("Validation Data \n..................")
X_val, Y_val = augment_data(X_val, Y_val)
# Adding channel dimension
X = X[..., np.newaxis]
print(X.shape)
X_val = X_val[..., np.newaxis]
print(X_val.shape) | (34400, 128, 128, 1)
(5360, 128, 128, 1)
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Let's look at one of our training and validation patches. | sl=0
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[sl,...,0], cmap='gray')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[sl,...,0], cmap='gray')
plt.title('Validation Patch'); | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Configure N2V Network The data preparation for training a denoising N2V network is now done. Next, we configure the N2V network by specifying `N2VConfig` parameters. | config = N2VConfig(X, unet_kern_size=3, n_channel_out=1,train_steps_per_epoch=400, train_epochs=200,
train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=0.784, n2v_patch_shape=(64, 64),
unet_n_first = 32,
unet_residual = False,
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, unet_n_depth=4)
# Let's look at the parameters stored in the config-object.
vars(config)
# a name used to identify the model
model_name = 'n40_denoising'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)
model.prepare_for_training(metrics=()) | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Now, we begin training the denoising N2V model. If a trained model is already available, it is loaded; otherwise a new model is trained from scratch. | # We are ready to start training now.
query_weightpath = os.getcwd()+"/models/"+model_name
weights_present = False
for file in os.listdir(query_weightpath):
if(file == "weights_best.h5"):
print("Found weights of a trained N2V network, loading it for prediction!")
weights_present = True
break
if(weights_present):
model.load_weights("weights_best.h5")
else:
print("Did not find weights of a trained N2V network, training one from scratch!")
history = model.train(X, X_val) | Found weights of a trained N2V network, loading it for prediction!
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Data preparation for the segmentation step Next, we normalize all raw data with the mean and std (standard deviation) of the raw `train_images`. Then, we shuffle the raw training images and the corresponding Ground Truth (GT). Lastly, we fractionate the training pairs of raw images and corresponding GT to simulate the case where not enough annotated training data is available. For this fractionation, please specify the `fraction` parameter below. It should be between 0 (exclusive) and 100 (inclusive). | fraction = 2 # Fraction of annotated GT and raw image pairs to use during training.
random_seed = 1 # Seed to shuffle training data (annotated GT and raw image pairs).
assert 0 <fraction<= 100, "Fraction should be between 0 and 100"
mean, std = np.mean(train_images), np.std(train_images)
X_normalized = normalize(train_images, mean, std)
X_val_normalized = normalize(val_images, mean, std)
X_test_normalized = normalize(test_images, mean, std)
X_shuffled, Y_shuffled = shuffle_train_data(X_normalized, train_masks, random_seed = random_seed)
X_frac, Y_frac = fractionate_train_data(X_shuffled, Y_shuffled, fraction = fraction)
print("Training Data \n..................")
X, Y_train_masks = augment_data(X_frac, Y_frac)
print("\n")
print("Validation Data \n..................")
X_val, Y_val_masks = augment_data(X_val_normalized, val_masks) | Training Data
..................
Raw image size after augmentation (608, 128, 128)
Mask size after augmentation (608, 128, 128)
Validation Data
..................
Raw image size after augmentation (5360, 128, 128)
Mask size after augmentation (5360, 128, 128)
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Next, we do a one-hot encoding of training and validation labels for training a 3-class U-Net. One-hot encoding will extract three channels from each labelled image, where the channels correspond to background, foreground and border. | X = X[...,np.newaxis]
Y = convert_to_oneHot(Y_train_masks)
X_val = X_val[...,np.newaxis]
Y_val = convert_to_oneHot(Y_val_masks)
print(X.shape, Y.shape)
print(X_val.shape, Y_val.shape) | (608, 128, 128, 1) (608, 128, 128, 3)
(5360, 128, 128, 1) (5360, 128, 128, 3)
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
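For intuition, here is a minimal sketch of what a `convert_to_oneHot`-style function could look like. This is an assumption for illustration, not the actual VoidSeg implementation: it marks the rim of each labelled instance as the border class and splits the rest into background and foreground.

```python
import numpy as np
from scipy import ndimage

def one_hot_from_labels(lbl):
    # lbl: 2D integer label image, 0 = background, >0 = instance ids (assumed layout).
    border = np.zeros(lbl.shape, dtype=bool)
    for idx in np.unique(lbl):
        if idx == 0:
            continue
        obj = lbl == idx
        border |= obj ^ ndimage.binary_erosion(obj)   # pixels on the rim of this instance
    foreground = (lbl > 0) & ~border
    background = lbl == 0
    return np.stack([background, foreground, border], axis=-1).astype(np.float32)
```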
Let's look at one of our validation patches. | sl=0
plt.figure(figsize=(20,5))
plt.subplot(1,4,1)
plt.imshow(X_val[sl,...,0])
plt.title('Raw validation image')
plt.subplot(1,4,2)
plt.imshow(Y_val[sl,...,0])
plt.title('1-hot encoded background')
plt.subplot(1,4,3)
plt.imshow(Y_val[sl,...,1])
plt.title('1-hot encoded foreground')
plt.subplot(1,4,4)
plt.imshow(Y_val[sl,...,2])
plt.title('1-hot encoded border') | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Configure Segmentation Network The data preparation for segmentation is now done. Next, we configure a segmentation network by specifying `SegConfig` parameters. For example, one can increase `train_epochs` to get even better results at the expense of a longer computation. (This usually holds true for a large `fraction`.) | relative_weights = [1.0,1.0,5.0] # Relative weight of background, foreground and border class for training
config = SegConfig(X, unet_kern_size=3, relative_weights = relative_weights,
train_steps_per_epoch=400, train_epochs=3, batch_norm=True,
train_batch_size=128, unet_n_first = 32, unet_n_depth=4)
# Let's look at the parameters stored in the config-object.
# a name used to identify the model
model_name = 'seg_finetune'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
seg_model = Seg(config, model_name, basedir=basedir)
vars(config) | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
For finetuning, we initialize the segmentation network with the best weights of the denoising N2V network trained above. | ft_layers = seg_model.keras_model.layers
n2v_layers = model.keras_model.layers
for i in range(0, len(n2v_layers)-2):
ft_layers[i].set_weights(n2v_layers[i].get_weights())
for l in seg_model.keras_model.layers:
l.trainable=True | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
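An alternative finetuning strategy (not what is done above, just a sketch of a design choice) is to freeze the copied N2V layers and train only the layers after them; in Keras the `trainable` flags take effect once the model is (re)compiled, which happens when training is prepared.

```python
# Freeze the layers whose weights were copied from the trained N2V network.
for layer in seg_model.keras_model.layers[:len(n2v_layers) - 2]:
    layer.trainable = False
```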
Now, we begin training the model for segmentation. | seg_model.train(X, Y, (X_val, Y_val)) | Epoch 1/3
400/400 [==============================] - 152s 380ms/step - loss: 0.3040 - seg_crossentropy: 0.3040 - val_loss: 0.2867 - val_seg_crossentropy: 0.2867
Epoch 2/3
400/400 [==============================] - 143s 358ms/step - loss: 0.1202 - seg_crossentropy: 0.1202 - val_loss: 0.3895 - val_seg_crossentropy: 0.3895
Epoch 3/3
400/400 [==============================] - 142s 354ms/step - loss: 0.0760 - seg_crossentropy: 0.0760 - val_loss: 0.4506 - val_seg_crossentropy: 0.4506
Loading network weights from 'weights_best.h5'.
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
We compute the best threshold on the validation images (maximizing the Average Precision score). The threshold obtained this way is then used to turn the probability maps predicted on the test images into hard masks. | threshold=seg_model.optimize_thresholds(X_val_normalized.astype(np.float32), val_masks) | Computing best threshold:
| BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
Prediction on test images to get segmentation result | predicted_images, precision_result=seg_model.predict_label_masks(X_test_normalized, test_masks, threshold)
print("Average precision over all test images at IOU = 0.5: ", precision_result)
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(predicted_images[22])
plt.title('Prediction')
plt.subplot(1,2,2)
plt.imshow(test_masks[22])
plt.title('Ground Truth') | _____no_output_____ | BSD-3-Clause | examples/DSB2018/U-Net_Finetune.ipynb | psteinb/VoidSeg |
3 Ways to Program a Neural Network - DOTCSV Initial code | import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
# Creamos nuestros datos artificiales, donde buscaremos clasificar
# dos anillos concéntricos de datos.
X, Y = make_circles(n_samples=500, factor=0.5, noise=0.05)
# Resolución del mapa de predicción.
res = 100
# Coordendadas del mapa de predicción.
_x0 = np.linspace(-1.5, 1.5, res)
_x1 = np.linspace(-1.5, 1.5, res)
# Input con cada combo de coordenadas del mapa de predicción.
_pX = np.array(np.meshgrid(_x0, _x1)).T.reshape(-1, 2)
# Objeto vacio a 0.5 del mapa de predicción.
_pY = np.zeros((res, res)) + 0.5
# Visualización del mapa de predicción.
plt.figure(figsize=(8, 8))
plt.pcolormesh(_x0, _x1, _pY, cmap="coolwarm", vmin=0, vmax=1)
# Visualización de la nube de datos.
plt.scatter(X[Y == 0,0], X[Y == 0,1], c="skyblue")
plt.scatter(X[Y == 1,0], X[Y == 1,1], c="salmon")
plt.tick_params(labelbottom=False, labelleft=False) | _____no_output_____ | MIT | 2.3.1_3_Maneras_de_Programar_a_una_Red_Neuronal.ipynb | txusser/Master_IA_Sanidad |
Tensorflow | import tensorflow as tf
from matplotlib import animation
from IPython.core.display import display, HTML
# Definimos los puntos de entrada de la red, para la matriz X e Y.
iX = tf.placeholder('float', shape=[None, X.shape[1]])
iY = tf.placeholder('float', shape=[None])
lr = 0.01 # learning rate
nn = [2, 16, 8, 1] # número de neuronas por capa.
# Capa 1
W1 = tf.Variable(tf.random_normal([nn[0], nn[1]]), name='Weights_1')
b1 = tf.Variable(tf.random_normal([nn[1]]), name='bias_1')
l1 = tf.nn.relu(tf.add(tf.matmul(iX, W1), b1))
# Capa 2
W2 = tf.Variable(tf.random_normal([nn[1], nn[2]]), name='Weights_2')
b2 = tf.Variable(tf.random_normal([nn[2]]), name='bias_2')
l2 = tf.nn.relu(tf.add(tf.matmul(l1, W2), b2))
# Capa 3
W3 = tf.Variable(tf.random_normal([nn[2], nn[3]]), name='Weights_3')
b3 = tf.Variable(tf.random_normal([nn[3]]), name='bias_3')
# Vector de predicciones de Y.
pY = tf.nn.sigmoid(tf.add(tf.matmul(l2, W3), b3))[:, 0]
# Evaluación de las predicciones.
loss = tf.losses.mean_squared_error(pY, iY)
# Definimos al optimizador de la red, para que minimice el error.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(loss)
n_steps = 1000 # Número de ciclos de entrenamiento.
iPY = [] # Aquí guardaremos la evolución de las predicción, para la animación.
with tf.Session() as sess:
# Inicializamos todos los parámetros de la red, las matrices W y b.
sess.run(tf.global_variables_initializer())
# Iteramos n pases de entrenamiento.
for step in range(n_steps):
# Evaluamos al optimizador, a la función de coste y al tensor de salida pY.
# La evaluación del optimizer producirá el entrenamiento de la red.
_, _loss, _pY = sess.run([optimizer, loss, pY], feed_dict={ iX : X, iY : Y })
# Cada 25 iteraciones, imprimimos métricas.
if step % 25 == 0:
# Cálculo del accuracy.
acc = np.mean(np.round(_pY) == Y)
# Impresión de métricas.
print('Step', step, '/', n_steps, '- Loss = ', _loss, '- Acc =', acc)
# Obtenemos predicciones para cada punto de nuestro mapa de predicción _pX.
_pY = sess.run(pY, feed_dict={ iX : _pX }).reshape((res, res))
# Y lo guardamos para visualizar la animación.
iPY.append(_pY)
# ----- CÓDIGO ANIMACIÓN ----- #
ims = []
fig = plt.figure(figsize=(10, 10))
print("--- Generando animación ---")
for fr in range(len(iPY)):
im = plt.pcolormesh(_x0, _x1, iPY[fr], cmap="coolwarm", animated=True)
# Visualización de la nube de datos.
plt.scatter(X[Y == 0,0], X[Y == 0,1], c="skyblue")
plt.scatter(X[Y == 1,0], X[Y == 1,1], c="salmon")
# plt.title("Resultado Clasificación")
plt.tick_params(labelbottom=False, labelleft=False)
ims.append([im])
ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True, repeat_delay=1000)
HTML(ani.to_html5_video()) | Step 0 / 1000 - Loss = 0.29063216 - Acc = 0.562
Step 25 / 1000 - Loss = 0.18204297 - Acc = 0.632
Step 50 / 1000 - Loss = 0.1471082 - Acc = 0.79
Step 75 / 1000 - Loss = 0.13354021 - Acc = 0.854
Step 100 / 1000 - Loss = 0.122594796 - Acc = 0.902
Step 125 / 1000 - Loss = 0.111153014 - Acc = 0.942
Step 150 / 1000 - Loss = 0.09965967 - Acc = 0.956
Step 175 / 1000 - Loss = 0.08899061 - Acc = 0.968
Step 200 / 1000 - Loss = 0.07819476 - Acc = 0.98
Step 225 / 1000 - Loss = 0.06843367 - Acc = 0.984
Step 250 / 1000 - Loss = 0.059809823 - Acc = 0.992
Step 275 / 1000 - Loss = 0.051961992 - Acc = 0.994
Step 300 / 1000 - Loss = 0.04506078 - Acc = 0.996
Step 325 / 1000 - Loss = 0.03921565 - Acc = 0.998
Step 350 / 1000 - Loss = 0.034442045 - Acc = 1.0
Step 375 / 1000 - Loss = 0.030614918 - Acc = 1.0
Step 400 / 1000 - Loss = 0.027491104 - Acc = 1.0
Step 425 / 1000 - Loss = 0.024817096 - Acc = 1.0
Step 450 / 1000 - Loss = 0.022489877 - Acc = 1.0
Step 475 / 1000 - Loss = 0.020462362 - Acc = 1.0
Step 500 / 1000 - Loss = 0.018674525 - Acc = 1.0
Step 525 / 1000 - Loss = 0.01712375 - Acc = 1.0
Step 550 / 1000 - Loss = 0.015800532 - Acc = 1.0
Step 575 / 1000 - Loss = 0.014612312 - Acc = 1.0
Step 600 / 1000 - Loss = 0.013566933 - Acc = 1.0
Step 625 / 1000 - Loss = 0.012653999 - Acc = 1.0
Step 650 / 1000 - Loss = 0.011832824 - Acc = 1.0
Step 675 / 1000 - Loss = 0.011105223 - Acc = 1.0
Step 700 / 1000 - Loss = 0.010456048 - Acc = 1.0
Step 725 / 1000 - Loss = 0.009875296 - Acc = 1.0
Step 750 / 1000 - Loss = 0.009351645 - Acc = 1.0
Step 775 / 1000 - Loss = 0.008877279 - Acc = 1.0
Step 800 / 1000 - Loss = 0.0084480485 - Acc = 1.0
Step 825 / 1000 - Loss = 0.008039233 - Acc = 1.0
Step 850 / 1000 - Loss = 0.0076591335 - Acc = 1.0
Step 875 / 1000 - Loss = 0.0073076617 - Acc = 1.0
Step 900 / 1000 - Loss = 0.0069831987 - Acc = 1.0
Step 925 / 1000 - Loss = 0.0066823033 - Acc = 1.0
Step 950 / 1000 - Loss = 0.0064092586 - Acc = 1.0
Step 975 / 1000 - Loss = 0.0061591878 - Acc = 1.0
--- Generando animación ---
| MIT | 2.3.1_3_Maneras_de_Programar_a_una_Red_Neuronal.ipynb | txusser/Master_IA_Sanidad |
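A note on compatibility: the graph-mode code above relies on `tf.placeholder`, `tf.Session` and `tf.train.GradientDescentOptimizer`, which only exist in TensorFlow 1.x. A minimal shim, assuming a TensorFlow 2.x runtime, is to import the v1 compatibility module before running it.

```python
# Run the TF1-style graph code under TensorFlow 2.x via the compatibility module.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```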
Keras | import tensorflow as tf
import tensorflow.keras as kr
from IPython.core.display import display, HTML
lr = 0.01 # learning rate
nn = [2, 16, 8, 1] # número de neuronas por capa.
# Creamos el objeto que contendrá a nuestra red neuronal, como
# secuencia de capas.
model = kr.Sequential()
# Añadimos la capa 1
l1 = model.add(kr.layers.Dense(nn[1], activation='relu'))
# Añadimos la capa 2
l2 = model.add(kr.layers.Dense(nn[2], activation='relu'))
# Añadimos la capa 3
l3 = model.add(kr.layers.Dense(nn[3], activation='sigmoid'))
# Compilamos el modelo, definiendo la función de coste y el optimizador.
model.compile(loss='mse', optimizer=kr.optimizers.SGD(lr=0.05), metrics=['acc'])
# Y entrenamos al modelo. Los callbacks
model.fit(X, Y, epochs=100) | Epoch 1/100
500/500 [==============================] - 0s 111us/sample - loss: 0.2468 - acc: 0.5040
Epoch 2/100
500/500 [==============================] - 0s 37us/sample - loss: 0.2457 - acc: 0.5100
Epoch 3/100
500/500 [==============================] - 0s 40us/sample - loss: 0.2446 - acc: 0.5040
Epoch 4/100
500/500 [==============================] - 0s 37us/sample - loss: 0.2434 - acc: 0.5160
Epoch 5/100
500/500 [==============================] - 0s 36us/sample - loss: 0.2422 - acc: 0.5200
Epoch 6/100
500/500 [==============================] - 0s 39us/sample - loss: 0.2412 - acc: 0.5400
Epoch 7/100
500/500 [==============================] - 0s 38us/sample - loss: 0.2400 - acc: 0.5460
Epoch 8/100
500/500 [==============================] - 0s 38us/sample - loss: 0.2388 - acc: 0.5780
Epoch 9/100
500/500 [==============================] - 0s 41us/sample - loss: 0.2376 - acc: 0.5840
Epoch 10/100
500/500 [==============================] - 0s 41us/sample - loss: 0.2363 - acc: 0.5960
Epoch 11/100
500/500 [==============================] - 0s 38us/sample - loss: 0.2350 - acc: 0.6280
Epoch 12/100
500/500 [==============================] - 0s 40us/sample - loss: 0.2336 - acc: 0.6220
Epoch 13/100
500/500 [==============================] - 0s 41us/sample - loss: 0.2320 - acc: 0.6280
Epoch 14/100
500/500 [==============================] - 0s 36us/sample - loss: 0.2305 - acc: 0.6580
Epoch 15/100
500/500 [==============================] - 0s 41us/sample - loss: 0.2289 - acc: 0.6640
Epoch 16/100
500/500 [==============================] - 0s 39us/sample - loss: 0.2272 - acc: 0.7040
Epoch 17/100
500/500 [==============================] - 0s 40us/sample - loss: 0.2255 - acc: 0.7140
Epoch 18/100
500/500 [==============================] - 0s 47us/sample - loss: 0.2238 - acc: 0.7280
Epoch 19/100
500/500 [==============================] - 0s 42us/sample - loss: 0.2221 - acc: 0.7440
Epoch 20/100
500/500 [==============================] - 0s 41us/sample - loss: 0.2201 - acc: 0.7620
Epoch 21/100
500/500 [==============================] - 0s 37us/sample - loss: 0.2181 - acc: 0.7740
Epoch 22/100
500/500 [==============================] - 0s 50us/sample - loss: 0.2161 - acc: 0.7900
Epoch 23/100
500/500 [==============================] - 0s 39us/sample - loss: 0.2140 - acc: 0.8040
Epoch 24/100
500/500 [==============================] - 0s 38us/sample - loss: 0.2119 - acc: 0.8020
Epoch 25/100
500/500 [==============================] - 0s 48us/sample - loss: 0.2095 - acc: 0.8440
Epoch 26/100
500/500 [==============================] - 0s 44us/sample - loss: 0.2072 - acc: 0.8280
Epoch 27/100
500/500 [==============================] - 0s 43us/sample - loss: 0.2048 - acc: 0.8620
Epoch 28/100
500/500 [==============================] - 0s 41us/sample - loss: 0.2023 - acc: 0.8720
Epoch 29/100
500/500 [==============================] - 0s 39us/sample - loss: 0.1997 - acc: 0.8700
Epoch 30/100
500/500 [==============================] - 0s 37us/sample - loss: 0.1970 - acc: 0.8940
Epoch 31/100
500/500 [==============================] - 0s 43us/sample - loss: 0.1941 - acc: 0.9100
Epoch 32/100
500/500 [==============================] - 0s 46us/sample - loss: 0.1910 - acc: 0.9220
Epoch 33/100
500/500 [==============================] - 0s 38us/sample - loss: 0.1877 - acc: 0.9260
Epoch 34/100
500/500 [==============================] - 0s 37us/sample - loss: 0.1843 - acc: 0.9380
Epoch 35/100
500/500 [==============================] - 0s 40us/sample - loss: 0.1811 - acc: 0.9360
Epoch 36/100
500/500 [==============================] - 0s 41us/sample - loss: 0.1779 - acc: 0.9480
Epoch 37/100
500/500 [==============================] - 0s 44us/sample - loss: 0.1748 - acc: 0.9520
Epoch 38/100
500/500 [==============================] - 0s 45us/sample - loss: 0.1713 - acc: 0.9580
Epoch 39/100
500/500 [==============================] - 0s 47us/sample - loss: 0.1680 - acc: 0.9620
Epoch 40/100
500/500 [==============================] - 0s 39us/sample - loss: 0.1646 - acc: 0.9660
Epoch 41/100
500/500 [==============================] - 0s 44us/sample - loss: 0.1610 - acc: 0.9660
Epoch 42/100
500/500 [==============================] - 0s 38us/sample - loss: 0.1576 - acc: 0.9720
Epoch 43/100
500/500 [==============================] - 0s 38us/sample - loss: 0.1538 - acc: 0.9760
Epoch 44/100
500/500 [==============================] - 0s 37us/sample - loss: 0.1503 - acc: 0.9740
Epoch 45/100
500/500 [==============================] - 0s 42us/sample - loss: 0.1463 - acc: 0.9780
Epoch 46/100
500/500 [==============================] - 0s 44us/sample - loss: 0.1426 - acc: 0.9800
Epoch 47/100
500/500 [==============================] - 0s 44us/sample - loss: 0.1386 - acc: 0.9840
Epoch 48/100
500/500 [==============================] - 0s 46us/sample - loss: 0.1346 - acc: 0.9860
Epoch 49/100
500/500 [==============================] - 0s 48us/sample - loss: 0.1305 - acc: 0.9920
Epoch 50/100
500/500 [==============================] - 0s 40us/sample - loss: 0.1263 - acc: 0.9960
Epoch 51/100
500/500 [==============================] - 0s 38us/sample - loss: 0.1221 - acc: 0.9980
Epoch 52/100
500/500 [==============================] - 0s 38us/sample - loss: 0.1180 - acc: 1.0000
Epoch 53/100
500/500 [==============================] - 0s 36us/sample - loss: 0.1137 - acc: 0.9980
Epoch 54/100
500/500 [==============================] - 0s 42us/sample - loss: 0.1094 - acc: 1.0000
Epoch 55/100
500/500 [==============================] - 0s 43us/sample - loss: 0.1051 - acc: 1.0000
Epoch 56/100
500/500 [==============================] - 0s 44us/sample - loss: 0.1011 - acc: 1.0000
Epoch 57/100
500/500 [==============================] - 0s 41us/sample - loss: 0.0969 - acc: 1.0000
Epoch 58/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0930 - acc: 1.0000
Epoch 59/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0891 - acc: 1.0000
Epoch 60/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0855 - acc: 1.0000
Epoch 61/100
500/500 [==============================] - 0s 41us/sample - loss: 0.0819 - acc: 1.0000
Epoch 62/100
500/500 [==============================] - 0s 40us/sample - loss: 0.0785 - acc: 1.0000
Epoch 63/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0752 - acc: 1.0000
Epoch 64/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0719 - acc: 1.0000
Epoch 65/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0689 - acc: 1.0000
Epoch 66/100
500/500 [==============================] - 0s 41us/sample - loss: 0.0660 - acc: 1.0000
Epoch 67/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0632 - acc: 1.0000
Epoch 68/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0606 - acc: 1.0000
Epoch 69/100
500/500 [==============================] - 0s 44us/sample - loss: 0.0581 - acc: 1.0000
Epoch 70/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0556 - acc: 1.0000
Epoch 71/100
500/500 [==============================] - 0s 40us/sample - loss: 0.0533 - acc: 1.0000
Epoch 72/100
500/500 [==============================] - 0s 47us/sample - loss: 0.0511 - acc: 1.0000
Epoch 73/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0491 - acc: 1.0000
Epoch 74/100
500/500 [==============================] - 0s 43us/sample - loss: 0.0471 - acc: 1.0000
Epoch 75/100
500/500 [==============================] - 0s 41us/sample - loss: 0.0452 - acc: 1.0000
Epoch 76/100
500/500 [==============================] - 0s 40us/sample - loss: 0.0434 - acc: 1.0000
Epoch 77/100
500/500 [==============================] - 0s 40us/sample - loss: 0.0418 - acc: 1.0000
Epoch 78/100
500/500 [==============================] - 0s 40us/sample - loss: 0.0401 - acc: 1.0000
Epoch 79/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0386 - acc: 1.0000
Epoch 80/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0371 - acc: 1.0000
Epoch 81/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0358 - acc: 1.0000
Epoch 82/100
500/500 [==============================] - 0s 51us/sample - loss: 0.0344 - acc: 1.0000
Epoch 83/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0332 - acc: 1.0000
Epoch 84/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0320 - acc: 1.0000
Epoch 85/100
500/500 [==============================] - 0s 41us/sample - loss: 0.0309 - acc: 1.0000
Epoch 86/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0298 - acc: 1.0000
Epoch 87/100
500/500 [==============================] - 0s 41us/sample - loss: 0.0288 - acc: 1.0000
Epoch 88/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0278 - acc: 1.0000
Epoch 89/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0269 - acc: 1.0000
Epoch 90/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0260 - acc: 1.0000
Epoch 91/100
500/500 [==============================] - 0s 42us/sample - loss: 0.0251 - acc: 1.0000
Epoch 92/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0243 - acc: 1.0000
Epoch 93/100
500/500 [==============================] - 0s 42us/sample - loss: 0.0235 - acc: 1.0000
Epoch 94/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0228 - acc: 1.0000
Epoch 95/100
500/500 [==============================] - 0s 43us/sample - loss: 0.0221 - acc: 1.0000
Epoch 96/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0215 - acc: 1.0000
Epoch 97/100
500/500 [==============================] - 0s 39us/sample - loss: 0.0208 - acc: 1.0000
Epoch 98/100
500/500 [==============================] - 0s 38us/sample - loss: 0.0202 - acc: 1.0000
Epoch 99/100
500/500 [==============================] - 0s 40us/sample - loss: 0.0196 - acc: 1.0000
Epoch 100/100
500/500 [==============================] - 0s 37us/sample - loss: 0.0190 - acc: 1.0000
| MIT | 2.3.1_3_Maneras_de_Programar_a_una_Red_Neuronal.ipynb | txusser/Master_IA_Sanidad |
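To compare the Keras model with the TensorFlow version above, here is a sketch that reuses the prediction grid `_pX` defined in the initial code to plot the learned decision map.

```python
import matplotlib.pyplot as plt

# Predict on the grid of coordinates and reshape back to the map resolution.
_pY_keras = model.predict(_pX).reshape((res, res))

plt.figure(figsize=(8, 8))
plt.pcolormesh(_x0, _x1, _pY_keras, cmap="coolwarm", vmin=0, vmax=1)
plt.scatter(X[Y == 0, 0], X[Y == 0, 1], c="skyblue")
plt.scatter(X[Y == 1, 0], X[Y == 1, 1], c="salmon")
plt.tick_params(labelbottom=False, labelleft=False)
plt.show()
```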
Sklearn | import sklearn as sk
import sklearn.neural_network
from IPython.core.display import display, HTML
lr = 0.01 # learning rate
nn = [2, 16, 8, 1] # número de neuronas por capa.
# Creamos el objeto del modelo de red neuronal multicapa.
clf = sk.neural_network.MLPRegressor(solver='sgd',
learning_rate_init=lr,
hidden_layer_sizes=tuple(nn[1:]),
verbose=True,
n_iter_no_change=1000,
batch_size = 64)
# Y lo entrenamos con nuestro datos.
clf.fit(X, Y) | Iteration 1, loss = 0.66391606
Iteration 2, loss = 0.29448667
Iteration 3, loss = 0.13429471
Iteration 4, loss = 0.13165037
Iteration 5, loss = 0.13430276
Iteration 6, loss = 0.12556423
Iteration 7, loss = 0.12292571
Iteration 8, loss = 0.12204933
Iteration 9, loss = 0.12175702
Iteration 10, loss = 0.12129750
Iteration 11, loss = 0.12073281
Iteration 12, loss = 0.12028767
Iteration 13, loss = 0.11983928
Iteration 14, loss = 0.11939207
Iteration 15, loss = 0.11909108
Iteration 16, loss = 0.11836549
Iteration 17, loss = 0.11771654
Iteration 18, loss = 0.11703195
Iteration 19, loss = 0.11636100
Iteration 20, loss = 0.11559426
Iteration 21, loss = 0.11475135
Iteration 22, loss = 0.11391514
Iteration 23, loss = 0.11296898
Iteration 24, loss = 0.11183055
Iteration 25, loss = 0.11070522
Iteration 26, loss = 0.10945900
Iteration 27, loss = 0.10807801
Iteration 28, loss = 0.10653328
Iteration 29, loss = 0.10483565
Iteration 30, loss = 0.10299502
Iteration 31, loss = 0.10109678
Iteration 32, loss = 0.09868998
Iteration 33, loss = 0.09625805
Iteration 34, loss = 0.09356631
Iteration 35, loss = 0.09051455
Iteration 36, loss = 0.08720220
Iteration 37, loss = 0.08356481
Iteration 38, loss = 0.07956683
Iteration 39, loss = 0.07531904
Iteration 40, loss = 0.07057397
Iteration 41, loss = 0.06558903
Iteration 42, loss = 0.06042358
Iteration 43, loss = 0.05469307
Iteration 44, loss = 0.04914731
Iteration 45, loss = 0.04335916
Iteration 46, loss = 0.03773750
Iteration 47, loss = 0.03259090
Iteration 48, loss = 0.02785086
Iteration 49, loss = 0.02361280
Iteration 50, loss = 0.02024953
Iteration 51, loss = 0.01725262
Iteration 52, loss = 0.01477666
Iteration 53, loss = 0.01294427
Iteration 54, loss = 0.01150503
Iteration 55, loss = 0.01036948
Iteration 56, loss = 0.00950101
Iteration 57, loss = 0.00882902
Iteration 58, loss = 0.00840145
Iteration 59, loss = 0.00797252
Iteration 60, loss = 0.00769409
Iteration 61, loss = 0.00743825
Iteration 62, loss = 0.00731249
Iteration 63, loss = 0.00711639
Iteration 64, loss = 0.00700836
Iteration 65, loss = 0.00685081
Iteration 66, loss = 0.00670581
Iteration 67, loss = 0.00659854
Iteration 68, loss = 0.00655539
Iteration 69, loss = 0.00642937
Iteration 70, loss = 0.00637203
Iteration 71, loss = 0.00619710
Iteration 72, loss = 0.00614971
Iteration 73, loss = 0.00593245
Iteration 74, loss = 0.00579465
Iteration 75, loss = 0.00565489
Iteration 76, loss = 0.00553982
Iteration 77, loss = 0.00541618
Iteration 78, loss = 0.00532437
Iteration 79, loss = 0.00525496
Iteration 80, loss = 0.00514261
Iteration 81, loss = 0.00511693
Iteration 82, loss = 0.00499175
Iteration 83, loss = 0.00497192
Iteration 84, loss = 0.00491734
Iteration 85, loss = 0.00470830
Iteration 86, loss = 0.00461381
Iteration 87, loss = 0.00455140
Iteration 88, loss = 0.00446001
Iteration 89, loss = 0.00440248
Iteration 90, loss = 0.00430629
Iteration 91, loss = 0.00427582
Iteration 92, loss = 0.00420453
Iteration 93, loss = 0.00413087
Iteration 94, loss = 0.00406708
Iteration 95, loss = 0.00399991
Iteration 96, loss = 0.00394088
Iteration 97, loss = 0.00390739
Iteration 98, loss = 0.00384822
Iteration 99, loss = 0.00379567
Iteration 100, loss = 0.00372736
Iteration 101, loss = 0.00364839
Iteration 102, loss = 0.00359586
Iteration 103, loss = 0.00356903
Iteration 104, loss = 0.00350804
Iteration 105, loss = 0.00346888
Iteration 106, loss = 0.00341325
Iteration 107, loss = 0.00338402
Iteration 108, loss = 0.00334556
Iteration 109, loss = 0.00331617
Iteration 110, loss = 0.00327267
Iteration 111, loss = 0.00322546
Iteration 112, loss = 0.00316221
Iteration 113, loss = 0.00311790
Iteration 114, loss = 0.00308636
Iteration 115, loss = 0.00305983
Iteration 116, loss = 0.00307628
Iteration 117, loss = 0.00302102
Iteration 118, loss = 0.00299013
Iteration 119, loss = 0.00294987
Iteration 120, loss = 0.00295874
Iteration 121, loss = 0.00292606
Iteration 122, loss = 0.00289585
Iteration 123, loss = 0.00288184
Iteration 124, loss = 0.00286175
Iteration 125, loss = 0.00284965
Iteration 126, loss = 0.00286328
Iteration 127, loss = 0.00283168
Iteration 128, loss = 0.00285682
Iteration 129, loss = 0.00279665
Iteration 130, loss = 0.00278923
Iteration 131, loss = 0.00278239
Iteration 132, loss = 0.00276704
Iteration 133, loss = 0.00275697
Iteration 134, loss = 0.00275890
Iteration 135, loss = 0.00275535
Iteration 136, loss = 0.00282983
Iteration 137, loss = 0.00275359
Iteration 138, loss = 0.00272988
Iteration 139, loss = 0.00269894
Iteration 140, loss = 0.00272954
Iteration 141, loss = 0.00268760
Iteration 142, loss = 0.00267833
Iteration 143, loss = 0.00267846
Iteration 144, loss = 0.00269751
Iteration 145, loss = 0.00266955
Iteration 146, loss = 0.00265685
Iteration 147, loss = 0.00268063
Iteration 148, loss = 0.00265680
Iteration 149, loss = 0.00263361
Iteration 150, loss = 0.00262043
Iteration 151, loss = 0.00262108
Iteration 152, loss = 0.00262173
Iteration 153, loss = 0.00263316
Iteration 154, loss = 0.00259775
Iteration 155, loss = 0.00258960
Iteration 156, loss = 0.00263879
Iteration 157, loss = 0.00259500
Iteration 158, loss = 0.00257932
Iteration 159, loss = 0.00259434
Iteration 160, loss = 0.00256704
Iteration 161, loss = 0.00258173
Iteration 162, loss = 0.00253499
Iteration 163, loss = 0.00253539
Iteration 164, loss = 0.00253766
Iteration 165, loss = 0.00255039
Iteration 166, loss = 0.00253523
Iteration 167, loss = 0.00253166
Iteration 168, loss = 0.00252858
Iteration 169, loss = 0.00253196
Iteration 170, loss = 0.00251232
Iteration 171, loss = 0.00252011
Iteration 172, loss = 0.00251934
Iteration 173, loss = 0.00249041
Iteration 174, loss = 0.00249983
Iteration 175, loss = 0.00249816
Iteration 176, loss = 0.00249634
Iteration 177, loss = 0.00249739
Iteration 178, loss = 0.00249030
Iteration 179, loss = 0.00246445
Iteration 180, loss = 0.00250390
Iteration 181, loss = 0.00247568
Iteration 182, loss = 0.00247083
Iteration 183, loss = 0.00247611
Iteration 184, loss = 0.00246227
Iteration 185, loss = 0.00245628
Iteration 186, loss = 0.00245701
Iteration 187, loss = 0.00246615
Iteration 188, loss = 0.00244919
Iteration 189, loss = 0.00245754
Iteration 190, loss = 0.00245784
Iteration 191, loss = 0.00243623
Iteration 192, loss = 0.00245733
Iteration 193, loss = 0.00245661
Iteration 194, loss = 0.00242044
Iteration 195, loss = 0.00241922
Iteration 196, loss = 0.00242431
Iteration 197, loss = 0.00242330
Iteration 198, loss = 0.00245886
Iteration 199, loss = 0.00242789
Iteration 200, loss = 0.00240292
| MIT | 2.3.1_3_Maneras_de_Programar_a_una_Red_Neuronal.ipynb | txusser/Master_IA_Sanidad |
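The sklearn cell above ends at `clf.fit(X, Y)` without inspecting the result. A minimal sketch of how the fitted `MLPRegressor` could be checked and visualised is shown below; it assumes the `X` (two-column inputs) and `Y` (targets) arrays from the earlier cells of that notebook are still in scope, and the grid range is an arbitrary choice, not part of the original notebook.

```
import numpy as np
import matplotlib.pyplot as plt

# Evaluate the fitted network on a regular grid covering the 2-D input plane.
res = 100
xx, yy = np.meshgrid(np.linspace(-1.5, 1.5, res), np.linspace(-1.5, 1.5, res))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(res, res)

plt.pcolormesh(xx, yy, zz, cmap="coolwarm")                 # predicted output surface
plt.scatter(X[:, 0], X[:, 1], c=np.ravel(Y), s=10, cmap="coolwarm", edgecolors="k")
plt.title("MLPRegressor output over the input plane")
plt.show()

print("Final training loss:", clf.loss_)   # should match the last iteration printed above
```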
Arctic Project in Linear Regression: K-fold + Y:Area Import libraries | library(MASS)
library(tidyverse) | ── Attaching packages ──────────────────────────────────────────────────── tidyverse 1.3.0 ──
✔ ggplot2 3.3.2     ✔ purrr   0.3.4
✔ tibble  3.0.4     ✔ dplyr   1.0.2
✔ tidyr   1.1.2     ✔ stringr 1.4.0
✔ readr   1.4.0     ✔ forcats 0.5.0
── Conflicts ─────────────────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
✖ dplyr::select() masks MASS::select()
| MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Load data | arctic <- read.csv("arctic_data.csv",stringsAsFactors = F) | _____no_output_____ | MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Data segmentation | folds <- cut(seq(1,nrow(arctic)), breaks = 10, labels = FALSE) | _____no_output_____ | MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Prediction | prediction <- as.data.frame(
sapply(1:10, FUN = function(i) # loop 1:K
{
testID <- which(folds == i, arr.ind = TRUE)
test <- arctic[testID, ]
train <- arctic[-testID, ] # set K-fold
# print(test) # if needed
# linear regression
model <- lm(area~rainfall+daylight+population+CO2+ozone+ocean_temp+land_temp,data=train)
# print(summary(model)) # if needed
# prediction output
predict(model,test)
})) | _____no_output_____ | MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Table gathering and merging | pred_gather <- gather(data=prediction, key="fold",value="prediction",1:10)
result <- as.data.frame(cbind(arctic[,c(1,6)],pred_gather)) | _____no_output_____ | MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Calculate value of R^2 | result["R^2"] <- ((result$area-result$prediction)^2)
R_square <- sum(result$`R^2`)/490 | _____no_output_____ | MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Plot line chart (Prediction vs True) with title, legend, and specific size of figure | {plot(result$observation,result$area,type ='l',ylim = c(0,1.5),lwd = '2',xlab = "Date", ylab = "Value",xaxt='n')
lines(result$observation,result$prediction,lty=1,col='red',lwd = '2')
axis(1,at=c(1,61,121,181,241,301,361,421,481),
labels=c("Jan 1980","Jan 1985","Jan 1990","Jan 1995","Jan 2000","Jan 2005","Jan 2010","Jan 2015","Jan 2020"))
title(main = list("Linear Regression", cex = 1.5, col = "red", font = 3))
legend("topright", inset=.05, c("Prediction","True"), bty = 'n', lty=c(1, 1), col=c("red", "black"),lwd =c(2, 2))
options(repr.plot.width=20, repr.plot.height=10)
} | _____no_output_____ | MIT | .ipynb_checkpoints/1-linear_regression_K-fold_area-checkpoint.ipynb | UCL-BENV0091-Antarctic/antarctic |
Data extraction and Pairing of Insulin Inputs to Glucose Measurements in the ICU
Interactive notebook: Part II
Authors: [Aldo Robles Arévalo](mailto:[email protected]); Jason Maley; Lawrence Baker; Susana M. da Silva Vieira; João M. da Costa Sousa; Stan Finkelstein; Jesse D. Raffa; Roselyn Cristelle; Leo Celi; Francis DeMichele
Overview
This notebook contains the pairing of pre-processed glucose readings and insulin inputs from the Medical Information Mart for Intensive Care (MIMIC). The curation is detailed in *1.0-ara-data-curation-I.ipynb*.
General instructions
To perform the queries, do not forget to specify the project ID that grants you access to the MIMIC database hosted in *BigQuery*: substitute the `projectid` variable with the name of that project. If you want to save the dataframes to your *BigQuery* project, uncomment the corresponding cells, substitute `your_dataset` with the name of your *BigQuery* dataset, and execute. You can also save the created dataframes and figures to your Google Drive account. After mounting your drive, set the `base_dir` variable to the path of the folder where you want to save them. In this notebook that folder was named `Insulin Therapy ICU` and `MyDrive` is the parent folder. Figures are saved in the path *Insulin Therapy ICU/DataExtraction/MIMIC_III/Figures/*; change it according to your needs or create folders with exactly these names in your Google Drive.
Pairing rules
Once the insulin inputs and glucose readings from the *1.0-ara-data-curation-I.ipynb* notebook are merged, we continue with the **pairing** of each insulin event with a preceding glucose reading. The goal is to link each insulin dose with the nearest glucose measurement. For this complex task, carried out in BigQuery, the following rules (or assumptions) are proposed:
1. **Rule 1**: A glucose reading should precede a regular insulin administration by up to 90 minutes. The basis for this time window is the diabetic ketoacidosis guidelines, which recommend measuring glucose every 60 minutes while a patient receives an insulin infusion. An additional 30 minutes (90 minutes in total) accounts for the time it may take providers to register the event.
2. **Rule 2**: When a regular insulin event is not preceded, but instead followed, by a blood glucose measurement, this glucose reading is paired with the regular insulin administration if they are recorded within 90 minutes of each other.
3. **Rule 3**: Sometimes a regular insulin infusion/bolus appears between 2 blood glucose measurements. In this case, the higher glucose value is paired with the regular insulin entry as long as they are entered within 90 minutes of each other.
4. **Rule 4**: When a regular insulin bolus occurs very close to a regular insulin infusion rate, it is assumed that the patient was given a bolus and then commenced on an infusion. Both regular insulin entries are paired with the preceding blood glucose measurement, or with the posterior glucose reading when its value is higher than the preceding one and it is entered within 90 minutes of the insulin dose.
5. No glucose value below 90 mg/dL is paired with a subsequent regular insulin bolus or infusion, since no clinician will treat a blood glucose value this low with regular insulin.
Code
Import dependencies and libraries | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.colors as colors
from scipy import stats
from datetime import datetime
import time
import warnings
# Below imports are used to print out pretty pandas dataframes
from IPython.display import display, HTML
# Imports for accessing data using Google BigQuery.
from google.cloud import bigquery
from google.colab import files, auth
auth.authenticate_user()
print('Authenticated')
%load_ext google.colab.data_table
# Function to submit query to BigQuery
def q(query,projectid):
client = bigquery.Client(location="US",project=projectid)
# Location must match that of the dataset(s) referenced in the query.
query_job = client.query(query,
location="US",)
return query_job.to_dataframe()
#Rounding (for heatmap categories)
def myround(x, base):
return int(base * round(float(x)/base))
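# e.g. myround(137, 25) -> 125; used further below to bin glucose (mg/dL) and insulin (U)
# values into heatmap cells of width `base`.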
def convert_to_datetime(df,time_cols):
for t_col in time_cols:
df[t_col] = pd.to_datetime(df[t_col])
return(df)
from google.colab import drive
drive.mount('/content/gdrive')
# Select your own folder
base_dir = "/content/gdrive/My Drive/Insulin Therapy ICU" | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Adjusted datasets
* **Note 1**: Substitute `your_dataset` with the name of your dataset ID (Line 850) where you hosted/stored the tables created in the `1.0-ara-pairing-I.ipynb` notebook.
* **Note 2**: The table `glucose_insulin_ICU` was created in the `1.0-ara-pairing-I.ipynb` notebook. It is equivalent to `glucose_insulin_ICU.csv`. | # Import dataset adjusted or aligned
projectid = "YOUR_PROJECT_ID" # <-- Add your project ID
query ="""
WITH pg AS(
SELECT p1.*
-- Column GLC_AL that would gather paired glucose values according to the proposed rules
,(CASE
-- 1ST CLAUSE
-- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose values is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Preceding glucose 2 rows above is equal or greater than 90 min
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN (LAG(p1.GLC,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN (LAG(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another insulin event
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose (2 rows below) occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 1 row above occures up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin, should be equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
-- glucose within 90 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
-- Time-gap between posterior glucose and insulin is equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the POSTERIOR glucose (1 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,1) OVER(w))
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS GLC_AL
-- ---------------------------------------------------------------------------------------------
-- Column GLCTIMER_AL that would gather the timestamp of the paired glucose reading
, (CASE
-- 1ST CLAUSE
-- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose values is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Preceding glucose 2 rows above is equal or greater than 90 min
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN (LAG(p1.TIMER,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN (LAG(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose (2 rows below) occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 1 row above occures up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.TIMER,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin, should be equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
-- glucose within 90 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
-- Time-gap between posterior glucose and insulin is equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the timestamp of the POSTERIOR glucose (1 rows below) measurement that gathers the
-- previous conditions
THEN (LEAD(p1.TIMER,1) OVER(w))
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS GLCTIMER_AL
-- -----------------------------------------------------------------------------------------------
-- Column GLCSOURCE_AL indicates whether the paired glucose reading is a fingerstick
-- or a lab (blood) analyzer sample
, (CASE
-- 1ST CLAUSE
-- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose values is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Preceding glucose 2 rows above is equal or greater than 90 min
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN (LAG(p1.GLCSOURCE,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose (2 rows below) occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 1 row above occures up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin, should be equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
-- glucose within 90 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
-- Time-gap between posterior glucose and insulin is equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the whether is figerstick or lab analyzer the POSTERIOR glucose (1 rows below) measurement
-- that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,1) OVER(w))
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS GLCSOURCE_AL
-- ---------------------------------------------------------------------------------------------
-- Column RULE that indicates which pairing rule is applied for the i-th case
, (CASE
-- 1ST CLAUSE
-- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN 1
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Time-gap between glucose and insulin, should be equal or less than 90 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose values is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN 3
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Preceding glucose 2 rows above is equal or greater than 90 min
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Preceding glucose 2 rows above occured up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose (2 rows below) occurs within 90 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 90
-- Preceding glucose 1 row above occures up to 90 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin, should be equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN 1
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
-- glucose within 90 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
-- Time-gap between posterior glucose and insulin is equal or less than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 90)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the Rule number applied
THEN 2
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS RULE
FROM `your_dataset.glucose_insulin_ICU` AS p1
WINDOW w AS(PARTITION BY CAST(p1.HADM_ID AS INT64) ORDER BY p1.TIMER)
)
-- Create a column that flags glucose readings that were already paired above and are therefore duplicated in pg
SELECT pg.*
, (CASE
WHEN pg.GLCSOURCE_AL IS null
AND (LEAD(pg.GLCTIMER_AL,1) OVER(x) = pg.GLCTIMER)
THEN 1
WHEN pg.GLCSOURCE_AL IS null
AND (LAG(pg.GLCTIMER_AL,1) OVER(x) = pg.GLCTIMER)
AND LAG(endtime,1) OVER(x) IS NOT null
THEN 1
ELSE null END) AS Repeated
FROM pg
WINDOW x AS(PARTITION BY ICUSTAY_ID ORDER BY pg.timer)
"""
ICUinputs_adjusted = q(query,projectid)
del query
# Convert dtypes
ICUinputs_adjusted[["Repeated","INFXSTOP","RULE"]] = ICUinputs_adjusted[
["Repeated","INFXSTOP","RULE"]].apply(pd.to_numeric, errors='coerce')
# Remove values that are repeated due to the SQL query
ICUinputs_adjusted = ICUinputs_adjusted[ICUinputs_adjusted['Repeated']!=1]
# Get statistics
display(HTML('<h5>Contains the following information</h5>'))
print("Entries: {}".format(ICUinputs_adjusted.shape[0]))
print("Patients: {}".format(ICUinputs_adjusted['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(ICUinputs_adjusted['HADM_ID'].nunique()))
print('ICU stays: {}'.format(ICUinputs_adjusted['ICUSTAY_ID'].nunique()))
# Rules
display(HTML('<h5>Frequency of the rules</h5>'))
print(ICUinputs_adjusted['RULE'].value_counts()) | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Boluses of short-acting insulin | # Filtering for only short insulin boluses and all sources of glucose
short_BOL_adjusted = ICUinputs_adjusted[
(ICUinputs_adjusted['INSULINTYPE']=="Short") &
(ICUinputs_adjusted['EVENT'].str.contains('BOLUS'))].copy()
# Get statistics
display(HTML('<h5>Contains the following information</h5>'))
print("Entries: {}".format(short_BOL_adjusted.shape[0]))
print("Patients: {}".format(short_BOL_adjusted['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(short_BOL_adjusted['HADM_ID'].nunique()))
print('ICU stays: {}'.format(short_BOL_adjusted['ICUSTAY_ID'].nunique()))
display(short_BOL_adjusted[['INPUT','GLC_AL']].describe())
# Save as CSV file, uncomment and modify as needed.
# short_BOL_adjusted.to_csv(base_dir+"/DataExtraction/BolusesCUR.csv", index=False,
# encoding='utf8', header = True)
# Aligned and not aligned entries
display(HTML('<h2>Bolus entries of short-acting insulin</h2>'))
print("Entries that were aligned: {}".format(
short_BOL_adjusted.shape[0]-short_BOL_adjusted.loc[np.isnan(
short_BOL_adjusted.RULE),'RULE'].shape[0]))
print("Entries that weren't aligned: {}".format(
short_BOL_adjusted.loc[np.isnan(short_BOL_adjusted.RULE),'RULE'].shape[0]))
print("Non-paired percentage: {:0.2f}%".format(
short_BOL_adjusted.loc[np.isnan(
short_BOL_adjusted.RULE),'RULE'].shape[0]/short_BOL_adjusted.shape[0]*100))
warnings.simplefilter('ignore')
# From Part1 Notebook
P99_bol_s = 18.0
# Heatmap
short_BOL_heat = short_BOL_adjusted.dropna(subset=['GLC_AL']).copy()
short_BOL_heat['A'] = ((short_BOL_heat['GLCTIMER_AL'] -
short_BOL_heat['STARTTIME'])/pd.Timedelta('1 minute'))*60
short_BOL_heat=short_BOL_heat.set_index('A')
#Define the cell size on the heat map
glc_base=25
ins_base=2
#Define heatmap limits
xlow=0
xhigh=P99_bol_s
ylow=90
yhigh=400
xhigh-=ins_base
#create categories for constructing the heatmap
short_BOL_heat['glc_cat']=(short_BOL_heat['GLC_AL'].apply(
lambda x: myround(x, glc_base))/glc_base)
short_BOL_heat['ins_cat']=(short_BOL_heat['INPUT'].apply(
lambda x: myround(x, ins_base))/ins_base)
#create dataframe for the heatmap using pivot_table
heat_df=pd.pivot_table(short_BOL_heat, values='ICUSTAY_ID', index=['glc_cat']
, columns=['ins_cat'], aggfunc='count')
#trim the heatmap dataframe based on the limits specified
heat_df=heat_df.loc[ylow/glc_base:yhigh/glc_base:,xlow/ins_base:xhigh/ins_base:]
#create labels for the x and y ticks
heat_xtick=np.arange(xlow, xhigh+ins_base*2, ins_base)
heat_ytick=np.arange(ylow, yhigh+glc_base*1, glc_base)
#plot heatmap
sns.set(style="ticks", font_scale=1.2)
fig, ax = plt.subplots(1, 1, figsize = (12, 12))
ax=sns.heatmap(heat_df, robust=True, annot=True, cmap="BuPu", fmt="2.0f"
, xticklabels=heat_xtick, yticklabels=heat_ytick
, norm=colors.PowerNorm(gamma=1./2.))
#titles
plt.title(f"Glucose readings prior to a bolus of short-acting insulin\n(n={int(heat_df.sum().values.sum())})",
fontsize=25)
plt.ylabel("Blood glucose (mg/dL)", fontsize=20)
plt.xlabel("Insulin dose (U)", fontsize=20)
#invert axis and offset labels
ax.invert_yaxis()
ax.set_yticks(np.arange(0, ((yhigh-ylow)/glc_base)+1))
ax.set_xticks(np.arange(0, ((xhigh-xlow)/ins_base)+2))
# Save figure, comment out if not needed.
fig.savefig(base_dir+'/DataExtraction/ShortBolusHeatMap.png', bbox_inches='tight',
dpi=fig.dpi) | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Infusions of short-acting insulin | warnings.simplefilter('default')
# Filtering for only short insulin infusions and all sources of glucose
short_INF_adjusted = ICUinputs_adjusted[
(ICUinputs_adjusted['INSULINTYPE']=="Short") &
(ICUinputs_adjusted['EVENT'].str.contains('INFUSION'))].copy()
# Get statistics
display(HTML('<h5>Counts</h5>'))
print("Entries: {}".format(short_INF_adjusted.shape[0]))
print("Patients: {}".format(short_INF_adjusted['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(short_INF_adjusted['HADM_ID'].nunique()))
print('ICU stays: {}'.format(short_INF_adjusted['ICUSTAY_ID'].nunique()))
display(short_INF_adjusted[['INPUT_HRS','GLC_AL']].describe())
warnings.simplefilter('ignore')
# Heatmap
short_INF_heat = short_INF_adjusted.dropna(subset=['GLC_AL']).copy()
short_INF_heat['A'] = ((short_INF_heat['GLCTIMER_AL'] -
short_INF_heat['STARTTIME'])/pd.Timedelta('1 minute'))*60
short_INF_heat=short_INF_heat.set_index('A')
#Define the cell size on the heat map
glc_base=25
ins_base=2
#Define heatmap limits
xlow=0
xhigh=P99_bol_s
ylow=90
yhigh=400
xhigh-=ins_base
#create categories for constructing the heatmap
short_INF_heat['glc_cat']=(short_INF_heat['GLC_AL'].apply(
lambda x: myround(x, glc_base))/glc_base)
short_INF_heat['ins_cat']=(short_INF_heat['INPUT'].apply(
lambda x: myround(x, ins_base))/ins_base)
#create dataframe for the heatmap using pivot_table
heat_df_i=pd.pivot_table(short_INF_heat, values='ICUSTAY_ID', index=['glc_cat']
, columns=['ins_cat'], aggfunc='count')
#trim the heatmap dataframe based on the limits specified
heat_df_i=heat_df_i.loc[ylow/glc_base:yhigh/glc_base:,xlow/ins_base:xhigh/ins_base:]
#create labels for the x and y ticks
heat_xtick=np.arange(xlow, xhigh+ins_base*2, ins_base)
heat_ytick=np.arange(ylow, yhigh+glc_base*1, glc_base)
#plot heatmap
sns.set(style="ticks", font_scale=1.2)
fig, ax = plt.subplots(1, 1, figsize = (12, 12))
ax=sns.heatmap(heat_df_i, robust=True, annot=True, cmap="BuPu", fmt="2.0f"
, xticklabels=heat_xtick, yticklabels=heat_ytick
, norm=colors.PowerNorm(gamma=1./2.))
#titles
plt.title(f"Glucose readings prior to infusions of short-acting insulin\n(n={int(heat_df_i.sum().values.sum())})",
fontsize=25)
plt.ylabel("Blood glucose (mg/dL)", fontsize=20)
plt.xlabel("Insulin dose (U/hr)", fontsize=20)
#invert axis and offset labels
ax.invert_yaxis()
ax.set_yticks(np.arange(0, ((yhigh-ylow)/glc_base)+1))
ax.set_xticks(np.arange(0, ((xhigh-xlow)/ins_base)+2))
# Save figure, comment out if not needed.
fig.savefig(base_dir+'/DataExtraction/ShortInfxnHeatMap.png',
bbox_inches='tight',dpi=fig.dpi) | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Boluses of intermediate-acting insulin | warnings.simplefilter('default')
# Filtering for only intermediate-acting insulin boluses and all sources of glucose
inter_BOL_adjusted = ICUinputs_adjusted[
(ICUinputs_adjusted['INSULINTYPE']=="Intermediate") &
(ICUinputs_adjusted['EVENT'].str.contains('BOLUS'))].copy()
# Get statistics
display(HTML('<h5>Contains the following information</h5>'))
print("Entries: {}".format(inter_BOL_adjusted.shape[0]))
print("Patients: {}".format(inter_BOL_adjusted['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(inter_BOL_adjusted['HADM_ID'].nunique()))
print('ICU stays: {}'.format(inter_BOL_adjusted['ICUSTAY_ID'].nunique()))
display(inter_BOL_adjusted[['INPUT','GLC_AL']].describe())
# Aligned and not aligned entries
display(HTML('<h2>Bolus entries of intermediate-acting insulin</h2>'))
print("Entries that were aligned: {}".format(
inter_BOL_adjusted.shape[0]-inter_BOL_adjusted.loc[np.isnan(
inter_BOL_adjusted.RULE),'RULE'].shape[0]))
print("Entries that weren't aligned: {}".format(
inter_BOL_adjusted.loc[np.isnan(inter_BOL_adjusted.RULE),'RULE'].shape[0]))
print("Non-paired percentage: {:0.2f}%".format(
inter_BOL_adjusted.loc[np.isnan(
inter_BOL_adjusted.RULE),'RULE'].shape[0]/inter_BOL_adjusted.shape[0]*100)) | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Boluses of long-acting insulin | warnings.simplefilter('default')
# Filtering for only long-acting insulin boluses and all sources of glucose
long_BOL_adjusted = ICUinputs_adjusted[
(ICUinputs_adjusted['INSULINTYPE']=="Long") &
(ICUinputs_adjusted['EVENT'].str.contains('BOLUS'))].copy()
# Get statistics
display(HTML('<h5>Contains the following information</h5>'))
print("Entries: {}".format(long_BOL_adjusted.shape[0]))
print("Patients: {}".format(long_BOL_adjusted['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(long_BOL_adjusted['HADM_ID'].nunique()))
print('ICU stays: {}'.format(long_BOL_adjusted['ICUSTAY_ID'].nunique()))
display(long_BOL_adjusted[['INPUT','GLC_AL']].describe())
# Aligned and not aligned entries
display(HTML('<h2>Bolus entries of long-acting insulin</h2>'))
print("Entries that were aligned: {}".format(
long_BOL_adjusted.shape[0]-long_BOL_adjusted.loc[np.isnan(
long_BOL_adjusted.RULE),'RULE'].shape[0]))
print("Entries that weren't aligned: {}".format(
long_BOL_adjusted.loc[np.isnan(long_BOL_adjusted.RULE),'RULE'].shape[0]))
print("Non-paired percentage: {:0.2f}%".format(
long_BOL_adjusted.loc[np.isnan(
long_BOL_adjusted.RULE),'RULE'].shape[0]/long_BOL_adjusted.shape[0]*100)) | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Non-adjusted datasets. To complement this analysis, and to show the difference between implementing and not implementing the proposed rules, three cohorts were created: a) no pairing rules applied, b) pairing a glucose reading recorded within 60 minutes of the insulin event instead of 90 minutes, and c) pairing a glucose reading. Scenario C: Glucose readings CURATED and insulin inputs CURATED but NO RULES. * **Note 1**: Add the name of your dataset hosted in BigQuery (Line 45). * **Note 2**: The table `glucose_insulin_ICU` was created in the `1.0-ara-pairing-I.ipynb` notebook. It is equivalent to `glucose_insulin_ICU.csv`. | # GLUCOSE READINGS CURATED AND INSULIN INPUTS CURATED but no RULES
query = """
SELECT pg.*
, (CASE
WHEN pg.GLCSOURCE_AL IS null
AND (LEAD(pg.GLCTIMER_AL,1) OVER(PARTITION BY pg.ICUSTAY_ID ORDER BY pg.TIMER) = pg.GLCTIMER)
THEN 1
WHEN pg.GLCSOURCE_AL IS null
AND (LAG(pg.GLCTIMER_AL,1) OVER(PARTITION BY pg.ICUSTAY_ID ORDER BY pg.timer) = pg.GLCTIMER)
AND LAG(endtime,1) OVER(PARTITION BY ICUSTAY_ID ORDER BY timer) IS NOT null
THEN 1
ELSE null END) AS Repeated
FROM(SELECT p1.*
, (CASE
-- Select the previous glucose value regardless the time distance
WHEN p1.EVENT IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
THEN (LAG(p1.GLC,1) OVER(w))
ELSE null END
) AS GLC_AL
, (CASE
-- Select the previous glucose value regardless the time distance
WHEN p1.EVENT IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
THEN (LAG(p1.TIMER,1) OVER(w))
ELSE null END
) AS GLCTIMER_AL
, (CASE
-- Select the previous glucose value regardless the time distance
WHEN p1.EVENT IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
ELSE null END
) AS GLCSOURCE_AL
, (CASE
-- Select the previous glucose value regardless the time distance
WHEN p1.EVENT IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
THEN 1
ELSE null END
) AS RULE
FROM `your_dataset.glucose_insulin_ICU` AS p1
WINDOW w AS(PARTITION BY CAST(p1.HADM_ID AS INT64) ORDER BY p1.TIMER)
) AS pg
"""
glc_curALins_cur = q(query,projectid)
qwe = glc_curALins_cur[(glc_curALins_cur['INSULINTYPE']=="Short") &
(glc_curALins_cur['EVENT'].str.contains('BOLUS'))].copy()
display(HTML('<h4>Statistics for both glucose readings and insulin inputs CURATED</h4>'))
print("Total entries: {}".format(glc_curALins_cur.shape[0]))
display(qwe[['INPUT','GLC_AL']].describe())
display(HTML('<h5>Contains the following information (only for short-acting)</h5>'))
print("Boluses of short-acting insulin: {}".format(qwe.shape[0]))
print("Patients: {} out of {}".format(qwe['SUBJECT_ID'].nunique(),
glc_curALins_cur['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(qwe['HADM_ID'].nunique()))
print('ICU stays: {}'.format(qwe['ICUSTAY_ID'].nunique()))
# Rules
display(HTML('<h5>Frequency of the rules</h5>'))
print(qwe['RULE'].value_counts())
# Save as CSV file, uncomment and modify as needed.
# qwe.to_csv(base_dir+"/DataExtraction/BolusesCUR_nr.csv", index=False,
# encoding='utf8', header = True)
del query,qwe | /usr/lib/python3.6/json/decoder.py:355: ResourceWarning: unclosed <ssl.SSLSocket fd=63, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.28.0.2', 52706), raddr=('74.125.142.95', 443)>
obj, end = self.scan_once(s, idx)
/usr/lib/python3.6/json/decoder.py:355: ResourceWarning: unclosed <ssl.SSLSocket fd=64, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.28.0.2', 52660), raddr=('74.125.20.95', 443)>
obj, end = self.scan_once(s, idx)
| MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
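With the no-rules cohort in memory, the effect of the pairing rules can be checked directly against the rules-based cohort built earlier in this notebook. The sketch below is a minimal comparison and not part of the original analysis: it assumes `ICUinputs_adjusted` (90-minute rules) and `glc_curALins_cur` (no rules) are still in memory, that the column names match those used above, and that the `TIMER`/`GLCTIMER_AL` columns were parsed as datetimes by the `q()` helper.
import pandas as pd
def pairing_summary(df, label):
    # Summarise pairing coverage for short-acting insulin boluses in one cohort
    bol = df[(df['INSULINTYPE'] == 'Short') & (df['EVENT'].str.contains('BOLUS'))]
    paired = bol['GLC_AL'].notnull()
    # Absolute time gap between the insulin event and its paired glucose reading
    gap = (bol.loc[paired, 'GLCTIMER_AL'] - bol.loc[paired, 'TIMER']).abs()
    return {'cohort': label,
            'bolus_events': int(bol.shape[0]),
            'paired': int(paired.sum()),
            'paired_pct': round(100 * paired.mean(), 2),
            'median_gap_min': round(gap.dt.total_seconds().median() / 60, 1)}
display(pd.DataFrame([pairing_summary(ICUinputs_adjusted, 'rules, 90 min'),
                      pairing_summary(glc_curALins_cur, 'no rules')]))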
Scenario B: Glucose readings CURATED and insulin inputs CURATED, paired with rules (60 min). * **Note 1**: Substitute `your_dataset` with the name of your dataset ID (Line 849) where you hosted/stored the tables created in the `1.0-ara-pairing-I.ipynb` notebook. * **Note 2**: The table `glucose_insulin_ICU` was created in the `1.0-ara-pairing-I.ipynb` notebook. It is equivalent to `glucose_insulin_ICU.csv`. | # Import dataset adjusted or aligned with 60 min
query ="""
WITH pg AS(
SELECT p1.*
-- Column GLC_AL that would gather paired glucose values according to the proposed rules
,(CASE
-- 1ST CLAUSE
-- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
      -- Posterior glucose value is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
      -- Preceding glucose 2 rows above is equal to or greater than 90 mg/dL
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN (LAG(p1.GLC,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Posterior glucose occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
      -- Preceding glucose 1 row above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN (LAG(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another insulin event
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
      -- Posterior glucose (2 rows below) occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 1 row above occurs up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      -- Time-gap between preceding glucose and insulin should be equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLC,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
      -- glucose within 60 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
      -- Time-gap between posterior glucose and insulin is equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the POSTERIOR glucose (1 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.GLC,1) OVER(w))
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS GLC_AL
-- ---------------------------------------------------------------------------------------------
-- Column GLCTIMER_AL that would gather the timestamp of the paired glucose reading
, (CASE
-- 1ST CLAUSE
      -- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
      -- Posterior glucose value is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
      -- Preceding glucose 2 rows above is equal to or greater than 90 mg/dL
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN (LAG(p1.TIMER,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Posterior glucose occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
      -- Preceding glucose 1 row above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN (LAG(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
      -- Posterior glucose (2 rows below) occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 1 row above occurs up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.TIMER,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      -- Time-gap between preceding glucose and insulin should be equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.TIMER,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
      -- glucose within 60 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
      -- Time-gap between posterior glucose and insulin is equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the timestamp of the POSTERIOR glucose (1 rows below) measurement that gathers the
-- previous conditions
THEN (LEAD(p1.TIMER,1) OVER(w))
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS GLCTIMER_AL
-- -----------------------------------------------------------------------------------------------
    -- Column GLCSOURCE_AL that would indicate whether the paired glucose reading is a
    -- fingerstick or a lab analyzer sample
, (CASE
-- 1ST CLAUSE
      -- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
      -- Posterior glucose value is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
      -- Preceding glucose 2 rows above is equal to or greater than 90 mg/dL
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN (LAG(p1.GLCSOURCE,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Posterior glucose occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
      -- Preceding glucose 1 row above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
      -- Posterior glucose (2 rows below) occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 1 row above occurs up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN (LEAD(p1.GLCSOURCE,2) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      -- Time-gap between preceding glucose and insulin should be equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN (LAG(p1.GLCSOURCE,1) OVER(w))
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
      -- glucose within 60 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
      -- Time-gap between posterior glucose and insulin is equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
      -- Return whether the POSTERIOR glucose (1 row below) measurement that gathers the previous
      -- conditions is a fingerstick or a lab analyzer sample
THEN (LEAD(p1.GLCSOURCE,1) OVER(w))
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS GLCSOURCE_AL
-- ---------------------------------------------------------------------------------------------
    -- Column RULE that indicates which pairing rule is applied for the i-th case
, (CASE
-- 1ST CLAUSE
      -- When previous and following rows are glucose readings, select the glucose value that
-- has the shortest time distance to insulin bolus/infusion.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding and posterior glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
AND ( -- Preceding glucose
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN 1
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 2ND CLAUSE
-- In case the posterior glucose reading is higher than the preceding
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding and posterior glucose measurements
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Time-gap between glucose and insulin should be equal to or less than 60 minutes
-- Preceding glucose
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose
AND ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
      -- Posterior glucose value is higher than the preceding glucose
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN 3
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 3RD CLAUSE
-- When previous timestamp is an insulin bolus/infusion event
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above and regular insulin
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND (LAG(p1.INSULINTYPE,2) OVER(w)) IN('Short')
-- One row above there is another insulin event
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      AND ( -- Preceding glucose has a shorter or equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
      -- Preceding glucose 2 rows above is equal to or greater than 90 mg/dL
AND (LAG(p1.GLC,2) OVER(w)) >= 90
-- Posterior glucose value is lower than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) >= LEAD(p1.GLC,1) OVER(w)
-- Return the preceding glucose value 2 rows above
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 4TH CLAUSE
-- When previous timestamp is for Insulin bolus/infusion but posterior glucose
-- is higher than the preceding glucose 2 rows above.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 2 rows above
AND (LAG(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row above there is another regular insulin
AND (LAG(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LAG(p1.INSULINTYPE,1) OVER(w)) IN('Short')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE))
)
      -- Posterior glucose occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,1) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 2 rows above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Posterior glucose value is higher than the preceding glucose 2 rows above
AND LAG(p1.GLC,2) OVER(w) < LEAD(p1.GLC,1) OVER(w)
-- Return the POSTERIOR glucose measurement that gathers the previous conditions
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 5TH CLAUSE
-- When posterior timestamp is for Insulin bolus/infusion but preceding is glucose
-- and there is a glucose 2 rows below.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a shorter OR equal time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <=
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Preceding glucose reading is greater or equal to 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
      -- Preceding glucose 1 row above occurred up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is lower than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) >= LEAD(p1.GLC,2) OVER(w)
-- Return the PRECEDING glucose (1 row above) measurement that gathers the previous conditions
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 6TH CLAUSE
-- When posterior glucose reading (2 rows below) is higher than preceding glucose.
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 2 rows below
AND (LEAD(p1.GLCSOURCE,2) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- One row BELOW there is another regular insulin
AND (LEAD(p1.EVENT,1) OVER(w)) IN('BOLUS_INYECTION','BOLUS_PUSH','INFUSION')
AND (LEAD(p1.INSULINTYPE,1) OVER(w)) IN('Short')
AND ( -- Preceding glucose has a longer time-gap to insulin than the posterior
ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) >
ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,2) OVER(w)), p1.TIMER, MINUTE))
)
-- Posterior glucose reading is greater or equal to 90 mg/dL
AND (LEAD(p1.GLC,2) OVER(w)) >= 90
      -- Posterior glucose (2 rows below) occurs within 60 minutes
AND ABS(TIMESTAMP_DIFF(LEAD(p1.timer,2) OVER(w), p1.timer, MINUTE)) <= 60
      -- Preceding glucose 1 row above occurs up to 60 minutes before
AND ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60
-- Posterior glucose value (2 rows below) is higher than the preceding glucose 1 row above
AND LAG(p1.GLC,1) OVER(w) < LEAD(p1.GLC,2) OVER(w)
-- Return the POSTERIOR glucose (2 rows below) measurement that gathers the previous conditions
THEN 4
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 7TH CLAUSE
-- When it is the last insulin dose and record in an ICU stay
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Identify preceding glucose reading
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
      -- Time-gap between preceding glucose and insulin should be equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Preceding glucose should be equal or greater than 90 mg/dL
AND (LAG(p1.GLC,1) OVER(w)) >= 90
-- Return the PRECEDING glucose measurement that gathers the previous conditions
THEN 1
-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-- 8TH CLAUSE
-- When there is no preceding glucose reading within 90 min, but there is a posterior
      -- glucose within 60 min
-- Identify an insulin event either bolus or infusion
WHEN p1.EVENT IN('BOLUS_INYECTION', 'BOLUS_PUSH', 'INFUSION')
-- Regular insulin or short-acting
AND p1.INSULINTYPE IN('Short')
-- Identify preceding glucose reading 1 row above
AND (LAG(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Identify posterior glucose reading 1 row below
AND (LEAD(p1.GLCSOURCE,1) OVER(w)) IN('BLOOD', 'FINGERSTICK')
-- Time-gap between preceding glucose and insulin is greater than 90 minutes
AND (ABS(TIMESTAMP_DIFF((LAG(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) > 90)
      -- Time-gap between posterior glucose and insulin is equal to or less than 60 minutes
AND (ABS(TIMESTAMP_DIFF((LEAD(p1.TIMER,1) OVER(w)), p1.TIMER, MINUTE)) <= 60)
-- Posterior glucose should be equal or greater than 90 mg/dL
AND (LEAD(p1.GLC,1) OVER(w)) >= 90
-- Return the Rule number applied
THEN 2
-- Otherwise, return null value and finish CASE clause
ELSE null END
) AS RULE
FROM `your_dataset.glucose_insulin_ICU` AS p1
WINDOW w AS(PARTITION BY CAST(p1.HADM_ID AS INT64) ORDER BY p1.TIMER)
)
-- Create a column that identifies glucose readings that were paired and are duplicated in pg
SELECT pg.*
, (CASE
WHEN pg.GLCSOURCE_AL IS null
AND (LEAD(pg.GLCTIMER_AL,1) OVER(x) = pg.GLCTIMER)
THEN 1
WHEN pg.GLCSOURCE_AL IS null
AND (LAG(pg.GLCTIMER_AL,1) OVER(x) = pg.GLCTIMER)
AND LAG(endtime,1) OVER(x) IS NOT null
THEN 1
ELSE null END) AS Repeated
FROM pg
WINDOW x AS(PARTITION BY ICUSTAY_ID ORDER BY pg.timer)
"""
ICU60min_adjusted = q(query,projectid)
del query
# Convert dtypes
ICU60min_adjusted[["Repeated","INFXSTOP","RULE"]] = ICU60min_adjusted[
["Repeated","INFXSTOP","RULE"]].apply(pd.to_numeric, errors='coerce')
# Remove values that are repeated due to the SQL query
ICU60min_adjusted = ICU60min_adjusted[ICU60min_adjusted['Repeated']!=1]
# Get statistics
display(HTML('<h5>Contains the following information</h5>'))
print("Entries: {}".format(ICU60min_adjusted.shape[0]))
print("Patients: {}".format(ICU60min_adjusted['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(ICU60min_adjusted['HADM_ID'].nunique()))
print('ICU stays: {}'.format(ICU60min_adjusted['ICUSTAY_ID'].nunique()))
# Rules
display(HTML('<h5>Frequency of the rules</h5>'))
print(ICU60min_adjusted['RULE'].value_counts()) | /usr/lib/python3.6/json/decoder.py:355: ResourceWarning: unclosed <ssl.SSLSocket fd=80, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.28.0.2', 52770), raddr=('74.125.20.95', 443)>
obj, end = self.scan_once(s, idx)
/usr/lib/python3.6/json/decoder.py:355: ResourceWarning: unclosed <ssl.SSLSocket fd=79, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.28.0.2', 53808), raddr=('74.125.195.95', 443)>
obj, end = self.scan_once(s, idx)
| MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
Boluses of short-acting insulin | # Filtering for only short insulin boluses and all sources of glucose
short_BOL_60 = ICU60min_adjusted[(ICU60min_adjusted['INSULINTYPE']=="Short") &
(ICU60min_adjusted['EVENT'].str.contains('BOLUS'))].copy()
# Get statistics
display(HTML('<h5>Contains the following information</h5>'))
print("Entries: {}".format(short_BOL_60.shape[0]))
print("Patients: {}".format(short_BOL_60['SUBJECT_ID'].nunique()))
print("Hospital admissions: {}".format(short_BOL_60['HADM_ID'].nunique()))
print('ICU stays: {}'.format(short_BOL_60['ICUSTAY_ID'].nunique()))
display(short_BOL_60[['INPUT','GLC_AL']].describe())
# Save as CSV file, uncomment and modify as needed.
# short_BOL_60.to_csv(base_dir+"/DataExtraction/BolusesCUR_60.csv", index=False,
# encoding='utf8', header = True) | _____no_output_____ | MIT | notebooks/ICUglycemia/Notebooks/2_0_ara_pairing_II.ipynb | aldo-arevalo/mimic-code |
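As a quick follow-up (an optional sketch, reusing the column names from the cell above), one can check how the paired glucose values relate to the bolus dose in this subset.
# Correlation between the insulin bolus dose and the aligned glucose value
print(short_BOL_60[['INPUT', 'GLC_AL']].corr())
# Uncomment for a scatter plot (requires matplotlib)
# short_BOL_60.plot.scatter(x='INPUT', y='GLC_AL', alpha=0.3)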
Loops and Conditions Loops provide a means of iteration, while conditions allow or block code execution when a specified condition is met. For Loop and while Loop | L = ['apple', 'banana','kite','cellphone']
for item in L:
print(item)
range(5), range(5,100), sum(range(100))
L=[]
for k in range(10):
L.append(10*k)
L
D = {}
for i in range(5):
for j in range(5):
if i == j :
D.update({(i,j) : 10*i+j})
elif i!=j :
D.update({(i,j): 100*i+j})
print(D)
for i, item in enumerate(['apple', 'banana','kite','cellphone']):
print("The",i,"th element is:", item)
A=[10*k**2+5*k+1 for k in range(10)]
print(A)
AA=[[10*x**2+5*y+1 for x in range(3)] for y in range(3)]
print(AA)
for i in range(3):
for j in range(3):
print("The", "(",i,",",j,")","th element is: ", AA[i][j])
i=0
while i<5:
print( i, "th turn")
i = i+1
for i in range(10):
print(i)
if i == 3:
break
import random as random
for i in range(10):
r = random.uniform(1,10)
if r<2 and r>0:
print("It is smaller than 2 and greater than 1","|",r)
elif r<4 and r>2:
print("It is smaller than 4 and greater than 2","|",r)
elif r<6 and r>4:
print("It is smaller tha 6 and greater than 4","|",r)
elif r<8 and r>6:
print("It is smaller than 8 and greate than 6","|",r)
elif r<10 and r>8:
print("It is smaller than 10 and greater than 8","|",r)
s = 0
for i in range(1000+1):
s = s+i
s
s = 0
LE = []
for i in range(1001):
if i%2 ==0:
LE.append(i)
s= s+i
s, sum(LE) | _____no_output_____ | MIT | loop.ipynb | dineshyadav2020/P_W_Files |
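For comparison (just an illustration of the same result as the loop above), the even numbers up to 1000 can also be summed directly using range's step argument:
sum(range(0, 1001, 2))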
Copyright 2019 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Transformer Chatbot This tutorial trains a Transformer model to be a chatbot. This is an advanced example that assumes knowledge of [text generation](https://tensorflow.org/alpha/tutorials/text/text_generation), [attention](https://www.tensorflow.org/alpha/tutorials/text/nmt_with_attention) and [transformer](https://www.tensorflow.org/alpha/tutorials/text/transformer). The core idea behind the Transformer model is *self-attention*—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections *Scaled dot product attention* and *Multi-head attention*. Note: The model architecture is identical to the example in [Transformer model for language understanding](https://www.tensorflow.org/alpha/tutorials/text/transformer), and we demonstrate how to implement the same model in the Functional approach instead of Subclassing. | from __future__ import absolute_import, division, print_function, unicode_literals
try:
# The %tensorflow_version magic only works in colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.random.set_seed(1234)
!pip install tfds-nightly
import tensorflow_datasets as tfds
import os
import re
import numpy as np
import matplotlib.pyplot as plt
| Collecting tf-nightly-gpu-2.0-preview==2.0.0.dev20190520
[?25l Downloading https://files.pythonhosted.org/packages/c9/c1/fcaf4f6873777da2cd3a7a8ac3c9648cef7c7413f13b8135521eb9b9804a/tf_nightly_gpu_2.0_preview-2.0.0.dev20190520-cp36-cp36m-manylinux1_x86_64.whl (349.0MB)
[K |████████████████████████████████| 349.0MB 31kB/s
[?25hRequirement already satisfied: tfds-nightly in /usr/local/lib/python3.6/dist-packages (1.0.2.dev201905140105)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (0.33.4)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (0.2.2)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (1.1.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (1.12.0)
Collecting wrapt>=1.11.1 (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520)
Downloading https://files.pythonhosted.org/packages/67/b2/0f71ca90b0ade7fad27e3d20327c996c6252a2ffe88f50a95bba7434eda9/wrapt-1.11.1.tar.gz
Requirement already satisfied: numpy<2.0,>=1.14.5 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (1.16.3)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (3.7.1)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (0.7.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (1.15.0)
Collecting tb-nightly<1.15.0a0,>=1.14.0a0 (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520)
[?25l Downloading https://files.pythonhosted.org/packages/6f/99/4220b50dc87814988e969cc859c07d070423bea820bc24d16c2023057eb6/tb_nightly-1.14.0a20190520-py3-none-any.whl (3.1MB)
[K |████████████████████████████████| 3.1MB 33.7MB/s
[?25hCollecting google-pasta>=0.1.6 (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520)
[?25l Downloading https://files.pythonhosted.org/packages/f9/68/a14620bfb042691f532dcde8576ff82ee82e4c003cdc0a3dbee5f289cee6/google_pasta-0.1.6-py3-none-any.whl (51kB)
[K |████████████████████████████████| 61kB 27.4MB/s
[?25hRequirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (1.0.7)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (1.0.9)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (0.7.1)
Collecting tensorflow-estimator-2.0-preview (from tf-nightly-gpu-2.0-preview==2.0.0.dev20190520)
[?25l Downloading https://files.pythonhosted.org/packages/71/e7/779651eca277d48486ae03d007162d37c93449bc29358fbe748e13639734/tensorflow_estimator_2.0_preview-1.14.0.dev2019052000-py2.py3-none-any.whl (427kB)
[K |████████████████████████████████| 430kB 51.7MB/s
[?25hRequirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (2.2.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (0.16.0)
Requirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (0.2.9)
Requirement already satisfied: psutil in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (5.4.8)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (2.21.0)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (0.13.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from tfds-nightly) (4.28.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (41.0.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<1.15.0a0,>=1.14.0a0->tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (0.15.3)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<1.15.0a0,>=1.14.0a0->tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (3.1)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tf-nightly-gpu-2.0-preview==2.0.0.dev20190520) (2.8.0)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->tfds-nightly) (1.24.3)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->tfds-nightly) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->tfds-nightly) (2019.3.9)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->tfds-nightly) (2.8)
Requirement already satisfied: googleapis-common-protos in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tfds-nightly) (1.5.10)
Building wheels for collected packages: wrapt
Building wheel for wrapt (setup.py) ... [?25l[?25hdone
Stored in directory: /root/.cache/pip/wheels/89/67/41/63cbf0f6ac0a6156588b9587be4db5565f8c6d8ccef98202fc
Successfully built wrapt
[31mERROR: thinc 6.12.1 has requirement wrapt<1.11.0,>=1.10.0, but you'll have wrapt 1.11.1 which is incompatible.[0m
Installing collected packages: wrapt, tb-nightly, google-pasta, tensorflow-estimator-2.0-preview, tf-nightly-gpu-2.0-preview
Found existing installation: wrapt 1.10.11
Uninstalling wrapt-1.10.11:
Successfully uninstalled wrapt-1.10.11
Successfully installed google-pasta-0.1.6 tb-nightly-1.14.0a20190520 tensorflow-estimator-2.0-preview-1.14.0.dev2019052000 tf-nightly-gpu-2.0-preview-2.0.0.dev20190520 wrapt-1.11.1
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Prepare Dataset We will use the conversations in movies and TV shows provided by the [Cornell Movie-Dialogs Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html), which contains more than 220 thousand conversational exchanges between more than 10k pairs of movie characters, as our dataset. `movie_conversations.txt` contains a list of the conversation IDs and `movie_lines.txt` contains the text associated with each conversation ID. For further information regarding the dataset, please check the README file in the zip file. | path_to_zip = tf.keras.utils.get_file(
'cornell_movie_dialogs.zip',
origin=
'http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip',
extract=True)
path_to_dataset = os.path.join(
os.path.dirname(path_to_zip), "cornell movie-dialogs corpus")
path_to_movie_lines = os.path.join(path_to_dataset, 'movie_lines.txt')
path_to_movie_conversations = os.path.join(path_to_dataset,
'movie_conversations.txt') | Downloading data from http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip
9920512/9916637 [==============================] - 1s 0us/step
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Load and preprocess data To keep this example simple and fast, we are limiting the maximum number of training samples to `MAX_SAMPLES=50000` and the maximum length of the sentence to `MAX_LENGTH=40`. We preprocess our dataset in the following order: * Extract `MAX_SAMPLES` conversation pairs into lists of `questions` and `answers`. * Preprocess each sentence by removing special characters. * Build the tokenizer (map text to ID and ID to text) using [TensorFlow Datasets SubwordTextEncoder](https://www.tensorflow.org/datasets/api_docs/python/tfds/features/text/SubwordTextEncoder). * Tokenize each sentence and add `START_TOKEN` and `END_TOKEN` to indicate the start and end of each sentence. * Filter out sentences that have more than `MAX_LENGTH` tokens. * Pad tokenized sentences to `MAX_LENGTH` | # Maximum number of samples to preprocess
MAX_SAMPLES = 50000
def preprocess_sentence(sentence):
sentence = sentence.lower().strip()
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
sentence = re.sub(r"([?.!,])", r" \1 ", sentence)
sentence = re.sub(r'[" "]+', " ", sentence)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
sentence = re.sub(r"[^a-zA-Z?.!,]+", " ", sentence)
sentence = sentence.strip()
  # start and end tokens are added later, in tokenize_and_filter
return sentence
def load_conversations():
# dictionary of line id to text
id2line = {}
with open(path_to_movie_lines, errors='ignore') as file:
lines = file.readlines()
for line in lines:
parts = line.replace('\n', '').split(' +++$+++ ')
id2line[parts[0]] = parts[4]
inputs, outputs = [], []
with open(path_to_movie_conversations, 'r') as file:
lines = file.readlines()
for line in lines:
parts = line.replace('\n', '').split(' +++$+++ ')
# get conversation in a list of line ID
conversation = [line[1:-1] for line in parts[3][1:-1].split(', ')]
for i in range(len(conversation) - 1):
inputs.append(preprocess_sentence(id2line[conversation[i]]))
outputs.append(preprocess_sentence(id2line[conversation[i + 1]]))
if len(inputs) >= MAX_SAMPLES:
return inputs, outputs
return inputs, outputs
questions, answers = load_conversations()
print('Sample question: {}'.format(questions[20]))
print('Sample answer: {}'.format(answers[20]))
# Build tokenizer using tfds for both questions and answers
tokenizer = tfds.features.text.SubwordTextEncoder.build_from_corpus(
questions + answers, target_vocab_size=2**13)
# Define start and end token to indicate the start and end of a sentence
START_TOKEN, END_TOKEN = [tokenizer.vocab_size], [tokenizer.vocab_size + 1]
# Vocabulary size plus start and end token
VOCAB_SIZE = tokenizer.vocab_size + 2
print('Tokenized sample question: {}'.format(tokenizer.encode(questions[20])))
# Maximum sentence length
MAX_LENGTH = 40
# Tokenize, filter and pad sentences
def tokenize_and_filter(inputs, outputs):
tokenized_inputs, tokenized_outputs = [], []
for (sentence1, sentence2) in zip(inputs, outputs):
# tokenize sentence
sentence1 = START_TOKEN + tokenizer.encode(sentence1) + END_TOKEN
sentence2 = START_TOKEN + tokenizer.encode(sentence2) + END_TOKEN
# check tokenized sentence max length
if len(sentence1) <= MAX_LENGTH and len(sentence2) <= MAX_LENGTH:
tokenized_inputs.append(sentence1)
tokenized_outputs.append(sentence2)
# pad tokenized sentences
tokenized_inputs = tf.keras.preprocessing.sequence.pad_sequences(
tokenized_inputs, maxlen=MAX_LENGTH, padding='post')
tokenized_outputs = tf.keras.preprocessing.sequence.pad_sequences(
tokenized_outputs, maxlen=MAX_LENGTH, padding='post')
return tokenized_inputs, tokenized_outputs
questions, answers = tokenize_and_filter(questions, answers)
print('Vocab size: {}'.format(VOCAB_SIZE))
print('Number of samples: {}'.format(len(questions))) | Vocab size: 8333
Number of samples: 44095
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Create `tf.data.Dataset` We are going to use the [tf.data.Dataset API](https://www.tensorflow.org/api_docs/python/tf/data) to construct our input pipeline in order to utilize features like caching and prefetching to speed up the training process. The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next. During training this example uses teacher-forcing. Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step. As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word. To prevent the model from peeking at the expected output, the model uses a look-ahead mask. The target is divided into `decoder_inputs`, which is padded and fed as input to the decoder, and `cropped_targets`, which is used for calculating our loss and accuracy. | BATCH_SIZE = 64
BUFFER_SIZE = 20000
# decoder inputs use the previous target as input
# remove START_TOKEN from targets
dataset = tf.data.Dataset.from_tensor_slices((
{
'inputs': questions,
'dec_inputs': answers[:, :-1]
},
{
'outputs': answers[:, 1:]
},
))
dataset = dataset.cache()
dataset = dataset.shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
print(dataset) | <PrefetchDataset shapes: ({inputs: (None, 40), dec_inputs: (None, 39)}, {outputs: (None, 39)}), types: ({inputs: tf.int32, dec_inputs: tf.int32}, {outputs: tf.int32})>
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
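A small illustration of the teacher-forcing shift used above (the token values here are toy numbers, not taken from the dataset): the decoder input drops the last token, while the target drops the START token.
toy_answer = np.array([[START_TOKEN[0], 12, 7, 99, END_TOKEN[0], 0, 0]])
print('decoder input:', toy_answer[:, :-1])
print('target       :', toy_answer[:, 1:])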
Attention Scaled dot product Attention The scaled dot-product attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is: $$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$ As the softmax normalization is done on the `key`, its values decide the amount of importance given to the `query`. The output represents the multiplication of the attention weights and the `value` vector. This ensures that the words we want to focus on are kept as is and the irrelevant words are flushed out. The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into a region where it has small gradients, resulting in a very hard softmax. For example, consider that `query` and `key` have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of `dk`. Hence, the *square root of `dk`* is used for scaling (and not any other number) because the matmul of `query` and `key` should have a mean of 0 and variance of 1, so that we get a gentler softmax. The mask is multiplied with *-1e9 (close to negative infinity).* This is done because the mask is summed with the scaled matrix multiplication of `query` and `key` and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output. | def scaled_dot_product_attention(query, key, value, mask):
"""Calculate the attention weights. """
matmul_qk = tf.matmul(query, key, transpose_b=True)
# scale matmul_qk
depth = tf.cast(tf.shape(key)[-1], tf.float32)
logits = matmul_qk / tf.math.sqrt(depth)
# add the mask to zero out padding tokens
if mask is not None:
logits += (mask * -1e9)
# softmax is normalized on the last axis (seq_len_k)
attention_weights = tf.nn.softmax(logits, axis=-1)
output = tf.matmul(attention_weights, value)
return output | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
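A quick demonstration of the function above with hand-picked toy tensors (these values are illustrative, not from the paper): the query aligns with the second key, so the output is dominated by the second value row.
temp_k = tf.constant([[10, 0, 0], [0, 10, 0], [0, 0, 10], [0, 0, 10]], dtype=tf.float32)  # (4, 3)
temp_v = tf.constant([[1, 0], [10, 0], [100, 5], [1000, 6]], dtype=tf.float32)  # (4, 2)
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print(scaled_dot_product_attention(temp_q, temp_k, temp_v, None))  # approximately [[10, 0]]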
Multi-head attention Multi-head attention consists of four parts: * Linear layers and split into heads. * Scaled dot-product attention. * Concatenation of heads. * Final linear layer. Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads. The `scaled_dot_product_attention` defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using `tf.transpose` and `tf.reshape`) and put through a final `Dense` layer. Instead of one single attention head, `query`, `key`, and `value` are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split, each head has a reduced dimensionality, so the total computation cost is the same as a single-head attention with full dimensionality. | class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, name="multi_head_attention"):
super(MultiHeadAttention, self).__init__(name=name)
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.query_dense = tf.keras.layers.Dense(units=d_model)
self.key_dense = tf.keras.layers.Dense(units=d_model)
self.value_dense = tf.keras.layers.Dense(units=d_model)
self.dense = tf.keras.layers.Dense(units=d_model)
def split_heads(self, inputs, batch_size):
inputs = tf.reshape(
inputs, shape=(batch_size, -1, self.num_heads, self.depth))
return tf.transpose(inputs, perm=[0, 2, 1, 3])
def call(self, inputs):
query, key, value, mask = inputs['query'], inputs['key'], inputs[
'value'], inputs['mask']
batch_size = tf.shape(query)[0]
# linear layers
query = self.query_dense(query)
key = self.key_dense(key)
value = self.value_dense(value)
# split heads
query = self.split_heads(query, batch_size)
key = self.split_heads(key, batch_size)
value = self.split_heads(value, batch_size)
# scaled dot-product attention
scaled_attention = scaled_dot_product_attention(query, key, value, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])
# concatenation of heads
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model))
# final linear layer
outputs = self.dense(concat_attention)
return outputs | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
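A shape check for the layer defined above (toy tensor; the batch size of 1, sequence length of 60 and the all-zeros mask are arbitrary choices for illustration): the output keeps the (batch, seq_len, d_model) shape.
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))
zero_mask = tf.zeros((1, 1, 1, 60))  # masks nothing
out = temp_mha({'query': y, 'key': y, 'value': y, 'mask': zero_mask})
print(out.shape)  # (1, 60, 512)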
Transformer Masking `create_padding_mask` and `create_look_ahead_mask` are helper functions for creating masks that mask out padded tokens; we are going to use these helper functions as `tf.keras.layers.Lambda` layers. Mask all the pad tokens (value `0`) in the batch to ensure the model does not treat padding as input. | def create_padding_mask(x):
mask = tf.cast(tf.math.equal(x, 0), tf.float32)
# (batch_size, 1, 1, sequence length)
return mask[:, tf.newaxis, tf.newaxis, :]
print(create_padding_mask(tf.constant([[1, 2, 0, 3, 0], [0, 0, 0, 4, 5]]))) | tf.Tensor(
[[[[0. 0. 1. 0. 1.]]]
[[[1. 1. 1. 0. 0.]]]], shape=(2, 1, 1, 5), dtype=float32)
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Look-ahead mask to mask the future tokens in a sequence. We also mask out pad tokens. For example, to predict the third word, only the first and second words will be used | def create_look_ahead_mask(x):
seq_len = tf.shape(x)[1]
look_ahead_mask = 1 - tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0)
padding_mask = create_padding_mask(x)
return tf.maximum(look_ahead_mask, padding_mask)
print(create_look_ahead_mask(tf.constant([[1, 2, 0, 4, 5]]))) | tf.Tensor(
[[[[0. 1. 1. 1. 1.]
[0. 0. 1. 1. 1.]
[0. 0. 1. 1. 1.]
[0. 0. 1. 0. 1.]
[0. 0. 1. 0. 0.]]]], shape=(1, 1, 5, 5), dtype=float32)
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Positional encoding Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence. The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the *similarity of their meaning and their position in the sentence*, in the d-dimensional space. See the notebook on [positional encoding](https://github.com/tensorflow/examples/blob/master/community/en/position_encoding.ipynb) to learn more about it. The formula for calculating the positional encoding is as follows: $$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$ $$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$ | class PositionalEncoding(tf.keras.layers.Layer):
def __init__(self, position, d_model):
super(PositionalEncoding, self).__init__()
self.pos_encoding = self.positional_encoding(position, d_model)
def get_angles(self, position, i, d_model):
angles = 1 / tf.pow(10000, (2 * (i // 2)) / tf.cast(d_model, tf.float32))
return position * angles
def positional_encoding(self, position, d_model):
angle_rads = self.get_angles(
position=tf.range(position, dtype=tf.float32)[:, tf.newaxis],
i=tf.range(d_model, dtype=tf.float32)[tf.newaxis, :],
d_model=d_model)
# apply sin to even index in the array
sines = tf.math.sin(angle_rads[:, 0::2])
# apply cos to odd index in the array
cosines = tf.math.cos(angle_rads[:, 1::2])
pos_encoding = tf.concat([sines, cosines], axis=-1)
pos_encoding = pos_encoding[tf.newaxis, ...]
return tf.cast(pos_encoding, tf.float32)
def call(self, inputs):
return inputs + self.pos_encoding[:, :tf.shape(inputs)[1], :]
sample_pos_encoding = PositionalEncoding(50, 512)
plt.pcolormesh(sample_pos_encoding.pos_encoding.numpy()[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show() | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Encoder Layer Each encoder layer consists of sublayers: 1. Multi-head attention (with padding mask) 2. 2 dense layers followed by dropout. Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. | def encoder_layer(units, d_model, num_heads, dropout, name="encoder_layer"):
inputs = tf.keras.Input(shape=(None, d_model), name="inputs")
padding_mask = tf.keras.Input(shape=(1, 1, None), name="padding_mask")
attention = MultiHeadAttention(
d_model, num_heads, name="attention")({
'query': inputs,
'key': inputs,
'value': inputs,
'mask': padding_mask
})
attention = tf.keras.layers.Dropout(rate=dropout)(attention)
attention = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(inputs + attention)
outputs = tf.keras.layers.Dense(units=units, activation='relu')(attention)
outputs = tf.keras.layers.Dense(units=d_model)(outputs)
outputs = tf.keras.layers.Dropout(rate=dropout)(outputs)
outputs = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention + outputs)
return tf.keras.Model(
inputs=[inputs, padding_mask], outputs=outputs, name=name)
sample_encoder_layer = encoder_layer(
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_encoder_layer")
tf.keras.utils.plot_model(
sample_encoder_layer, to_file='encoder_layer.png', show_shapes=True) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Encoder The Encoder consists of: 1. Input Embedding 2. Positional Encoding 3. `num_layers` encoder layers. The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder. | def encoder(vocab_size,
num_layers,
units,
d_model,
num_heads,
dropout,
name="encoder"):
inputs = tf.keras.Input(shape=(None,), name="inputs")
padding_mask = tf.keras.Input(shape=(1, 1, None), name="padding_mask")
embeddings = tf.keras.layers.Embedding(vocab_size, d_model)(inputs)
embeddings *= tf.math.sqrt(tf.cast(d_model, tf.float32))
embeddings = PositionalEncoding(vocab_size, d_model)(embeddings)
outputs = tf.keras.layers.Dropout(rate=dropout)(embeddings)
for i in range(num_layers):
outputs = encoder_layer(
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
name="encoder_layer_{}".format(i),
)([outputs, padding_mask])
return tf.keras.Model(
inputs=[inputs, padding_mask], outputs=outputs, name=name)
sample_encoder = encoder(
vocab_size=8192,
num_layers=2,
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_encoder")
tf.keras.utils.plot_model(
sample_encoder, to_file='encoder.png', show_shapes=True) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Decoder Layer Each decoder layer consists of sublayers: 1. Masked multi-head attention (with look ahead mask and padding mask) 2. Multi-head attention (with padding mask). `value` and `key` receive the *encoder output* as inputs. `query` receives the *output from the masked multi-head attention sublayer.* 3. 2 dense layers followed by dropout. Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. As `query` receives the output from the decoder's first attention block, and `key` receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section. | def decoder_layer(units, d_model, num_heads, dropout, name="decoder_layer"):
inputs = tf.keras.Input(shape=(None, d_model), name="inputs")
enc_outputs = tf.keras.Input(shape=(None, d_model), name="encoder_outputs")
look_ahead_mask = tf.keras.Input(
shape=(1, None, None), name="look_ahead_mask")
padding_mask = tf.keras.Input(shape=(1, 1, None), name='padding_mask')
attention1 = MultiHeadAttention(
d_model, num_heads, name="attention_1")(inputs={
'query': inputs,
'key': inputs,
'value': inputs,
'mask': look_ahead_mask
})
attention1 = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention1 + inputs)
attention2 = MultiHeadAttention(
d_model, num_heads, name="attention_2")(inputs={
'query': attention1,
'key': enc_outputs,
'value': enc_outputs,
'mask': padding_mask
})
attention2 = tf.keras.layers.Dropout(rate=dropout)(attention2)
attention2 = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention2 + attention1)
outputs = tf.keras.layers.Dense(units=units, activation='relu')(attention2)
outputs = tf.keras.layers.Dense(units=d_model)(outputs)
outputs = tf.keras.layers.Dropout(rate=dropout)(outputs)
outputs = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(outputs + attention2)
return tf.keras.Model(
inputs=[inputs, enc_outputs, look_ahead_mask, padding_mask],
outputs=outputs,
name=name)
sample_decoder_layer = decoder_layer(
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_decoder_layer")
tf.keras.utils.plot_model(
sample_decoder_layer, to_file='decoder_layer.png', show_shapes=True) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Decoder The Decoder consists of: 1. Output Embedding 2. Positional Encoding 3. N decoder layers. The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer. | def decoder(vocab_size,
num_layers,
units,
d_model,
num_heads,
dropout,
name='decoder'):
inputs = tf.keras.Input(shape=(None,), name='inputs')
enc_outputs = tf.keras.Input(shape=(None, d_model), name='encoder_outputs')
look_ahead_mask = tf.keras.Input(
shape=(1, None, None), name='look_ahead_mask')
padding_mask = tf.keras.Input(shape=(1, 1, None), name='padding_mask')
embeddings = tf.keras.layers.Embedding(vocab_size, d_model)(inputs)
embeddings *= tf.math.sqrt(tf.cast(d_model, tf.float32))
embeddings = PositionalEncoding(vocab_size, d_model)(embeddings)
outputs = tf.keras.layers.Dropout(rate=dropout)(embeddings)
for i in range(num_layers):
outputs = decoder_layer(
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
name='decoder_layer_{}'.format(i),
)(inputs=[outputs, enc_outputs, look_ahead_mask, padding_mask])
return tf.keras.Model(
inputs=[inputs, enc_outputs, look_ahead_mask, padding_mask],
outputs=outputs,
name=name)
sample_decoder = decoder(
vocab_size=8192,
num_layers=2,
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_decoder")
tf.keras.utils.plot_model(
sample_decoder, to_file='decoder.png', show_shapes=True) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Transformer The Transformer consists of the encoder, the decoder and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned. | def transformer(vocab_size,
num_layers,
units,
d_model,
num_heads,
dropout,
name="transformer"):
inputs = tf.keras.Input(shape=(None,), name="inputs")
dec_inputs = tf.keras.Input(shape=(None,), name="dec_inputs")
enc_padding_mask = tf.keras.layers.Lambda(
create_padding_mask, output_shape=(1, 1, None),
name='enc_padding_mask')(inputs)
# mask the future tokens for decoder inputs at the 1st attention block
look_ahead_mask = tf.keras.layers.Lambda(
create_look_ahead_mask,
output_shape=(1, None, None),
name='look_ahead_mask')(dec_inputs)
# mask the encoder outputs for the 2nd attention block
dec_padding_mask = tf.keras.layers.Lambda(
create_padding_mask, output_shape=(1, 1, None),
name='dec_padding_mask')(inputs)
enc_outputs = encoder(
vocab_size=vocab_size,
num_layers=num_layers,
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
)(inputs=[inputs, enc_padding_mask])
dec_outputs = decoder(
vocab_size=vocab_size,
num_layers=num_layers,
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
)(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask])
outputs = tf.keras.layers.Dense(units=vocab_size, name="outputs")(dec_outputs)
return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name)
sample_transformer = transformer(
vocab_size=8192,
num_layers=4,
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_transformer")
tf.keras.utils.plot_model(
sample_transformer, to_file='transformer.png', show_shapes=True) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Train model Initialize model To keep this example small and relatively fast, the values for *num_layers, d_model, and units* have been reduced. See the [paper](https://arxiv.org/abs/1706.03762) for all the other versions of the transformer. | tf.keras.backend.clear_session()
# Hyper-parameters
NUM_LAYERS = 2
D_MODEL = 256
NUM_HEADS = 8
UNITS = 512
DROPOUT = 0.1
model = transformer(
vocab_size=VOCAB_SIZE,
num_layers=NUM_LAYERS,
units=UNITS,
d_model=D_MODEL,
num_heads=NUM_HEADS,
dropout=DROPOUT) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Loss function Since the target sequences are padded, it is important to apply a padding mask when calculating the loss. | def loss_function(y_true, y_pred):
y_true = tf.reshape(y_true, shape=(-1, MAX_LENGTH - 1))
loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')(y_true, y_pred)
mask = tf.cast(tf.not_equal(y_true, 0), tf.float32)
loss = tf.multiply(loss, mask)
return tf.reduce_mean(loss) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
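A minimal sanity check of the masking (toy tensors; the all-padding label row is contrived): because every position with label 0 is masked out, an all-padding target produces zero loss regardless of the predictions.
toy_true = tf.zeros((1, MAX_LENGTH - 1), dtype=tf.int32)
toy_pred = tf.random.uniform((1, MAX_LENGTH - 1, VOCAB_SIZE))
print(loss_function(toy_true, toy_pred))  # tf.Tensor(0.0, ...)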
Custom learning rate Use the Adam optimizer with a custom learning rate scheduler according to the formula in the [paper](https://arxiv.org/abs/1706.03762). $$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$ | class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps**-1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
sample_learning_rate = CustomSchedule(d_model=128)
plt.plot(sample_learning_rate(tf.range(200000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step") | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Compile Model | learning_rate = CustomSchedule(D_MODEL)
optimizer = tf.keras.optimizers.Adam(
learning_rate, beta_1=0.9, beta_2=0.98, epsilon=1e-9)
def accuracy(y_true, y_pred):
# ensure labels have shape (batch_size, MAX_LENGTH - 1)
y_true = tf.reshape(y_true, shape=(-1, MAX_LENGTH - 1))
accuracy = tf.metrics.SparseCategoricalAccuracy()(y_true, y_pred)
return accuracy
model.compile(optimizer=optimizer, loss=loss_function, metrics=[accuracy]) | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Fit model Train our transformer by simply calling `model.fit()` | EPOCHS = 20
model.fit(dataset, epochs=EPOCHS) | Epoch 1/20
689/689 [==============================] - 97s 141ms/step - loss: 2.1146 - accuracy: 0.0249
Epoch 2/20
689/689 [==============================] - 81s 118ms/step - loss: 1.5008 - accuracy: 0.0530
Epoch 3/20
689/689 [==============================] - 82s 119ms/step - loss: 1.3940 - accuracy: 0.0653
Epoch 4/20
689/689 [==============================] - 82s 118ms/step - loss: 1.3313 - accuracy: 0.0719
Epoch 5/20
689/689 [==============================] - 82s 119ms/step - loss: 1.2744 - accuracy: 0.0765
Epoch 6/20
689/689 [==============================] - 82s 119ms/step - loss: 1.2223 - accuracy: 0.0801
Epoch 7/20
689/689 [==============================] - 82s 118ms/step - loss: 1.1670 - accuracy: 0.0832
Epoch 8/20
689/689 [==============================] - 82s 119ms/step - loss: 1.1050 - accuracy: 0.0861
Epoch 9/20
689/689 [==============================] - 82s 119ms/step - loss: 1.0503 - accuracy: 0.0889
Epoch 10/20
689/689 [==============================] - 82s 120ms/step - loss: 1.0002 - accuracy: 0.0917
Epoch 11/20
689/689 [==============================] - 82s 118ms/step - loss: 0.9540 - accuracy: 0.0945
Epoch 12/20
689/689 [==============================] - 82s 119ms/step - loss: 0.9122 - accuracy: 0.0973
Epoch 13/20
689/689 [==============================] - 82s 118ms/step - loss: 0.8744 - accuracy: 0.1001
Epoch 14/20
689/689 [==============================] - 82s 119ms/step - loss: 0.8396 - accuracy: 0.1029
Epoch 15/20
689/689 [==============================] - 82s 119ms/step - loss: 0.8082 - accuracy: 0.1056
Epoch 16/20
689/689 [==============================] - 82s 119ms/step - loss: 0.7799 - accuracy: 0.1082
Epoch 17/20
689/689 [==============================] - 82s 118ms/step - loss: 0.7540 - accuracy: 0.1108
Epoch 18/20
689/689 [==============================] - 82s 119ms/step - loss: 0.7300 - accuracy: 0.1134
Epoch 19/20
689/689 [==============================] - 82s 119ms/step - loss: 0.7076 - accuracy: 0.1158
Epoch 20/20
689/689 [==============================] - 82s 118ms/step - loss: 0.6881 - accuracy: 0.1183
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Evaluate and predict The following steps are used for evaluation: * Apply the same preprocessing method we used to create our dataset to the input sentence. * Tokenize the input sentence and add `START_TOKEN` and `END_TOKEN`. * Calculate the padding masks and the look ahead masks. * The decoder then outputs the predictions by looking at the encoder output and its own output. * Select the last word and calculate the argmax of that. * Concatenate the predicted word to the decoder input and pass it to the decoder. * In this approach, the decoder predicts the next word based on the previous words it predicted. Note: The model used here has less capacity and was trained on a subset of the full dataset, hence its performance can be further improved. | def evaluate(sentence):
sentence = preprocess_sentence(sentence)
sentence = tf.expand_dims(
START_TOKEN + tokenizer.encode(sentence) + END_TOKEN, axis=0)
output = tf.expand_dims(START_TOKEN, 0)
for i in range(MAX_LENGTH):
predictions = model(inputs=[sentence, output], training=False)
# select the last word from the seq_len dimension
predictions = predictions[:, -1:, :]
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# return the result if the predicted_id is equal to the end token
if tf.equal(predicted_id, END_TOKEN[0]):
break
# concatenated the predicted_id to the output which is given to the decoder
# as its input.
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0)
def predict(sentence):
prediction = evaluate(sentence)
predicted_sentence = tokenizer.decode(
[i for i in prediction if i < tokenizer.vocab_size])
print('Input: {}'.format(sentence))
print('Output: {}'.format(predicted_sentence))
return predicted_sentence | _____no_output_____ | Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Let's test our model! | output = predict('Where have you been?')
output = predict("It's a trap")
# feed the model with its previous output
sentence = 'I am not crazy, my mother had me tested.'
for _ in range(5):
sentence = predict(sentence)
print('') | Input: I am not crazy, my mother had me tested.
Output: you re a good man , roy . that s a good man , roy , you re a little girl , that s a good man . you re a little girl .
Input: you re a good man , roy . that s a good man , roy , you re a little girl , that s a good man . you re a little girl .
Output: i m glad you re not a drug addict .
Input: i m glad you re not a drug addict .
Output: the inheritance .
Input: the inheritance .
Output: no , i don t know what to do . i just like to tell you .
Input: no , i don t know what to do . i just like to tell you .
Output: i m not sure . i m not gonna mention my name .
| Apache-2.0 | community/en/transformer_chatbot.ipynb | xuekun90/examples |
Copyright 2020 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
Calculate gradients This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not always have analytic gradient formulas that are easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup | !pip install tensorflow==2.3.1
Install TensorFlow Quantum: | !pip install tensorflow-quantum | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
Now import TensorFlow and the module dependencies: | import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
1. Preliminary Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one: | qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit) | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
Along with an observable: | pauli_x = cirq.X(qubit)
pauli_x | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$ | def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha)) | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this: | def my_grad(obs, alpha, eps=0.01):
grad = 0
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha)) | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
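As a side note (a hedged variant not in the original tutorial), a central difference usually tracks the analytic derivative a bit more closely than the forward difference above, at the cost of one extra circuit evaluation:
def my_grad_central(obs, alpha, eps=0.01):
  # Symmetric (central) finite difference around alpha
  return (my_expectation(obs, alpha + eps) - my_expectation(obs, alpha - eps)) / (2 * eps)

print('Central difference:', my_grad_central(pauli_x, my_alpha))
print('Cosine formula:    ', np.pi * np.cos(np.pi * my_alpha))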
2. The need for a differentiator With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance, you can recreate the above example in TensorFlow Quantum (TFQ) with: | expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]]) | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate: | sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]]) | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |
This can quickly compound into a serious accuracy problem when it comes to gradients: | # Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend() | _____no_output_____ | Apache-2.0 | docs/tutorials/gradients.ipynb | HectorIGH/quantum |