Unnamed: 0 | text_prompt | code_prompt
---|---|---|
10,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Computer Vision!
Have you ever wanted to teach a computer to see? In this course, that's exactly what you'll do!
In this course, you'll
Step1: Let's take a look at a few examples from the training set.
Step2: Step 2 - Define Pretrained Base
The most commonly used dataset for pretraining is ImageNet, a large dataset of many kinds of natural images. Keras includes a variety of models pretrained on ImageNet in its applications module. The pretrained model we'll use is called VGG16.
Step3: Step 3 - Attach Head
Next, we attach the classifier head. For this example, we'll use a layer of hidden units (the first Dense layer) followed by a layer to transform the outputs to a probability score for class 1, Truck. The Flatten layer transforms the two-dimensional outputs of the base into the one-dimensional inputs needed by the head.
Step4: Step 4 - Train
Finally, let's train the model. Since this is a two-class problem, we'll use the binary versions of crossentropy and accuracy. The Adam optimizer generally performs well, so we'll use it here.
Step5: When training a neural network, it's always a good idea to examine the loss and metric plots. The history object contains this information in a dictionary history.history. We can use Pandas to convert this dictionary to a dataframe and plot it with a built-in method. | Python Code:
#$HIDE_INPUT$
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducibility
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed(31415)
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
Explanation: Welcome to Computer Vision!
Have you ever wanted to teach a computer to see? In this course, that's exactly what you'll do!
In this course, you'll:
- Use modern deep-learning networks to build an image classifier with Keras
- Design your own custom convnet with reusable blocks
- Learn the fundamental ideas behind visual feature extraction
- Master the art of transfer learning to boost your models
- Utilize data augmentation to extend your dataset
If you've taken the Introduction to Deep Learning course, you'll know everything you need to be successful.
Now let's get started!
Introduction
This course will introduce you to the fundamental ideas of computer vision. Our goal is to learn how a neural network can "understand" a natural image well enough to solve the same kinds of problems the human visual system can solve.
The neural networks that are best at this task are called convolutional neural networks (sometimes we say convnet or CNN instead). Convolution is the mathematical operation that gives the layers of a convnet their unique structure. In future lessons, you'll learn why this structure is so effective at solving computer vision problems.
We will apply these ideas to the problem of image classification: given a picture, can we train a computer to tell us what it's a picture of? You may have seen apps that can identify a species of plant from a photograph. That's an image classifier! In this course, you'll learn how to build image classifiers just as powerful as those used in professional applications.
While our focus will be on image classification, what you'll learn in this course is relevant to every kind of computer vision problem. At the end, you'll be ready to move on to more advanced applications like generative adversarial networks and image segmentation.
The Convolutional Classifier
A convnet used for image classification consists of two parts: a convolutional base and a dense head.
<center>
<!-- <img src="./images/1-parts-of-a-convnet.png" width="600" alt="The parts of a convnet: image, base, head, class; input, extract, classify, output.">-->
<img src="https://i.imgur.com/U0n5xjU.png" width="600" alt="The parts of a convnet: image, base, head, class; input, extract, classify, output.">
</center>
The base is used to extract the features from an image. It is formed primarily of layers performing the convolution operation, but often includes other kinds of layers as well. (You'll learn about these in the next lesson.)
The head is used to determine the class of the image. It is formed primarily of dense layers, but might include other layers like dropout.
What do we mean by visual feature? A feature could be a line, a color, a texture, a shape, a pattern -- or some complicated combination.
The whole process goes something like this:
<center>
<!-- <img src="./images/1-extract-classify.png" width="600" alt="The idea of feature extraction."> -->
<img src="https://i.imgur.com/UUAafkn.png" width="600" alt="The idea of feature extraction.">
</center>
The features that are actually extracted look a bit different, but this gives the idea.
Training the Classifier
The goal of the network during training is to learn two things:
1. which features to extract from an image (base),
2. which class goes with what features (head).
These days, convnets are rarely trained from scratch. More often, we reuse the base of a pretrained model. To the pretrained base we then attach an untrained head. In other words, we reuse the part of the network that has already learned to do step 1, extract features, and attach to it some fresh layers that learn step 2, classify.
<center>
<!-- <img src="./images/1-attach-head-to-base.png" width="400" alt="Attaching a new head to a trained base."> -->
<img src="https://imgur.com/E49fsmV.png" width="400" alt="Attaching a new head to a trained base.">
</center>
Because the head usually consists of only a few dense layers, very accurate classifiers can be created from relatively little data.
Reusing a pretrained model is a technique known as transfer learning. It is so effective that almost every image classifier these days will make use of it.
Example - Train a Convnet Classifier
Throughout this course, we're going to be creating classifiers that attempt to solve the following problem: is this a picture of a Car or of a Truck? Our dataset is about 10,000 pictures of various automobiles, around half cars and half trucks.
Step 1 - Load Data
This next hidden cell will import some libraries and set up our data pipeline. We have a training split called ds_train and a validation split called ds_valid.
End of explanation
#$HIDE_INPUT$
import matplotlib.pyplot as plt
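# Added sketch (not in the original notebook): preview a few images from the
# training set built above; the "Car"/"Truck" label mapping is an assumption
# based on the alphabetical directory order used by image_dataset_from_directory.
fig = plt.figure(figsize=(10, 5))
for images, labels in ds_train_.take(1):
    for i in range(8):
        ax = fig.add_subplot(2, 4, i + 1)
        ax.imshow(images[i].numpy().astype("uint8"))
        ax.set_title("Truck" if labels[i].numpy() else "Car")
        ax.axis("off")
plt.show()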
Explanation: Let's take a look at a few examples from the training set.
End of explanation
pretrained_base = tf.keras.models.load_model(
'../input/cv-course-models/cv-course-models/vgg16-pretrained-base',
)
pretrained_base.trainable = False
Explanation: Step 2 - Define Pretrained Base
The most commonly used dataset for pretraining is ImageNet, a large dataset of many kinds of natural images. Keras includes a variety of models pretrained on ImageNet in its applications module. The pretrained model we'll use is called VGG16.
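As an aside (an added sketch, not from the original lesson), a similar frozen base could be created directly from keras.applications; the input size matches the 128x128 images used here, though VGG16 normally also expects its own preprocessing:
from tensorflow import keras
alt_base = keras.applications.VGG16(
    include_top=False,      # drop the original ImageNet classification head
    weights="imagenet",
    input_shape=(128, 128, 3),
)
alt_base.trainable = False  # freeze the convolutional base, as above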
End of explanation
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
pretrained_base,
layers.Flatten(),
layers.Dense(6, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
Explanation: Step 3 - Attach Head
Next, we attach the classifier head. For this example, we'll use a layer of hidden units (the first Dense layer) followed by a layer to transform the outputs to a probability score for class 1, Truck. The Flatten layer transforms the two-dimensional outputs of the base into the one-dimensional inputs needed by the head.
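For intuition (an added note): because the final Dense layer uses a sigmoid, the model outputs the estimated probability of class 1, Truck, which could be thresholded once the model has been trained below, e.g.:
probs = model.predict(ds_valid)     # shape (n_samples, 1), values in [0, 1]
labels = (probs > 0.5).astype(int)  # 1 -> Truck, 0 -> Car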
End of explanation
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=30,
verbose=0,
)
Explanation: Step 4 - Train
Finally, let's train the model. Since this is a two-class problem, we'll use the binary versions of crossentropy and accuracy. The Adam optimizer generally performs well, so we'll use it here.
End of explanation
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
Explanation: When training a neural network, it's always a good idea to examine the loss and metric plots. The history object contains this information in a dictionary history.history. We can use Pandas to convert this dictionary to a dataframe and plot it with a built-in method.
End of explanation |
10,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
This is a generalized notebook for computing grade statistics from the Ted Grade Center.
Step1: Load data from exported CSV from Ted Full Grade Center. Some sanitization is performed to remove non-ascii characters and cruft.
Step2: Define lower grade cutoffs in terms of number of standard deviations from mean.
Step4: Define some general functions for computing grade statistics.
Step5: Problem Sets
Step6: Exams
Step7: Overall grade | Python Code:
#The usual imports
import math
import glob
import os
from collections import OrderedDict
from pandas import read_csv
import numpy as np
from pymatgen.util.plotting_utils import get_publication_quality_plot
from monty.string import remove_non_ascii
import prettyplotlib as ppl
from prettyplotlib import brewer2mpl
import matplotlib.pyplot as plt
colors = brewer2mpl.get_map('Set1', 'qualitative', 8).mpl_colors
import datetime
%matplotlib inline
print("Last updated on %s" % datetime.datetime.now())
Explanation: Overview
This is a generalized notebook for computing grade statistics from the Ted Grade Center.
End of explanation
files = glob.glob(os.environ["NANO106GC"])
latest = sorted(files)[-1]
d = read_csv(latest)
d.columns = [remove_non_ascii(c) for c in d.columns]
d.columns = [c.split("[")[0].strip().strip("\"") for c in d.columns]
d["Weighted Total"] = [float(s.strip("%")) for s in d["Weighted Total"]]
Explanation: Load data from exported CSV from Ted Full Grade Center. Some sanitization is performed to remove non-ascii characters and cruft.
End of explanation
grade_cutoffs = OrderedDict()
grade_cutoffs["A"] = 0.75
grade_cutoffs["B+"] = 0.5
grade_cutoffs["B"] = -0.25
grade_cutoffs["B-"] = -0.5
grade_cutoffs["C+"] = -0.75
grade_cutoffs["C"] = -1.5
grade_cutoffs["C-"] = -2
grade_cutoffs["F"] = float("-inf")
print("The cutoffs are:")
for k, v in grade_cutoffs.items():
print(u"%s: > μ + %.2f σ" % (k, v))
Explanation: Define lower grade cutoffs in terms of number of standard deviations from mean.
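For example (an added illustration, not from the original notebook): if the weighted totals have mean μ = 75 and standard deviation σ = 10, the A cutoff of μ + 0.75σ works out to 75 + 0.75 x 10 = 82.5, while the C cutoff of μ - 1.5σ is 75 - 1.5 x 10 = 60.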
End of explanation
def bar_plot(dframe, data_key, offset=0, annotate=True):
"""Creates a histogram of the results.
Args:
dframe: DataFrame which is imported from CSV.
data_key: Specific column to plot
offset: Allows an offset for each grade. Defaults to 0.
Returns:
dict of cutoffs, {grade: (lower, upper)}
"""
data = dframe[data_key]
d = filter(lambda x: (not np.isnan(x)) and x != 0, list(data))
heights, bins = np.histogram(d, bins=20, range=(0, 100))
bins = list(bins)
bins.pop(-1)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1)
ppl.bar(ax, bins, heights, width=5, color=colors[0], grid='y')
plt = get_publication_quality_plot(12, 8, plt)
plt.xlabel("Score")
plt.ylabel("Number of students")
#print len([d for d in data if d > 90])
mean = np.mean(d)
sigma = np.std(d)
maxy = np.max(heights)
prev_cutoff = 100
cutoffs = {}
grade = ["A", "B+", "B", "B-", "C+", "C", "C-", "F"]
for grade, cutoff in grade_cutoffs.items():
if cutoff == float("-inf"):
cutoff = 0
else:
cutoff = max(0, mean + cutoff * sigma) + offset
if annotate:
plt.plot([cutoff] * 2, [0, maxy], 'k--')
plt.annotate("%.1f" % cutoff, [cutoff, maxy - 1], fontsize=18, horizontalalignment='left', rotation=45)
n = len([d for d in data if cutoff <= d < prev_cutoff])
#print "Grade %s (%.1f-%.1f): %d" % (grade, cutoff, prev_cutoff, n)
if annotate:
plt.annotate(grade, [(cutoff + prev_cutoff) / 2, maxy], fontsize=18, horizontalalignment='center')
cutoffs[grade] = (cutoff, prev_cutoff)
prev_cutoff = cutoff
plt.ylim([0, maxy * 1.1])
plt.annotate("$\mu = %.1f$\n$\sigma = %.1f$\n$max=%.1f$" % (mean, sigma, data.max()), xy=(10, 7), fontsize=30)
title = data_key.split("[")[0].strip()
plt.title(title, fontsize=30)
plt.tight_layout()
plt.savefig("%s.png" % title)
return cutoffs
def assign_grades(d, column_name, cutoffs, offset):
def compute_grade(pts):
for g, c in cutoffs.items():
if c[0] < pts + offset <= c[1]:
return g
d["Final_Assigned_Egrade"] = map(compute_grade, d[column_name])
d.to_csv("Overall grades.csv")
Explanation: Define some general functions for computing grade statistics.
End of explanation
cutoffs = bar_plot(d, "PS1", annotate=True)
cutoffs = bar_plot(d, "PS2", annotate=True)
cutoffs = bar_plot(d, "PS3", annotate=True)
cutoffs = bar_plot(d, "PS4", annotate=True)
cutoffs = bar_plot(d, "PS5", annotate=True)
Explanation: Problem Sets
End of explanation
cutoffs = bar_plot(d, "Mid-term 1", annotate=True)
cutoffs = bar_plot(d, "Mid-term 2", annotate=True)
cutoffs = bar_plot(d, "Final", annotate=True)
Explanation: Exams
End of explanation
cutoffs = bar_plot(d, "Weighted Total", annotate=True)
# The command below is used to generate the overall grade assignments for all students and dump it into a CSV file.
# assign_grades(d, "Weighted Total", cutoffs, offset=1.41)
Explanation: Overall grade
End of explanation |
10,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Robust PCA Example
Robust PCA is an awesome relatively new method for factoring a matrix into a low rank component and a sparse component. This enables really neat applications for outlier detection, or models that are robust to outliers.
Step1: Make Some Toy Data
Step2: Add Some Outliers to Make Life Difficult
Step3: Compute SVD on both the clean data and the outliery data
Step4: Just 10 outliers can really screw up our line fit!
Step5: Now the robust pca version!
Step6: Factor the matrix into L (low rank) and S (sparse) parts
Step7: And have a look at this! | Python Code:
%matplotlib inline
Explanation: Robust PCA Example
Robust PCA is an awesome relatively new method for factoring a matrix into a low rank component and a sparse component. This enables really neat applications for outlier detection, or models that are robust to outliers.
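In the usual formulation (added here for reference), an observed matrix M is split as M = L + S, with L low rank and S sparse, typically by solving something like min ||L||_* + λ||S||_1 subject to L + S = M; the tga routine used further below is a related robust estimator of the principal directions rather than this exact convex program.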
End of explanation
import matplotlib.pyplot as plt
import numpy as np
def mk_rot_mat(rad=np.pi / 4):
rot = np.array([[np.cos(rad),-np.sin(rad)], [np.sin(rad), np.cos(rad)]])
return rot
rot_mat = mk_rot_mat( np.pi / 4)
x = np.random.randn(100) * 5
y = np.random.randn(100)
points = np.vstack([y,x])
rotated = np.dot(points.T, rot_mat).T
Explanation: Make Some Toy Data
End of explanation
outliers = np.tile([15,-10], 10).reshape((-1,2))
pts = np.vstack([rotated.T, outliers]).T
Explanation: Add Some Outliers to Make Life Difficult
End of explanation
U,s,Vt = np.linalg.svd(rotated)
U_n,s_n,Vt_n = np.linalg.svd(pts)
Explanation: Compute SVD on both the clean data and the outliery data
End of explanation
plt.ylim([-20,20])
plt.xlim([-20,20])
plt.scatter(*pts)
pca_line = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*pca_line)
rpca_line = np.dot(U_n[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*rpca_line, c='r')
Explanation: Just 10 outliers can really screw up our line fit!
End of explanation
import tga
reload(tga)
import logging
logger = logging.getLogger(tga.__name__)
logger.setLevel(logging.INFO)
Explanation: Now the robust pca version!
End of explanation
X = pts.copy()
v = tga.tga(X.T, eps=1e-5, k=1, p=0.0)
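# Added sketch (not in the original): given the robust principal direction v[0],
# a low-rank / sparse split of the data could be formed by projection; the names
# L_lowrank and S_sparse are made up for this illustration.
L_lowrank = np.outer(v[0], v[0]).dot(X)  # rank-1 part captured by the robust component
S_sparse = X - L_lowrank                 # residual, large mainly for the outliers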
Explanation: Factor the matrix into L (low rank) and S (sparse) parts
End of explanation
plt.ylim([-20,20])
plt.xlim([-20,20])
plt.scatter(*pts)
tga_line = np.dot(v[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*tga_line)
#plt.scatter(*L, c='red')
Explanation: And have a look at this!
End of explanation |
10,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization and Interaction
Visualization and interaction are a current requirement of new teaching methodologies, which aim for much more visual learning and allow, through experimentation, the understanding of a phenomenon when certain initial conditions are changed.
The spatial placement and the manipulation of parameters in such experimentation can be made easier with tools like these, which integrate plots, animations and widgets. This notebook defines the visualization and interaction methods that will be used in other notebooks, covering the numerical and conceptual components.
This separation is made in order to clearly distinguish 3 components of the process and to make the topic easier to follow without requiring the user to understand all 3 levels (since the code is visible, and this would affect the process of following the subject).
Mathematical Functions
Although it is not part of visualization and interaction, handling mathematical functions is required for these stages and the later ones, so their definition is needed from the start to avoid redundant multiple invocations.
Mathematical functions can be evaluated with the math module, which is part of the Python standard library, or with the numpy library. For the limited set of mathematical functions we require, and with the premise of not overcomplicating our code, numpy's utilities will not be necessary; math together with lists will be enough.
The reason we need so few mathematical functions is that we use numerical methods rather than analytical tools. The idea is to show how, with this methodology, it is possible to analyze a larger set of problems without having to go deep into a large amount of mathematical machinery, and thus not limit the discussion of these topics to advanced mathematical knowledge, relying instead on basic knowledge of both mathematics and programming to work through the problems, while allowing simple interaction when these notebooks are used only as a resource for conceptual study. To that last end, the conceptual notebook is meant to contain a minimum of code, which is moved into the numerical-methods and visualization notebooks.
Step1: The set of functions above is given only to keep a reference of functions for any extension one may wish to build on it, for use when creating arbitrary potentials, and for the cases of illustrating with analytical functions or comparing results.
For the implementation (starting from a numerically given potential), only sqrt is required.
The numpy module would make it possible to apply mathematical functions directly to numeric arrays, and to define those arrays in a way that is natural for mathematics, as the equivalent of vectors and matrices, through the array class.
Interaction
There are multiple mechanisms for interacting with digital resources, with behavior defined almost as a standard across different platforms.
Within the definition of the graphical controls (widgets) built into Jupyter in the ipywidgets module, we find the following:
Step2: For our purposes, the ones we will mainly use are
Step3: Among the controls we use, it is sometimes necessary to make their ranges depend on the range or a property of another control. For this we use the link function from the traitlets module. This module contains other useful functions for manipulating the graphical controls.
Step4: We also need elements that allow formatting the document and displaying rich text and other elements, beyond what is possible with plain text via print or with the capabilities of Markdown (native or via extension). For this we can use methods for HTML and LaTeX rendering.
Step5: Visualization
By visualization we mean the strategies for graphically representing information, results or models. It makes rapid reading of data easier through color coding as well as spatial placement. The graphical representation does not have to be static, and that is where animations let us represent the temporal variations of a system in a more natural way (not as a plot against a time axis, but by experiencing a plot evolving in time).
For this purpose it is possible to use several existing Python libraries (for versions 2 and 3), the most common and robust of which is Matplotlib. In the modern context of web browsers, it is possible to integrate more naturally libraries that store plots in web-native formats, such as the JSON data-interchange format, which makes interaction in the browser easier through JavaScript calls.
So we can set preferences, such as Matplotlib mainly for static or local use, while for web interaction we can use libraries such as Bokeh.
In this case, without going deep into web interaction, Matplotlib will be used.
Step6: To indicate non-interactive plotting embedded in the document we use the following line
%matplotlib inline
If an interactive embedded form is required, we use the line
%matplotlib notebook
For our basic use, everything needed for plotting is found in Matplotlib's pyplot module. With it we can draw grids, trace curves with various styles, modify axes and legends, add annotations on the plot and fill shapes (coloring between curves). Reference examples can be found in the Matplotlib gallery and in the list of examples on the official page.
Plotting functions
In general, our goal is to be able to plot functions that are represented by numeric arrays. Continuous functions, in their algebraic representation, are discretized, and it is the interpolated set of points that is displayed. Before discretizing, it is convenient to turn our function into an evaluable function and to associate the dependence with a single variable (for our case, which is 1D).
The interpolation process mentioned is carried out by the plotting package, and we only need to indicate the points that belong to the function.
Step7: The previous code block illustrates the use of interact as a mechanism for creating automatic controls that are applied to the execution of a function. It makes it simple to build interactions when not much customization or linking of controls is required, and automatic execution with every parameter change is desired. If you want to recover the specific parameter values for later manipulation, interactive or the explicit use of the controls is recommended.
Despite the convenience that interact and interactive offer by generating automatic controls, this is inconvenient for executions that take significant time (on the scale of comfortable interaction, a significant time is anything above one second), since every independent parameter change, i.e. every slider in this case, triggers a new execution, and further parameter changes are queued until the executions for the previous individual changes finish.
That is why it can be convenient to define an interaction in which the only action the controls have is changing and storing parameter values, and a separate additional control is designated to indicate when to update the parameters and execute.
The previous example can be built using FloatSlider, IntSlider, Button, Text, Box and display.
Step8: Plotting potentials
To illustrate and understand the bound states of the system, it helps to be able to draw the potential functions as physical barriers. This graphical notion is represented by filling between the curve and the reference axis for the energy. In this way, when the plot is combined with the reference line for the eigenvalue, it will be clear that the energy found belongs to the interval required by theory and corresponds to a bound system.
The potential plotting function receives two lists/arrays, one with the spatial information and the other with the evaluation of the potential at those points. Before filling in the representation of the potential barrier, the initial and final points are created in order to make closed shapes that the fill command can distinguish.
Step9: Below is an interactive example of plotting the finite potential well. It starts with the definition of the potential, which is used to generate an array with the evaluation of the potential at different points in space.
Step10: Energy level
To properly illustrate the presence of bound states, it is convenient to overlay, on the representation of the potential function, the energy reference of the system's eigenvalue. To distinguish it, this will be a dashed line (not filled, to avoid confusion with the potential, but not solid either, to distinguish it from the representation of the wave functions).
\begin{eqnarray}
E \leq V_\text{max},& \qquad \text{Bound state}\\
E > V_\text{max},& \qquad \text{Unbound state}
\end{eqnarray}
Unbound states are equivalent to having free particles.
Step11: Plotting eigenfunctions
Visualizing the eigenfunctions (and their squared modulus) lets us visually recognize the probability distribution of the system and identify the most probable spatial locations for the particle under analysis.
For correct visualization, plotting the wave function must include a scale normalization, not necessarily to the unit value of the axis, but referenced to a numerical value bounded by the maximum values of the potential, which correspond to the part of the plot closest to the upper margin of the plotting box. Skipping this rescaling could affect the visualization of the potential and the energy, since the axis readjusts to the maximum and minimum data.
$$ \psi^{\prime}(x) = \frac{\psi(x)}{\max \psi(x)} V_\text{max} $$
The eigenfunctions are drawn with the traditional plot command; the only additional element is their rescaling based on the maximum potential in the region of interest. | Python Code:
from math import sin, cos, tan, sqrt, log, exp, pi
Explanation: Visualization and Interaction
Visualization and interaction are a current requirement of new teaching methodologies, which aim for much more visual learning and allow, through experimentation, the understanding of a phenomenon when certain initial conditions are changed.
The spatial placement and the manipulation of parameters in such experimentation can be made easier with tools like these, which integrate plots, animations and widgets. This notebook defines the visualization and interaction methods that will be used in other notebooks, covering the numerical and conceptual components.
This separation is made in order to clearly distinguish 3 components of the process and to make the topic easier to follow without requiring the user to understand all 3 levels (since the code is visible, and this would affect the process of following the subject).
Mathematical Functions
Although it is not part of visualization and interaction, handling mathematical functions is required for these stages and the later ones, so their definition is needed from the start to avoid redundant multiple invocations.
Mathematical functions can be evaluated with the math module, which is part of the Python standard library, or with the numpy library. For the limited set of mathematical functions we require, and with the premise of not overcomplicating our code, numpy's utilities will not be necessary; math together with lists will be enough.
The reason we need so few mathematical functions is that we use numerical methods rather than analytical tools. The idea is to show how, with this methodology, it is possible to analyze a larger set of problems without having to go deep into a large amount of mathematical machinery, and thus not limit the discussion of these topics to advanced mathematical knowledge, relying instead on basic knowledge of both mathematics and programming to work through the problems, while allowing simple interaction when these notebooks are used only as a resource for conceptual study. To that last end, the conceptual notebook is meant to contain a minimum of code, which is moved into the numerical-methods and visualization notebooks.
End of explanation
import ipywidgets
print(dir(ipywidgets))
Explanation: The set of functions above is given only to keep a reference of functions for any extension one may wish to build on it, for use when creating arbitrary potentials, and for the cases of illustrating with analytical functions or comparing results.
For the implementation (starting from a numerically given potential), only sqrt is required.
The numpy module would make it possible to apply mathematical functions directly to numeric arrays, and to define those arrays in a way that is natural for mathematics, as the equivalent of vectors and matrices, through the array class.
Interaction
There are multiple mechanisms for interacting with digital resources, with behavior defined almost as a standard across different platforms.
Within the definition of the graphical controls (widgets) built into Jupyter in the ipywidgets module, we find the following:
End of explanation
from ipywidgets import interact, interactive, fixed, IntSlider, FloatSlider, Button, Text, Box
Explanation: For our purposes, the ones we will mainly use are:
Interactions: automatic mechanisms to create controls and bind them to a function. interact, interactive.
Sliders: there are specific ones for each data type, namely IntSlider and FloatSlider.
Buttons: elements that run an action when pressed, Button.
Text: allows entering arbitrary text and binding the execution of an action to its submission. Text.
Containers: allow grouping several controls in a single object/view. One of them is Box.
End of explanation
from traitlets import link
Explanation: Among the controls we use, it is sometimes necessary to make their ranges depend on the range or a property of another control. For this we use the link function from the traitlets module. This module contains other useful functions for manipulating the graphical controls.
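A minimal illustration (added here, not from the original notebook) of linking two controls so that the value of one slider drives the maximum of another; the widget names are made up for the example:
from ipywidgets import FloatSlider
box_size = FloatSlider(value=5.0, min=1.0, max=20.0, description='L')
well_width = FloatSlider(value=2.0, min=0.1, max=10.0, description='a')
link((box_size, 'value'), (well_width, 'max'))  # keep the well narrower than the box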
End of explanation
from IPython.display import clear_output, display, HTML, Latex, Markdown, Math
Explanation: We also need elements that allow formatting the document and displaying rich text and other elements, beyond what is possible with plain text via print or with the capabilities of Markdown (native or via extension). For this we can use methods for HTML and LaTeX rendering.
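For instance (an added example), a LaTeX expression can be rendered directly in the output area:
display(Math(r'\psi(x) = A e^{-k_1 |x|}'))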
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Visualization
By visualization we mean the strategies for graphically representing information, results or models. It makes rapid reading of data easier through color coding as well as spatial placement. The graphical representation does not have to be static, and that is where animations let us represent the temporal variations of a system in a more natural way (not as a plot against a time axis, but by experiencing a plot evolving in time).
For this purpose it is possible to use several existing Python libraries (for versions 2 and 3), the most common and robust of which is Matplotlib. In the modern context of web browsers, it is possible to integrate more naturally libraries that store plots in web-native formats, such as the JSON data-interchange format, which makes interaction in the browser easier through JavaScript calls.
So we can set preferences, such as Matplotlib mainly for static or local use, while for web interaction we can use libraries such as Bokeh.
In this case, without going deep into web interaction, Matplotlib will be used.
End of explanation
def discretizar(funcion, a, b, n):
dx = (b-a)/n
x = [a + i*dx for i in range(n+1)]
y = [funcion(i) for i in x]
return x, y
def graficar_funcion(x, f):
plt.plot(x, f, '-')
def graficar_punto_texto(x, f, texto):
plt.plot(x, f, 'o')
plt.text(x+.2, f+.2, texto)
def int_raiz_sin(a:(-5.,0., .2), b:(0., 5., .2), k:(0.2, 10., .1), n:(1, 100, 1), N:(0, 10, 1)):
f = lambda x: sin(k*x)
x, y = discretizar(f, a, b, n)
r = pi*(N + int(a*k/pi))/k
graficar_funcion(x, y)
graficar_punto_texto(r, 0, 'Raíz')
plt.show()
interact(int_raiz_sin)
Explanation: To indicate non-interactive plotting embedded in the document we use the following line
%matplotlib inline
If an interactive embedded form is required, we use the line
%matplotlib notebook
For our basic use, everything needed for plotting is found in Matplotlib's pyplot module. With it we can draw grids, trace curves with various styles, modify axes and legends, add annotations on the plot and fill shapes (coloring between curves). Reference examples can be found in the Matplotlib gallery and in the list of examples on the official page.
Plotting functions
In general, our goal is to be able to plot functions that are represented by numeric arrays. Continuous functions, in their algebraic representation, are discretized, and it is the interpolated set of points that is displayed. Before discretizing, it is convenient to turn our function into an evaluable function and to associate the dependence with a single variable (for our case, which is 1D).
The interpolation process mentioned is carried out by the plotting package, and we only need to indicate the points that belong to the function.
End of explanation
def raiz_sin(a, b, k, n, N, texto):
f = lambda x: sin(k*x)
x, y = discretizar(f, a, b, n)
r = pi*(N + int(a*k/pi))/k
graficar_funcion(x, y)
graficar_punto_texto(r, 0, texto)
a = FloatSlider(value= -2.5, min=-5., max= 0., step= .2, description='a')
b= FloatSlider(value = 2.5, min=0., max= 5., step=.2, description='b')
k= FloatSlider(value = 5., min=0.2, max=10., step=.1, description='k')
n= IntSlider(value= 50, min=1, max= 100, step=1, description='n')
N= IntSlider(value=5, min=0, max=10, step=1, description='N')
texto = Text(value='Raíz', description='Texto punto')
Boton_graficar = Button(description='Graficar')
def click_graficar(boton):
clear_output(wait=True)
raiz_sin(a.value, b.value, k.value, n.value, N.value, texto.value)
plt.show()
display(a, b, k, n, N, texto, Boton_graficar)
Boton_graficar.on_click(click_graficar)
Explanation: El bloque anterior de código ilustra el uso de interact como mecanismo para crear controles automaticos que se apliquen a la ejecución de una función. Este permite crear de una forma simple las interacciones cuando no se requiere de personalizar mucho, ni vincular controles y se desea una ejecución automatica con cada variación de parametros. En caso de querer recuperar los valores especificos de los parametros para posterior manipulación se recomienda el uso de interactive o del uso explicito de los controles.
A pesar de la facilidad que ofrece interact e interactive al generar los controles automaticos, esto es poco conveniente cuando se trata de ejecuciones que toman tiempos significativos (que para escalas de una interacción favorable, un tiempo significativo son aquellos mayores a un segundo), ya que cada variación de parametros independiente, o sea, cada deslizador en este caso, al cambiar produce una nueva ejecución, y las nuevas variaciones de parámetros quedan en espera hasta terminar las ejecuciones de las variaciones individuales anteriores.
Es por esto, que puede ser conveniente definir una interacción donde los controles la unica acción que posean es la variación y almacenamiento de valores de los parametros, y sea otro control adicional el designado para indicar el momento de actualizar parametros y ejecutar.
El ejemplo anterior se puede construir usando FloatSlider, IntSlider, Button, Text, Box y display.
End of explanation
def graficar_potencial(x, V_x):
V_min = min(V_x)
plt.fill_between(x, V_min, V_x, facecolor = 'peru')
Explanation: Graficación de potenciales
Para fines de ilustración y comprensión de los estados ligados del sistema, conviene poder ilustrar las funciones de potencial como barreras físicas. Esta noción gráfica se representa mediante el llenado entre la curva y el eje de referencia para la energía. De esta forma, al unir el gráfico con la referencia del autovalor, será claro que la energía hallada pertenece al intervalo requerido en teoría y que corresponde a un sistema ligado.
La función de graficación del potencial recibe dos listas/arreglos, uno con la información espacial y otro con la evaluación del potencial en dichos puntos. Antes de proceder con el llenado de la representación de la barrera del potencial, se crean los puntos inicial y final con el fin de crear formas cerradas distinguibles para el comando fill.
End of explanation
def potencial(V_0, a, x):
if abs(x) > a/2:
return V_0
else:
return 0
def int_potencial(V_0:(.1, 10., .1), a:(.1, 5, .1), L:(1., 10., .5), N:(10, 200, 10)):
dx = L / N
x = [-L/2 + i*dx for i in range(N+1)]
y = [potencial(V_0, a, i) for i in x]
graficar_potencial(x, y)
plt.show()
interact(int_potencial)
Explanation: Below is an interactive example of plotting the finite potential well. It starts with the definition of the potential, which is used to generate an array with the evaluation of the potential at different points in space.
End of explanation
def graficar_autovalor(L, E):
plt.plot([-L/2, L/2], [E, E], '--')
def int_potencial_energia(V_0:(.1, 10., .1), E:(.1, 10., .1), a:(.1, 5, .1), L:(1., 10., .5), N:(10, 200, 10)):
dx = L / N
x = [-L/2 + i*dx for i in range(N+1)]
y = [potencial(V_0, a, i) for i in x]
graficar_potencial(x, y)
graficar_autovalor(L, E)
if E > V_0:
plt.text(0, E+0.2, 'No ligado')
else:
plt.text(0, E+0.2, 'Ligado')
plt.show()
interact(int_potencial_energia)
Explanation: Energy level
To properly illustrate the presence of bound states, it is convenient to overlay, on the representation of the potential function, the energy reference of the system's eigenvalue. To distinguish it, this will be a dashed line (not filled, to avoid confusion with the potential, but not solid either, to distinguish it from the representation of the wave functions).
\begin{eqnarray}
E \leq V_\text{max},& \qquad \text{Bound state}\\
E > V_\text{max},& \qquad \text{Unbound state}
\end{eqnarray}
Unbound states are equivalent to having free particles.
End of explanation
def graficar_autofuncion(x, psi_x, V_max):
psi_max = max([abs(i) for i in psi_x])
escala = V_max / psi_max
psi_x = [i*escala for i in psi_x]
plt.plot(x, psi_x, '-')
def onda(V_0, E, a, x):
if abs(x) <= a/2:
return cos(sqrt(E)*x/2)
else:
a2 = a/2
k1 = sqrt(V_0 - E)
A = cos(sqrt(E)*a2) / exp(-k1*a2)
signo = abs(x)/x
return A*exp(-signo*k1*x)
def int_potencial_auto_ef(V_0:(5., 20., .1), E:(.1, 20., .1), a:(2.5, 30., .1), L:(10., 100., 5.), N:(10, 200, 10)):
dx = L / N
x = [-L/2 + i*dx for i in range(N+1)]
V = [potencial(V_0, a, i) for i in x]
f = [onda(V_0, E, a, i) for i in x]
graficar_potencial(x, V)
graficar_autovalor(L, E)
graficar_autofuncion(x, f, V_0)
if E > V_0:
plt.text(0, E+0.2, 'No ligado')
else:
plt.text(0, E+0.2, 'Ligado')
plt.show()
interact(int_potencial_auto_ef)
Explanation: Plotting eigenfunctions
Visualizing the eigenfunctions (and their squared modulus) lets us visually recognize the probability distribution of the system and identify the most probable spatial locations for the particle under analysis.
For correct visualization, plotting the wave function must include a scale normalization, not necessarily to the unit value of the axis, but referenced to a numerical value bounded by the maximum values of the potential, which correspond to the part of the plot closest to the upper margin of the plotting box. Skipping this rescaling could affect the visualization of the potential and the energy, since the axis readjusts to the maximum and minimum data.
$$ \psi^{\prime}(x) = \frac{\psi(x)}{\max \psi(x)} V_\text{max} $$
The eigenfunctions are drawn with the traditional plot command; the only additional element is their rescaling based on the maximum potential in the region of interest.
End of explanation |
10,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing a Change in the Auto Ownership Model
Create two auto ownership examples to illustrate running two scenarios and analyzing results. This notebook assumes users are familiar with the Getting Started notebook.
Create two examples
Step1: Run base example
Step2: Run alternative example with no input differences
Step3: Confirm identical results before making changes to the alternative scenario
Step4: Modify the alternative scenario
Reduce young adult car ownership coefficient to simulate the idea of less car ownership among young adults
Step5: Re-run alternative example
Step6: Compare Results
Plot the difference in household auto ownership. For additional summaries for downstream models, see the Summarizing Results notebook. | Python Code:
!activitysim create -e example_mtc -d example_base_auto_own
!activitysim create -e example_mtc -d example_base_auto_own_alternative
Explanation: Testing a Change in the Auto Ownership Model
Create two auto ownership examples to illustrate running two scenarios and analyzing results. This notebook assumes users are familiar with the Getting Started notebook.
Create two examples
End of explanation
%cd example_base_auto_own
!activitysim run -c configs -d data -o output
#return to root folder
%cd ..
Explanation: Run base example
End of explanation
%cd example_base_auto_own_alternative
!activitysim run -c configs -d data -o output
#return to root folder
%cd ..
Explanation: Run alternative example with no input differences
End of explanation
import pandas as pd
hh_base = pd.read_csv("example_base_auto_own/output/final_households.csv")
hh_alt = pd.read_csv("example_base_auto_own_alternative/output/final_households.csv")
same_results = (hh_base.auto_ownership == hh_alt.auto_ownership).all()
print("Identical household auto ownership results base versus alternative scenario: " + str(same_results))
Explanation: Confirm identical results before making changes to the alternative scenario
End of explanation
adjustment_factor = -2
coefficient_of_interest = "coef_cars1_persons_25_34"
alt_expressions = pd.read_csv("example_base_auto_own/configs/auto_ownership_coefficients.csv")
row_selector = (alt_expressions["coefficient_name"] == "coef_cars1_persons_25_34")
print(alt_expressions.loc[row_selector])
alt_expressions.loc[row_selector,"value"] = alt_expressions.loc[row_selector,"value"] + adjustment_factor
alt_expressions.to_csv("example_base_auto_own_alternative/configs/auto_ownership_coefficients.csv")
print(alt_expressions.loc[row_selector])
Explanation: Modify the alternative scenario
Reduce young adult car ownership coefficient to simulate the idea of less car ownership among young adults
End of explanation
%cd example_base_auto_own_alternative
!activitysim run -c configs -d data -o output
#return to root folder
%cd ..
Explanation: Re-run alternative example
End of explanation
import matplotlib.pyplot as plt
#read and summarize data
hh_base = pd.read_csv("example_base_auto_own/output/final_households.csv")
hh_alt = pd.read_csv("example_base_auto_own_alternative/output/final_households.csv")
autos_base = hh_base["auto_ownership"].value_counts()
autos_alt = hh_alt["auto_ownership"].value_counts()
#create plot
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15,10)
plt.bar(x=autos_base.index - 0.15, height=autos_base.values, width=0.25, label="base", color="lightseagreen")
plt.bar(x=autos_alt.index + 0.15, height=autos_alt.values, width=0.25, label="alt", color="dodgerblue")
plt.title('Auto Ownership By Household')
plt.ylabel('Number of Households')
plt.legend()
plt.xticks(autos_base.index.values, autos_alt.index.values)
_ = plt.xlabel('Number of Vehicles')
Explanation: Compare Results
Plot the difference in household auto ownership. For additional summaries for downstream models, see the Summarizing Results notebook.
End of explanation |
10,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Kepler Hack
Step10: Here's the completeness model to apply to Q1—Q17 catalog
Step11: And a function for estimating the occurrence rate (assumed constant) in a bin in $T_\mathrm{eff}$ and period
Step12: G-dwarfs
Step13: M-dwarfs | Python Code:
import os
import requests
import numpy as np
import pandas as pd
from io import BytesIO # Python 3 only!
import matplotlib.pyplot as pl
def get_catalog(name, basepath="data"):
"""Download a catalog from the Exoplanet Archive by name and save it as a
Pandas HDF5 file.
:param name: the table name
:param basepath: the directory where the downloaded files should be saved
(default: ``data`` in the current working directory)
"""
fn = os.path.join(basepath, "{0}.h5".format(name))
if os.path.exists(fn):
return pd.read_hdf(fn, name)
if not os.path.exists(basepath):
os.makedirs(basepath)
print("Downloading {0}...".format(name))
url = ("http://exoplanetarchive.ipac.caltech.edu/cgi-bin/nstedAPI/"
"nph-nstedAPI?table={0}&select=*").format(name)
r = requests.get(url)
if r.status_code != requests.codes.ok:
r.raise_for_status()
fh = BytesIO(r.content)
df = pd.read_csv(fh)
df.to_hdf(fn, name, format="t")
return df
Explanation: Kepler Hack: Q1–Q17 Occurrence Rate Calculation
By: Dan Foreman-Mackey
This is a version of a blog post I wrote updated for the most recent Kepler data release. The main change from Q1–Q16 is that the completeness model has changed. The main changes are:
the MES threshold should be set to 15
the matched filter was no longer a box. Therefore the "depth" relevant for the completeness should be the minimum, not the mean.
First, a helper function for downloading data from the Exoplanet Archive:
End of explanation
def get_duration(period, aor, e):
"""Equation (1) from Burke et al. This estimates the transit
duration in the same units as the input period. There is a
typo in the paper (24/4 = 6 != 4).
:param period: the period in any units of your choosing
:param aor: the dimensionless semi-major axis (scaled
by the stellar radius)
:param e: the eccentricity of the orbit
"""
return 0.25 * period * np.sqrt(1 - e**2) / aor
def get_a(period, mstar, Go4pi=2945.4625385377644/(4*np.pi*np.pi)):
"""Compute the semi-major axis of an orbit in Solar radii.
:param period: the period in days
:param mstar: the stellar mass in Solar masses
"""
return (Go4pi*period*period*mstar) ** (1./3)
def get_delta(k, c=1.0874, s=1.0187, mean=False):
"""Estimate the approximate expected transit depth as a function
of radius ratio. There might be a typo here. In the paper it
uses c + s*k but in the public code, it is c - s*k:
https://github.com/christopherburke/KeplerPORTs
:param k: the dimensionless radius ratio between the planet and
the star
"""
delta_max = k*k * (c + s*k)
if mean:
return 0.84 * delta_max
return delta_max
cdpp_cols = [k for k in get_catalog("q1_q17_dr24_stellar").keys() if k.startswith("rrmscdpp")]
cdpp_vals = np.array([k[-4:].replace("p", ".") for k in cdpp_cols], dtype=float)
def get_mes(star, period, rp, tau, re=0.009171, mean=False):
"""Estimate the multiple event statistic value for a transit.
:param star: a pandas row giving the stellar properties
:param period: the period in days
:param rp: the planet radius in Earth radii
:param tau: the transit duration in hours
"""
# Interpolate to the correct CDPP for the duration.
cdpp = np.array(star[cdpp_cols], dtype=float)
sigma = np.interp(tau, cdpp_vals, cdpp)
# Compute the radius ratio and estimate the S/N.
k = rp * re / star.radius
snr = get_delta(k, mean=mean) * 1e6 / sigma
# Scale by the estimated number of transits.
ntrn = star.dataspan * star.dutycycle / period
return snr * np.sqrt(ntrn)
# Detection threshold: the Q1-Q17 DR24 analysis uses a multiple event
# statistic (MES) cutoff of 15 (see the note above).
mesthresh = 15
def get_pdet(star, aor, period, rp, e, comp_p, mean=False):
"""Equation (5) from Burke et al. Estimate the detection efficiency
for a transit.
:param star: a pandas row giving the stellar properties
:param aor: the dimensionless semi-major axis (scaled
by the stellar radius)
:param period: the period in days
:param rp: the planet radius in Earth radii
:param e: the orbital eccentricity
"""
tau = get_duration(period, aor, e) * 24.
mes = get_mes(star, period, rp, tau, mean=mean)
y = np.polyval(comp_p, mes) / (1 + np.exp(-2.0*(mes-mesthresh)))
return y * (y <= 1.0) + 1.0 * (y > 1.0)
def get_pwin(star, period):
"""Equation (6) from Burke et al. Estimates the window function
using a binomial distribution.
:param star: a pandas row giving the stellar properties
:param period: the period in days
"""
M = star.dataspan / period
f = star.dutycycle
omf = 1.0 - f
pw = 1 - omf**M - M*f*omf**(M-1) - 0.5*M*(M-1)*f*f*omf**(M-2)
msk = (pw >= 0.0) * (M >= 2.0)
return pw * msk
def get_pgeom(aor, e):
"""The geometric transit probability.
See e.g. Kipping (2014) for the eccentricity factor
http://arxiv.org/abs/1408.1393
:param aor: the dimensionless semi-major axis (scaled
by the stellar radius)
:param e: the orbital eccentricity
"""
return 1. / (aor * (1 - e*e)) * (aor > 1.0)
def get_completeness(star, period, rp, e, comp_p, with_geom=True, mean=False):
"""A helper function to combine all the completeness effects.
:param star: a pandas row giving the stellar properties
:param period: the period in days
:param rp: the planet radius in Earth radii
:param e: the orbital eccentricity
:param with_geom: include the geometric transit probability?
"""
aor = get_a(period, star.mass) / star.radius
pdet = get_pdet(star, aor, period, rp, e, comp_p, mean=mean)
pwin = get_pwin(star, period)
if not with_geom:
return pdet * pwin
pgeom = get_pgeom(aor, e)
return pdet * pwin * pgeom
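# Added illustration (not from the original post): given a stellar catalog row
# `star` and the detection-efficiency fit `comp_p` produced in run_analysis()
# below, the completeness could be evaluated on a period grid for a fixed
# planet radius, for example:
#   periods = np.linspace(10.0, 300.0, 500)
#   comp = get_completeness(star, periods, 2.0, 0.0, comp_p, with_geom=False)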
Explanation: Here's the completeness model to apply to Q1—Q17 catalog:
End of explanation
def run_analysis(trng, period_rng):
stlr = get_catalog("q1_q17_dr24_stellar")
# Select the stars.
m = np.isfinite(stlr.teff) & (trng[0] <= stlr.teff) & (stlr.teff < trng[1])
m &= np.isfinite(stlr.logg) & (4.0 <= stlr.logg)
# Only include stars with sufficient data coverage.
m &= (stlr.dutycycle * stlr.dataspan) > 2*365.25
m &= stlr.dutycycle > 0.33
# Only select stars with mass estimates.
m &= np.isfinite(stlr.mass)
stlr = pd.DataFrame(stlr[m])
print("Selected {0} targets after cuts".format(len(stlr)))
# KOI catalog.
kois = get_catalog("q1_q17_dr24_koi")
# Select candidates.
rp_rng = (1.5, 2.3)
# Join on the stellar list.
kois = pd.merge(kois, stlr[["kepid", "teff", "radius"]], on="kepid", how="inner")
# Only select the KOIs in the relevant part of parameter space.
m = kois.koi_pdisposition == "CANDIDATE"
base_kois = pd.DataFrame(kois[m])
m &= (period_rng[0] <= kois.koi_period) & (kois.koi_period < period_rng[1])
m &= np.isfinite(kois.koi_prad) & (rp_rng[0] <= kois.koi_prad) & (kois.koi_prad < rp_rng[1])
m &= np.isfinite(kois.koi_max_mult_ev) & (kois.koi_max_mult_ev > 15.0)
kois = pd.DataFrame(kois[m])
print("Selected {0} KOIs after cuts".format(len(kois)))
# Calibrate the completeness.
inj = pd.read_csv("data/DR24-Pipeline-Detection-Efficiency-Table.txt", delim_whitespace=True,
skiprows=4, header=None, names=[
"kepid", "sky", "period", "epoch", "t_depth", "t_dur", "t_b", "t_ror", "t_aor",
"offset_from_source", "offset_distance", "expect_mes", "recovered", "meas_mes",
"r_period", "r_epoch", "r_depth", "r_dur", "r_b", "r_ror", "r_aor"
], na_values="null")
# Join on the stellar list.
inj = pd.merge(inj, stlr[["kepid"]], on="kepid", how="inner")
# Estimate the linear trend above 15 MES.
bins = np.linspace(mesthresh, 80, 20)
n_tot, _ = np.histogram(inj.expect_mes, bins)
m = inj.meas_mes > mesthresh
# m = inj.expect_mes > mesthresh
m &= inj.recovered
n_rec, _ = np.histogram(inj.expect_mes[m], bins)
x = 0.5 * (bins[:-1] + bins[1:])
y = n_rec / n_tot
m = np.isfinite(y)
x, y = x[m], y[m]
pl.figure()
comp_p = np.polyfit(x, y, 1)
pl.plot(x, y)
x0 = np.linspace(0, 80, 500)
pl.plot(x0, np.polyval(comp_p, x0) / (1 + np.exp(-2*(x0-mesthresh))))
pl.xlabel("expected MES");
# Compute the mean completeness.
print("Computing mean completeness...")
p = np.exp(np.random.uniform(np.log(period_rng[0]), np.log(period_rng[1]), 5000))
r = np.exp(np.random.uniform(np.log(rp_rng[0]), np.log(rp_rng[1]), len(p)))
c = np.zeros(len(p))
for _, star in stlr.iterrows():
c += get_completeness(star, p, r, 0.0, comp_p, with_geom=True)
# Compute occurrence rate.
Q = np.mean(c)
N = len(kois)
occ = N / Q
sig = occ / np.sqrt(N)
print("{0:.3} ± {1:.3}".format(occ, sig))
return occ, sig, N, Q, comp_p
Explanation: And a function for estimating the occurrence rate (assumed constant) in a bin in $T_\mathrm{eff}$ and period:
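In other words (an added note), the estimate below is $f \approx N_\mathrm{det} / Q$, where $Q$ is the Monte Carlo average, over the $(P, R_p)$ bin, of the per-star completeness (detection efficiency times window function times geometric transit probability) summed over the selected stars, and the quoted uncertainty is the Poisson term $f / \sqrt{N_\mathrm{det}}$.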
End of explanation
run_analysis((5300.0, 6000.0), (40, 80))
Explanation: G-dwarfs:
End of explanation
run_analysis((2400.0, 3900.0), (20, 40))
Explanation: M-dwarfs:
End of explanation |
10,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Deterministic Policy Gradient (DDPG)
Author
Step1: We use OpenAIGym to create the environment.
We will use the upper_bound parameter to scale our actions later.
Step2: To implement better exploration by the Actor network, we use noisy perturbations,
specifically
an Ornstein-Uhlenbeck process for generating noise, as described in the paper.
It samples noise from a correlated normal distribution.
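Concretely (added for reference), the discretized Ornstein-Uhlenbeck update typically used in such implementations is $x_{t+1} = x_t + \theta (\mu - x_t)\,dt + \sigma \sqrt{dt}\,\mathcal{N}(0, 1)$, which pulls the noise back toward the mean $\mu$ while keeping successive samples correlated.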
Step3: The Buffer class implements Experience Replay.
Critic loss - Mean Squared Error of y - Q(s, a)
where y is the expected return as seen by the Target network,
and Q(s, a) is the action value predicted by the Critic network. y is a moving target
that the critic model tries to achieve; we make this target
stable by updating the Target model slowly.
Actor loss - This is computed using the mean of the value given by the Critic network
for the actions taken by the Actor network. We seek to maximize this quantity.
Hence we update the Actor network so that it produces actions that get
the maximum predicted value as seen by the Critic, for a given state.
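Written out (added for clarity), with target networks $Q'$ and $\mu'$ the moving target is $y = r + \gamma\, Q'(s', \mu'(s'))$; the Critic minimizes the batch mean of $\big(y - Q(s, a)\big)^2$, while the Actor's loss is $-\,\mathrm{mean}\big[Q(s, \mu(s))\big]$, so gradient descent on it maximizes the predicted value.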
Step4: Here we define the Actor and Critic networks. These are basic Dense models
with ReLU activation.
Note
Step5: policy() returns an action sampled from our Actor network plus some noise for
exploration.
Step6: Training hyperparameters
Step7: Now we implement our main training loop, and iterate over episodes.
We sample actions using policy() and train with learn() at each time step,
along with updating the Target networks at a rate tau.
Step8: If training proceeds correctly, the average episodic reward will increase with time.
Feel free to try different learning rates, tau values, and architectures for the
Actor and Critic networks.
The Inverted Pendulum problem has low complexity, but DDPG works great on many other
problems.
Another great environment to try this on is LunarLanderContinuous-v2, but it will take
more episodes to obtain good results. | Python Code:
import gym
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
Explanation: Deep Deterministic Policy Gradient (DDPG)
Author: amifunny<br>
Date created: 2020/06/04<br>
Last modified: 2020/09/21<br>
Description: Implementing DDPG algorithm on the Inverted Pendulum Problem.
Introduction
Deep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for
learning continuous actions.
It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network).
It uses Experience Replay and slow-learning target networks from DQN, and it is based on
DPG,
which can operate over continuous action spaces.
This tutorial closely follows this paper -
Continuous control with deep reinforcement learning
Problem
We are trying to solve the classic Inverted Pendulum control problem.
In this setting, we can take only two actions: swing left or swing right.
What makes this problem challenging for Q-Learning Algorithms is that actions
are continuous instead of being discrete. That is, instead of using two
discrete actions like -1 or +1, we have to select from infinite actions
ranging from -2 to +2.
Quick theory
Just like the Actor-Critic method, we have two networks:
Actor - It proposes an action given a state.
Critic - It predicts if the action is good (positive value) or bad (negative value)
given a state and an action.
DDPG uses two more techniques not present in the original DQN:
First, it uses two Target networks.
Why? Because it adds stability to training. In short, we are learning from estimated
targets and Target networks are updated slowly, hence keeping our estimated targets
stable.
Conceptually, this is like saying, "I have an idea of how to play this well,
I'm going to try it out for a bit until I find something better",
as opposed to saying "I'm going to re-learn how to play this entire game after every
move".
See this StackOverflow answer.
Second, it uses Experience Replay.
We store list of tuples (state, action, reward, next_state), and instead of
learning only from recent experience, we learn from sampling all of our experience
accumulated so far.
Now, let's see how it is implemented.
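For reference, the "slow" target updates mentioned above follow the usual soft-update rule, theta_target <- tau * theta + (1 - tau) * theta_target with tau much less than 1. A minimal sketch of the idea (illustrative only; the tutorial's own update_target function below does the same thing on TensorFlow variables):
def soft_update(target_weights, online_weights, tau=0.005):
    # blend a small fraction of the online weights into the target weights
    return [tau * w + (1.0 - tau) * t for t, w in zip(target_weights, online_weights)]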
End of explanation
problem = "Pendulum-v0"
env = gym.make(problem)
num_states = env.observation_space.shape[0]
print("Size of State Space -> {}".format(num_states))
num_actions = env.action_space.shape[0]
print("Size of Action Space -> {}".format(num_actions))
upper_bound = env.action_space.high[0]
lower_bound = env.action_space.low[0]
print("Max Value of Action -> {}".format(upper_bound))
print("Min Value of Action -> {}".format(lower_bound))
Explanation: We use OpenAIGym to create the environment.
We will use the upper_bound parameter to scale our actions later.
End of explanation
class OUActionNoise:
def __init__(self, mean, std_deviation, theta=0.15, dt=1e-2, x_initial=None):
self.theta = theta
self.mean = mean
self.std_dev = std_deviation
self.dt = dt
self.x_initial = x_initial
self.reset()
def __call__(self):
# Formula taken from https://www.wikipedia.org/wiki/Ornstein-Uhlenbeck_process.
x = (
self.x_prev
+ self.theta * (self.mean - self.x_prev) * self.dt
+ self.std_dev * np.sqrt(self.dt) * np.random.normal(size=self.mean.shape)
)
# Store x into x_prev
# Makes next noise dependent on current one
self.x_prev = x
return x
def reset(self):
if self.x_initial is not None:
self.x_prev = self.x_initial
else:
self.x_prev = np.zeros_like(self.mean)
Explanation: To implement better exploration by the Actor network, we use noisy perturbations,
specifically
an Ornstein-Uhlenbeck process for generating noise, as described in the paper.
It samples noise from a correlated normal distribution.
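As a quick sanity check of the class above, consecutive samples should drift smoothly rather than jump around independently (illustrative only):
ou = OUActionNoise(mean=np.zeros(1), std_deviation=0.2 * np.ones(1))
print([float(ou()) for _ in range(5)])   # temporally correlated noise values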
End of explanation
class Buffer:
def __init__(self, buffer_capacity=100000, batch_size=64):
# Number of "experiences" to store at max
self.buffer_capacity = buffer_capacity
# Num of tuples to train on.
self.batch_size = batch_size
# It tells us the number of times record() was called.
self.buffer_counter = 0
# Instead of list of tuples as the exp.replay concept go
# We use different np.arrays for each tuple element
self.state_buffer = np.zeros((self.buffer_capacity, num_states))
self.action_buffer = np.zeros((self.buffer_capacity, num_actions))
self.reward_buffer = np.zeros((self.buffer_capacity, 1))
self.next_state_buffer = np.zeros((self.buffer_capacity, num_states))
# Takes (s,a,r,s') observation tuple as input
def record(self, obs_tuple):
# Set index to zero if buffer_capacity is exceeded,
# replacing old records
index = self.buffer_counter % self.buffer_capacity
self.state_buffer[index] = obs_tuple[0]
self.action_buffer[index] = obs_tuple[1]
self.reward_buffer[index] = obs_tuple[2]
self.next_state_buffer[index] = obs_tuple[3]
self.buffer_counter += 1
# Eager execution is turned on by default in TensorFlow 2. Decorating with tf.function allows
# TensorFlow to build a static graph out of the logic and computations in our function.
# This provides a large speed up for blocks of code that contain many small TensorFlow operations such as this one.
@tf.function
def update(
self, state_batch, action_batch, reward_batch, next_state_batch,
):
# Training and updating Actor & Critic networks.
# See Pseudo Code.
with tf.GradientTape() as tape:
target_actions = target_actor(next_state_batch, training=True)
y = reward_batch + gamma * target_critic(
[next_state_batch, target_actions], training=True
)
critic_value = critic_model([state_batch, action_batch], training=True)
critic_loss = tf.math.reduce_mean(tf.math.square(y - critic_value))
critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
critic_optimizer.apply_gradients(
zip(critic_grad, critic_model.trainable_variables)
)
with tf.GradientTape() as tape:
actions = actor_model(state_batch, training=True)
critic_value = critic_model([state_batch, actions], training=True)
# Used `-value` as we want to maximize the value given
# by the critic for our actions
actor_loss = -tf.math.reduce_mean(critic_value)
actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
actor_optimizer.apply_gradients(
zip(actor_grad, actor_model.trainable_variables)
)
# We compute the loss and update parameters
def learn(self):
# Get sampling range
record_range = min(self.buffer_counter, self.buffer_capacity)
# Randomly sample indices
batch_indices = np.random.choice(record_range, self.batch_size)
# Convert to tensors
state_batch = tf.convert_to_tensor(self.state_buffer[batch_indices])
action_batch = tf.convert_to_tensor(self.action_buffer[batch_indices])
reward_batch = tf.convert_to_tensor(self.reward_buffer[batch_indices])
reward_batch = tf.cast(reward_batch, dtype=tf.float32)
next_state_batch = tf.convert_to_tensor(self.next_state_buffer[batch_indices])
self.update(state_batch, action_batch, reward_batch, next_state_batch)
# This updates target parameters slowly
# Based on rate `tau`, which is much less than one.
@tf.function
def update_target(target_weights, weights, tau):
for (a, b) in zip(target_weights, weights):
a.assign(b * tau + a * (1 - tau))
Explanation: The Buffer class implements Experience Replay.
Critic loss - Mean Squared Error of y - Q(s, a)
where y is the expected return as seen by the Target network,
and Q(s, a) is action value predicted by the Critic network. y is a moving target
that the critic model tries to achieve; we make this target
stable by updating the Target model slowly.
Actor loss - This is computed using the mean of the value given by the Critic network
for the actions taken by the Actor network. We seek to maximize this quantity.
Hence we update the Actor network so that it produces actions that get
the maximum predicted value as seen by the Critic, for a given state.
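In compact form, the two losses implemented in update() above are:
# y           = r + gamma * Q_target(s', mu_target(s'))    (Bellman target from the Target networks)
# critic_loss = mean( (y - Q(s, a))^2 )                    (mean squared error)
# actor_loss  = -mean( Q(s, mu(s)) )                       (negated so minimizing it maximizes the critic value)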
End of explanation
def get_actor():
# Initialize weights between -3e-3 and 3-e3
last_init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003)
inputs = layers.Input(shape=(num_states,))
out = layers.Dense(256, activation="relu")(inputs)
out = layers.Dense(256, activation="relu")(out)
outputs = layers.Dense(1, activation="tanh", kernel_initializer=last_init)(out)
# Our upper bound is 2.0 for Pendulum.
outputs = outputs * upper_bound
model = tf.keras.Model(inputs, outputs)
return model
def get_critic():
# State as input
state_input = layers.Input(shape=(num_states))
state_out = layers.Dense(16, activation="relu")(state_input)
state_out = layers.Dense(32, activation="relu")(state_out)
# Action as input
action_input = layers.Input(shape=(num_actions))
action_out = layers.Dense(32, activation="relu")(action_input)
# Both are passed through separate layers before concatenating
concat = layers.Concatenate()([state_out, action_out])
out = layers.Dense(256, activation="relu")(concat)
out = layers.Dense(256, activation="relu")(out)
outputs = layers.Dense(1)(out)
# Outputs a single value for a given state-action pair
model = tf.keras.Model([state_input, action_input], outputs)
return model
Explanation: Here we define the Actor and Critic networks. These are basic Dense models
with ReLU activation.
Note: We need the initialization for last layer of the Actor to be between
-0.003 and 0.003 as this prevents us from getting 1 or -1 output values in
the initial stages, which would squash our gradients to zero,
as we use the tanh activation.
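A quick numeric illustration of that saturation issue: the gradient of tanh(x) is 1 - tanh(x)^2, which is close to 1 near zero but almost vanishes once the pre-activation is large:
print(1 - np.tanh(0.003) ** 2)   # ~1.0, healthy gradient
print(1 - np.tanh(3.0) ** 2)     # ~0.01, gradient nearly squashed to zero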
End of explanation
def policy(state, noise_object):
sampled_actions = tf.squeeze(actor_model(state))
noise = noise_object()
# Adding noise to action
sampled_actions = sampled_actions.numpy() + noise
# We make sure action is within bounds
legal_action = np.clip(sampled_actions, lower_bound, upper_bound)
return [np.squeeze(legal_action)]
Explanation: policy() returns an action sampled from our Actor network plus some noise for
exploration.
End of explanation
std_dev = 0.2
ou_noise = OUActionNoise(mean=np.zeros(1), std_deviation=float(std_dev) * np.ones(1))
actor_model = get_actor()
critic_model = get_critic()
target_actor = get_actor()
target_critic = get_critic()
# Making the weights equal initially
target_actor.set_weights(actor_model.get_weights())
target_critic.set_weights(critic_model.get_weights())
# Learning rate for actor-critic models
critic_lr = 0.002
actor_lr = 0.001
critic_optimizer = tf.keras.optimizers.Adam(critic_lr)
actor_optimizer = tf.keras.optimizers.Adam(actor_lr)
total_episodes = 100
# Discount factor for future rewards
gamma = 0.99
# Used to update target networks
tau = 0.005
buffer = Buffer(50000, 64)
Explanation: Training hyperparameters
End of explanation
# To store reward history of each episode
ep_reward_list = []
# To store average reward history of last few episodes
avg_reward_list = []
# Takes about 4 min to train
for ep in range(total_episodes):
prev_state = env.reset()
episodic_reward = 0
while True:
# Uncomment this to see the Actor in action
# But not in a python notebook.
# env.render()
tf_prev_state = tf.expand_dims(tf.convert_to_tensor(prev_state), 0)
action = policy(tf_prev_state, ou_noise)
# Receive state and reward from environment.
state, reward, done, info = env.step(action)
buffer.record((prev_state, action, reward, state))
episodic_reward += reward
buffer.learn()
update_target(target_actor.variables, actor_model.variables, tau)
update_target(target_critic.variables, critic_model.variables, tau)
# End this episode when `done` is True
if done:
break
prev_state = state
ep_reward_list.append(episodic_reward)
# Mean of last 40 episodes
avg_reward = np.mean(ep_reward_list[-40:])
print("Episode * {} * Avg Reward is ==> {}".format(ep, avg_reward))
avg_reward_list.append(avg_reward)
# Plotting graph
# Episodes versus Avg. Rewards
plt.plot(avg_reward_list)
plt.xlabel("Episode")
plt.ylabel("Avg. Episodic Reward")
plt.show()
Explanation: Now we implement our main training loop, and iterate over episodes.
We sample actions using policy() and train with learn() at each time step,
along with updating the Target networks at a rate tau.
End of explanation
# Save the weights
actor_model.save_weights("pendulum_actor.h5")
critic_model.save_weights("pendulum_critic.h5")
target_actor.save_weights("pendulum_target_actor.h5")
target_critic.save_weights("pendulum_target_critic.h5")
Explanation: If training proceeds correctly, the average episodic reward will increase with time.
Feel free to try different learning rates, tau values, and architectures for the
Actor and Critic networks.
The Inverted Pendulum problem has low complexity, but DDPG works great on many other
problems.
Another great environment to try this on is LunarLanderContinuous-v2, but it will take
more episodes to obtain good results.
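To reuse the saved weights later for greedy evaluation (no exploration noise), something along these lines should work — shown here only as a sketch:
eval_actor = get_actor()
eval_actor.load_weights("pendulum_actor.h5")
state = env.reset()
greedy_action = eval_actor(tf.expand_dims(tf.convert_to_tensor(state), 0))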
End of explanation |
10,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples were built using the development version from https
Step1: 2_ Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner
Step2: Now that, we've got your attention, let's start the tutorial !
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted on the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https
Step3: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
Step4: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function
Step5: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
Step6: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set fields necessary to send the packet. This fields can of course be accessed and displayed.
Step7: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
Step8: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
Step9: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and returns the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets, send() is your friend.
As an example, we can use the DNS protocol to get www.example.com IPv4 address.
Step10: Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.
Step11: sr() sent a list of packets, and returns two variables, here r and u, where
Step12: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
Step13: Sniffing the network is a straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.
Step14: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
Step15: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns an UDP socket to a Scapy StreamSocket, which is then used to query www.example.com IPv4 address.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
Step16: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
Step17: Then we can use the results to plot the IP id values.
Step18: The raw() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
Step19: Since some people cannot read this representation, Scapy can
Step20: "hexdump" the packet's bytes
Step21: dump the packet, layer by layer, with the values for each field
Step22: render a pretty and handy dissection of the packet
Step23: Scapy has a traceroute() function, which basically runs a sr(IP(ttl=(1..30)) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
Step24: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
Step25: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner"
Step26: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two fields
Step27: This new packet definition can be direcly used to build a DNS message over TCP.
Step28: Modifying the previous StreamSocket example to use TCP allows to use the new DNSCTP layer easily.
Step29: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module that can be used to build specific network tools, such as ping6.py
Step30: Answering machines
A lot of attack scenarios look the same
Step31: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target than can be used to transfer packets to userland process. As a nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy based MiTM.
This example intercepts ICMP Echo request messages sent to 8.8.8.8, sent with the ping command, and modify their sequence numbers. In order to pass packets to Scapy, the following iptable command put packets into the NFQUEUE #2807
Step32: Automaton
When more logic is needed, Scapy provides a clever way abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton, and implement specific methods
Step33: Pipes
Pipes are an advanced Scapy feature that aims sniffing, modifying and printing packets. The API provides several buildings blocks. All of them, have high entries and exits (>>) as well as low (>) ones.
For example, the CLIFeeder is used to send messages from the Python command line to a low exit. It can be combined with the InjectSink, which reads messages on its low entry and injects them on the specified network interface. These blocks can be combined as follows
Step34: Packet can be sent using the following command on the prompt | Python Code:
send(IP(dst="1.2.3.4")/TCP(dport=502, options=[("MSS", 0)]))
Explanation: Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples were built using the development version from https://github.com/secdev/scapy, and tested on Linux. They should work as well on OS X, and other BSD.
The current documentation is available on http://scapy.readthedocs.io/ !
Scapy eases network packet manipulation, and allows you to forge complicated packets to perform advanced tests. As a teaser, let's have a look at two examples that are difficult to express without Scapy:
1_ Sending a TCP segment with maximum segment size set to 0 to a specific port is an interesting test to perform against embedded TCP stacks. It can be achieved with the following one-liner:
End of explanation
ans = sr([IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]
ans.make_table(lambda x, y: (", ".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf("%IP.src% %ICMP.type%")))
Explanation: 2_ Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner:
End of explanation
from scapy.all import *
Explanation: Now that we've got your attention, let's start the tutorial !
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted on the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https://github.com/secdev/scapy --depth=1
sudo ./run_scapy
Welcome to Scapy (2.4.0)
```
Note: iPython users must import scapy as follows
End of explanation
packet = IP()/TCP()
Ether()/packet
Explanation: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
End of explanation
>>> ls(IP, verbose=True)
version : BitField (4 bits) = (4)
ihl : BitField (4 bits) = (None)
tos : XByteField = (0)
len : ShortField = (None)
id : ShortField = (1)
flags : FlagsField (3 bits) = (0)
MF, DF, evil
frag : BitField (13 bits) = (0)
ttl : ByteField = (64)
proto : ByteEnumField = (0)
chksum : XShortField = (None)
src : SourceIPField (Emph) = (None)
dst : DestIPField (Emph) = (None)
options : PacketListField = ([])
Explanation: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function:
End of explanation
p = Ether()/IP(dst="www.secdev.org")/TCP()
p.summary()
Explanation: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
End of explanation
print(p.dst) # first layer that has an src field, here Ether
print(p[IP].src) # explicitly access the src field of the IP layer
# sprintf() is a useful method to display fields
print(p.sprintf("%Ether.src% > %Ether.dst%\n%IP.src% > %IP.dst%"))
Explanation: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set the fields necessary to send the packet. These fields can of course be accessed and displayed.
End of explanation
print(p.sprintf("%TCP.flags% %TCP.dport%"))
Explanation: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
End of explanation
[p for p in IP(ttl=(1,5))/ICMP()]
Explanation: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
End of explanation
p = sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR()))
p[DNS].an
Explanation: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and returns the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets, send() is your friend.
As an example, we can use the DNS protocol to get www.example.com IPv4 address.
End of explanation
r, u = srp(Ether()/IP(dst="8.8.8.8", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname="www.example.com")))
r, u
Explanation: Another alternative is the sr() function. Like srp1(), the srp() function can be used for layer 2 packets.
End of explanation
# Access the first tuple
print(r[0][0].summary()) # the packet sent
print(r[0][1].summary()) # the answer received
# Access the ICMP layer. Scapy received a time-exceeded error message
r[0][1][ICMP]
Explanation: sr() sends a list of packets, and returns two variables, here r and u, where:
1. r is a list of results (i.e tuples of the packet sent and its answer)
2. u is a list of unanswered packets
End of explanation
wrpcap("scapy.pcap", r)
pcap_p = rdpcap("scapy.pcap")
pcap_p[0]
Explanation: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
End of explanation
s = sniff(count=2)
s
Explanation: Sniffing the network is as straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets that can be manipulated as previously described.
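sniff() also accepts a BPF filter to restrict what is captured; for example, the following one-liner only keeps ICMP traffic to or from 8.8.8.8:
sniff(count=3, filter="icmp and host 8.8.8.8", prn=lambda p: p.summary())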
End of explanation
sniff(count=2, prn=lambda p: p.summary())
Explanation: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
End of explanation
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create an UDP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/UDP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNS
# Send the DNS query
ssck.sr1(DNS(rd=1, qd=DNSQR(qname="www.example.com")))
Explanation: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns an UDP socket to a Scapy StreamSocket, which is then used to query www.example.com IPv4 address.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
End of explanation
ans, unans = srloop(IP(dst=["8.8.8.8", "8.8.4.4"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)
Explanation: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
End of explanation
%matplotlib inline
ans.multiplot(lambda x, y: (y[IP].src, (y.time, y[IP].id)), plot_xy=True)
Explanation: Then we can use the results to plot the IP id values.
End of explanation
pkt = IP() / UDP() / DNS(qd=DNSQR())
print(repr(raw(pkt)))
Explanation: The raw() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
End of explanation
print(pkt.summary())
Explanation: Since some people cannot read this representation, Scapy can:
- give a summary for a packet
End of explanation
hexdump(pkt)
Explanation: "hexdump" the packet's bytes
End of explanation
pkt.show()
Explanation: dump the packet, layer by layer, with the values for each field
End of explanation
pkt.canvas_dump()
Explanation: render a pretty and handy dissection of the packet
End of explanation
ans, unans = traceroute('www.secdev.org', maxttl=15)
Explanation: Scapy has a traceroute() function, which basically runs a sr(IP(ttl=(1..30)) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
End of explanation
ans.world_trace()
Explanation: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
End of explanation
ans = sr(IP(dst=["scanme.nmap.org", "nmap.org"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]
ans.extend(sr(IP(dst=["scanme.nmap.org", "nmap.org"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])
ans.make_table(lambda x, y: (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))
Explanation: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner":
End of explanation
class DNSTCP(Packet):
name = "DNS over TCP"
fields_desc = [ FieldLenField("len", None, fmt="!H", length_of="dns"),
PacketLenField("dns", 0, DNS, length_from=lambda p: p.len)]
# This method tells Scapy that the next packet must be decoded with DNSTCP
def guess_payload_class(self, payload):
return DNSTCP
Explanation: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two fields: the length, and the real DNS message. The length_of and length_from arguments link the len and dns fields together. Scapy will be able to automatically compute the len value.
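For example, show2() (which rebuilds the packet before displaying it) should reveal the len field filled in automatically from the size of the DNS payload:
DNSTCP(dns=DNS(qd=DNSQR())).show2()   # len is computed, not hand-written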
End of explanation
# Build then decode a DNS message over TCP
DNSTCP(raw(DNSTCP(dns=DNS())))
Explanation: This new packet definition can be directly used to build a DNS message over TCP.
End of explanation
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create an TCP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/TCP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNSTCP
# Send the DNS query
ssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname="www.example.com"))))
Explanation: Modifying the previous StreamSocket example to use TCP allows us to use the new DNSTCP layer easily.
End of explanation
from scapy.all import *
import argparse
parser = argparse.ArgumentParser(description="A simple ping6")
parser.add_argument("ipv6_address", help="An IPv6 address")
args = parser.parse_args()
print(sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary())
Explanation: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module that can be used to build specific network tools, such as ping6.py:
End of explanation
# Specify the Wi-Fi monitor interface
#conf.iface = "mon0" # uncomment to test
# Create an answering machine
class ProbeRequest_am(AnsweringMachine):
function_name = "pram"
# The fake mac of the fake access point
mac = "00:11:22:33:44:55"
def is_request(self, pkt):
return Dot11ProbeReq in pkt
def make_reply(self, req):
rep = RadioTap()
# Note: depending on your Wi-Fi card, you might need a different header than RadioTap()
rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())
rep /= Dot11ProbeResp(cap="ESS", timestamp=time.time())
rep /= Dot11Elt(ID="SSID",info="Scapy !")
rep /= Dot11Elt(ID="Rates",info=b'\x82\x84\x0b\x16\x96')
rep /= Dot11Elt(ID="DSset",info=chr(10))
return rep
# Start the answering machine
#ProbeRequest_am()() # uncomment to test
Explanation: Answering machines
A lot of attack scenarios look the same: you want to wait for a specific packet, then send an answer to trigger the attack.
To this extent, Scapy provides the AnsweringMachine object. Two methods are especially useful:
1. is_request(): return True if the pkt is the expected request
2. make_reply(): return the packet that must be sent
The following example uses Scapy Wi-Fi capabilities to pretend that a "Scapy !" access point exists.
Note: your Wi-Fi interface must be set to monitor mode !
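The same two-method pattern works on wired interfaces as well. As a smaller, hypothetical sketch (not part of the original notebook), an ICMP echo responder only needs is_request() and make_reply():
class ICMPEcho_am(AnsweringMachine):
    function_name = "icmpecho"
    def is_request(self, pkt):
        # only answer ICMP echo-requests (type 8)
        return ICMP in pkt and pkt[ICMP].type == 8
    def make_reply(self, req):
        return IP(src=req[IP].dst, dst=req[IP].src) / \
               ICMP(type=0, id=req[ICMP].id, seq=req[ICMP].seq)
# ICMPEcho_am()()  # uncomment to run (requires root privileges)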
End of explanation
from scapy.all import *
import nfqueue, socket
def scapy_cb(i, payload):
s = payload.get_data() # get and parse the packet
p = IP(s)
# Check if the packet is an ICMP Echo Request to 8.8.8.8
if p.dst == "8.8.8.8" and ICMP in p:
# Delete checksums to force Scapy to compute them
del(p[IP].chksum, p[ICMP].chksum)
# Set the ICMP sequence number to 0
p[ICMP].seq = 0
# Let the modified packet go through
ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, raw(p), len(p))
else:
# Accept all packets
payload.set_verdict(nfqueue.NF_ACCEPT)
# Get an NFQUEUE handler
q = nfqueue.queue()
# Set the function that will be call on each received packet
q.set_callback(scapy_cb)
# Open the queue & start parsing packes
q.fast_open(2807, socket.AF_INET)
q.try_run()
Explanation: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target that can be used to transfer packets to a userland process. As an nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy-based MiTM.
This example intercepts ICMP Echo request messages sent to 8.8.8.8 with the ping command, and modifies their sequence numbers. In order to pass packets to Scapy, the following iptables command puts packets into the NFQUEUE #2807:
$ sudo iptables -I OUTPUT --destination 8.8.8.8 -p icmp -o eth0 -j NFQUEUE --queue-num 2807
End of explanation
class TCPScanner(Automaton):
@ATMT.state(initial=1)
def BEGIN(self):
pass
@ATMT.state()
def SYN(self):
print("-> SYN")
@ATMT.state()
def SYN_ACK(self):
print("<- SYN/ACK")
raise self.END()
@ATMT.state()
def RST(self):
print("<- RST")
raise self.END()
@ATMT.state()
def ERROR(self):
print("!! ERROR")
raise self.END()
@ATMT.state(final=1)
def END(self):
pass
@ATMT.condition(BEGIN)
def condition_BEGIN(self):
raise self.SYN()
@ATMT.condition(SYN)
def condition_SYN(self):
if random.randint(0, 1):
raise self.SYN_ACK()
else:
raise self.RST()
@ATMT.timeout(SYN, 1)
def timeout_SYN(self):
raise self.ERROR()
TCPScanner().run()
TCPScanner().run()
Explanation: Automaton
When more logic is needed, Scapy provides a clever abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton, and implement specific methods:
- states: using the @ATMT.state decorator. They usually do nothing
- conditions: using the @ATMT.condition and @ATMT.receive_condition decorators. They describe how to go from one state to another
- actions: using the ATMT.action decorator. They describe what to do, like sending a packet back, when changing state
The following example does nothing more than trying to mimic a TCP scanner:
End of explanation
# Instantiate the blocks
clf = CLIFeeder()
ijs = InjectSink("enx3495db043a28")
# Plug blocks together
clf > ijs
# Create and start the engine
pe = PipeEngine(clf)
pe.start()
Explanation: Pipes
Pipes are an advanced Scapy feature that aims at sniffing, modifying and printing packets. The API provides several building blocks. All of them have high entries and exits (>>) as well as low (>) ones.
For example, the CLIFeeder is used to send messages from the Python command line to a low exit. It can be combined with the InjectSink, which reads messages on its low entry and injects them on the specified network interface. These blocks can be combined as follows:
End of explanation
clf.send("Hello Scapy !")
Explanation: Packet can be sent using the following command on the prompt:
End of explanation |
10,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Aplicaciones de la Matemática en la Ingeniería
Sebastián Flores
https
Step2: 1. kNN
Implementación de kNN en python y numpy
Es posible realizar la siguiente implementación en python y numpy
Step3: 1. kNN
Visualización
Step4: 2. Aplicación al Iris Dataset
Aplicaremos $kNN$ en las $3$ clases conocidas.
<img src="images/iris_petal_sepal.png" alt="" width="300px" align="middle"/>
¿Qué valor de $k$ es razonable tomar?
Step5: 2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, utilizando el algoritmo kNN de la librería sklearn, requerimos un código como el siguiente
Step6: 2. Aplicación al Iris Dataset
¡Wow! ¡El método es perfecto para k=1!
¿o no?
kNN por construcción asigna la etiqueta del vecino más cercano, por lo que el error de entrenamiento para k=1 siempre será 0.
2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, utilizando el algoritmo kNN de la librería sklearn, requerimos utilizar un Holdout SET para evaluar el error de predicción
Step7: 2. Aplicación al Iris Dataset
Debido al problema del overfitting, para seleccionar $k$ estamos obligados a utilizar un holdout set.
De hecho, debemos utilizar 3 conjuntos de datos
Step8: 2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, con conjuntos de entrenamiento, validación y testeo, y utilizando el algoritmo kNN de la librería sklearn, requerimos un código como el siguiente
Step9: 2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, con conjuntos de entrenamiento, validación y testeo, y utilizando el algoritmo kNN de la librería sklearn, requerimos un código como el siguiente
Step10: 2. Aplicación al Iris Dataset
Debido a lo anterior, resulta razonable considerar $k=3$ ó $k=5$, con un error de predicción de $1/30$, es decir, de aproximadamente $0.0333$. | Python Code:
def hamming(s1, s2):
# Caso no comparable
if len(s1)!=len(s2):
print("No comparable")
return None
h = 0
# Caso comparable
for ch1, ch2 in zip(s1,s2):
if ch1!=ch2:
h+= 1
# FIX ME
return h
print hamming("cara", "c")
print hamming("cara", "casa")
print hamming("cera", "cese")
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Aplicaciones de la Matemática en la Ingeniería
Sebastián Flores
https://www.github.com/usantamaria/mat281
Clase anterior
Regresión Logística
* ¿Cómo se llamaba el algoritmo que vimos?
* ¿Cuál era la aproximación ingenieril? ¿Machine Learning? ¿Estadística?
* ¿Cuándo funcionaba y cuándo fallaba?
¿Qué veremos hoy?
Clasificación con k-Nearest Neighbors (kNN).
¿Porqué veremos ese contenido?
Clasificación con k-Nearest Neighbors (kNN).
Porque clasificación en múltiples categorías es un problema muy común.
kNN es el algoritmo más simple que permite clasificar en múltiples categorías y utiliza la noción de distancia/similaridad.
kNN
Algoritmo k Nearest Neighbors es un método no paramétrico: una vez que $k$ se ha fijado, no se busca obtener ningún parámetro.
Sean los puntos $x^{(i)} = (x^{(i)}_1, ..., x^{(i)}_n)$ de etiqueta $y^{(i)}$ conocida, para $i=1, ..., m$.
El problema de clasificación consiste en encontrar la etiqueta de un nuevo punto $x=(x_1, ..., x_m)$ para el cual no conocemos la etiqueta.
kNN
Para $k=1$, 1NN asigna a $x$ la etiqueta de su vecino más cercano.
Para $k$ genérico, kNN asigna a $x$ la etiqueta más popular de los k vecinos más cercanos.
El modelo subyacente a kNN es el conjunto de entrenamiento completo. Cuando se necesita realizar una predicción, el algoritmo mira todos los datos y selecciona los k datos más similares, para regresar la etiqueta más popular. Los datos no se resumen en un parámetro, como en regresión logística, sino que siempre deben mantenerse en memoria.
<img src="images/1.png" alt="" width="600px" align="middle"/>
<img src="images/2.png" alt="" width="600px" align="middle"/>
<img src="images/3a.png" alt="" width="600px" align="middle"/>
<img src="images/3b.png" alt="" width="600px" align="middle"/>
<img src="images/5a.png" alt="" width="600px" align="middle"/>
<img src="images/5b.png" alt="" width="600px" align="middle"/>
kNN
En caso de empate, existen diversas maneras de desempatar:
* Elegir la etiqueta del vecino más cercano (problema: no garantiza solución).
* Elegir la etiqueta de menor valor (problema: arbitrario).
* Elegir la etiqueta que se obtendría con $k+1$ o $k-1$ (problema: no garantiza solución, aumenta tiempo de cálculo).
kNN
Medida de similaridad
¿Cómo medimos la cercanía o similaridad entre los datos?
Depende del tipo de datos.
Para datos reales, puede utilizarse cualquier distancia, siendo la distancia euclidiana la más utilizada. También es posible ponderar unas componentes más que otras. Resulta conveniente normalizar para poder utilizar la noción de distancia más naturalmente.
Para datos categóricos o binarios, suele utilizarse la distancia de Hamming.
kNN
Medida de similaridad
La distancia de Hamming entre 2 strings consiste en el número de posiciones en los cuales los strings son distintos.
End of explanation
import numpy as np
def knn_search(X, k, x):
find K nearest neighbours of data among D
# Distancia euclidiana
d = np.sqrt(((X - x[:,:k])**2).sum(axis=0))
# Ordenar por cercania
idx = np.argsort(d)
# Regresar los k mas cercanos
return idx[:k]
def knn(X,Y,k,x):
# Obtener los k mas cercanos
k_closest = knn_search(X, k, x)
# Obtener las etiquetas
Y_closest = Y[k_closest]
# Obtener la mas popular
counts = np.bincount(Y_closest)
print counts
# Regresar la mas popular (cualquiera, si hay empate)
return np.argmax(counts)
Explanation: 1. kNN
Implementación de kNN en python y numpy
Es posible realizar la siguiente implementación en python y numpy
End of explanation
import numpy as np
from matplotlib import pyplot as plt
X = np.random.rand(2,100) # random dataset
Y = np.array(np.random.rand(100)<0.2, dtype=int) # random dataset
x = np.random.rand(2,1) # query point
# performing the search
k = 20
neig_idx = knn_search(X, k, x)
y = knn(X, Y, k, x)
print "etiqueta=", y
# plotting the data and the input point
fig = plt.figure(figsize=(16,8))
plt.plot(X[0,:][Y==0],X[1,:][Y==0],'ob', ms=8)
plt.plot(X[0,:][Y==1],X[1,:][Y==1],'sr', ms=8)
plt.plot(x[0,0],x[1,0],'ok', ms=16)
# highlighting the neighbours
plt.plot(X[0,neig_idx], X[1,neig_idx], 'o', markerfacecolor='None', markersize=24, markeredgewidth=1)
plt.show()
Explanation: 1. kNN
Visualización
End of explanation
import numpy as np
from sklearn import datasets
# Loading the data
iris = datasets.load_iris()
X = iris.data
Y = iris.target
print iris.target_names
print X.shape[0]
# Print data and labels
for x, y in zip(X,Y):
print x, y
Explanation: 2. Aplicación al Iris Dataset
Aplicaremos $kNN$ en las $3$ clases conocidas.
<img src="images/iris_petal_sepal.png" alt="" width="300px" align="middle"/>
¿Qué valor de $k$ es razonable tomar?
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
# Meta parameter
k = 150
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = iris.target
# Fitting the model
kNN = KNeighborsClassifier(k)
kNN.fit(X,Y)
# No coefficients to print!
# Predicting values
Y_pred = kNN.predict(X)
# Count the errors
template = "{0} errores de clasificación de un total de {1}"
print template.format(sum(Y!=Y_pred), len(Y))
# Matriz de confusion
print confusion_matrix(Y, Y_pred)
Explanation: 2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, utilizando el algoritmo kNN de la librería sklearn, requerimos un código como el siguiente:
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
# Meta parameter
k = 5
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.6)
print X_train.shape, X_test.shape
# Fitting the model
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# No coefficients to print!
# Predicting values
Y_test_pred = kNN.predict(X_test)
# Count the errors
n_errors = sum(Y_test!=Y_test_pred)
template = "{0} errores de clasificación de un total de {1}"
print template.format(n_errors, len(Y_test))
# Matriz de confusion
print confusion_matrix(Y_test, Y_test_pred)
Explanation: 2. Aplicación al Iris Dataset
¡Wow! ¡El método es perfecto para k=1!
¿o no?
kNN por construcción asigna la etiqueta del vecino más cercano, por lo que el error de entrenamiento para k=1 siempre será 0.
2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, utilizando el algoritmo kNN de la librería sklearn, requerimos utilizar un Holdout SET para evaluar el error de predicción:
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn import datasets
import numpy as np
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Splitting the data
X_train, X_aux, Y_train, Y_aux = train_test_split(X, Y, train_size=0.6)
X_valid, X_test, Y_valid, Y_test = train_test_split(X_aux, Y_aux, test_size=0.5)
print X_train.shape
print X_valid.shape
print X_test.shape
Explanation: 2. Aplicación al Iris Dataset
Debido al problema del overfitting, para seleccionar $k$ estamos obligados a utilizar un holdout set.
De hecho, debemos utilizar 3 conjuntos de datos:
* Conjunto de Entrenamiento (Training Dataset).
* Conjunto de Validación (Validation Dataset).
* Conjunto de Testeo (Testing Dataset).
2. Aplicación al Iris Dataset
Training set: Conjunto de ejemplos utililizados para "aprender": ajustar los parámetros de un modelo elegido.
Validation set: Conjunto de ejemplos utilizado para afinar los metaparámetros de un clasificador. En kNN, por ejemplo, para saber que valor de $k$ tomar.
Test set: conjunto de ejemplos completamente nuevo, y que se utiliza para conocer el error de predicción de un modelo completamente entrenado.
2. Aplicación al Iris Dataset
¿Porqué es necesario separar test y validación? Porque no queremos que el error de predicción contenga sesgo de ningún tipo.
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_aux, X_aux, Y_train, Y_aux = train_test_split(X, Y, train_size=0.6)
X_valid, X_test, Y_valid, Y_test = train_test_split(X_aux, Y_aux, test_size=0.5)
template = "k={0}: {1} errores de clasificación de un total de {2}"
# Fitting the model
for k in range(1,21):
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# Predicting values
Y_test_pred = kNN.predict(X_test)
# Count the errors
n_errors = sum(Y_test!=Y_test_pred)
print template.format(k, n_errors, len(Y_test))
Explanation: 2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, con conjuntos de entrenamiento, validación y testeo, y utilizando el algoritmo kNN de la librería sklearn, requerimos un código como el siguiente:
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_tv, X_test, Y_tv, Y_test = train_test_split(X, Y, train_size=0.8)
template = "k={0}: {1} errores de clasificación de un total de {2}"
# Fitting the model
mean_error_for_k = []
for k in range(1,21):
errors_k = []
for i in range(1000):
kNN = KNeighborsClassifier(n_neighbors=k)
X_train, X_valid, Y_train, Y_valid = train_test_split(X_tv, Y_tv, train_size=0.75)
kNN.fit(X_train, Y_train)
# Predicting values
Y_valid_pred = kNN.predict(X_valid)
# Count the errors
n_errors = sum(Y_valid!=Y_valid_pred)
# Add them to vector
errors_k.append(n_errors)
errors = np.array(errors_k).mean()
print template.format(k, errors, len(Y_valid))
mean_error_for_k.append(errors)
from matplotlib import pyplot as plt
plt.figure(figsize=(16,8))
plt.plot(range(1,21), mean_error_for_k, '-ok')
plt.xlabel("k")
plt.ylabel("Errores de clasificacion")
plt.show()
Explanation: 2. Aplicación al Iris Dataset
Para aplicar el k Nearest Neighbors, con conjuntos de entrenamiento, validación y testeo, y utilizando el algoritmo kNN de la librería sklearn, requerimos un código como el siguiente:
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
# Meta parameter
k = 5
# Loading the data
iris = datasets.load_iris()
names = iris.target_names
X = iris.data
Y = np.array(iris.target, int)
# Holdout Set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.6)
print X_train.shape, X_test.shape
# Fitting the model
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# No coefficients to print!
# Predicting values
Y_test_pred = kNN.predict(X_test)
# Count the errors
n_errors = sum(Y_test!=Y_test_pred)
print "{0} errores de clasificación de un total de {1}".format(n_errors, len(Y_test))
print n_errors/float(len(Y_test))
# Matriz de confusion
print confusion_matrix(Y_test, Y_test_pred)
Explanation: 2. Aplicación al Iris Dataset
Debido a lo anterior, resulta razonable considerar $k=3$ ó $k=5$, con un error de predicción de $1/30$, es decir, de aproximadamente $0.0333$.
End of explanation |
10,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enriching Shooting Data
Goal
Step1: The query in the below box no longer works thanks to the NBA restricting access to the data.
Step2: Wrapping data merge into a function
Step3: Drawing NBA Court to Scale
Step4: Unfortunately, the NBA has blocked access to the data that was used to construct the following shot charts. Prior to about February, they had data that contained very interesting metrics on individual shots. One of those metrics was the proximity of the nearest defender.
The following charts basically graph circles around the shot location on court that mark where the defender was at the time of shot. A bigger circle means the shooter was more wide-open. We do not know where on the circle the defender was, only that the defender was somewhere on the perimeter of the circle
# Getting Basic Data
import goldsberry
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option("display.max_columns", 50)
pd.options.mode.chained_assignment = None
print goldsberry.__version__
print pd.__version__
# Getting Players List
players_2015 = goldsberry.PlayerList(Season='2015-16')
players_2015 = pd.DataFrame(players_2015.players())
harden_id = players_2015.loc[players_2015['DISPLAY_LAST_COMMA_FIRST'].str.contains("Harden"), 'PERSON_ID']
#XY Shot Charts
harden_shots = goldsberry.player.shot_chart(harden_id.values.tolist()[0], Season='2015-16')
harden_shots = pd.DataFrame(harden_shots.chart())
harden_shots.shape
harden_shots.head()
Explanation: Enriching Shooting Data
Goal: Visualize every shot a player takes during a single game with information on the closest defender
Steps:
Merge Advanced Shot Log with Shot Chart
Scatter plot each shot
Add ring to each shot using the distance of nearest defender as radius
Shade in each ring to represent in-your-face to WTFO
Change shape of each shot to represent Make/Miss
Functionalize the whole process so that it takes a playerID and GameID as arguments and returns a chart with titles
End of explanation
dashboard = goldsberry.player.shot_dashboard(harden_id)
pd.DataFrame(dashboard.dribble())
#Sort XY Shots and Assign a Shot Number
#ShotNumber will be used to merge the two datasets.
harden_shots.sort(['GAME_ID', 'GAME_EVENT_ID'], inplace=True)
harden_shots['SHOT_NUMBER'] = harden_shots.groupby(['GAME_ID', 'PLAYER_ID'])['GAME_EVENT_ID'].cumcount()+1
#Merge data into a single dataframe
harden_shots_full = pd.merge(harden_shots, harden_shots_advanced, on=['GAME_ID', 'SHOT_NUMBER'], how='left')
harden_shots_full.head()
Explanation: The query in the below box no longer works thanks to the NBA restricting access to the data.
End of explanation
sns.set_style("white")
sns.set_color_codes()
plt.figure(figsize=(12,11))
plt.scatter(harden_shots.LOC_X, harden_shots.LOC_Y)
plt.show()
Explanation: Wrapping data merge into a function
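The heading above suggests wrapping the sort/merge steps into a reusable helper, but the notebook never shows one; here is a hedged sketch of what it could look like, mirroring the cell above (function name is illustrative):
def merge_shot_data(shots, shots_advanced):
    # Number shots within each game, then join the XY chart data to the advanced shot log
    shots = shots.sort(['GAME_ID', 'GAME_EVENT_ID'])
    shots['SHOT_NUMBER'] = shots.groupby(['GAME_ID', 'PLAYER_ID'])['GAME_EVENT_ID'].cumcount() + 1
    return pd.merge(shots, shots_advanced, on=['GAME_ID', 'SHOT_NUMBER'], how='left')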
End of explanation
from matplotlib.patches import Circle, Rectangle, Arc   # patch classes used below were never imported earlier
from numpy import pi                                      # used later for the scatter marker sizes
def draw_court(ax=None, color='black', lw=2, outer_lines=False):
# If an axes object isn't provided to plot onto, just get current one
if ax is None:
ax = plt.gca()
# Create the various parts of an NBA basketball court
# Create the basketball hoop
# Diameter of a hoop is 18" so it has a radius of 9", which is a value
# 7.5 in our coordinate system
hoop = Circle((0, 0), radius=7.5, linewidth=lw, color=color, fill=False)
# Create backboard
backboard = Rectangle((-30, -7.5), 60, -1, linewidth=lw, color=color)
# The paint
# Create the outer box 0f the paint, width=16ft, height=19ft
outer_box = Rectangle((-80, -47.5), 160, 190, linewidth=lw, color=color,
fill=False)
# Create the inner box of the paint, widt=12ft, height=19ft
inner_box = Rectangle((-60, -47.5), 120, 190, linewidth=lw, color=color,
fill=False)
# Create free throw top arc
top_free_throw = Arc((0, 142.5), 120, 120, theta1=0, theta2=180,
linewidth=lw, color=color, fill=False)
# Create free throw bottom arc
bottom_free_throw = Arc((0, 142.5), 120, 120, theta1=180, theta2=0,
linewidth=lw, color=color, linestyle='dashed')
# Restricted Zone, it is an arc with 4ft radius from center of the hoop
restricted = Arc((0, 0), 80, 80, theta1=0, theta2=180, linewidth=lw,
color=color)
# Three point line
# Create the side 3pt lines, they are 14ft long before they begin to arc
corner_three_a = Rectangle((-220, -47.5), 0, 140, linewidth=lw,
color=color)
corner_three_b = Rectangle((220, -47.5), 0, 140, linewidth=lw, color=color)
# 3pt arc - center of arc will be the hoop, arc is 23'9" away from hoop
# I just played around with the theta values until they lined up with the
# threes
three_arc = Arc((0, 0), 475, 475, theta1=22, theta2=158, linewidth=lw,
color=color)
# Center Court
center_outer_arc = Arc((0, 422.5), 120, 120, theta1=180, theta2=0,
linewidth=lw, color=color)
center_inner_arc = Arc((0, 422.5), 40, 40, theta1=180, theta2=0,
linewidth=lw, color=color)
# List of the court elements to be plotted onto the axes
court_elements = [hoop, backboard, outer_box, inner_box, top_free_throw,
bottom_free_throw, restricted, corner_three_a,
corner_three_b, three_arc, center_outer_arc,
center_inner_arc]
if outer_lines:
# Draw the half court line, baseline and side out bound lines
outer_lines = Rectangle((-250, -47.5), 500, 470, linewidth=lw,
color=color, fill=False)
court_elements.append(outer_lines)
# Add the court elements onto the axes
for element in court_elements:
ax.add_patch(element)
return ax
Explanation: Drawing NBA Court to Scale
End of explanation
plt.figure(figsize=(12,11))
plt.scatter(harden_shots_full.LOC_X[0], harden_shots_full.LOC_Y[0])
draw_court()
# xy and def_dist were undefined in the original cell; use the first shot's location and defender distance (court units are tenths of feet)
defender = Circle((harden_shots_full.LOC_X[0], harden_shots_full.LOC_Y[0]), harden_shots_full.CLOSE_DEF_DIST[0]*10, alpha=.5)
fig = plt.gcf()
fig.gca().add_artist(defender)
# Descending values along the axis from left to right
plt.xlim(-300,300)
plt.ylim(422.5, -47.5)
len(harden_shots_full)
def draw_defender_radius(df, ax=None, alpha = .25):
# If an axes object isn't provided to plot onto, just get current one
if ax is None:
ax = plt.gca()
for i in range(len(df)):
defender = Circle((df.LOC_X[i],df.LOC_Y[i]),
radius = df.CLOSE_DEF_DIST[i]*10,
alpha = alpha)
ax.add_patch(defender)
return ax
def fancy_shotchart(df):
plt.figure(figsize=(12,11))
plt.scatter(df.LOC_X, df.LOC_Y)
draw_court()
draw_defender_radius(df)
# Descending values along the axis from left to right
plt.xlim(-300,300)
plt.ylim(422.5, -47.5)
harden_game = harden_shots_full.ix[harden_shots.GAME_ID == '0021400003']
fancy_shotchart(harden_game)
plt.figure(figsize=(12,11))
plt.scatter(harden_game.LOC_X, harden_game.LOC_Y,
s=pi*(harden_game.CLOSE_DEF_DIST*10)**2,
alpha = .25, c = harden_game.SHOT_MADE_FLAG,
cmap = plt.cm.RdYlGn)
plt.scatter(harden_game.LOC_X, harden_game.LOC_Y, c='black')
draw_court()
# Descending values along the axis from left to right
plt.xlim(-300,300)
plt.ylim(422.5, -47.5)
def fancy_shots(df):
plt.figure(figsize=(12,11))
plt.scatter(df.LOC_X, df.LOC_Y,
s=pi*(df.CLOSE_DEF_DIST*10)**2,
alpha = .25, c = df.SHOT_MADE_FLAG,
cmap = plt.cm.RdYlGn)
plt.scatter(df.LOC_X, df.LOC_Y, c='black')
draw_court()
# Descending values along the axis from left to right
plt.xlim(-300,300)
plt.ylim(422.5, -47.5)
fancy_shots(harden_shots_full.ix[harden_shots.GAME_ID == '0021400003'])
fancy_shots(harden_shots_full.ix[harden_shots.GAME_ID == '0021400087'])
fancy_shots(harden_shots_full.ix[harden_shots.GAME_ID == '0021400512'])
Explanation: Unfortunately, the NBA has blocked access to the data that was used to construct the following shot charts. Prior to about February, they had data that contained very interesting metrics on individual shots. One of those metrics was the proximity of the nearest defender.
The following charts graph circles around each shot location on the court that mark how far away the nearest defender was at the time of the shot. A bigger circle means the shooter was more wide open. We do not know where on the circle the defender was, only that the defender was somewhere on its perimeter.
End of explanation |
10,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple sphere and text
Step1: Clickable Surface
Step2: Design our own texture
Step3: Lines
Step4: Camera
Step6: Parametric Functions
To use the ParametricGeometry class, you need to specify a javascript function as a string. The function should take two parameters that vary between 0 and 1, and return a new THREE.Vector3(x,y,z).
If you want to build the surface in Python, you'll need to explicitly construct the vertices and faces and build a basic geometry from the vertices and faces. | Python Code:
ball = Mesh(geometry=SphereGeometry(radius=1), material=LambertMaterial(color='red'), position=[2,1,0])
scene = Scene(children=[ball, AmbientLight(color=0x777777), make_text('Hello World!', height=.6)])
c = PerspectiveCamera(position=[0,5,5], up=[0,0,1], children=[DirectionalLight(color='white',
position=[3,5,1],
intensity=0.5)])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)])
display(renderer)
ball.geometry.radius=0.5
import time, math
ball.material.color = 0x4400dd
for i in range(1,150,2):
ball.geometry.radius=i/100.
ball.material.color +=0x000300
ball.position = [math.cos(i/10.), math.sin(i/50.), i/100.]
time.sleep(.05)
Explanation: Simple sphere and text
End of explanation
nx,ny=(20,20)
xmax=1
x = np.linspace(-xmax,xmax,nx)
y = np.linspace(-xmax,xmax,ny)
xx, yy = np.meshgrid(x,y)
z = xx**2-yy**2
#z[6,1] = float('nan')
surf_g = SurfaceGeometry(z=list(z[::-1].flat),
width=2*xmax,
height=2*xmax,
width_segments=nx-1,
height_segments=ny-1)
surf = Mesh(geometry=surf_g, material=LambertMaterial(map=height_texture(z[::-1], 'YlGnBu_r')))
surfgrid = SurfaceGrid(geometry=surf_g, material=LineBasicMaterial(color='black'))
hover_point = Mesh(geometry=SphereGeometry(radius=0.05), material=LambertMaterial(color='hotpink'))
scene = Scene(children=[surf, surfgrid, hover_point, AmbientLight(color=0x777777)])
c = PerspectiveCamera(position=[0,3,3], up=[0,0,1],
children=[DirectionalLight(color='white', position=[3,5,1], intensity=0.6)])
click_picker = Picker(root=surf, event='dblclick')
hover_picker = Picker(root=surf, event='mousemove')
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c), click_picker, hover_picker])
def f(name, value):
print("Clicked on %s" % value)
point = Mesh(geometry=SphereGeometry(radius=0.05),
material=LambertMaterial(color='red'),
position=value)
scene.children = list(scene.children)+[point]
click_picker.on_trait_change(f, 'point')
link((hover_point, 'position'), (hover_picker, 'point'))
h = HTML()
def g(name, value):
h.value="Green point at (%.3f, %.3f, %.3f)"%tuple(value)
g(None, hover_point.position)
hover_picker.on_trait_change(g, 'point')
display(h)
display(renderer)
# when we change the z values of the geometry, we need to also change the height map
surf_g.z = list((-z[::-1]).flat)
surf.material.map = height_texture(-z[::-1])
Explanation: Clickable Surface
End of explanation
import numpy as np
from scipy import ndimage
import matplotlib
import matplotlib.pyplot as plt
from skimage import img_as_ubyte
jet = matplotlib.cm.get_cmap('jet')
np.random.seed(int(1)) # start random number generator
n = int(5) # starting points
size = int(32) # size of image
im = np.zeros((size,size)) # create zero image
points = size*np.random.random((2, n**2)) # locations of seed values
im[(points[0]).astype(int), (points[1]).astype(int)] = size # seed high values
im = ndimage.gaussian_filter(im, sigma=size/(float(4)*n)) # smooth high values into surrounding areas
im *= 1/np.max(im)# rescale to be in the range [0,1]
rgba_im = img_as_ubyte(jet(im)) # convert the values to rgba image using the jet colormap
rgba_list = list(rgba_im.flat) # make a flat list
t = DataTexture(data=rgba_list, format='RGBAFormat', width=size, height=size)
geometry = SphereGeometry()#TorusKnotGeometry(radius=2, radialSegments=200)
material = LambertMaterial(map=t)
myobject = Mesh(geometry=geometry, material=material)
c = PerspectiveCamera(position=[0,3,3], fov=40, children=[DirectionalLight(color=0xffffff, position=[3,5,1], intensity=0.5)])
scene = Scene(children=[myobject, AmbientLight(color=0x777777)])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)])
display(renderer)
Explanation: Design our own texture
End of explanation
# On windows, linewidth of the material has no effect
size = 4
linesgeom = PlainGeometry(vertices=[[0,0,0],[size,0,0],[0,0,0],[0,size,0],[0,0,0],[0,0,size]],
colors = ['red', 'red', 'green', 'green', 'white', 'orange'])
lines = Line(geometry=linesgeom,
material=LineBasicMaterial( linewidth=5, vertexColors='VertexColors'),
type='LinePieces')
scene = Scene(children=[lines, DirectionalLight(color=0xccaabb, position=[0,10,0]),AmbientLight(color=0xcccccc)])
c = PerspectiveCamera(position=[0,10,10])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)])
display(renderer)
Explanation: Lines
End of explanation
geometry = SphereGeometry(radius=4)
t = ImageTexture(imageuri="")
material = LambertMaterial(color='white', map=t)
sphere = Mesh(geometry=geometry, material=material)
point = Mesh(geometry=SphereGeometry(radius=.1),
material=LambertMaterial(color='red'))
c = PerspectiveCamera(position=[0,10,10], fov=40, children=[DirectionalLight(color='white',
position=[3,5,1],
intensity=0.5)])
scene = Scene(children=[sphere, point, AmbientLight(color=0x777777)])
p=Picker(event='mousemove', root=sphere)
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c), p])
coords = Text()
display(coords)
display(renderer)
#dlink((p,'point'), (point, 'position'), (coords, 'value'))
#
#camera=WebCamera()
#display(camera)
#display(Link(widgets=[[camera, 'imageurl'], [t, 'imageuri']]))
Explanation: Camera
End of explanation
f = """
function f(origu,origv) {
    // scale u and v to the ranges I want: [0, 2*pi]
    var u = 2*Math.PI*origu;
    var v = 2*Math.PI*origv;
    var x = Math.sin(u);
    var y = Math.cos(v);
    var z = Math.cos(u+v);
    return new THREE.Vector3(x,y,z)
}
"""
surf_g = ParametricGeometry(func=f);
surf = Mesh(geometry=surf_g,material=LambertMaterial(color='green', side ='FrontSide'))
surf2 = Mesh(geometry=surf_g,material=LambertMaterial(color='yellow', side ='BackSide'))
scene = Scene(children=[surf, surf2, AmbientLight(color=0x777777)])
c = PerspectiveCamera(position=[5,5,3], up=[0,0,1],children=[DirectionalLight(color='white', position=[3,5,1], intensity=0.6)])
renderer = Renderer(camera=c,scene = scene,controls=[OrbitControls(controlling=c)])
display(renderer)
Explanation: Parametric Functions
To use the ParametricGeometry class, you need to specify a javascript function as a string. The function should take two parameters that vary between 0 and 1, and return a new THREE.Vector3(x,y,z).
If you want to build the surface in Python, you'll need to explicitly construct the vertices and faces and build a basic geometry from the vertices and faces.
End of explanation |
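As a rough, unofficial sketch of the second approach, the vertex and face lists for a grid over the same parametric surface can be built with NumPy as below; the commented-out PlainGeometry call assumes this pythreejs version accepts vertices and faces arguments, so treat it as a hypothetical usage rather than a confirmed API.
import numpy as np
nu, nv = 20, 20
u = 2 * np.pi * np.linspace(0, 1, nu)
v = 2 * np.pi * np.linspace(0, 1, nv)
uu, vv = np.meshgrid(u, v)
# Sample the same surface used in the javascript function above
verts = np.stack([np.sin(uu), np.cos(vv), np.cos(uu + vv)], axis=-1).reshape(-1, 3).tolist()
# Two triangles per grid cell, indexing into the flattened vertex list
faces = []
for i in range(nv - 1):
    for j in range(nu - 1):
        a = i * nu + j
        faces.append([a, a + 1, a + nu + 1])
        faces.append([a, a + nu + 1, a + nu])
# Hypothetical: hand the explicit vertices/faces to a basic geometry
# surf_py = Mesh(geometry=PlainGeometry(vertices=verts, faces=faces),
#                material=LambertMaterial(color='green'))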
10,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Bootcamp "Group Project"
Analysis of historical stock return and volatility by industries using Fama-French Data
Sung Kim / Arthur Hong / Kevin Park
Contents
Step1: 1 | Background
Designed by Eugene Fama and Kenneth French, Fama-French factor model is a widely used tool in finance created by employing statistical techniques to estimate returns of stocks. Within this project, we attempted to analyze stock returns and risks by calculating betas of different industries over the past seven year period.
Source
Step2: 3.2 | Joining the Key Data
Now we add market return column ("Mkt" in dataframe mkt1) to ind dataframe.
Then key statistics, including means and standard deviations, are calculated to further derive betas of different industries. Since such statistics will be used in our beta calculation, we store it to a new dataframe ind_stat.
Step3: 4 | Beta Calculation
In order to facilitate matrix calculation, we altered the form of ind_stat to a inverse matrix and stored it as ind_stat_inv.
By definition, industry betas are calculated as
Step4: 5 | Creating Visual Output
We created four visual presentations to help better understand how different betas can be shown in a given timeframe.
Step5: On the heatmap of monthly returns by industry, we can see which industries have more extreme swings, and which have less variation. 'Coal' looks to be particularly volatile. 'Food' and 'Utilities' looks more stable.
Step6: Sure enough, 'Coal' had the highest beta value (beta > 1 means that returns are more volatile than market returns), and 'Utilities' had the lowest beta
Step7: By plotting 'Coal' and 'Utilities' returns against the market's returns, we can verify the amplitude each's returns... although the chart is a little hectic | Python Code:
# import packages
import pandas as pd # data management
import matplotlib.pyplot as plt # graphics
import datetime as dt # check today's date
import sys # check Python version
import numpy as np
# IPython command, puts plots in notebook
%matplotlib inline
print('Today is', dt.date.today())
print('Python version:\n', sys.version, sep='')
Explanation: Data Bootcamp "Group Project"
Analysis of historical stock return and volatility by industries using Fama-French Data
Sung Kim / Arthur Hong / Kevin Park
Contents:
1. Background
2. About the Data
3. Key Data
+ 3.1 | Slicing the Key Data
+ 3.2 | Joining the Key Data
4. Beta Calculation
5. Creating Visual Output
End of explanation
# importing 30 industry portfolio data set
import pandas_datareader.data as web
ff=web.DataReader("30_Industry_Portfolios", "famafrench")
print(ff['DESCR'])
# extracting value-weighted return only
ff[0]
ind=ff[0]
ind.shape
# importing mkt data from 3 factors model
mkt=web.DataReader("F-F_Research_Data_Factors", "famafrench")
mkt
print(mkt['DESCR'])
# Dropping annual result
mkt1=mkt[0]
mkt1
mkt1['Mkt']=mkt1['Mkt-RF']+mkt1['RF']
Explanation: 1 | Background
Designed by Eugene Fama and Kenneth French, the Fama-French factor model is a widely used statistical tool in finance for estimating stock returns. Within this project, we analyze stock returns and risks by calculating betas for different industries over the past seven-year period.
Source: Fama-French Website
2 | About the Data
We collected our data by using PANDAS DataReader to get a direct feed to Kenneth French's data, where numerous equity market data are available online. Among them, we used the "30 Industry Portfolio" dataset to compare stock returns and risks of different industries.
Links to data:
+ 30 Industry Portfolio
+ Documentation to get direct feed from DataReader
3 | Key Data
We first imported the 30 industry portfolio data set. There are different categories: value or equal weighted, monthly or annual, etc. Detailed breakdown and description are shown below.
3.1 | Slicing the Key Data
Among many different types of data, we will extract value-weighted monthly return (dataframe: 0 in 30 Industry Portfolios) since 2010 to run our analysis and store it in a dataframe ind.
We also imported Fama-French 3-factor data with the same time frame, which contain:
Mkt-RF (market return - risk free rate)
SMB (Small-Minus-Big, the average return on the three small portfolios minus the average return on the three big portfolios)
HML (High-Minus-Low, the average return on the two value portfolios minus the average return on the two growth portfolios)
Because we need the market return rather than the equity risk premium, we add a "Mkt" column to the dataframe by combining "Mkt-RF" and "RF". Finally, we store this market data (Mkt-RF, SMB, HML, RF, and Mkt) in the dataframe mkt1.
End of explanation
# Adding mkt data to 30 industry data set
ind['Mkt']=mkt1['Mkt']
ind.tail()
# calculating historical average return and standard deviation
ind_stat=ind.describe()
ind_stat
Explanation: 3.2 | Joining the Key Data
Now we add the market return column ("Mkt" from dataframe mkt1) to the ind dataframe.
Then key statistics, including means and standard deviations, are calculated for each return series. Since such statistics will be used in our beta calculation, we store them in a new dataframe ind_stat.
End of explanation
# inverse matrix
ind_stat_inv = pd.DataFrame(np.linalg.pinv(ind_stat.values), ind_stat.columns, ind_stat.index)
ind_stat_inv
# beta calculation
def calc_beta(n):
np_array = ind.values
m = np_array[:,30] # market returns ('Mkt') are in column 30 of the array
s = np_array[:,n] # industry returns are in column n
covariance = np.cov(s,m) # 2x2 covariance matrix of industry and market returns
beta = covariance[0,1]/covariance[1,1] # Cov(industry, market) / Var(market)
return beta
numlist=range(0,31,1)
beta=[calc_beta(i) for i in numlist]
beta
# Adding beta result
ind_stat_inv['Beta']=beta
ind_stat_inv
sort=ind_stat_inv = ind_stat_inv.sort_values(by='Beta', ascending=False)
sort
Explanation: 4 | Beta Calculation
In order to attach one value per industry, we converted ind_stat to its (pseudo-)inverse form, so that each industry becomes a row, and stored the result as ind_stat_inv.
By definition, industry betas are calculated as:
Beta = (covariance between market and an industry) / (variance of market)
Once we found the industry betas, we added a new column "Beta" to ind_stat_inv and sorted by beta in descending order.
End of explanation
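As a quick cross-check on the loop above, the same betas can be computed in one shot from the return covariance matrix with pandas; this sketch only uses columns already present in ind.
# Vectorized check: beta_i = Cov(r_i, r_Mkt) / Var(r_Mkt)
beta_check = ind.cov()['Mkt'] / ind['Mkt'].var()
beta_check.sort_values(ascending=False).head()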
#Transpose industry returns table to make heatmap
ind_heatmap = ind.T
ind_heatmap.tail()
#heatmap of monthly returns since 2010
import seaborn as sns
sns.set()
fig, ax = plt.subplots(figsize=(20,8))
sns.heatmap(ind_heatmap, annot=False, linewidths=.5)
ax.set_title("Monthly Returns by Industry (10 Years)")
Explanation: 5 | Creating Visual Output
We created four visual presentations to show how returns and betas vary across industries over the sample period.
End of explanation
#Sort a beta-only table to create beta bar chart
beta_table = sort[['Beta']]
beta_table.head()
#Bar chart of betas sorted from high to low
plt.style.use('seaborn-pastel')
ax = beta_table.plot(kind='bar', colormap = "Pastel2")
ax.set_title("Betas Across Industries")
Explanation: On the heatmap of monthly returns by industry, we can see which industries have more extreme swings, and which have less variation. 'Coal' looks to be particularly volatile. 'Food' and 'Utilities' look more stable.
End of explanation
#Creating a dataframe just to see the most extreme values from the beta bar chart
industry_set = ind[['Coal ','Util ','Mkt']]
industry_set = industry_set.rename(columns={'Coal ':'Coal','Util ':'Utilities','Mkt':'Market'})
industry_set.tail()
#Line plot of the returns of Coal, Utilities, and the general market
import seaborn as sns
plt.style.use('seaborn-pastel')
ax = industry_set.plot(linestyle='-', colormap = "Accent", figsize = (16,5))
ax.set_title("Monthly Returns over 10 Years")
Explanation: Sure enough, 'Coal' had the highest beta value (beta > 1 means that its returns are more volatile than market returns), and 'Utilities' had the lowest beta.
End of explanation
#Calculating a new dataframe to look at excess returns
industry_diff = industry_set
industry_diff['Coal Excess Returns'] = industry_set['Coal'] - industry_set['Market']
industry_diff['Utilities Excess Returns'] = industry_set['Utilities'] - industry_set['Market']
industry_diff = industry_diff.drop(industry_diff.columns[[0,1,2]], 1)
industry_diff.tail()
#Line plot of the excess returns
plt.style.use('seaborn-pastel')
ax = industry_diff.plot(linestyle='-', colormap = "Accent", figsize = (16,5))
ax.set_title("Market Excess Returns")
Explanation: By plotting 'Coal' and 'Utilities' returns against the market's returns, we can verify the amplitude of each one's returns... although the chart is a little hectic.
End of explanation |
10,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical inference
Here we will briefly cover multiple concepts of inferential statistics in an
introductory manner, and demonstrate how to use some MNE statistical functions.
Step1: Hypothesis testing
Null hypothesis
^^^^^^^^^^^^^^^
From Wikipedia_
Step2: The data averaged over all subjects looks like this
Step3: In this case, a null hypothesis we could test for each voxel is
Step4: "Hat" variance adjustment
The "hat" technique regularizes the variance values used in the t-test
calculation
Step5: Non-parametric tests
Instead of assuming an underlying Gaussian distribution, we could instead
use a non-parametric resampling method. In the case of a paired t-test
between two conditions A and B, which is mathematically equivalent to a
one-sample t-test between the difference in the conditions A-B, under the
null hypothesis we have the principle of exchangeability. This means
that, if the null is true, we can exchange conditions and not change
the distribution of the test statistic.
When using a paired t-test, exchangeability thus means that we can flip the
signs of the difference between A and B. Therefore, we can construct the
null distribution values for each voxel by taking random subsets of
samples (subjects), flipping the sign of their difference, and recording the
absolute value of the resulting statistic (we record the absolute value
because we conduct a two-tailed test). The absolute value of the statistic
evaluated on the veridical data can then be compared to this distribution,
and the p-value is simply the proportion of null distribution values that
are smaller.
<div class="alert alert-danger"><h4>Warning</h4><p>In the case of a true one-sample t-test, i.e. analyzing a single
condition rather than the difference between two conditions,
it is not clear where/how exchangeability applies; see
[this FieldTrip discussion](ft_exch_).</p></div>
In the case where n_permutations is large enough (or "all") so
that the complete set of unique resampling exchanges can be done
(which is $2^{N_{samp}}-1$ for a one-tailed and
$2^{N_{samp}-1}-1$ for a two-tailed test, not counting the
veridical distribution), instead of randomly exchanging conditions
the null is formed from using all possible exchanges. This is known
as a permutation test (or exact test).
Step6: Multiple comparisons
So far, we have done no correction for multiple comparisons. This is
potentially problematic for these data because there are
$40 \cdot 40 = 1600$ tests being performed. If we use a threshold
p < 0.05 for each individual test, we would expect many voxels to be declared
significant even if there were no true effect. In other words, we would make
many type I errors (adapted from here)
Step7: To combat this problem, several methods exist. Typically these
provide control over either one of the following two measures
Step8: False discovery rate (FDR) correction
Typically FDR is performed with the Benjamini-Hochberg procedure, which
is less restrictive than Bonferroni correction for large numbers of
comparisons (fewer type II errors), but provides less strict control of type
I errors.
Step9: Non-parametric resampling test with a maximum statistic
Non-parametric resampling tests can also be used to correct for multiple
comparisons. In its simplest form, we again do permutations using
exchangeability under the null hypothesis, but this time we take the
maximum statistic across all voxels in each permutation to form the
null distribution. The p-value for each voxel from the veridical data
is then given by the proportion of null distribution values
that were smaller.
This method has two important features
Step10: Clustering
Each of the aforementioned multiple comparisons corrections have the
disadvantage of not fully incorporating the correlation structure of the
data, namely that points close to one another (e.g., in space or time) tend
to be correlated. However, by defining the adjacency (or "neighbor")
structure in our data, we can use clustering to compensate.
To use this, we need to rethink our null hypothesis. Instead
of thinking about a null hypothesis about means per voxel (with one
independent test per voxel), we consider a null hypothesis about sizes
of clusters in our data, which could be stated like
Step11: In general the adjacency between voxels can be more complex, such as
those between sensors in 3D space, or time-varying activation at brain
vertices on a cortical surface. MNE provides several convenience functions
for computing adjacency matrices, for example
Step12: "Hat" variance adjustment
This method can also be used in this context to correct for small
variances
Step13: Threshold-free cluster enhancement (TFCE)
TFCE eliminates the free parameter initial threshold value that
determines which points are included in clustering by approximating
a continuous integration across possible threshold values with a standard
Riemann sum_
Step14: We can also combine TFCE and the "hat" correction
Step15: Visualize and compare methods
Let's take a look at these statistics. The top row shows each test statistic,
and the bottom shows p-values for various statistical tests, with the ones
with proper control over FWER or FDR with bold titles. | Python Code:
# Authors: Eric Larson <[email protected]>
#
# License: BSD-3-Clause
from functools import partial
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa, analysis:ignore
import mne
from mne.stats import (ttest_1samp_no_p, bonferroni_correction, fdr_correction,
permutation_t_test, permutation_cluster_1samp_test)
Explanation: Statistical inference
Here we will briefly cover multiple concepts of inferential statistics in an
introductory manner, and demonstrate how to use some MNE statistical functions.
End of explanation
width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3 # sigma for the "hat" method
n_permutations = 'all' # run an exact test
n_src = width * width
# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(2)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))
for si in range(X.shape[0]):
for ri in range(X.shape[1]):
X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
for ci in range(X.shape[2]):
X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
Explanation: Hypothesis testing
Null hypothesis
^^^^^^^^^^^^^^^
From Wikipedia:
In inferential statistics, a general statement or default position that
there is no relationship between two measured phenomena, or no
association among groups.
We typically want to reject a null hypothesis with
some probability (e.g., p < 0.05). This probability is also called the
significance level $\alpha$.
To think about what this means, let's follow the illustrative example from
Ridgway et al. (2012) and construct a toy dataset consisting of a
40 × 40 square with a "signal" present in the center with white noise added
and a Gaussian smoothing kernel applied.
End of explanation
fig, ax = plt.subplots()
ax.imshow(X.mean(0), cmap='inferno')
ax.set(xticks=[], yticks=[], title="Data averaged over subjects")
Explanation: The data averaged over all subjects looks like this:
End of explanation
titles = ['t']
out = stats.ttest_1samp(X, 0, axis=0)
ts = [out[0]]
ps = [out[1]]
mccs = [False] # these are not multiple-comparisons corrected
def plot_t_p(t, p, title, mcc, axes=None):
if axes is None:
fig = plt.figure(figsize=(6, 3))
axes = [fig.add_subplot(121, projection='3d'), fig.add_subplot(122)]
show = True
else:
show = False
# calculate critical t-value thresholds (2-tailed)
p_lims = np.array([0.1, 0.001])
df = n_subjects - 1 # degrees of freedom
t_lims = stats.distributions.t.ppf(1 - p_lims / 2, df=df)
p_lims = [-np.log10(p) for p in p_lims]
# t plot
x, y = np.mgrid[0:width, 0:width]
surf = axes[0].plot_surface(x, y, np.reshape(t, (width, width)),
rstride=1, cstride=1, linewidth=0,
vmin=t_lims[0], vmax=t_lims[1], cmap='viridis')
axes[0].set(xticks=[], yticks=[], zticks=[],
xlim=[0, width - 1], ylim=[0, width - 1])
axes[0].view_init(30, 15)
cbar = plt.colorbar(ax=axes[0], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=surf)
cbar.set_ticks(t_lims)
cbar.set_ticklabels(['%0.1f' % t_lim for t_lim in t_lims])
cbar.set_label('t-value')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if not show:
axes[0].set(title=title)
if mcc:
axes[0].title.set_weight('bold')
# p plot
use_p = -np.log10(np.reshape(np.maximum(p, 1e-5), (width, width)))
img = axes[1].imshow(use_p, cmap='inferno', vmin=p_lims[0], vmax=p_lims[1],
interpolation='nearest')
axes[1].set(xticks=[], yticks=[])
cbar = plt.colorbar(ax=axes[1], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=img)
cbar.set_ticks(p_lims)
cbar.set_ticklabels(['%0.1f' % p_lim for p_lim in p_lims])
cbar.set_label(r'$-\log_{10}(p)$')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if show:
text = fig.suptitle(title)
if mcc:
text.set_weight('bold')
plt.subplots_adjust(0, 0.05, 1, 0.9, wspace=0, hspace=0)
mne.viz.utils.plt_show()
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: In this case, a null hypothesis we could test for each voxel is:
There is no difference between the mean value and zero
($H_0 \colon \mu = 0$).
The alternative hypothesis, then, is that the voxel has a non-zero mean
($H_1 \colon \mu \neq 0$).
This is a two-tailed test because the mean could be less than
or greater than zero, whereas a one-tailed test would test only one of
these possibilities, i.e. $H_1 \colon \mu \geq 0$ or
$H_1 \colon \mu \leq 0$.
Note: Here we will refer to each spatial location as a "voxel".
In general, though, it could be any sort of data value,
including a cortical vertex at a specific time, a pixel in a
time-frequency decomposition, etc.
Parametric tests
Let's start with a paired t-test, which is a standard test
for differences in paired samples. Mathematically, it is equivalent
to a 1-sample t-test on the difference between the samples in each condition.
The paired t-test is parametric
because it assumes that the underlying sample distribution is Gaussian, and
is only valid in this case. This happens to be satisfied by our toy dataset,
but is not always satisfied for neuroimaging data.
In the context of our toy dataset, which has many voxels
($40 \cdot 40 = 1600$), applying the paired t-test is called a
mass-univariate approach as it treats each voxel independently.
End of explanation
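As a small sanity check of the claim that a paired t-test equals a one-sample t-test on the differences, here is a sketch with simulated paired data (the variable names are invented for illustration).
rng_check = np.random.RandomState(0)
cond_a = rng_check.randn(20) + 0.3  # condition A, shifted mean
cond_b = rng_check.randn(20)        # condition B
t_paired, p_paired = stats.ttest_rel(cond_a, cond_b)
t_diff, p_diff = stats.ttest_1samp(cond_a - cond_b, 0)
print(np.allclose(t_paired, t_diff), np.allclose(p_paired, p_diff))  # both True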
ts.append(ttest_1samp_no_p(X, sigma=sigma))
ps.append(stats.distributions.t.sf(np.abs(ts[-1]), len(X) - 1) * 2)
titles.append(r'$\mathrm{t_{hat}}$')
mccs.append(False)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: "Hat" variance adjustment
The "hat" technique regularizes the variance values used in the t-test
calculation (Ridgway et al., 2012) to compensate for implausibly small
variances.
End of explanation
# Here we have to do a bit of gymnastics to get our function to do
# a permutation test without correcting for multiple comparisons:
X.shape = (n_subjects, n_src) # flatten the array for simplicity
titles.append('Permutation')
ts.append(np.zeros(width * width))
ps.append(np.zeros(width * width))
mccs.append(False)
for ii in range(n_src):
ts[-1][ii], ps[-1][ii] = permutation_t_test(X[:, [ii]], verbose=False)[:2]
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: Non-parametric tests
Instead of assuming an underlying Gaussian distribution, we could instead
use a non-parametric resampling method. In the case of a paired t-test
between two conditions A and B, which is mathematically equivalent to a
one-sample t-test between the difference in the conditions A-B, under the
null hypothesis we have the principle of exchangeability. This means
that, if the null is true, we can exchange conditions and not change
the distribution of the test statistic.
When using a paired t-test, exchangeability thus means that we can flip the
signs of the difference between A and B. Therefore, we can construct the
null distribution values for each voxel by taking random subsets of
samples (subjects), flipping the sign of their difference, and recording the
absolute value of the resulting statistic (we record the absolute value
because we conduct a two-tailed test). The absolute value of the statistic
evaluated on the veridical data can then be compared to this distribution,
and the p-value is simply the proportion of null distribution values that
are smaller.
Warning: In the case of a true one-sample t-test, i.e. analyzing a single
condition rather than the difference between two conditions,
it is not clear where/how exchangeability applies; see
the FieldTrip mailing-list discussion on this topic.
In the case where n_permutations is large enough (or "all") so
that the complete set of unique resampling exchanges can be done
(which is $2^{N_{samp}}-1$ for a one-tailed and
$2^{N_{samp}-1}-1$ for a two-tailed test, not counting the
veridical distribution), instead of randomly exchanging conditions
the null is formed from using all possible exchanges. This is known
as a permutation test (or exact test).
End of explanation
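To make the sign-flipping idea concrete, here is a rough hand-rolled permutation null for a single voxel; permutation_t_test (used above) does the same thing more carefully and efficiently.
# Hand-rolled sign-flip null for the central voxel (illustration only)
voxel = X[:, (width // 2) * width + width // 2]
t_obs = np.abs(stats.ttest_1samp(voxel, 0)[0])
rng_perm = np.random.RandomState(0)
null_dist = np.empty(1000)
for k in range(1000):
    signs = rng_perm.choice([-1, 1], size=n_subjects)
    null_dist[k] = np.abs(stats.ttest_1samp(signs * voxel, 0)[0])
print('approximate one-voxel permutation p-value: %0.3f' % (null_dist >= t_obs).mean())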
N = np.arange(1, 80)
alpha = 0.05
p_type_I = 1 - (1 - alpha) ** N
fig, ax = plt.subplots(figsize=(4, 3))
ax.scatter(N, p_type_I, 3)
ax.set(xlim=N[[0, -1]], ylim=[0, 1], xlabel=r'$N_{\mathrm{test}}$',
ylabel=u'Probability of at least\none type I error')
ax.grid(True)
fig.tight_layout()
fig.show()
Explanation: Multiple comparisons
So far, we have done no correction for multiple comparisons. This is
potentially problematic for these data because there are
$40 \cdot 40 = 1600$ tests being performed. If we use a threshold
p < 0.05 for each individual test, we would expect many voxels to be declared
significant even if there were no true effect. In other words, we would make
many type I errors (adapted from here):
| Reject $H_0$? | Null hypothesis true | Null hypothesis false |
| --- | --- | --- |
| Yes | Type I error (false positive) | Correct (true positive) |
| No | Correct (true negative) | Type II error (false negative) |
To see why, consider a standard $\alpha = 0.05$.
For a single test, our probability of making a type I error is 0.05.
The probability of making at least one type I error in
$N_{\mathrm{test}}$ independent tests is then given by
$1 - (1 - \alpha)^{N_{\mathrm{test}}}$:
End of explanation
titles.append('Bonferroni')
ts.append(ts[-1])
ps.append(bonferroni_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: To combat this problem, several methods exist. Typically these
provide control over either one of the following two measures:
Familywise error rate (FWER)
The probability of making one or more type I errors:
.. math::
\mathrm{P}(N_{\mathrm{type\ I}} >= 1 \mid H_0)
False discovery rate (FDR)
The expected proportion of rejected null hypotheses that are
actually true:
.. math::
\mathrm{E}(\frac{N_{\mathrm{type\ I}}}{N_{\mathrm{reject}}}
\mid N_{\mathrm{reject}} > 0) \cdot
\mathrm{P}(N_{\mathrm{reject}} > 0 \mid H_0)
We cover some techniques that control FWER and FDR below.
Bonferroni correction
Perhaps the simplest way to deal with multiple comparisons, Bonferroni
correction
conservatively multiplies the p-values by the number of comparisons to
control the FWER.
End of explanation
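For intuition, the Bonferroni adjustment is essentially a clipped multiplication by the number of tests; the sketch below mirrors that idea (prefer the library call above, which also returns the rejection mask).
# Roughly the idea behind Bonferroni: scale each p-value by the number of tests, cap at 1
p_bonf_manual = np.minimum(ps[0] * n_src, 1.0)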
titles.append('FDR')
ts.append(ts[-1])
ps.append(fdr_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: False discovery rate (FDR) correction
Typically FDR is performed with the Benjamini-Hochberg procedure, which
is less restrictive than Bonferroni correction for large numbers of
comparisons (fewer type II errors), but provides less strict control of type
I errors.
End of explanation
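To see what the Benjamini-Hochberg procedure is doing, here is a rough sketch of its threshold step; fdr_correction (used above) additionally returns adjusted p-values and handles edge cases.
# Rough sketch of the Benjamini-Hochberg threshold step
p_sorted = np.sort(np.asarray(ps[0]).ravel())
m = p_sorted.size
bh_line = alpha * np.arange(1, m + 1) / m          # i/m * alpha
passing = np.where(p_sorted <= bh_line)[0]
p_thresh = p_sorted[passing[-1]] if passing.size else 0.0
print('BH: reject voxels with p <= %0.5f' % p_thresh)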
titles.append(r'$\mathbf{Perm_{max}}$')
out = permutation_t_test(X, verbose=False)[:2]
ts.append(out[0])
ps.append(out[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: Non-parametric resampling test with a maximum statistic
Non-parametric resampling tests can also be used to correct for multiple
comparisons. In its simplest form, we again do permutations using
exchangeability under the null hypothesis, but this time we take the
maximum statistic across all voxels in each permutation to form the
null distribution. The p-value for each voxel from the veridical data
is then given by the proportion of null distribution values
that were smaller.
This method has two important features:
It controls FWER.
It is non-parametric. Even though our initial test statistic
(here a 1-sample t-test) is parametric, the null
distribution for the null hypothesis rejection (the mean value across
subjects is indistinguishable from zero) is obtained by permutations.
This means that it makes no assumptions of Gaussianity
(which do hold for this example, but do not in general for some types
of processed neuroimaging data).
End of explanation
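The same idea can be sketched by hand: on each sign flip, keep only the maximum |t| across all voxels to build a single null distribution, then compare every observed |t| against it.
# Hand-rolled max-statistic null (illustration; permutation_t_test above is the real thing)
rng_max = np.random.RandomState(0)
t_obs_all = np.abs(stats.ttest_1samp(X, 0, axis=0)[0])
max_null = np.empty(500)
for k in range(500):
    signs = rng_max.choice([-1, 1], size=(n_subjects, 1))
    max_null[k] = np.abs(stats.ttest_1samp(signs * X, 0, axis=0)[0]).max()
p_max_manual = (max_null[np.newaxis, :] >= t_obs_all[:, np.newaxis]).mean(axis=1)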
from sklearn.feature_extraction.image import grid_to_graph # noqa: E402
mini_adjacency = grid_to_graph(3, 3).toarray()
assert mini_adjacency.shape == (9, 9)
print(mini_adjacency[0])
Explanation: Clustering
Each of the aforementioned multiple comparisons corrections have the
disadvantage of not fully incorporating the correlation structure of the
data, namely that points close to one another (e.g., in space or time) tend
to be correlated. However, by defining the adjacency (or "neighbor")
structure in our data, we can use clustering to compensate.
To use this, we need to rethink our null hypothesis. Instead
of thinking about a null hypothesis about means per voxel (with one
independent test per voxel), we consider a null hypothesis about sizes
of clusters in our data, which could be stated like:
The distribution of spatial cluster sizes observed in two experimental
conditions are drawn from the same probability distribution.
Here we only have a single condition and we contrast to zero, which can
be thought of as:
The distribution of spatial cluster sizes is independent of the sign
of the data.
In this case, we again do permutations with a maximum statistic, but, under
each permutation, we:
Compute the test statistic for each voxel individually.
Threshold the test statistic values.
Cluster voxels that exceed this threshold (with the same sign) based on
adjacency.
Retain the size of the largest cluster (measured, e.g., by a simple voxel
count, or by the sum of voxel t-values within the cluster) to build the
null distribution.
After doing these permutations, the cluster sizes in our veridical data
are compared to this null distribution. The p-value associated with each
cluster is again given by the proportion of smaller null distribution
values. This can then be subjected to a standard p-value threshold
(e.g., p < 0.05) to reject the null hypothesis (i.e., find an effect of
interest).
This reframing to consider cluster sizes rather than individual means
maintains the advantages of the standard non-parametric permutation
test -- namely controlling FWER and making no assumptions of parametric
data distribution.
Critically, though, it also accounts for the correlation structure in the
data -- which in this toy case is spatial but in general can be
multidimensional (e.g., spatio-temporal) -- because the null distribution
will be derived from data in a way that preserves these correlations.
Sidebar (effect size): For a nice description of how to compute the effect size obtained
in a cluster test, see the FieldTrip mailing-list discussion on cluster effect sizes.
However, there is a drawback. If a cluster significantly deviates from
the null, no further inference on the cluster (e.g., peak location) can be
made, as the entire cluster as a whole is used to reject the null.
Moreover, because the test statistic concerns the full data, the null
hypothesis (and our rejection of it) refers to the structure of the full
data. For more information, see also the comprehensive
FieldTrip tutorial.
Defining the adjacency matrix
First we need to define our adjacency (sometimes called "neighbors") matrix.
This is a square array (or sparse matrix) of shape (n_src, n_src) that
contains zeros and ones to define which spatial points are neighbors, i.e.,
which voxels are adjacent to each other. In our case this
is quite simple, as our data are aligned on a rectangular grid.
Let's pretend that our data were smaller -- a 3 × 3 grid. Thinking about
each voxel as being connected to the other voxels it touches, we would
need a 9 × 9 adjacency matrix. The first row of this matrix contains the
voxels in the flattened data that the first voxel touches. Since it touches
the second element in the first row and the first element in the second row
(and is also a neighbor to itself), this would be::
[1, 1, 0, 1, 0, 0, 0, 0, 0]
sklearn.feature_extraction provides a convenient function for this:
End of explanation
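Before calling the MNE function, the "threshold then cluster" step can be made concrete with a rough scipy.ndimage sketch on the observed t-map (this ignores the separate handling of positive and negative clusters that MNE performs).
from scipy import ndimage
t_map = ts[0]                                   # uncorrected t-values, shape (width, width)
dof_demo = n_subjects - 1
t_thresh_demo = stats.distributions.t.ppf(1 - 0.05 / 2, df=dof_demo)
labels, n_clusters = ndimage.label(np.abs(t_map) > t_thresh_demo)
sizes = ndimage.sum(np.ones_like(t_map), labels, index=range(1, n_clusters + 1))
print('found %d suprathreshold clusters, largest has %d voxels'
      % (n_clusters, int(max(sizes, default=0))))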
titles.append('Clustering')
# Reshape data to what is equivalent to (n_samples, n_space, n_time)
X.shape = (n_subjects, width, width)
# Compute threshold from t distribution (this is also the default)
# Here we use a two-tailed test, hence we need to divide alpha by 2.
# Subtracting alpha from 1 guarantees that we get a positive threshold,
# which MNE-Python expects for two-tailed tests.
df = n_subjects - 1 # degrees of freedom
t_thresh = stats.distributions.t.ppf(1 - alpha / 2, df=df)
# run the cluster test
t_clust, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=None, threshold=t_thresh, adjacency=None,
n_permutations=n_permutations, out_type='mask')
# Put the cluster data in a viewable format
p_clust = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_clust[cl] = p
ts.append(t_clust)
ps.append(p_clust)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: In general the adjacency between voxels can be more complex, such as
those between sensors in 3D space, or time-varying activation at brain
vertices on a cortical surface. MNE provides several convenience functions
for computing adjacency matrices, for example:
mne.channels.find_ch_adjacency
mne.channels.read_ch_adjacency
mne.stats.combine_adjacency
See the MNE Statistics API documentation for a full list.
Standard clustering
Here, since our data are on a grid, we can use adjacency=None to
trigger optimized grid-based code, and run the clustering algorithm.
End of explanation
titles.append(r'$\mathbf{C_{hat}}$')
stat_fun_hat = partial(ttest_1samp_no_p, sigma=sigma)
t_hat, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=None, threshold=t_thresh, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
p_hat = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_hat[cl] = p
ts.append(t_hat)
ps.append(p_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: "Hat" variance adjustment
This method can also be used in this context to correct for small
variances (Ridgway et al., 2012):
End of explanation
titles.append(r'$\mathbf{C_{TFCE}}$')
threshold_tfce = dict(start=0, step=0.2)
t_tfce, _, p_tfce, H0 = permutation_cluster_1samp_test(
X, n_jobs=None, threshold=threshold_tfce, adjacency=None,
n_permutations=n_permutations, out_type='mask')
ts.append(t_tfce)
ps.append(p_tfce)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: Threshold-free cluster enhancement (TFCE)
TFCE eliminates the free parameter initial threshold value that
determines which points are included in clustering by approximating
a continuous integration across possible threshold values with a standard
Riemann sum (Smith & Nichols, 2009).
This requires giving a starting threshold start and a step
size step, which in MNE is supplied as a dict.
The smaller the step and closer to 0 the start value,
the better the approximation, but the longer it takes.
A significant advantage of TFCE is that, rather than modifying the
statistical null hypothesis under test (from one about individual voxels
to one about the distribution of clusters in the data), it modifies the data
under test while still controlling for multiple comparisons.
The statistical test is then done at the level of individual voxels rather
than clusters. This allows for evaluation of each point
independently for significance rather than only as cluster groups.
End of explanation
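For reference, the quantity TFCE approximates for each point $p$ is, in the notation of Smith & Nichols (2009), $\mathrm{TFCE}(p) = \int_{h_0}^{h_p} e(h)^E \, h^H \, dh \approx \sum_h e(h)^E \, h^H \, \Delta h$, where $e(h)$ is the extent of the cluster containing $p$ when the map is thresholded at height $h$ and $h_p$ is the value at $p$.
The paper's suggested defaults are $E = 0.5$ and $H = 2$, while the start/step dict above controls how finely the sum samples $h$.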
titles.append(r'$\mathbf{C_{hat,TFCE}}$')
t_tfce_hat, _, p_tfce_hat, H0 = permutation_cluster_1samp_test(
X, n_jobs=None, threshold=threshold_tfce, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
ts.append(t_tfce_hat)
ps.append(p_tfce_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: We can also combine TFCE and the "hat" correction:
End of explanation
fig = plt.figure(facecolor='w', figsize=(14, 3))
assert len(ts) == len(titles) == len(ps)
for ii in range(len(ts)):
ax = [fig.add_subplot(2, 10, ii + 1, projection='3d'),
fig.add_subplot(2, 10, 11 + ii)]
plot_t_p(ts[ii], ps[ii], titles[ii], mccs[ii], ax)
fig.tight_layout(pad=0, w_pad=0.05, h_pad=0.1)
plt.show()
Explanation: Visualize and compare methods
Let's take a look at these statistics. The top row shows each test statistic,
and the bottom shows p-values for various statistical tests, with the ones
with proper control over FWER or FDR with bold titles.
End of explanation |
10,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cartopy in a nutshell
Cartopy is a Python package that provides easy creation of maps, using matplotlib, for the analysis and visualisation of geospatial data.
In order to create a map with cartopy and matplotlib, we typically need to import pyplot from matplotlib and cartopy's crs (coordinate reference system) submodule. These are typically imported as follows
Step1: Cartopy's matplotlib interface is set up via the projection keyword when constructing a matplotlib Axes / SubAxes instance. The resulting axes instance has new methods, such as the coastlines() method, which are specific to drawing cartographic data
Step2: A full list of Cartopy projections is available at http
Step3: Notice that unless we specify a map extent (we did so via the set_global method in this case) the map will zoom into the range of the plotted data.
We can add graticule lines and tick labels to the map using the gridlines method (this currently is limited to just a few coordinate reference systems)
Step4: We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters
Step5: Cartopy cannot currently label all types of projection, though more work is intended on this functionality in the future.
Exercise 1
The following snippet of code produces coordinate arrays and some data in a rotated pole coordinate system. The coordinate system for the x and y values, which is similar to that found in the some limited area models of Europe, has a projection "north pole" at 177.5 longitude and 37.5 latitude. | Python Code:
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
Explanation: Cartopy in a nutshell
Cartopy is a Python package that provides easy creation of maps, using matplotlib, for the analysis and visualisation of geospatial data.
In order to create a map with cartopy and matplotlib, we typically need to import pyplot from matplotlib and cartopy's crs (coordinate reference system) submodule. These are typically imported as follows:
End of explanation
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
plt.show()
Explanation: Cartopy's matplotlib interface is set up via the projection keyword when constructing a matplotlib Axes / SubAxes instance. The resulting axes instance has new methods, such as the coastlines() method, which are specific to drawing cartographic data:
End of explanation
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.set_global()
plt.plot([-100, 50], [25, 25], linewidth=4, transform=ccrs.Geodetic())
plt.show()
Explanation: A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html.
To draw cartographic data, we use the standard matplotlib plotting routines with an additional transform keyword argument. The value of the transform argument should be the cartopy coordinate reference system of the data being plotted:
End of explanation
ax = plt.axes(projection=ccrs.Mercator())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
plt.show()
Explanation: Notice that unless we specify a map extent (we did so via the set_global method in this case) the map will zoom into the range of the plotted data.
We can add graticule lines and tick labels to the map using the gridlines method (this currently is limited to just a few coordinate reference systems):
End of explanation
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LATITUDE_FORMATTER
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
gl.xlocator = mticker.FixedLocator([-180, -45, 0, 45, 180])
gl.yformatter = LATITUDE_FORMATTER
plt.show()
Explanation: We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
x = np.linspace(310, 390, 25)
y = np.linspace(-24, 25, 35)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(np.deg2rad(y2d) * 4) + np.sin(np.deg2rad(x2d) * 4)
Explanation: Cartopy cannot currently label all types of projection, though more work is intended on this functionality in the future.
Exercise 1
The following snippet of code produces coordinate arrays and some data in a rotated pole coordinate system. The coordinate system for the x and y values, which is similar to that found in the some limited area models of Europe, has a projection "north pole" at 177.5 longitude and 37.5 latitude.
End of explanation |
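One possible way to approach the exercise (a sketch, not an official solution) is to build a RotatedPole CRS with the stated pole and pass it as the transform when plotting the data on an ordinary map.
rotated_crs = ccrs.RotatedPole(pole_longitude=177.5, pole_latitude=37.5)
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
ax.coastlines()
ax.contourf(x, y, data, transform=rotated_crs, alpha=0.6)
plt.show()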
10,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook explores how collaborative relationships form between mailing list participants over time.
The hypothesis, loosely put, is that early exchanges are indicators of growing relationships or trust that should be reflected in information flow at later times.
Step1: Next we'll import dependencies.
Step2: Let's begin with just one mailing list to simplify.
Step3: Let's look at the matrix of who replies to whom over the whole history of the list, to get a sense of the overall distribution
Step4: In particular we are interested in who replied to who at each time. Recall that this is an open mailing list--everybody potentially reads each message. A response from A to B is an indication that A read B's original message. Therefore, a response indicates not just a single message from A to B, but an exchange from B to A and back again.
Below we modify our data to see who replied to whom.
Step5: The next step is to create a DataFrame that for each pair A and B
Step6: The "duration" column gives us a datetime data structure so we must be careful later on when extracting information from this column.
Step7: Now let's create a dataframe that consists of these three quantities (duration, number of replies, and reciprocity) for each pair of contributers who interacted.
To get all the unique pairs we can use the unique_pairs function in twopeople.py. This willl give us all pairs between any two individuals who had some degree of interation.
Note
Step8: Using panda_allpairs, we can create the desired data frame
Step9: The printed values below display the counts associated for a given number of replies.
We can see that many of the interactions have a relatively small amount of replies. One possible explanation for this large amount may be having many interactions that are merely a follow-up or question followed by a thank you response (we can explore this further by parsing the message bodies in arx.data but for now we will just speculate).
Step10: To get a better idea of what's going on for larger values, let's look at the rows that have number of replies greater than 5.
Step11: The graph seems to follow a power law which is expected for this type of data.
Now let's see if we can find any patters between the number of replies and reciprocity. Intuitively, we would expect that the number of replies be positively associated with reciprocity but let's see...
We will first look at the data for which the number of replies is greater than 5 to possibly get rid of some noise (later on we will explore the data without removing these entries).
Below we divided reciprocity and number of replies into completely arbitrary bins as shown below. Hopefully, this will make it easier to see patters between these two variables as they have quite a bit of noise.
(The genId and genNumReplies functions just give each entry a corresponding label for graphing purposes later on. These labels are based on which bin a given entry falls under)
Step12: The following lines generate a data frame that contains three columns
Step13: Now that we have this data frame lets look at the corresponding histograms for each "level" of reciprocity.
Step14: It's pretty hard to compare the four histograms so let's create a contingency table for the groupsdf data frame.
Step15: Since each reciprocity group has a different amount of counts, let's normalize the counts to get a better picture of what's going on.
We will first normzalize column-wise, that is for say column A.[0,.25] we will sum the total number of responses and get the relative proportions for the replies bins.
Step16: We see that at the very extremes, namely reciprocity between 0-.25 and reciprocity between .75-1.0, there are some clear differences; reciprocity seems to be positively associated with the number of replies as we had initially expected.
On the other hand, the bin for reciprocity between .25-.5 weakens this association as this bin seems as if it should swap positions with bin A. However, since the bin widths we chose were completely arbitrary it may explain this paradox.
The fact that the extremes seem to follow our expectations is quite interesting; it provides some evidence that if we choose our bin sizes appropriately, we can perhaps get a nice positive association.
Step17: Now will do the normalization by row. This will give us the relative proportion of some bin for number of replies is distributed across the bins for reciprocity.
Step18: Now let's go back and do the exact same thing but not removing entries with a very low amount of replies.
Step19: Now we will look at various scatterplots for different variables to get a rough sense of how our data is spread.
Step20: Now let's look at some scatterplots for the entries with number of replies greater than 5.
Step21: Since we actually have the bodies of each message, we will now procceed by seeing if there are any patterns between the type of messages sent and reciprocity, duration, and the number of replies.
As a very rough measure, we have created a function calMessageLen that calculates the length of a given message. | Python Code:
%matplotlib inline
Explanation: This notebook explores how collaborative relationships form between mailing list participants over time.
The hypothesis, loosely put, is that early exchanges are indicators of growing relationships or trust that should be reflected in information flow at later times.
End of explanation
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.analysis.graph as graph
import bigbang.ingress.mailman as mailman
import bigbang.analysis.process as process
import bigbang.analysis.twopeople as twoppl
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
from pprint import pprint as pp
import pytz
import math
Explanation: Next we'll import dependencies.
End of explanation
url = "http://mail.python.org/pipermail/scipy-dev/"
arx= Archive(url,archive_dir="../archives")
arx.data.iloc[0].Body
arx.data.shape
Explanation: Let's begin with just one mailing list to simplify.
End of explanation
arx.data[arx.data['In-Reply-To'] > 0][:10]
Explanation: Let's look at the matrix of who replies to whom over the whole history of the list, to get a sense of the overall distribution
End of explanation
messages = arx.data[['From']]
responses = arx.data[arx.data['In-Reply-To'] > 0][['From','Date','In-Reply-To']]
exchanges = pd.merge(messages,responses,how='inner',right_on='In-Reply-To',left_index=True,suffixes=['_original','_response'])
exchanges
exchanges.groupby(['From_original','From_response']).count()
Explanation: In particular we are interested in who replied to who at each time. Recall that this is an open mailing list--everybody potentially reads each message. A response from A to B is an indication that A read B's original message. Therefore, a response indicates not just a single message from A to B, but an exchange from B to A and back again.
Below we modify our data to see who replied to whom.
End of explanation
twoppl.duration(exchanges, "oliphant at ee.byu.edu (Travis Oliphant)", "rossini at blindglobe.net (A.J. Rossini)" )
twoppl.panda_pair(exchanges, "oliphant at ee.byu.edu (Travis Oliphant)", "rossini at blindglobe.net (A.J. Rossini)" )
Explanation: The next step is to create a DataFrame that contains, for each pair A and B:
* The duration of time between the first reply between that pair and the last.
* The total number of replies from A to B, $r_{AB}$, and from B to A, $r_{BA}$.
* The reciprocity of the conversation $min(r_{AB},r_{BA})/max(r_{AB},r_{BA})$
Using the exchanges data frame, we can use the functions in twopeople.py to calculate the above quantities. The cell below gives a sample output for calculating duration, number of replies, and reciprocity for two specific contributors.
End of explanation
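For intuition, the same quantities can be sketched directly from the exchanges frame with plain pandas, without going through twopeople.py; this assumes From_response is the replier and From_original the author being replied to, and is an illustrative sketch rather than the library's implementation.
# Directed reply counts, keyed by (original author, responder)
directed = exchanges.groupby(['From_original', 'From_response']).size()
def sketch_reciprocity(a, b):
    r_ab = directed.get((b, a), 0)  # replies a sent to b's messages
    r_ba = directed.get((a, b), 0)  # replies b sent to a's messages
    return 0.0 if max(r_ab, r_ba) == 0 else float(min(r_ab, r_ba)) / max(r_ab, r_ba)
sketch_reciprocity("oliphant at ee.byu.edu (Travis Oliphant)",
                   "rossini at blindglobe.net (A.J. Rossini)")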
twoppl.duration(exchanges, "oliphant at ee.byu.edu (Travis Oliphant)", "rossini at blindglobe.net (A.J. Rossini)" )
Explanation: The "duration" column gives us a datetime data structure so we must be careful later on when extracting information from this column.
End of explanation
pairs = twoppl.unique_pairs(exchanges)
pairs
Explanation: Now let's create a dataframe that consists of these three quantities (duration, number of replies, and reciprocity) for each pair of contributors who interacted.
To get all the unique pairs we can use the unique_pairs function in twopeople.py. This will give us all pairs between any two individuals who had some degree of interaction.
Note: The unique pairs we get back do not include reversed pairs. For example, if one of the pairs was ("Bob", "Mary"), we would not have ("Mary", "Bob") in our output.
End of explanation
allpairs = twoppl.panda_allpairs(exchanges, pairs)
allpairs
Explanation: Using panda_allpairs, we can create the desired data frame
End of explanation
print("corresponding counts for number of replies up to 19")
print(("number of replies", "frequency of occurence"))
for i in range(20):
print((i, len(allpairs[allpairs['num_replies'] <= i]) - len(allpairs[allpairs['num_replies'] <= i - 1])))
plt.hist(allpairs['num_replies'])
plt.title("Number of replies")
Explanation: The printed values below display the counts associated with each number of replies.
We can see that many of the interactions have a relatively small number of replies. One possible explanation is that many interactions are merely a follow-up, or a question followed by a thank-you response (we could explore this further by parsing the message bodies in arx.data, but for now we will just speculate).
End of explanation
greaterThanFive = allpairs[allpairs['num_replies'] > 5]['num_replies']
counts = greaterThanFive.value_counts()
counts.plot()
Explanation: To get a better idea of what's going on for larger values, let's look at the rows that have number of replies greater than 5.
End of explanation
#Completely arbitrary bins
#Group A reciprocity between (0, .25]
#Group B reciprocity between (.25, .5]
#Group C reciprocity between (.5, .75]
#Group D reciprocity between (.75, 1.00]
#"low" number of replies less than or equal to 10
#"moderate" number of replies between 10 and 20
#"high" replies greater than 20 replies
def genId(num):
if num <= .25:
return 'A.(0, .25]'
if num <= .5:
return "B.(.25, .5]"
if num <= .75:
return "C.(.5, .75]"
return "D.(.75, 1.00]"
def genNumReplies(num):
if num <= 10:
return 'a.low'
if num <= 20:
return "b.moderate"
return "c.high"
Explanation: The graph seems to follow a power law, which is expected for this type of data.
Now let's see if we can find any patterns between the number of replies and reciprocity. Intuitively, we would expect the number of replies to be positively associated with reciprocity, but let's see...
We will first look at the data for which the number of replies is greater than 5 to possibly get rid of some noise (later on we will explore the data without removing these entries).
We divided reciprocity and number of replies into the completely arbitrary bins shown below. Hopefully, this will make it easier to see patterns between these two variables, as they have quite a bit of noise.
(The genId and genNumReplies functions just give each entry a corresponding label for graphing purposes later on. These labels are based on which bin a given entry falls under)
End of explanation
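An equivalent way to build the same labels is pandas' cut; this is just a sketch of that alternative, while the notebook keeps the explicit functions above:
recip_bins = pd.cut(allpairs['reciprocity'], bins=[0, .25, .5, .75, 1.0],
                    labels=['A.(0, .25]', 'B.(.25, .5]', 'C.(.5, .75]', 'D.(.75, 1.00]'],
                    include_lowest=True)
reply_bins = pd.cut(allpairs['num_replies'], bins=[0, 10, 20, float('inf')],
                    labels=['a.low', 'b.moderate', 'c.high'])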
moreThanFive = allpairs[allpairs['num_replies'] > 5]
recipVec = moreThanFive['reciprocity']
numReplies = moreThanFive['num_replies']
ids = recipVec.apply(lambda val: genId(val))
groupedRep = numReplies.apply(lambda val: genNumReplies(val))
groupsdf = pd.DataFrame({"num_replies": numReplies, "ids": ids, "grouped_num_replies": groupedRep})
groupsdf
Explanation: The following lines generate a data frame that contains three columns:
1) Number of replies
2) Id corresponding to replies bin
3) Id corresponding to reciprocity bin
(The extra letters such as the a in "a.low" are just used so that pandas orders the columns in the way we want)
End of explanation
grpA = groupsdf[groupsdf["ids"] == "A.(0, .25]"]['num_replies']
grpB = groupsdf[groupsdf["ids"] == "B.(.25, .5]"]['num_replies']
grpC = groupsdf[groupsdf["ids"] == "C.(.5, .75]"]['num_replies']
grpD = groupsdf[groupsdf["ids"] == "D.(.75, 1.00]"]['num_replies']
grpA.value_counts().hist()
plt.title("Number of Replies for Reciprocity between 0-.25")
grpB.value_counts().hist()
plt.title("Number of Replies for Reciprocity between .25-.5")
grpC.value_counts().hist()
plt.title("Number of Replies for Reciprocity between .5-.75")
grpD.value_counts().hist()
plt.title("Number of Replies for Reciprocity between .75-1.0")
Explanation: Now that we have this data frame let's look at the corresponding histograms for each "level" of reciprocity.
End of explanation
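The same four panels can also be produced with a single loop over the groups (sketch):
for label, grp in groupsdf.groupby('ids'):
    grp['num_replies'].value_counts().hist()
    plt.title("Number of Replies for Reciprocity " + str(label))
    plt.show()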
crossed = pd.crosstab(groupsdf["grouped_num_replies"], groupsdf["ids"])
crossed
crossed.plot()
Explanation: It's pretty hard to compare the four histograms so let's create a contingency table for the groupsdf data frame.
End of explanation
crossed.apply(lambda r: r/sum(r), axis=0)
Explanation: Since each reciprocity group has a different total count, let's normalize the counts to get a better picture of what's going on.
We will first normalize column-wise; that is, for a column such as A.(0, .25] we sum the total number of responses and get the relative proportions across the reply bins.
End of explanation
crossed.apply(lambda r: r/sum(r), axis=0).plot()
plt.title("normalized (columnwise) plot")
Explanation: We see that at the very extremes, namely reciprocity between 0-.25 and reciprocity between .75-1.0, there are some clear differences; reciprocity seems to be positively associated with the number of replies as we had initially expected.
On the other hand, the bin for reciprocity between .25-.5 weakens this association, as this bin looks as if it should swap positions with bin A. However, the bin widths we chose were completely arbitrary, which may explain this paradox.
The fact that the extremes seem to follow our expectations is quite interesting; it provides some evidence that if we choose our bin sizes appropriately, we can perhaps get a nice positive association.
End of explanation
crossed.apply(lambda r: r/sum(r), axis=1)
crossed.apply(lambda r: r/sum(r), axis=1).plot()
plt.title("normalized (row-wise) plot")
Explanation: Now we will do the normalization by row. This shows how the counts for each number-of-replies bin are distributed across the reciprocity bins.
End of explanation
recipVec2 = allpairs['reciprocity']
numReplies2 = allpairs['num_replies']
ids = recipVec2.apply(lambda val: genId(val))
groupedRep2 = numReplies2.apply(lambda val: genNumReplies(val))
groupsdf2 = pd.DataFrame({"num_replies": numReplies2, "ids": ids, "grouped_num_replies": groupedRep2})
crossed2 = pd.crosstab(groupsdf2["grouped_num_replies"], groupsdf2["ids"])
crossed2
crossed2.plot()
crossed2.apply(lambda r: r/sum(r), axis=0)
crossed2.apply(lambda r: r/sum(r), axis=0).plot()
Explanation: Now let's go back and do the exact same thing, but without removing entries with a very low number of replies.
End of explanation
plt.scatter(allpairs.num_replies, allpairs.reciprocity)
plt.title("number of replies vs. reciprocity")
allpairs['duration'] = allpairs['duration'].apply(lambda x: x.item() / pow(10,9))
plt.scatter(allpairs.duration, allpairs.num_replies)
plt.title("duration vs. number of replies")
Explanation: Now we will look at various scatterplots for different variables to get a rough sense of how our data is spread.
End of explanation
df_filt = allpairs[allpairs['num_replies'] > 5]
plt.scatter(df_filt.reciprocity, df_filt.duration)
plt.title("reciprocity vs. duration")
plt.scatter(df_filt.reciprocity, df_filt.duration.apply(lambda x: math.log(x)))
plt.title("reciprocity vs. log of duration")
plt.scatter(df_filt.duration.apply(lambda x: math.log(x+1)), df_filt.num_replies.apply(lambda x: math.log(x+1)))
plt.title("log of duration vs. log of number of replies")
Explanation: Now let's look at some scatterplots for the entries with number of replies greater than 5.
End of explanation
def calMessageLen(message):
if message == None:
return 0
return len(message)
arx.data['length'] = arx.data['Body'].apply(lambda x: calMessageLen(x))
arx.data
Explanation: Since we actually have the bodies of each message, we will now proceed by seeing if there are any patterns between the type of message sent and reciprocity, duration, and the number of replies.
As a very rough measure, we have created a function calMessageLen that calculates the length of a given message.
End of explanation |
10,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Systemic Velocity
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now we'll create empty lc, rv, orb, and mesh datasets. We'll then look to see how the systemic velocity (vgamma) affects the observables in each of these datasets, and how those are also affected by light-time effects (ltte).
To see the effects over long times, we'll compute one cycle starting at t=0, and another in the distant future.
Step3: Changing Systemic Velocity and LTTE
IMPORTANT NOTE
Step4: We'll leave it set at 0.0 for now, and then change vgamma to see how that affects the observables.
The other relevant parameter here is t0 - that is the time at which all quantities are provided, the time at which nbody integration would start (if applicable), and the time at which the center-of-mass of the system is defined to be at (0,0,0). Unless you have a reason to do otherwise, it makes sense to have this value near the start of your time data... so if we don't have any other changing quantities defined in our system and are using BJDs, we would want to set this to be non-zero. In this case, our times all start at 0, so we'll leave t0 at 0 as well.
Step5: The option to enable or disable LTTE is in the compute options; we can either set ltte or just temporarily pass a value when we call run_compute.
Step6: Let's first compute the model with 0 systemic velocity and ltte=False (not that it would matter in this case). Let's also name the model so we can keep track of what settings were used.
Step7: For our second model, we'll set a somewhat ridiculous value for the systemic velocity (so that the effects are exaggerated and clearly visible over one orbit), but leave ltte off.
Step8: Lastly, let's leave this value of vgamma, but enable light-time effects.
Step9: Influence on Light Curves (fluxes)
Now let's compare the various models across all our different datasets.
In each of the figures below, the left panel will be the first cycle (days 0-3) and the right panel will be 100 cycles later (days 900-903).
No systemic velocity will be shown in blue, systemic velocity with ltte=False in red, and systemic velocity with ltte=True in green.
Without light-time effects, the light curve remains unchanged by the introduction of a systemic velocity.
Step10: However, once ltte is enabled, the time between two eclipses (ie the observed period of the system) changes. This occurs because the path between the system and observer has changed. This is an important effect to note - the period parameter sets the TRUE period of the system, not necessarily the observed period between two successive eclipses.
Step11: Influence on Radial Velocities
Radial velocities are perhaps the most logical observable in the case of systemic velocities. Introducing a non-zero value for vgamma simply offsets the observed values.
Step12: Light-time will have a similar effect on RVs as it does on LCs - it simply changes the observed period.
Step13: Influence on Orbits (positions, velocities)
In the orbit, the addition of a systemic velocity affects both the positions and velocities. So if we plot the orbits from above (x-z plane) we can see the orbit spiral in the z-direction. Note that this actually shows the barycenter of the orbit moving - and it was only at 0,0,0 at t0. This also stresses the importance of using a reasonable t0 - here 900 days later, the barycenter has moved significantly from the center of the coordinate system.
Step14: Plotting the z-velocities with respect to time would show the same as the RVs, except without any Rossiter-McLaughlin like effects. Note however the flip in z-convention between vz and radial velocities (+z is defined as towards the observer to make a right-handed system, but by convention +rv is a red shift).
Step15: Now let's look at the effect that enabling ltte has on these same plots.
Step16: Influence on Meshes
Step17: As you can see, since the center of mass of the system was at 0,0,0 at t0 - including systemic velocity actually shows the system spiraling towards or away from the observer (who is in the positive z direction). In other words - the positions of the meshes are affected in the same way as the orbits (note the offset on the ylimit scales).
In addition, the actual values of vz and rv in the meshes are adjusted to include the systemic velocity. | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
Explanation: Systemic Velocity
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
times1 = np.linspace(0,1,201)
times2 = np.linspace(90,91,201)
b.add_dataset('lc', times=times1, dataset='lc1')
b.add_dataset('lc', times=times2, dataset='lc2')
b.add_dataset('rv', times=times1, dataset='rv1')
b.add_dataset('rv', times=times2, dataset='rv2')
b.add_dataset('orb', times=times1, dataset='orb1')
b.add_dataset('orb', times=times2, dataset='orb2')
b.add_dataset('mesh', times=[0], dataset='mesh1')
b.add_dataset('mesh', times=[900], dataset='mesh2')
Explanation: Now we'll create empty lc, rv, orb, and mesh datasets. We'll then look to see how the systemic velocity (vgamma) affects the observables in each of these datasets, and how those are also affected by light-time effects (ltte).
To see the effects over long times, we'll compute one cycle starting at t=0, and another in the distant future.
End of explanation
b['vgamma@system']
Explanation: Changing Systemic Velocity and LTTE
IMPORTANT NOTE: the definition of vgamma in the 2.0.x releases is to be in the direction of positive vz, and therefore negative RV. This is inconsistent with the classical definition used in PHOEBE legacy. The 2.0.4 bugfix release addresses this by converting when importing or exporting legacy files. Note that starting in the 2.1 release, the definition within PHOEBE 2 will be changed such that vgamma will be in the direction of positive RV and negative vz.
By default, vgamma is initially set to 0.0
End of explanation
b['t0@system']
Explanation: We'll leave it set at 0.0 for now, and then change vgamma to see how that affects the observables.
The other relevant parameter here is t0 - that is the time at which all quantities are provided, the time at which nbody integration would start (if applicable), and the time at which the center-of-mass of the system is defined to be at (0,0,0). Unless you have a reason to do otherwise, it makes sense to have this value near the start of your time data... so if we don't have any other changing quantities defined in our system and are using BJDs, we would want to set this to be non-zero. In this case, our times all start at 0, so we'll leave t0 at 0 as well.
End of explanation
b['ltte@compute']
Explanation: The option to enable or disable LTTE is in the compute options; we can either set ltte or just temporarily pass a value when we call run_compute.
End of explanation
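For reference, a minimal sketch of the first route, using the same dictionary-style access as the rest of this notebook (the cells below take the second route and simply pass ltte per call):
b['ltte@compute'] = False  # persist the choice in the compute options (False is the default)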
b.run_compute(irrad_method='none', model='0_false')
Explanation: Let's first compute the model with 0 systemic velocity and ltte=False (not that it would matter in this case). Let's also name the model so we can keep track of what settings were used.
End of explanation
b['vgamma@system'] = 100
b.run_compute(irrad_method='none', model='100_false')
Explanation: For our second model, we'll set a somewhat ridiculous value for the systemic velocity (so that the effects are exaggerated and clearly visible over one orbit), but leave ltte off.
End of explanation
b.run_compute(irrad_method='none', ltte=True, model='100_true')
Explanation: Lastly, let's leave this value of vgamma, but enable light-time effects.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['lc1@0_false'].plot(color='b', ax=ax1)
axs, artists = b['lc1@100_false'].plot(color='r', ax=ax1)
axs, artists = b['lc2@0_false'].plot(color='b', ax=ax2)
axs, artists = b['lc2@100_false'].plot(color='r', ax=ax2)
Explanation: Influence on Light Curves (fluxes)
Now let's compare the various models across all our different datasets.
In each of the figures below, the left panel will be the first cycle (days 0-3) and the right panel will be 100 cycles later (days 900-903).
No systemic velocity will be shown in blue, systemic velocity with ltte=False in red, and systemic velocity with ltte=True in green.
Without light-time effects, the light curve remains unchanged by the introduction of a systemic velocity.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['lc1@100_false'].plot(color='r', ax=ax1)
axs, artists = b['lc1@100_true'].plot(color='g', ax=ax1)
axs, artists = b['lc2@100_false'].plot(color='r', ax=ax2)
axs, artists = b['lc2@100_true'].plot(color='g', ax=ax2)
Explanation: However, once ltte is enabled, the time between two eclipses (ie the observed period of the system) changes. This occurs because the path between the system and observer has changed. This is an important effect to note - the period parameter sets the TRUE period of the system, not necessarily the observed period between two successive eclipses.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['rv1@0_false'].plot(color='b', ax=ax1)
axs, artists = b['rv1@100_false'].plot(color='r', ax=ax1)
axs, artists = b['rv2@0_false'].plot(color='b', ax=ax2)
axs, artists = b['rv2@100_false'].plot(color='r', ax=ax2)
Explanation: Influence on Radial Velocities
Radial velocities are perhaps the most logical observable in the case of systemic velocities. Introducing a non-zero value for vgamma simply offsets the observed values.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['rv1@100_false'].plot(color='r', ax=ax1)
axs, artists = b['rv1@100_true'].plot(color='g', ax=ax1)
axs, artists = b['rv2@100_false'].plot(color='r', ax=ax2)
axs, artists = b['rv2@100_true'].plot(color='g', ax=ax2)
Explanation: Light-time will have a similar effect on RVs as it does on LCs - it simply changes the observed period.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['orb1@0_false'].plot(x='xs', y='zs', color='b', ax=ax1)
axs, artists = b['orb1@100_false'].plot(x='xs', y='zs', color='r', ax=ax1)
axs, artists = b['orb2@0_false'].plot(x='xs', y='zs', color='b', ax=ax2)
axs, artists = b['orb2@100_false'].plot(x='xs', y='zs', color='r', ax=ax2)
Explanation: Influence on Orbits (positions, velocities)
In the orbit, the addition of a systemic velocity affects both the positions and velocities. So if we plot the orbits from above (x-z plane) we can see the orbit spiral in the z-direction. Note that this actually shows the barycenter of the orbit moving - and it was only at 0,0,0 at t0. This also stresses the importance of using a reasonable t0 - here 900 days later, the barycenter has moved significantly from the center of the coordinate system.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['orb1@0_false'].plot(x='times', y='vzs', color='b', ax=ax1)
axs, artists = b['orb1@100_false'].plot(x='times', y='vzs', color='r', ax=ax1)
axs, artists = b['orb2@0_false'].plot(x='times', y='vzs', color='b', ax=ax2)
axs, artists = b['orb2@100_false'].plot(x='times', y='vzs', color='r', ax=ax2)
Explanation: Plotting the z-velocities with respect to time would show the same as the RVs, except without any Rossiter-McLaughlin like effects. Note however the flip in z-convention between vz and radial velocities (+z is defined as towards the observer to make a right-handed system, but by convention +rv is a red shift).
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['orb1@100_false'].plot(x='xs', y='zs', color='r', ax=ax1)
axs, artists = b['orb1@100_true'].plot(x='xs', y='zs', color='g', ax=ax1)
axs, artists = b['orb2@100_false'].plot(x='xs', y='zs', color='r', ax=ax2)
axs, artists = b['orb2@100_true'].plot(x='xs', y='zs', color='g', ax=ax2)
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['orb1@100_false'].plot(x='times', y='vzs', color='r', ax=ax1)
axs, artists = b['orb1@100_true'].plot(x='times', y='vzs', color='g', ax=ax1)
axs, artists = b['orb2@100_false'].plot(x='times', y='vzs', color='r', ax=ax2)
axs, artists = b['orb2@100_true'].plot(x='times', y='vzs', color='g', ax=ax2)
Explanation: Now let's look at the effect that enabling ltte has on these same plots.
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['mesh1@0_false'].plot(time=0.0, x='xs', y='zs', ax=ax1)
axs, artists = b['mesh1@100_false'].plot(time=0.0, x='xs', y='zs', ax=ax1)
ax1.set_xlim(-10,10)
ax1.set_ylim(-10,10)
axs, artists = b['mesh2@0_false'].plot(time=900.0, x='xs', y='zs', ax=ax2)
axs, artists = b['mesh2@100_false'].plot(time=900.0, x='xs', y='zs', ax=ax2)
ax2.set_xlim(-10,10)
ax2.set_ylim(-10,10)
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['mesh1@100_false'].plot(time=0.0, x='xs', y='zs', ax=ax1)
axs, artists = b['mesh1@100_true'].plot(time=0.0, x='xs', y='zs', ax=ax1)
ax1.set_xlim(-10,10)
ax1.set_ylim(-10,10)
axs, artists = b['mesh2@100_false'].plot(time=900.0, x='xs', y='zs', ax=ax2)
axs, artists = b['mesh2@100_true'].plot(time=900.0, x='xs', y='zs', ax=ax2)
ax2.set_xlim(-10,10)
ax2.set_ylim(11170,11200)
Explanation: Influence on Meshes
End of explanation
b['primary@mesh1@0_false'].get_value('vzs', time=0.0)[:5]
b['primary@mesh1@100_false'].get_value('vzs', time=0.0)[:5]
b['primary@mesh1@100_true'].get_value('vzs', time=0.0)[:5]
Explanation: As you can see, since the center of mass of the system was at 0,0,0 at t0 - including systemic velocity actually shows the system spiraling towards or away from the observer (who is in the positive z direction). In other words - the positions of the meshes are affected in the same way as the orbits (note the offset on the ylimit scales).
In addition, the actual values of vz and rv in the meshes are adjusted to include the systemic velocity.
End of explanation |
10,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WMI Module Load
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Look for processes (non wmiprvse.exe or WmiApSrv.exe) loading wmi modules
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: WMI Module Load
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/11 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI modules to execute WMI tasks bypassing controls monitoring for wmiprvse.exe or wmiapsrv.exe activity
Technical Context
WMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM).
Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise. An example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.
At a high level, Microsoft’s implementation of these standards can be summarized as follows > Managed Components Managed components are represented as WMI objects — class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.
WMI modules loaded by legit processes such as wmiprvse.exe or wmiapsrv.exe are the following
C:\Windows\System32\wmiclnt.dll
C:\Windows\System32\wbem\WmiApRpl.dll
C:\Windows\System32\wbem\wmiprov.dll
C:\Windows\System32\wbem\wmiutils.dll
Offensive Tradecraft
Adversaries could leverage the WMI modules above to execute WMI tasks bypassing controls looking for wmiprvse.exe or wmiapsrv.exe activity.
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/05_defense_evasion/SDWIN-190518200432.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/defense_evasion/host/empire_psinject_PEinjection.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/defense_evasion/host/empire_psinject_PEinjection.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ImageLoaded
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND (
lower(ImageLoaded) LIKE "%wmiclnt.dll"
OR lower(ImageLoaded) LIKE "%WmiApRpl.dll"
OR lower(ImageLoaded) LIKE "%wmiprov.dll"
OR lower(ImageLoaded) LIKE "%wmiutils.dll"
OR lower(ImageLoaded) LIKE "%wbemcomn.dll"
OR lower(ImageLoaded) LIKE "%WMINet_Utils.dll"
OR lower(ImageLoaded) LIKE "%wbemsvc.dll"
OR lower(ImageLoaded) LIKE "%fastprox.dll"
OR lower(Description) LIKE "%wmi%"
)
AND NOT (
lower(Image) LIKE "%wmiprvse.exe"
OR lower(Image) LIKE "%wmiapsrv.exe"
OR lower(Image) LIKE "%svchost.exe"
)
'''
)
df.show(10,False)
Explanation: Analytic I
Look for processes (non wmiprvse.exe or WmiApSrv.exe) loading wmi modules
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation |
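If you prefer to continue the triage in pandas, the Spark result set (small for Mordor datasets) can be collected locally; a quick sketch:
pandas_df = df.toPandas()
pandas_df.groupby(['Hostname', 'Image']).size()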
10,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PhenoCam ROI Summary Files
Here's a python notebook demonstrating how to read in and plot an ROI (Region of Interest) summary using python. In this case I'm using the 1-day summary file from the alligatorriver site. The summary files are in CSV format and can be read directly from the site using a URL. Before reading from a URL let's make sure we can read directly from a file.
Step1: While the data can be read directly from a URL we'll start by doing the simple thing of reading the CSV file directly from our local disk.
Step2: That was pretty simple. Now try to read directly from a URL to see if we get the same result. This has the advantage that you always get the latest version of the file which is updated nightly.
Step3: Use the requests package to read the CSV file from the URL.
Step4: If necessary we'll need to convert nodata values.
Step5: We can look at other columns and also filter the data in a variety of ways. Recently we had a site where the number of images varied a lot over time. Let's look at how consistent the number of images is for the alligator river site. The image_count reflects our brightness threshold, which will eliminate images in the winter time when the days are shorter. But there are a number of other ways the image count can be reduced. The ability to reliably extract a 90th percentile value depends on the number of images available for a particular summary period.
Step6: One possibility would be to filter the data for summary periods which had at least 10 images. | Python Code:
%matplotlib inline
import os, sys
import numpy as np
import matplotlib
import pandas as pd
import requests
import StringIO
# set matplotlib style
matplotlib.style.use('ggplot')
sitename = 'alligatorriver'
roiname = 'DB_0001'
infile = "{}_{}_1day.csv".format(sitename, roiname)
print infile
%%bash
head -30 alligatorriver_DB_0001_1day.csv
Explanation: PhenoCam ROI Summary Files
Here's a python notebook demonstrating how to read in and plot an ROI (Region of Interest) summary using python. In this case I'm using the 1-day summary file from the alligatorriver site. The summary files are in CSV format and can be read directly from the site using a URL. Before reading from a URL let's make sure we can read directly from a file.
End of explanation
with open(infile,'r') as fd:
df = pd.read_csv(fd, comment='#', parse_dates=[0])
df.head()
df.plot('date', ['gcc_90'], figsize=(16,4),
grid=True, style=['g'] )
Explanation: While the data can be read directly from a URL we'll start by doing the simple thing of reading the CSV file directly from our local disk.
End of explanation
url = "https://phenocam.sr.unh.edu/data/archive/{}/ROI/{}_{}_1day.csv"
url = url.format(sitename, sitename, roiname)
print url
Explanation: That was pretty simple. Now try to read directly from a URL to see if we get the same result. This has the advantage that you always get the latest version of the file which is updated nightly.
End of explanation
response = requests.get(url)
fd = StringIO.StringIO(response.text)
df = pd.read_csv(fd, comment='#', parse_dates=[0])
fd.close
df[0:5]
Explanation: Use the requests package to read the CSV file from the URL.
End of explanation
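As an aside, on Python 3 the io module replaces StringIO, and pandas can read the CSV straight from the URL; a minimal sketch, not what the Python 2 code above runs:
df_direct = pd.read_csv(url, comment='#', parse_dates=[0])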
# use .loc so the assignment modifies df itself rather than a temporary copy
df.loc[df['gcc_90'] == -9999., 'gcc_90'] = np.nan
df.plot('date', ['gcc_90'], figsize=(16,4),
grid=True, style=['g'] )
Explanation: If necessary we'll need to convert nodata values.
End of explanation
df.plot('date','image_count', figsize=(16,4), style='b')
Explanation: We can look at other columns and also filter the data in a variety of ways. Recently we had a site where the number of images varied a lot over time. Let's look at how consistent the number of images is for the alligator river site. The image_count reflects our brightness threshold, which will eliminate images in the winter time when the days are shorter. But there are a number of other ways the image count can be reduced. The ability to reliably extract a 90th percentile value depends on the number of images available for a particular summary period.
End of explanation
df10 = df[df['image_count'] >= 10]
df10.plot('date', ['gcc_90'], figsize=(16,4),
grid=True, style=['g'] )
Explanation: One possibility would be to filter the data for summary periods which had at least 10 images.
End of explanation |
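A quick way to see how much data such a filter would discard (sketch):
# fraction of summary periods with fewer than 10 images
frac_low = (df['image_count'] < 10).mean()
print(frac_low)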
10,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: 3T_SQL Practice for Data Analysis (2) - SUB QUERY, HAVING
Output the revenue per user. customer, payment
Step3: JOIN is a bit harder, but it is faster than WHERE.
Step8: Let's slowly work through subqueries and HAVING again
Users with 30 or more rentals
Step9: pandas
SUBQUERY - 1. Rentals per User, 2. Pick the users with 30 or more | Python Code:
import numpy as np
import pandas as pd
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset='utf8',
)
customer_df = pd.read_sql("SELECT * FROM customer;", db)
payment_df = pd.read_sql("SELECT * FROM payment;", db)
customer_df.head(1)
payment_df.head(1)
SQL_QUERY =
SELECT c.first_name, c.last_name, SUM(p.amount) "Revenue"
FROM
customer c
JOIN payment p
ON p.customer_id = c.customer_id
GROUP BY c.customer_id
ORDER BY Revenue DESC
;
pd.read_sql(SQL_QUERY, db)
SQL_QUERY =
SELECT
c.customer_id,
SUM(p.amount)
FROM payment p, customer c
WHERE p.customer_id = c.customer_id
GROUP BY c.customer_id
;
pd.read_sql(SQL_QUERY, db)
Explanation: 3T_SQL Practice for Data Analysis (2) - SUB QUERY, HAVING
Output the revenue per user. customer, payment
End of explanation
payment_df.groupby("customer_id").agg({"amount": np.sum})
Explanation: JOIN is a bit harder, but it is faster than WHERE.
End of explanation
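For reference, the same per-user revenue with the customer names attached can also be sketched in pandas, using the customer_df and payment_df frames loaded above:
revenue = (payment_df.merge(customer_df, on="customer_id")
           .groupby(["first_name", "last_name"])["amount"].sum()
           .sort_values(ascending=False))
revenue.head()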
rental_df = pd.read_sql("SELECT * FROM rental;", db)
rental_df.head(1)
customer_df.head(1)
SQL_QUERY =
SELECT
c.first_name,
c.last_name,
COUNT(*) "rentals_per_customer"
FROM
rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY c.customer_id
HAVING rentals_per_customer >=30
ORDER BY 3 DESC
;
pd.read_sql(SQL_QUERY, db)
SQL_QUERY =
SELECT
c.first_name,
c.last_name,
COUNT(*) "rentals_per_customer"
FROM
rental r,
customer c
WHERE
r.customer_id = c.customer_id
GROUP BY c.customer_id
HAVING rentals_per_customer >= 30
;
pd.read_sql(SQL_QUERY, db)
RENTALS_PER_CUSTOMER_SQL_QUERY =
SELECT
c.first_name,
c.last_name,
COUNT(*) "rentals_per_customer"
FROM
rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY c.customer_id
;
SQL_QUERY =
SELECT *
FROM ({RENTALS_PER_CUSTOMER_SQL_QUERY}) as rpc
WHERE rentals_per_customer >= 30
;
.format(RENTALS_PER_CUSTOMER_SQL_QUERY=RENTALS_PER_CUSTOMER_SQL_QUERY.replace(";", ""))
# print(SQL_QUERY)
pd.read_sql(SQL_QUERY, db)
Explanation: Let's slowly work through subqueries and HAVING again
Users with 30 or more rentals
End of explanation
rc_df = rental_df.merge(customer_df, on="customer_id")
rc_df.groupby("customer_id").size() >= 30
rentals_per_customer_df = rc_df.groupby("customer_id").agg({"customer_id": np.size})
is_30 = rentals_per_customer_df.customer_id >= 30  # 30 or more rentals, matching the SQL HAVING clause
rentals_per_customer_df[is_30]
Explanation: pandas
SUBQUERY - 1. Rentals per User, 2. Pick the users with 30 or more
End of explanation |
10,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Three Little Circles
The "Hello World" (or Maxwell's Equations) of d3, Three Little Circles introduces all of the main concepts in d3, which gives you a pretty good grounding in data visualization, JavaScript, and SVG. Let's try out some circles in livecoder.
First, we need Livecoder, and traitlets, the Observer/Observable pattern used in building widgets.
Step1: Livecoder by itself doesn't do much. Let's add a traitlet for where we want to draw the circles (the cx attribute).
Step2: Notice the sync argument
Step3: Almost there! To view our widget, we need to display it, which is the default behavior by just having the widget be the last line of a code cell. | Python Code:
from livecoder.widgets import Livecoder
from IPython.utils import traitlets as T
Explanation: Three Little Circles
The "Hello World" (or Maxwell's Equations) of d3, Three Little Circles introduces all of the main concepts in d3, which gives you a pretty good grounding in data visualization, JavaScript, and SVG. Let's try out some circles in livecoder.
First, we need Livecoder, and traitlets, the Observer/Observable pattern used in building widgets.
End of explanation
class ThreeCircles(Livecoder):
x = T.Tuple([1, 2, 3], sync=True)
Explanation: Livecoder by itself doesn't do much. Let's add a traitlet for where we want to draw the circles (the cx attribute).
End of explanation
circles = ThreeCircles(description="three-circles")
circles.description
Explanation: Notice the sync argument: this tells IPython that it should propagate changes to the front-end. No REST for the wicked?
End of explanation
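Because x is declared with sync=True, assigning a new tuple from the kernel is pushed to the browser automatically; a quick sketch (the actual redraw depends on the JavaScript side, which isn't shown here):
circles.x = (10, 40, 80)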
circles
Explanation: Almost there! To view our widget, we need to display it, which is the default behavior by just having the widget be the last line of a code cell.
End of explanation |
10,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step13: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard
Step14: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step15: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step16: Train a model
training.create-python-pre-built-container
Create and run custom training job
To train a custom model, you perform two steps
Step17: Example output
Step18: general.import-model
Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step19: Example output
Step20: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form
Step21: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
Step22: Example output
Step23: Example Output
Step24: Example Output
Step25: Example output
Step26: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step27: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.
Step28: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI: Vertex AI Migration: Custom XGBoost model with pre-built training container
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ9%20Vertex%20SDK%20Custom%20XGBoost%20with%20pre-built%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ9%20Vertex%20SDK%20Custom%20XGBoost%20with%20pre-built%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
TRAIN_VERSION = "xgboost-cpu.1-1"
DEPLOY_VERSION = "xgboost-cpu.1-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Iris tabular classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single Instance Training for Iris
import datetime
import os
import subprocess
import sys
import pandas as pd
import xgboost as xgb
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
args = parser.parse_args()
# Download data
iris_data_filename = 'iris_data.csv'
iris_target_filename = 'iris_target.csv'
data_dir = 'gs://cloud-samples-data/ai-platform/iris'
# gsutil outputs everything to stderr so we need to divert it to stdout.
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_data_filename),
iris_data_filename], stderr=sys.stdout)
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_target_filename),
iris_target_filename], stderr=sys.stdout)
# Load data into pandas, then use `.values` to get NumPy arrays
iris_data = pd.read_csv(iris_data_filename).values
iris_target = pd.read_csv(iris_target_filename).values
# Convert one-column 2D array into 1D array for use with XGBoost
iris_target = iris_target.reshape((iris_target.size,))
# Load data into DMatrix object
dtrain = xgb.DMatrix(iris_data, label=iris_target)
# Train XGBoost model
bst = xgb.train({}, dtrain, 20)
# Export the classifier to a file
model_filename = 'model.bst'
bst.save_model(model_filename)
# Upload the saved model file to Cloud Storage
gcs_model_path = os.path.join(args.model_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_iris.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="iris_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Train a model
training.create-python-pre-built-container
Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
job.run(
replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=True
)
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.CustomTrainingJob object at 0x7feab1346710>
Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
model = aip.Model.upload(
display_name="iris_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
Explanation: general.import-model
Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
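Once wait() returns, the resource name shown in the log can also be read straight off the returned object; a small sanity-check sketch:
print(model.display_name)
print(model.resource_name)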
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/759209241365/locations/us-central1/models/925164267982815232/operations/3458372263047331840
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/759209241365/locations/us-central1/models/925164267982815232
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/759209241365/locations/us-central1/models/925164267982815232')
Make batch predictions
predictions.batch-prediction
Make test items
You will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
for i in INSTANCES:
f.write(str(i) + "\n")
! gsutil cat $gcs_input_uri
Explanation: Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. Each instance in the prediction request is a list of the form:
[ [ content_1], [content_2] ]
content: The feature values of the test item as a list.
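For example, with the two INSTANCES defined above, the gsutil cat command should show one test instance per line:
[1.4, 1.3, 5.1, 2.8]
[1.5, 1.2, 4.7, 2.4]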
End of explanation
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="iris_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
machine_type: The type of machine to use for the batch prediction job.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
batch_predict_job.wait()
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction results in JSON format:
instance: The prediction request.
prediction: The prediction response.
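As a minimal sketch of reading those two fields from a single result line (the numeric value here is illustrative, not a real output):
record = json.loads('{"instance": [1.4, 1.3, 5.1, 2.8], "prediction": 2.05}')
print(record["instance"], record["prediction"])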
End of explanation
DEPLOYED_NAME = "iris-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Example Output:
{'instance': [1.4, 1.3, 5.1, 2.8], 'prediction': 2.0451931953430176}
Make online predictions
predictions.deploy-model-api
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model deployed to the endpoint. The percents must add up to 100. See the sketch after this list.
machine_type: The type of machine to use for serving online predictions.
min_replica_count: The minimum number of compute instances to provision (this is the argument used in the deploy() call above).
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
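As an illustration of the traffic_split dictionary for two models (a sketch only; the deployed model ID below is hypothetical, not one created in this tutorial):
TRAFFIC_SPLIT = {"0": 20, "8405384058034504704": 80}  # 20% to the newly deployed model, 80% to a hypothetical existing model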
End of explanation
INSTANCE = [1.4, 1.3, 5.1, 2.8]
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
predictions.online-prediction-automl
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
instances_list = [INSTANCE]
prediction = endpoint.predict(instances_list)
print(prediction)
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The predicted confidence, between 0 and 1, per class label.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
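For example, a minimal sketch of inspecting the returned object, using only the fields described above:
print(prediction.predictions)         # the prediction values
print(prediction.deployed_model_id)   # identifier of the deployed Model resource that served the request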
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
10,721 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How can I get the position (indices) of the smallest value in a multi-dimensional NumPy array `a`? | Problem:
import numpy as np
a = np.array([[10,50,30],[60,20,40]])
result = np.unravel_index(a.argmin(), a.shape) |
10,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions tutorial
In astromodels functions can be used as spectral shapes for sources, or to describe time-dependence, phase-dependence, or links among parameters.
To get the list of available functions just do
Step1: If you need more info about a function, you can obtain it by using
Step2: Note that you don't need to create an instance in order to call the info() method.
Creating functions
Functions can be created in two different ways. We can create an instance with the default values for the parameters like this
Step3: or we can specify on construction specific values for the parameters
Step4: If you don't remember the names of the parameters just call the .info() method as in powerlaw.info() as demonstrated above.
Getting information about an instance
Using the .display() method we get a representation of the instance which exploits the features of the environment we are using. If we are running inside a IPython notebook, a rich representation with the formula of the function will be displayed (if available). Otherwise, in a normal terminal, the latex formula will not be rendered
Step5: It is also possible to get the text-only representation by simply printing the object like this
Step6: NOTE
Step7: Physical units
Astromodels uses the facility defined in astropy.units to make easier to convert between units during interactive analysis, when assigning to parameters.
However, when functions are initialized their parameters do not have units, as it is evident from the .display calls above. They however get assigned units when they are used for something specific, like to represent a spectrum. For example, let's create a point source (see the "Point source tutorial" for more on this)
Step8: Now if we display the function we can see that other parameters got units as well
Step9: Note that the index has still no units, as it is intrinsically a dimensionless quantity.
We can now change the values of the parameters using units, or pure floating point numbers. In the latter case, the current unit for the parameter will be assumed
Step10: NOTE
Step11: As you can see using an assignment with units is more than 100x slower than using .scaled_value. Note that this is a feature of astropy.units, not of astromodels. Thus, do not use assignment with units in computing intensive situations.
Composing functions
We can create arbitrary complex functions by combining "primitive" functions using the normal math operators
Step12: These expressions can be as complicated as needed. For example
Step13: The numbers between {} enumerate the unique functions which constitute a composite function. This is useful because composite functions can be created starting from pre-existing instances of functions, in which case the same instance can be used more than once. For example
Step14: In this case the same instance of a power law has been used twice. Changing the value of the parameters for "a_powerlaw" will affect also the second part of the expression. Instead, by doing this
Step15: we will end up with two independent sets of parameters for the two power laws. The difference can be seen immediately from the number of parameters of the two composite functions
Step16: Composing functions as in f(g(x))
Suppose you have two functions (f and g) and you want to compose them in a new function h(x) = f(g(x)). You can accomplish this by using the .of() method | Python Code:
from astromodels import *
list_functions()
Explanation: Functions tutorial
In astromodels functions can be used as spectral shapes for sources, or to describe time-dependence, phase-dependence, or links among parameters.
To get the list of available functions just do:
End of explanation
powerlaw.info()
Explanation: If you need more info about a function, you can obtain it by using:
End of explanation
powerlaw_instance = powerlaw()
Explanation: Note that you don't need to create an instance in order to call the info() method.
Creating functions
Functions can be created in two different ways. We can create an instance with the default values for the parameters like this:
End of explanation
powerlaw_instance = powerlaw(K=-2.0, index=-2.2)
Explanation: or we can specify on construction specific values for the parameters:
End of explanation
powerlaw_instance.display()
Explanation: If you don't remember the names of the parameters just call the .info() method as in powerlaw.info() as demonstrated above.
Getting information about an instance
Using the .display() method we get a representation of the instance which exploits the features of the environment we are using. If we are running inside a IPython notebook, a rich representation with the formula of the function will be displayed (if available). Otherwise, in a normal terminal, the latex formula will not be rendered:
End of explanation
print(powerlaw_instance)
Explanation: It is also possible to get the text-only representation by simply printing the object like this:
End of explanation
# Modify current value
powerlaw_instance.K = 1.2
# Modify minimum
powerlaw_instance.K.min_value = -10
# Modify maximum
powerlaw_instance.K.max_value = 15
# We can also modify minimum and maximum at the same time
powerlaw_instance.K.set_bounds(-10, 15)
# Modifying the delta for the parameter
# (which can be used by downstream software for fitting, for example)
powerlaw_instance.K.delta = 0.25
# Fix the parameter
powerlaw_instance.K.fix = True
# or equivalently
powerlaw_instance.K.free = False
# Free it again
powerlaw_instance.K.fix = False
# or equivalently
powerlaw_instance.K.free = True
# We can verify what we just did by printing again the whole function as shown above,
# or simply printing the parameter:
powerlaw_instance.K.display()
Explanation: NOTE: the .display() method of an instance displays the current values of the parameters, while the .info() method demonstrated above (for which you don't need an instance) displays the default values of the parameters.
Modifying parameters
Modifying a parameter of a function is easy:
End of explanation
# Create a powerlaw instance with default values
powerlaw_instance = powerlaw()
# Right now the parameters of the power law don't have any unit
print("Unit of K is [%s]" % powerlaw_instance.K.unit)
# Let's use it as a spectrum for a point source
test_source = PointSource('test_source', ra=0.0, dec=0.0, spectral_shape=powerlaw_instance)
# Now the parameter K has units
print("Unit of K is [%s]" % powerlaw_instance.K.unit)
Explanation: Physical units
Astromodels uses the facility defined in astropy.units to make it easier to convert between units during interactive analysis, when assigning to parameters.
However, when functions are initialized, their parameters do not have units, as is evident from the .display calls above. They do, however, get assigned units when they are used for something specific, such as representing a spectrum. For example, let's create a point source (see the "Point source tutorial" for more on this)
End of explanation
powerlaw_instance.display()
Explanation: Now if we display the function we can see that other parameters got units as well:
End of explanation
import astropy.units as u
# Express the differential flux at the pivot energy in 1 / (MeV cm2 s)
powerlaw_instance.K = 122.3 / (u.MeV * u.cm * u.cm * u.s)
# Express the differential flux at the pivot energy in 1 / (GeV m2 s)
powerlaw_instance.K = 122.3 / (u.GeV * u.m * u.m * u.s)
# Express the differential flux at the pivot energy in its default unit
# (currently 1/(keV cm2 s))
powerlaw_instance.K = 122.3
powerlaw_instance.display()
Explanation: Note that the index has still no units, as it is intrinsically a dimensionless quantity.
We can now change the values of the parameters using units, or pure floating point numbers. In the latter case, the current unit for the parameter will be assumed:
End of explanation
print(powerlaw_instance.K.scaled_unit)
# NOTE: These requires IPython
%timeit powerlaw_instance.K.scaled_value = 122.3 # 1 / (cm2 keV s)
%timeit powerlaw_instance.K = 122.3 / (u.keV * u.cm**2 * u.s)
Explanation: NOTE: using astropy.units in an assignment makes the operation pretty slow. This is hardly noticeable in an interactive setting, but if you put an assignment with units in a for loop or in any other context where it is repeated many times, you might start to notice. For this reason, astromodels allows you to assign the value of the parameter directly in an alternative way, by using the .scaled_value property. This assumes that you are providing a simple floating point number, which implicitly uses a specific set of units, which you can retrieve with .scaled_unit like this:
End of explanation
composite = gaussian() + powerlaw()
# Instead of the usual .display(), which would print all the many parameters,
# let's print just the description of the new composite functions:
print(composite.description)
a_source = PointSource("a_source",l=24.3, b=44.3, spectral_shape=composite)
composite.display()
Explanation: As you can see using an assignment with units is more than 100x slower than using .scaled_value. Note that this is a feature of astropy.units, not of astromodels. Thus, do not use assignment with units in computing intensive situations.
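For example, a minimal sketch of a tight loop that avoids the units overhead by assigning through .scaled_value (the values are arbitrary):
for new_K in [1.0, 10.0, 100.0]:
    powerlaw_instance.K.scaled_value = new_K  # implicitly 1 / (cm2 keV s)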
Composing functions
We can create arbitrary complex functions by combining "primitive" functions using the normal math operators:
End of explanation
crazy_function = 3 * sin() + powerlaw()**2 * (5+gaussian()) / 3.0
print(crazy_function.description)
Explanation: These expressions can be as complicated as needed. For example:
End of explanation
a_powerlaw = powerlaw()
a_sin = sin()
another_composite = 2 * a_powerlaw + (3 + a_powerlaw) * a_sin
print(another_composite.description)
Explanation: The numbers between {} enumerate the unique functions which constitute a composite function. This is useful because composite functions can be created starting from pre-existing instances of functions, in which case the same instance can be used more than once. For example:
End of explanation
another_composite2 = 2 * powerlaw() + (3 + powerlaw()) * sin()
print(another_composite2.description)
Explanation: In this case the same instance of a power law has been used twice. Changing the value of the parameters for "a_powerlaw" will also affect the second part of the expression. Instead, by doing this:
End of explanation
print(len(another_composite.parameters)) # 6 parameters
print(len(another_composite2.parameters)) # 9 parameters
Explanation: we will end up with two independent sets of parameters for the two power laws. The difference can be seen immediately from the number of parameters of the two composite functions:
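As a further illustration (a minimal sketch): because another_composite shares the single a_powerlaw instance, changing that instance's parameters affects both terms at once, whereas another_composite2 keeps its two power laws independent:
a_powerlaw.index = -3.0  # propagates to both occurrences of a_powerlaw inside another_composite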
End of explanation
# Let's get two functions (for example a gaussian and a sin function)
f = gaussian()
g = sin()
# Let's compose them in a composite function h = f(g(x))
h = f.of(g)
# Verify that indeed we have composed the function
# Get a random number between 1 and 10
import numpy as np  # needed for the random draw below
x = np.random.uniform(1, 10)
print (h(x) == f(g(x)))
Explanation: Composing functions as in f(g(x))
Suppose you have two functions (f and g) and you want to compose them in a new function h(x) = f(g(x)). You can accomplish this by using the .of() method:
End of explanation |
10,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Mobile Gaming Using Feature Store
Learning objectives
In this notebook, you learn how to
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step5: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step6: Only if your bucket doesn't already exist
Step7: Run the following cell to grant access to your Cloud Storage resources from Vertex AI Feature store
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Create a Bigquery dataset
You create the BigQuery dataset to store the data along the demo.
Step10: Import libraries
Step11: Define constants
Step19: Helpers
Step20: Setting the realtime scenario
In order to make real-time churn prediction, you need to
Collect the historical data about user's events and behaviors
Design your data model, build your feature and ingest them into the Feature store to serve both offline for training and online for serving.
Define churn and get the data to train a churn model
Train the model at scale
Deploy the model to an endpoint and generate return the prediction score in real-time
You will cover those steps in details below.
Initiate clients
Step22: Identify users and build your features
In this section, you build the static features you want to fetch from Vertex AI Feature Store. In particular, we will cover the following steps
Step23: Create a Vertex AI Feature store and ingest your features
Now you have the wide table of features. It is time to ingest them into the feature store.
Before to moving on, you may have a question
Step24: Create the User entity type and its features
You define your own entity types which represents one or more level you decide to refer your features. In your case, it would have a user entity.
Step25: Set Feature Monitoring
Notice that Vertex AI Feature store has feature monitoring capability. It is in preview, so you need to use v1beta1 Python which is a lower-level API than the one we've used so far in this notebook.
The easiest way to set this for now is using console UI. For completeness, below is example to do this using v1beta1 SDK.
Step26: Create features
In order to ingest features, you need to provide feature configuration and create them as featurestore resources.
Create Feature configuration
For simplicity, I created the configuration in a declarative way. Of course, we could create a helper function to build it from the BigQuery schema.
Also notice that we want to pass some features on the fly. In this case, country, operating system, and language look perfect for that.
Step27: Create features using batch_create_features method
Once you have the feature configuration, you can create feature resources using batch_create_features method.
Step28: Search features
Vertex AI Feature store supports search capabilities. Below you have a simple example that shows how to filter a feature based on its name.
Step29: Ingest features
At this point, you have created all the resources associated with the feature store. You just need to import feature values before you can use them for online/offline serving.
Step31: Train and deploy a real-time churn ML model using Vertex AI Training and Endpoints
Now that you have your features and you are almost ready to train our churn model.
Below an high level picture
<img src="./assets/train_model_4.png">
Let's dive into each step of this process.
Fetch training data with point-in-time query using BigQuery and Vertex AI Feature store
As we mentioned above, in real time churn prediction, it is so important defining the label you want to predict with your model.
Let's assume that you decide to predict the churn probability over the last 24 hr. So now you have your label. Next step is to define your training sample. But let's think about that for a second.
In that churn real time system, you have a high volume of transactions you could use to calculate those features which keep floating and are collected constantly over time. It implies that you always get fresh data to reconstruct features. And depending on when you decide to calculate one feature or another you can end up with a set of features that are not aligned in time.
When you have labels available, it would be incredibly difficult to say which set of features contains the most up to date historical information associated with the label you want to predict. And, when you are not able to guarantee that, the performance of your model would be badly affected because you serve no representative features of the data and the label from the field when it goes live. So you need a way to get the most updated features you calculated over time before the label becomes available in order to avoid this informational skew.
With the Vertex AI Feature store, you can fetch feature values corresponding to a particular timestamp thanks to point-in-time lookup capability. In our case, it would be the timestamp associated to the label you want to predict with your model. In this way, you will avoid data leakage and you will get the most updated features to train your model.
Let's see how to do that.
Define query for reading instances at a specific point in time
First thing, you need to define the set of reading instances at a specific point in time you want to consider in order to generate your training sample.
Step32: Create the BigQuery instances tables
You store those instances in a Bigquery table.
Step33: Serve features for batch training
Then you use the batch_serve_to_gcs in order to generate your training sample and store it as csv file in a target cloud bucket.
Step34: Train a custom model on Vertex AI with Training Pipelines
Now that we produce the training sample, we use the Vertex AI SDK to train an new version of the model using Vertex AI Training.
Create training package and training sample
Step39: Create training script
You create the training script to train a XGboost model.
Step40: Create requirements.txt
You write the requirement file to build the training container.
Step41: Create training configuration
You create a training configuration with data and model params.
Step43: Test the model locally with local-run
You leverage the Vertex AI SDK local-run to test the script locally.
Step45: Create and Launch the Custom training pipeline to train the model with autopackaging.
You use autopackaging from Vertex AI SDK in order to
Build a custom Docker training image.
Push the image to Container Registry.
Start a Vertex AI CustomJob.
Step46: Check the status of training job and the result.
You can use the following commands to monitor the status of your job and check for the artefact in the bucket once the training successfully run.
Step47: Upload and Deploy Model on Vertex AI Endpoint
You use a custom function to upload your model to a Vertex AI Model Registry.
Step48: Deploy Model to the same Endpoint with Traffic Splitting
Now that you have registered in the model registry, you can deploy it in an endpoint. So you firstly create the endpoint and then you deploy your model.
Step49: Serve ML features at scale with low latency
At this point, you are ready to deploy your simple model, which requires fetching preprocessed attributes as input features in real time.
Below you can see how it works
<img src="./assets/online_serving_5.png" width="600">
But think about those features for a second.
The behavioral features used to train your model cannot be computed on the fly when you serve the model online.
How could you compute the number of times a user challenged a friend within the last 24 hours on the fly?
You simply can't do that. This feature needs to be computed on the server side and served with low latency. And because BigQuery is not optimized for those read operations, you need a different service that allows singleton lookups, where the result is a single row with many columns.
Also, even if that were not the case, when you deploy a model that requires preprocessing your data, you need to be sure to reproduce the same preprocessing steps you used when you trained it. If you are not able to do that, a skew between training and serving data occurs, and it will badly affect your model performance (and, in the worst scenario, break your serving system).
You need a way to mitigate that without implementing those preprocessing steps online: just serve the same aggregated features you already have for training to generate online predictions.
These are other valuable reasons to introduce Vertex AI Feature Store. With it, you have a service that helps you serve features at scale with low latency, exactly as they were available at training time, mitigating possible training-serving skew.
Now that you know why you need a feature store, let's close this journey by deploying your model and using the feature store to retrieve features online, pass them to the endpoint, and generate predictions.
Time to simulate online predictions
Once the model is ready to receive prediction requests, you can use the simulate_prediction function to generate them.
In particular, that function
format entities for prediction
retrieve static features with a singleton lookup operations from Vertex AI Feature store
run the prediction request and get back the result
for a number of requests and some latency you define. It will nearly take about 17 minutes to run this cell. | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install additional packages
! pip3 install {USER_FLAG} --upgrade pip
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform==1.11.0 -q --no-warn-conflicts
! pip3 install {USER_FLAG} git+https://github.com/googleapis/python-aiplatform.git@main # For features monitoring
! pip3 install {USER_FLAG} --upgrade google-cloud-bigquery==2.24.0 -q --no-warn-conflicts
! pip3 install {USER_FLAG} --upgrade xgboost==1.1.1 -q --no-warn-conflicts
Explanation: Exploring Mobile Gaming Using Feature Store
Learning objectives
In this notebook, you learn how to:
Provide a centralized feature repository with easy APIs to search & discover features and fetch them for training/serving.
Simplify deployments of models for Online Prediction, via low latency scalable feature serving.
Mitigate training serving skew and data leakage by performing point in time lookups to fetch historical data for training.
Overview
Imagine you are a member of the Data Science team working on the same Mobile Gaming application reported in the Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML blog post.
Business wants to use that information in real-time to take immediate intervention actions in-game to prevent churn. In particular, for each player, they want to provide gaming incentives like new items or bonus packs depending on the customer demographic, behavioral information and the resulting propensity of return.
Last year, Google Cloud announced Vertex AI, a managed machine learning (ML) platform that allows data science teams to accelerate the deployment and maintenance of ML models. One of the platform building blocks is Vertex AI Feature store which provides a managed service for low latency scalable feature serving. Also it is a centralized feature repository with easy APIs to search & discover features and feature monitoring capabilities to track drift and other quality issues.
In this notebook, you learn how the role of Vertex AI Feature Store in a ready to production scenario when the user's activities within the first 24 hours of last engagment and the gaming platform would consume in order to improver UX. Below you can find the high level picture of the system
<img src="./assets/mobile_gaming_architecture_1.png">
Dataset
The dataset is the public sample export data from an actual mobile game app called "Flood It!" (Android, iOS)
Notice that we assume that already know how to set up a Vertex AI Feature store. In case you are not, please check out this detailed notebook.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Install additional packages
Install additional package dependencies not installed in your notebook environment, such as XGBoost, the Vertex AI SDK, and the BigQuery client library. Use the latest major GA version of each package.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = "qwiklabs-gcp-01-17ee7907a406" # Replace your project id here
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-01-17ee7907a406" # Replace your project id here
!gcloud config set project $PROJECT_ID #change it
Explanation: Otherwise, set your project ID here.
End of explanation
# Import necessary library and define Timestamp
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
BUCKET_URI = "gs://qwiklabs-gcp-01-17ee7907a406" # Replace your bucket name here
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-01-17ee7907a406": # Replace your bucket name here
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil uniformbucketlevelaccess set on $BUCKET_URI
Explanation: Run the following cell to grant access to your Cloud Storage resources from Vertex AI Feature store
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
BQ_DATASET = "Mobile_Gaming" # @param {type:"string"}
LOCATION = "US"
!bq mk --location=$LOCATION --dataset $PROJECT_ID:$BQ_DATASET
Explanation: Create a Bigquery dataset
You create the BigQuery dataset to store the data used throughout the demo.
End of explanation
# General
import os
import random
import sys
import time
# Data Science
import pandas as pd
# Vertex AI and its Feature Store
from google.cloud import aiplatform as vertex_ai
from google.cloud import bigquery
from google.cloud.aiplatform import Feature, Featurestore
Explanation: Import libraries
End of explanation
# Data Engineering and Feature Engineering
TODAY = "2018-10-03"
TOMORROW = "2018-10-04"
LABEL_TABLE = f"label_table_{TODAY}".replace("-", "")
FEATURES_TABLE = "wide_features_table" # @param {type:"string"}
FEATURES_TABLE_TODAY = f"wide_features_table_{TODAY}".replace("-", "")
FEATURES_TABLE_TOMORROW = f"wide_features_table_{TOMORROW}".replace("-", "")
FEATURESTORE_ID = "mobile_gaming" # @param {type:"string"}
ENTITY_TYPE_ID = "user"
# Vertex AI Feature store
ONLINE_STORE_NODES_COUNT = 5
ENTITY_ID = "user"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
FEATURE_TIME = "timestamp"
ENTITY_ID_FIELD = "user_pseudo_id"
BQ_SOURCE_URI = f"bq://{PROJECT_ID}.{BQ_DATASET}.{FEATURES_TABLE}"
GCS_DESTINATION_PATH = f"data/features/train_features_{TODAY}".replace("-", "")
GCS_DESTINATION_OUTPUT_URI = f"{BUCKET_URI}/{GCS_DESTINATION_PATH}"
SERVING_FEATURE_IDS = {"user": ["*"]}
READ_INSTANCES_TABLE = f"ground_truth_{TODAY}".replace("-", "")
READ_INSTANCES_URI = f"bq://{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}"
# Vertex AI Training
BASE_CPU_IMAGE = "us-docker.pkg.dev/vertex-ai/training/scikit-learn-cpu.0-23:latest"
DATASET_NAME = f"churn_mobile_gaming_{TODAY}".replace("-", "")
TRAIN_JOB_NAME = f"xgb_classifier_training_{TODAY}".replace("-", "")
MODEL_NAME = f"churn_xgb_classifier_{TODAY}".replace("-", "")
MODEL_PACKAGE_PATH = "train_package"
TRAINING_MACHINE_TYPE = "n1-standard-4"
TRAINING_REPLICA_COUNT = 1
DATA_PATH = f"{GCS_DESTINATION_OUTPUT_URI}/000000000000.csv".replace("gs://", "/gcs/")
MODEL_PATH = f"model/{TODAY}".replace("-", "")
MODEL_DIR = f"{BUCKET_URI}/{MODEL_PATH}".replace("gs://", "/gcs/")
# Vertex AI Prediction
DESTINATION_URI = f"{BUCKET_URI}/{MODEL_PATH}"
VERSION = "v1"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-23:latest"
)
ENDPOINT_NAME = "mobile_gaming_churn"
DEPLOYED_MODEL_NAME = f"churn_xgb_classifier_{VERSION}"
MODEL_DEPLOYED_NAME = "churn_xgb_classifier_v1"
SERVING_MACHINE_TYPE = "n1-highcpu-4"
MIN_NODES = 1
MAX_NODES = 1
# Sampling distributions for categorical features implemented in
# https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/model_monitoring/model_monitoring.ipynb
LANGUAGE = [
"en-us",
"en-gb",
"ja-jp",
"en-au",
"en-ca",
"de-de",
"en-in",
"en",
"fr-fr",
"pt-br",
"es-us",
"zh-tw",
"zh-hans-cn",
"es-mx",
"nl-nl",
"fr-ca",
"en-za",
"vi-vn",
"en-nz",
"es-es",
]
OS = ["IOS", "ANDROID", "null"]
COUNTRY = [
"United States",
"India",
"Japan",
"Canada",
"Australia",
"United Kingdom",
"Germany",
"Mexico",
"France",
"Brazil",
"Taiwan",
"China",
"Saudi Arabia",
"Pakistan",
"Egypt",
"Netherlands",
"Vietnam",
"Philippines",
"South Africa",
]
USER_IDS = [
"C8685B0DFA2C4B4E6E6EA72894C30F6F",
"A976A39B8E08829A5BC5CD3827C942A2",
"DD2269BCB7F8532CD51CB6854667AF51",
"A8F327F313C9448DFD5DE108DAE66100",
"8BE7BF90C971453A34C1FF6FF2A0ACAE",
"8375B114AFAD8A31DE54283525108F75",
"4AD259771898207D5869B39490B9DD8C",
"51E859FD9D682533C094B37DC85EAF87",
"8C33815E0A269B776AAB4B60A4F7BC63",
"D7EA8E3645EFFBD6443946179ED704A6",
"58F3D672BBC613680624015D5BC3ADDB",
"FF955E4CA27C75CE0BEE9FC89AD275A3",
"22DC6A6AE86C0AA33EBB8C3164A26925",
"BC10D76D02351BD4C6F6F5437EE5D274",
"19DEEA6B15B314DB0ED2A4936959D8F9",
"C2D17D9066EE1EB9FAE1C8A521BFD4E5",
"EFBDEC168A2BF8C727B060B2E231724E",
"E43D3AB2F9B9055C29373523FAF9DB9B",
"BBDCBE2491658165B7F20540DE652E3A",
"6895EEFC23B59DB13A9B9A7EED6A766F",
]
Explanation: Define constants
End of explanation
def run_bq_query(query: str):
An helper function to run a BigQuery job
Args:
query: a formatted SQL query
Returns:
None
try:
job = bq_client.query(query)
_ = job.result()
except RuntimeError as error:
print(error)
def upload_model(
display_name: str,
serving_container_image_uri: str,
artifact_uri: str,
sync: bool = True,
) -> vertex_ai.Model:
Args:
display_name: The name of Vertex AI Model artefact
serving_container_image_uri: The uri of the serving image
artifact_uri: The uri of artefact to import
sync:
Returns: Vertex AI Model
model = vertex_ai.Model.upload(
display_name=display_name,
artifact_uri=artifact_uri,
serving_container_image_uri=serving_container_image_uri,
sync=sync,
)
model.wait()
print(model.display_name)
print(model.resource_name)
return model
def create_endpoint(display_name: str) -> vertex_ai.Endpoint:
An utility to create a Vertex AI Endpoint
Args:
display_name: The name of Endpoint
Returns: Vertex AI Endpoint
endpoint = vertex_ai.Endpoint.create(display_name=display_name)
print(endpoint.display_name)
print(endpoint.resource_name)
return endpoint
def deploy_model(
model: vertex_ai.Model,
machine_type: str,
endpoint: vertex_ai.Endpoint = None,
deployed_model_display_name: str = None,
min_replica_count: int = 1,
max_replica_count: int = 1,
sync: bool = True,
) -> vertex_ai.Model:
An helper function to deploy a Vertex AI Endpoint
Args:
model: A Vertex AI Model
machine_type: The type of machine to serve the model
endpoint: An Vertex AI Endpoint
deployed_model_display_name: The name of the model
min_replica_count: Minimum number of serving replicas
max_replica_count: Max number of serving replicas
sync: Whether to execute method synchronously
Returns: vertex_ai.Model
model_deployed = model.deploy(
endpoint=endpoint,
deployed_model_display_name=deployed_model_display_name,
machine_type=machine_type,
min_replica_count=min_replica_count,
max_replica_count=max_replica_count,
sync=sync,
)
model_deployed.wait()
print(model_deployed.display_name)
print(model_deployed.resource_name)
return model_deployed
def endpoint_predict_sample(
instances: list, endpoint: vertex_ai.Endpoint
) -> vertex_ai.models.Prediction:
An helper function to get prediction from Vertex AI Endpoint
Args:
instances: The list of instances to score
endpoint: An Vertex AI Endpoint
Returns:
vertex_ai.models.Prediction
prediction = endpoint.predict(instances=instances)
print(prediction)
return prediction
def generate_online_sample() -> dict:
An helper function to generate a sample of online features
Returns:
online_sample: dict of online features
online_sample = {}
online_sample["entity_id"] = random.choices(USER_IDS)
online_sample["country"] = random.choices(COUNTRY)
online_sample["operating_system"] = random.choices(OS)
online_sample["language"] = random.choices(LANGUAGE)
return online_sample
def simulate_prediction(endpoint: vertex_ai.Endpoint, n_requests: int, latency: int):
An helper function to simulate online prediction with customer entity type
- format entities for prediction
- retrieve static features with a singleton lookup operations from Vertex AI Feature store
- run the prediction request and get back the result
Args:
endpoint: Vertex AI Endpoint object
n_requests: number of requests to run
latency: latency in seconds
Returns:
vertex_ai.models.Prediction
for i in range(n_requests):
online_sample = generate_online_sample()
online_features = pd.DataFrame.from_dict(online_sample)
entity_ids = online_features["entity_id"].tolist()
customer_aggregated_features = user_entity_type.read(
entity_ids=entity_ids,
feature_ids=[
"cnt_user_engagement",
"cnt_level_start_quickplay",
"cnt_level_end_quickplay",
"cnt_level_complete_quickplay",
"cnt_level_reset_quickplay",
"cnt_post_score",
"cnt_spend_virtual_currency",
"cnt_ad_reward",
"cnt_challenge_a_friend",
"cnt_completed_5_levels",
"cnt_use_extra_steps",
],
)
prediction_sample_df = pd.merge(
customer_aggregated_features.set_index("entity_id"),
online_features.set_index("entity_id"),
left_index=True,
right_index=True,
).reset_index(drop=True)
# prediction_sample = prediction_sample_df.to_dict("records")
prediction_instance = prediction_sample_df.values.tolist()
prediction = endpoint.predict(prediction_instance)
print(
f"Prediction request: user_id - {entity_ids} - values - {prediction_instance} - prediction - {prediction[0]}"
)
time.sleep(latency)
Explanation: Helpers
End of explanation
# Initiate the clients
bq_client = # TODO 1: Your code goes here(project=PROJECT_ID, location=LOCATION)
vertex_ai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
Explanation: Setting the realtime scenario
In order to make real-time churn prediction, you need to
Collect the historical data about user's events and behaviors
Design your data model, build your features, and ingest them into the Feature Store to serve them both offline for training and online for serving.
Define churn and get the data to train a churn model
Train the model at scale
Deploy the model to an endpoint and return the prediction score in real time
You will cover those steps in detail below.
Initiate clients
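One possible completion of the TODO above (a sketch, not the official lab solution) builds the client from the bigquery module imported earlier:
bq_client = bigquery.Client(project=PROJECT_ID, location=LOCATION)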
End of explanation
features_sql_query = f"""
CREATE OR REPLACE TABLE
`{PROJECT_ID}.{BQ_DATASET}.{FEATURES_TABLE}` AS
WITH
# query to extract demographic data for each user ---------------------------------------------------------
get_demographic_data AS (
SELECT * EXCEPT (row_num)
FROM (
SELECT
user_pseudo_id,
geo.country as country,
device.operating_system as operating_system,
device.language as language,
ROW_NUMBER() OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp DESC) AS row_num
FROM `firebase-public-project.analytics_153293282.events_*`)
WHERE row_num = 1),
# query to extract behavioral data for each user ----------------------------------------------------------
get_behavioral_data AS (
SELECT
event_timestamp,
user_pseudo_id,
SUM(IF(event_name = 'user_engagement', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_user_engagement,
SUM(IF(event_name = 'level_start_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_start_quickplay,
SUM(IF(event_name = 'level_end_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_end_quickplay,
SUM(IF(event_name = 'level_complete_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_complete_quickplay,
SUM(IF(event_name = 'level_reset_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_reset_quickplay,
SUM(IF(event_name = 'post_score', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_post_score,
SUM(IF(event_name = 'spend_virtual_currency', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_spend_virtual_currency,
SUM(IF(event_name = 'ad_reward', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_ad_reward,
SUM(IF(event_name = 'challenge_a_friend', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_challenge_a_friend,
SUM(IF(event_name = 'completed_5_levels', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_completed_5_levels,
SUM(IF(event_name = 'use_extra_steps', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_use_extra_steps,
FROM (
SELECT
e.*
FROM
`firebase-public-project.analytics_153293282.events_*` AS e
)
)
SELECT
-- PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', CONCAT('{TODAY}', ' ', STRING(TIME_TRUNC(CURRENT_TIME(), SECOND))), 'UTC') as timestamp,
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(beh.event_timestamp))) AS timestamp,
dem.*,
CAST(IFNULL(beh.cnt_user_engagement, 0) AS FLOAT64) AS cnt_user_engagement,
CAST(IFNULL(beh.cnt_level_start_quickplay, 0) AS FLOAT64) AS cnt_level_start_quickplay,
CAST(IFNULL(beh.cnt_level_end_quickplay, 0) AS FLOAT64) AS cnt_level_end_quickplay,
CAST(IFNULL(beh.cnt_level_complete_quickplay, 0) AS FLOAT64) AS cnt_level_complete_quickplay,
CAST(IFNULL(beh.cnt_level_reset_quickplay, 0) AS FLOAT64) AS cnt_level_reset_quickplay,
CAST(IFNULL(beh.cnt_post_score, 0) AS FLOAT64) AS cnt_post_score,
CAST(IFNULL(beh.cnt_spend_virtual_currency, 0) AS FLOAT64) AS cnt_spend_virtual_currency,
CAST(IFNULL(beh.cnt_ad_reward, 0) AS FLOAT64) AS cnt_ad_reward,
CAST(IFNULL(beh.cnt_challenge_a_friend, 0) AS FLOAT64) AS cnt_challenge_a_friend,
CAST(IFNULL(beh.cnt_completed_5_levels, 0) AS FLOAT64) AS cnt_completed_5_levels,
CAST(IFNULL(beh.cnt_use_extra_steps, 0) AS FLOAT64) AS cnt_use_extra_steps,
FROM
get_demographic_data dem
LEFT OUTER JOIN
get_behavioral_data beh
ON
    dem.user_pseudo_id = beh.user_pseudo_id
"""
run_bq_query(features_sql_query)
Explanation: Identify users and build your features
In this section, you build the static features you want to fetch from Vertex AI Feature Store. In particular, we will cover the following steps:
Identify users, process demographic features and process behavioral features within the last 24 hours using BigQuery
Set up the feature store
Register features using Vertex AI Feature Store and the SDK.
Below you have a picture that shows the process.
<img src="./assets/feature_store_ingestion_2.png">
The original dataset contains raw event data we cannot ingest in the feature store as they are. We need to pre-process the raw data in order to get user features.
Notice that we simulate those transformations at different points in time (today and tomorrow).
Label, Demographic and Behavioral Transformations
This section is based on the Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML blog article by Minhaz Kazi and Polong Lin.
You will adapt it in order to turn a batch churn prediction (using features from the first 24 hours after a user's first engagement) into a real-time churn prediction (using features from the last 24 hours before a user's latest engagement).
End of explanation
try:
mobile_gaming_feature_store = Featurestore.create(
featurestore_id=FEATURESTORE_ID,
online_store_fixed_node_count=ONLINE_STORE_NODES_COUNT,
labels={"team": "dataoffice", "app": "mobile_gaming"},
sync=True,
)
except RuntimeError as error:
print(error)
else:
FEATURESTORE_RESOURCE_NAME = mobile_gaming_feature_store.resource_name
print(f"Feature store created: {FEATURESTORE_RESOURCE_NAME}")
Explanation: Create a Vertex AI Feature store and ingest your features
Now you have the wide table of features. It is time to ingest them into the feature store.
Before to moving on, you may have a question: Why do I need a feature store
in this scenario at that point?
One reason is to make those features accessible across teams by calculating them once and reusing them many times. And to make that possible, you also need to be able to monitor those features over time to guarantee freshness and, when needed, run a new feature engineering job to refresh them.
If it is not your case, I will give even more reasons about why you should consider feature store in the following sections. Just keep following me for now.
One of the most important thing is related to its data model. As you can see in the picture below, Vertex AI Feature Store organizes resources hierarchically in the following order: Featurestore -> EntityType -> Feature. You must create these resources before you can ingest data into Vertex AI Feature Store.
<img src="./assets/feature_store_data_model_3.png">
In our case we are going to create mobile_gaming featurestore resource containing user entity type and all its associated features such as country or the number of times a user challenged a friend (cnt_challenge_a_friend).
Create featurestore, mobile_gaming
You need to create a featurestore resource to contain entity types, features, and feature values. In your case, you would call it mobile_gaming.
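As a side note (a sketch, not part of the original flow): in a later session you can get a handle on an existing feature store by its ID, assuming the SDK was initialized with the same project and region:
existing_feature_store = Featurestore(featurestore_name=FEATURESTORE_ID)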
End of explanation
try:
user_entity_type = mobile_gaming_feature_store.create_entity_type(
entity_type_id=ENTITY_ID, description="User Entity", sync=True
)
except RuntimeError as error:
print(error)
else:
USER_ENTITY_RESOURCE_NAME = user_entity_type.resource_name
print("Entity type name is", USER_ENTITY_RESOURCE_NAME)
Explanation: Create the User entity type and its features
You define your own entity types, which represent the level(s) at which you decide to organize your features. In your case, you will have a user entity.
End of explanation
# Import required libraries
from google.cloud.aiplatform_v1beta1 import \
FeaturestoreServiceClient as v1beta1_FeaturestoreServiceClient
from google.cloud.aiplatform_v1beta1.types import \
entity_type as v1beta1_entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as v1beta1_featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as v1beta1_featurestore_service_pb2
from google.protobuf.duration_pb2 import Duration
v1beta1_admin_client = v1beta1_FeaturestoreServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
v1beta1_admin_client.update_entity_type(
v1beta1_featurestore_service_pb2.UpdateEntityTypeRequest(
entity_type=v1beta1_entity_type_pb2.EntityType(
name=v1beta1_admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_ID
),
monitoring_config=v1beta1_featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=v1beta1_featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
Explanation: Set Feature Monitoring
Notice that Vertex AI Feature Store has a feature monitoring capability. It is in preview, so you need to use the v1beta1 Python client, which is a lower-level API than the one we've used so far in this notebook.
The easiest way to set this up for now is through the console UI. For completeness, below is an example of doing it with the v1beta1 SDK.
End of explanation
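If you want to confirm that the monitoring configuration took effect, you can read the entity type back with the same lower-level client. This is an optional check, sketched under the assumption that the config above has already been applied.
# Optional check (sketch): read the entity type back and inspect its monitoring config.
entity_type_pb = v1beta1_admin_client.get_entity_type(
    name=v1beta1_admin_client.entity_type_path(
        PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_ID
    )
)
print(entity_type_pb.monitoring_config)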
feature_configs = {
"country": {
"value_type": "STRING",
"description": "The country of customer",
"labels": {"status": "passed"},
},
"operating_system": {
"value_type": "STRING",
"description": "The operating system of device",
"labels": {"status": "passed"},
},
"language": {
"value_type": "STRING",
"description": "The language of device",
"labels": {"status": "passed"},
},
"cnt_user_engagement": {
"value_type": "DOUBLE",
"description": "A variable of user engagement level",
"labels": {"status": "passed"},
},
"cnt_level_start_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with start level",
"labels": {"status": "passed"},
},
"cnt_level_end_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with end level",
"labels": {"status": "passed"},
},
"cnt_level_complete_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with complete status",
"labels": {"status": "passed"},
},
"cnt_level_reset_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with reset status",
"labels": {"status": "passed"},
},
"cnt_post_score": {
"value_type": "DOUBLE",
"description": "A variable of user score",
"labels": {"status": "passed"},
},
"cnt_spend_virtual_currency": {
"value_type": "DOUBLE",
"description": "A variable of user virtual amount",
"labels": {"status": "passed"},
},
"cnt_ad_reward": {
"value_type": "DOUBLE",
"description": "A variable of user reward",
"labels": {"status": "passed"},
},
"cnt_challenge_a_friend": {
"value_type": "DOUBLE",
"description": "A variable of user challenges with friends",
"labels": {"status": "passed"},
},
"cnt_completed_5_levels": {
"value_type": "DOUBLE",
"description": "A variable of user level 5 completed",
"labels": {"status": "passed"},
},
"cnt_use_extra_steps": {
"value_type": "DOUBLE",
"description": "A variable of user extra steps",
"labels": {"status": "passed"},
},
}
Explanation: Create features
In order to ingest features, you need to provide a feature configuration and create the features as featurestore resources.
Create Feature configuration
For simplicity, I created the configuration in a declarative way. Of course, we could create a helper function that builds it from the BigQuery schema (a hedged sketch of such a helper follows this cell).
Also notice that we may want to pass some features on the fly at prediction time; country, operating system and language look like perfect candidates for that.
End of explanation
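As mentioned above, the declarative dictionary could also be derived programmatically. Below is a minimal, hypothetical sketch of such a helper — it assumes a BigQuery client and a simple mapping from BigQuery column types to feature value types; the table id, skip list and labels are placeholders, not part of the original notebook.
# Hypothetical helper: derive feature_configs from a BigQuery table schema.
from google.cloud import bigquery

def feature_configs_from_bq(table_id, skip_columns=("user_pseudo_id",)):
    bq_client = bigquery.Client(project=PROJECT_ID)
    type_map = {"STRING": "STRING", "FLOAT": "DOUBLE", "FLOAT64": "DOUBLE", "INTEGER": "INT64"}
    configs = {}
    for field in bq_client.get_table(table_id).schema:
        if field.name in skip_columns:
            continue
        configs[field.name] = {
            "value_type": type_map.get(field.field_type, "STRING"),
            "description": f"Auto-generated config for {field.name}",
            "labels": {"status": "passed"},
        }
    return configs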
try:
user_entity_type.batch_create_features(feature_configs=feature_configs, sync=True)
except RuntimeError as error:
print(error)
else:
for feature in user_entity_type.list_features():
print("")
print(f"The resource name of {feature.name} feature is", feature.resource_name)
Explanation: Create features using batch_create_features method
Once you have the feature configuration, you can create the feature resources using the batch_create_features method.
End of explanation
feature_query = "feature_id:cnt_user_engagement"
searched_features = Feature.search(query=feature_query)
searched_features
Explanation: Search features
Vertex AI Feature Store supports search capabilities. Below is a simple example that shows how to filter a feature based on its name.
End of explanation
FEATURES_IDS = [feature.name for feature in user_entity_type.list_features()]
try:
user_entity_type.ingest_from_bq(
feature_ids=FEATURES_IDS,
feature_time=FEATURE_TIME,
bq_source_uri=BQ_SOURCE_URI,
entity_id_field=ENTITY_ID_FIELD,
disable_online_serving=False,
worker_count=10,
sync=True,
)
except RuntimeError as error:
print(error)
Explanation: Ingest features
At this point, you have created all the resources associated with the feature store. You just need to import feature values before you can use them for online/offline serving.
End of explanation
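Once ingestion completes (and since online serving was not disabled), you can sanity-check a single entity with a singleton lookup. This is an optional check; the entity id below is a placeholder — use any user_pseudo_id present in the source table.
# Optional sanity check (sketch): read one entity back from the online store.
check_df = user_entity_type.read(
    entity_ids=["<a-user-pseudo-id-from-the-table>"],  # placeholder entity id
    feature_ids=["country", "operating_system", "cnt_user_engagement"],
)
check_df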
read_instances_query = f"""
CREATE OR REPLACE TABLE
`{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}` AS
WITH
# get training threshold ----------------------------------------------------------------------------------
get_training_threshold AS (
SELECT
(MAX(event_timestamp) - 86400000000) AS training_thrs
FROM
`firebase-public-project.analytics_153293282.events_*`
WHERE
event_name="user_engagement"
AND
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(event_timestamp))) < '{TODAY}'),
# query to create label -----------------------------------------------------------------------------------
get_label AS (
SELECT
user_pseudo_id,
user_last_engagement,
#label = 1 if last_touch within last hour hr else 0
IF
(user_last_engagement < (
SELECT
training_thrs
FROM
get_training_threshold),
1,
0 ) AS churned
FROM (
SELECT
user_pseudo_id,
MAX(event_timestamp) AS user_last_engagement
FROM
`firebase-public-project.analytics_153293282.events_*`
WHERE
event_name="user_engagement"
AND
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(event_timestamp))) < '{TODAY}'
GROUP BY
user_pseudo_id )
GROUP BY
1,
2),
# query to create class weights --------------------------------------------------------------------------------
get_class_weights AS (
SELECT
CAST(COUNT(*) / (2*(COUNT(*) - SUM(churned))) AS STRING) AS class_weight_zero,
CAST(COUNT(*) / (2*SUM(churned)) AS STRING) AS class_weight_one,
FROM
get_label )
SELECT
user_pseudo_id as user,
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', CONCAT('{TODAY}', ' ', STRING(TIME_TRUNC(CURRENT_TIME(), SECOND))), 'UTC') as timestamp,
churned AS churned,
CASE
WHEN churned = 0 THEN ( SELECT class_weight_zero FROM get_class_weights)
ELSE ( SELECT class_weight_one
FROM get_class_weights)
END AS class_weights
FROM
get_label
"""
Explanation: Train and deploy a real-time churn ML model using Vertex AI Training and Endpoints
Now you have your features and you are almost ready to train the churn model.
Below is a high-level picture.
<img src="./assets/train_model_4.png">
Let's dive into each step of this process.
Fetch training data with point-in-time query using BigQuery and Vertex AI Feature store
As mentioned above, in real-time churn prediction it is important to define the label you want to predict with your model.
Let's assume that you decide to predict the churn probability over the last 24 hours. So now you have your label. The next step is to define your training sample. But let's think about that for a second.
In a real-time churn system, you have a high volume of transactions you could use to calculate those features, and new events keep arriving constantly over time. This implies that you always have fresh data with which to recompute features, and depending on when you decide to calculate one feature or another, you can end up with a set of features that are not aligned in time.
When labels become available, it would be incredibly difficult to say which set of features contains the most up-to-date historical information associated with the label you want to predict. When you cannot guarantee that, the performance of your model is badly affected, because the features you serve in production are not representative of the data and the label observed in the field. So you need a way to get the most up-to-date features calculated before the label becomes available, in order to avoid this informational skew.
With Vertex AI Feature Store, you can fetch feature values corresponding to a particular timestamp thanks to its point-in-time lookup capability. In our case, that is the timestamp associated with the label you want to predict. In this way you avoid data leakage and you get the most up-to-date features to train your model.
Let's see how to do that.
Define query for reading instances at a specific point in time
First, you need to define the set of read instances, at a specific point in time, that you want to use to generate your training sample.
End of explanation
run_bq_query(read_instances_query)
Explanation: Create the BigQuery instances tables
You store those instances in a BigQuery table.
End of explanation
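Optionally, you can preview a few rows of the read-instances table with the same run_bq_query helper used earlier (the query below is just a quick check, not part of the original flow).
# Quick look at the read-instances table (optional).
run_bq_query(
    f"SELECT * FROM `{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}` LIMIT 5"
)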
# Serve features for batch training
# TODO 2: Your code goes here(
gcs_destination_output_uri_prefix=GCS_DESTINATION_OUTPUT_URI,
gcs_destination_type="csv",
serving_feature_ids=SERVING_FEATURE_IDS,
read_instances_uri=READ_INSTANCES_URI,
pass_through_fields=["churned", "class_weights"],
)
Explanation: Serve features for batch training
Then you use batch_serve_to_gcs in order to generate your training sample and store it as a CSV file in a target Cloud Storage bucket (a hedged sketch of the call follows this cell).
End of explanation
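For reference, the TODO above is typically completed with the feature store's batch_serve_to_gcs method. The sketch below keeps the same arguments shown in the cell and assumes the mobile_gaming_feature_store object created earlier; the notebook's intended solution may differ slightly.
# Sketch of the completed call (assumes the Featurestore object created above).
mobile_gaming_feature_store.batch_serve_to_gcs(
    gcs_destination_output_uri_prefix=GCS_DESTINATION_OUTPUT_URI,
    gcs_destination_type="csv",
    serving_feature_ids=SERVING_FEATURE_IDS,
    read_instances_uri=READ_INSTANCES_URI,
    pass_through_fields=["churned", "class_weights"],
)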
!rm -Rf train_package #if train_package already exist
!mkdir -m 777 -p trainer data/ingest data/raw model config
!gsutil -m cp -r $GCS_DESTINATION_OUTPUT_URI/*.csv data/ingest
!head -n 1000 data/ingest/*.csv > data/raw/sample.csv
Explanation: Train a custom model on Vertex AI with Training Pipelines
Now that we have produced the training sample, we use the Vertex AI SDK to train a new version of the model using Vertex AI Training.
Create training package and training sample
End of explanation
!touch trainer/__init__.py
%%writefile trainer/task.py
import os
from pathlib import Path
import argparse
import yaml
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
import xgboost as xgb
import joblib
import warnings
warnings.filterwarnings("ignore")
def get_args():
"""Get arguments from command line.
Returns:
    args: parsed arguments
"""
parser = argparse.ArgumentParser()
parser.add_argument(
'--data_path',
required=False,
default=os.getenv('AIP_TRAINING_DATA_URI'),
type=str,
help='path to read data')
parser.add_argument(
'--learning_rate',
required=False,
default=0.01,
type=int,
help='number of epochs')
parser.add_argument(
'--model_dir',
required=False,
default=os.getenv('AIP_MODEL_DIR'),
type=str,
help='dir to store saved model')
parser.add_argument(
'--config_path',
required=False,
default='../config.yaml',
type=str,
help='path to read config file')
args = parser.parse_args()
return args
def ingest_data(data_path, data_model_params):
"""Ingest data.
Args:
    data_path: path to read data
    data_model_params: data model parameters
Returns:
    df: dataframe
"""
# read training data
df = pd.read_csv(data_path, sep=',',
dtype={col: 'string' for col in data_model_params['categorical_features']})
return df
def preprocess_data(df, data_model_params):
"""Preprocess data.
Args:
    df: dataframe
    data_model_params: data model parameters
Returns:
    x_train, x_test, y_train, y_test: train/test splits of features and labels
"""
# convert nan values because pd.NA ia not supported by SimpleImputer
# bug in sklearn 0.23.1 version: https://github.com/scikit-learn/scikit-learn/pull/17526
# decided to skip NAN values for now
df.replace({pd.NA: np.nan}, inplace=True)
df.dropna(inplace=True)
# get features and labels
x = df[data_model_params['numerical_features'] + data_model_params['categorical_features'] + [
data_model_params['weight_feature']]]
y = df[data_model_params['target']]
# train-test split
x_train, x_test, y_train, y_test = train_test_split(x, y,
test_size=data_model_params['train_test_split']['test_size'],
random_state=data_model_params['train_test_split'][
'random_state'])
return x_train, x_test, y_train, y_test
def build_pipeline(learning_rate, model_params):
"""Build the encoding + XGBoost pipeline.
Args:
    learning_rate: learning rate
    model_params: model parameters
Returns:
    pipeline: sklearn Pipeline
"""
# build pipeline
pipeline = Pipeline([
# ('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OneHotEncoder(handle_unknown='ignore')),
('model', xgb.XGBClassifier(learning_rate=learning_rate,
use_label_encoder=False, #deprecated and breaks Vertex AI predictions
**model_params))
])
return pipeline
def main():
print('Starting training...')
args = get_args()
data_path = args.data_path
learning_rate = args.learning_rate
model_dir = args.model_dir
config_path = args.config_path
# read config file
with open(config_path, 'r') as f:
config = yaml.load(f, Loader=yaml.FullLoader)
f.close()
data_model_params = config['data_model_params']
model_params = config['model_params']
# ingest data
print('Reading data...')
data_df = ingest_data(data_path, data_model_params)
# preprocess data
print('Preprocessing data...')
x_train, x_test, y_train, y_test = preprocess_data(data_df, data_model_params)
sample_weight = x_train.pop(data_model_params['weight_feature'])
sample_weight_eval_set = x_test.pop(data_model_params['weight_feature'])
# train lgb model
print('Training model...')
xgb_pipeline = build_pipeline(learning_rate, model_params)
# need to use fit_transform to get the encoded eval data
x_train_transformed = xgb_pipeline[:-1].fit_transform(x_train)
x_test_transformed = xgb_pipeline[:-1].transform(x_test)
xgb_pipeline[-1].fit(x_train_transformed, y_train,
sample_weight=sample_weight,
eval_set=[(x_test_transformed, y_test)],
sample_weight_eval_set=[sample_weight_eval_set],
eval_metric='error',
early_stopping_rounds=50,
verbose=True)
# save model
print('Saving model...')
model_path = Path(model_dir)
model_path.mkdir(parents=True, exist_ok=True)
joblib.dump(xgb_pipeline, f'{model_dir}/model.joblib')
if __name__ == "__main__":
main()
Explanation: Create training script
You create the training script to train an XGBoost model.
End of explanation
%%writefile requirements.txt
pip==22.0.4
PyYAML==5.3.1
joblib==0.15.1
numpy==1.18.5
pandas==1.0.4
scipy==1.4.1
scikit-learn==0.23.1
xgboost==1.1.1
Explanation: Create requirements.txt
You write the requirements file used to build the training container.
End of explanation
%%writefile config/config.yaml
data_model_params:
target: churned
categorical_features:
- country
- operating_system
- language
numerical_features:
- cnt_user_engagement
- cnt_level_start_quickplay
- cnt_level_end_quickplay
- cnt_level_complete_quickplay
- cnt_level_reset_quickplay
- cnt_post_score
- cnt_spend_virtual_currency
- cnt_ad_reward
- cnt_challenge_a_friend
- cnt_completed_5_levels
- cnt_use_extra_steps
weight_feature: class_weights
train_test_split:
test_size: 0.2
random_state: 8
model_params:
booster: gbtree
objective: binary:logistic
max_depth: 80
n_estimators: 100
random_state: 8
Explanation: Create training configuration
You create a training configuration with data and model params.
End of explanation
test_job_script = f"""
gcloud ai custom-jobs local-run \
--executor-image-uri={BASE_CPU_IMAGE} \
--python-module=trainer.task \
--extra-dirs=config,data,model \
-- \
--data_path data/raw/sample.csv \
--model_dir model \
--config_path config/config.yaml
"""
with open("local_train_job_run.sh", "w+") as s:
s.write(test_job_script)
s.close()
# Launch the job locally
!chmod +x ./local_train_job_run.sh && ./local_train_job_run.sh
Explanation: Test the model locally with local-run
You leverage the gcloud ai custom-jobs local-run command to test the training script locally.
End of explanation
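After the local run finishes you can, optionally, try to load the artifact it wrote and make sure the pipeline deserializes correctly. Note this is a hedged sketch: depending on how local-run mounts the working directory, the artifact may or may not land at ./model/model.joblib on the host — adjust the path if needed.
# Optional: inspect the locally trained pipeline (path is an assumption, see note above).
import joblib
local_pipeline = joblib.load("model/model.joblib")
print(local_pipeline.named_steps["model"])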
!mkdir -m 777 -p {MODEL_PACKAGE_PATH} && mv -t {MODEL_PACKAGE_PATH} trainer requirements.txt config
train_job_script = f"""
gcloud ai custom-jobs create \
--region={REGION} \
--display-name={TRAIN_JOB_NAME} \
--worker-pool-spec=machine-type={TRAINING_MACHINE_TYPE},replica-count={TRAINING_REPLICA_COUNT},executor-image-uri={BASE_CPU_IMAGE},local-package-path={MODEL_PACKAGE_PATH},python-module=trainer.task,extra-dirs=config \
--args=--data_path={DATA_PATH},--model_dir={MODEL_DIR},--config_path=config/config.yaml \
--verbosity='info'
"""
with open("train_job_run.sh", "w+") as s:
s.write(train_job_script)
s.close()
# Launch the Custom training Job using chmod command
# TODO 3: Your code goes here
Explanation: Create and Launch the Custom training pipeline to train the model with autopackaging.
You use autopackaging (via the gcloud ai custom-jobs create command) in order to:
Build a custom Docker training image.
Push the image to Container Registry.
Start a Vertex AI CustomJob.
End of explanation
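The TODO in the cell above mirrors the local run earlier: make the generated script executable and run it.
# Launch the custom training job (same pattern as the local run above).
!chmod +x ./train_job_run.sh && ./train_job_run.sh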
TRAIN_JOB_RESOURCE_NAME = "projects/292484118381/locations/us-central1/customJobs/7374149059830874112" # Replace this with your job path
# Check the status of training job
!gcloud ai custom-jobs describe $TRAIN_JOB_RESOURCE_NAME
!gsutil ls $DESTINATION_URI
Explanation: Check the status of training job and the result.
You can use the following commands to monitor the status of your job and to check for the artifact in the bucket once the training job has completed successfully.
End of explanation
# Upload the model
xgb_model = upload_model(
display_name=MODEL_NAME,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
artifact_uri=DESTINATION_URI,
)
Explanation: Upload and Deploy Model on Vertex AI Endpoint
You use a custom upload_model helper to upload your model to the Vertex AI Model Registry (a hedged sketch of such a helper follows this cell).
End of explanation
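upload_model is a helper defined earlier in the notebook; a minimal sketch of what such a helper might look like with the Vertex AI SDK is shown below. The exact implementation in the notebook may differ.
# Hypothetical sketch of an upload_model helper built on aiplatform.Model.upload.
from google.cloud import aiplatform

def upload_model(display_name, serving_container_image_uri, artifact_uri):
    model = aiplatform.Model.upload(
        display_name=display_name,
        artifact_uri=artifact_uri,
        serving_container_image_uri=serving_container_image_uri,
        sync=True,
    )
    model.wait()
    return model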
# Create endpoint
endpoint = create_endpoint(display_name=ENDPOINT_NAME)
# Deploy the model
deployed_model = # TODO 4: Your code goes here(
model=xgb_model,
machine_type=SERVING_MACHINE_TYPE,
endpoint=endpoint,
deployed_model_display_name=DEPLOYED_MODEL_NAME,
min_replica_count=1,
max_replica_count=1,
sync=False,
)
Explanation: Deploy Model to the same Endpoint with Traffic Splitting
Now that the model is registered in the Model Registry, you can deploy it to an endpoint. You first create the endpoint and then deploy your model (a hedged sketch of the deploy call follows this cell).
End of explanation
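TODO 4 is usually completed with a deploy helper (or a direct endpoint.deploy call). A hedged sketch of such a helper follows; the helper actually defined in the notebook may look different.
# Hypothetical sketch of a deploy_model helper wrapping Endpoint.deploy.
def deploy_model(model, machine_type, endpoint, deployed_model_display_name,
                 min_replica_count=1, max_replica_count=1, sync=False):
    return endpoint.deploy(
        model=model,
        deployed_model_display_name=deployed_model_display_name,
        machine_type=machine_type,
        min_replica_count=min_replica_count,
        max_replica_count=max_replica_count,
        traffic_percentage=100,
        sync=sync,
    )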
# Simulate online predictions
# TODO 5: Your code goes here(endpoint=endpoint, n_requests=1000, latency=1)
Explanation: Serve ML features at scale with low latency
At this point, you are ready to deploy the model, which requires fetching preprocessed attributes as input features in real time.
Below you can see how it works
<img src="./assets/online_serving_5.png" width="600">
But think about those features for a second.
The behavioral features used to train your model cannot be computed on the fly when you serve the model online.
How could you compute the number of times a user challenged a friend within the last 24 hours on the fly?
You simply can't. You need to compute this feature server-side and serve it with low latency. And because BigQuery is not optimized for those read operations, you need a different service that allows singleton lookups, where the result is a single row with many columns.
Also, even if that were not the case, when you deploy a model that requires preprocessing your data, you need to be sure to reproduce the same preprocessing steps you used at training time. If you cannot do that, a skew between training and serving data occurs, which badly affects your model performance (and in the worst case breaks your serving system).
You need a way to mitigate that so you don't have to re-implement those preprocessing steps online, but can simply serve the same aggregated features you already computed for training when generating online predictions.
These are more valuable reasons to introduce Vertex AI Feature Store. With it, you have a service that helps you serve features at scale, with low latency, exactly as they were available at training time, thereby mitigating possible training-serving skew.
Now that you know why you need a feature store, let's close this journey by deploying the model, using the feature store to retrieve features online, passing them to the endpoint and generating predictions.
Time to simulate online predictions
Once the model is ready to receive prediction requests, you can use the simulate_prediction function to generate them.
In particular, that function:
- formats entities for prediction
- retrieves static features with a singleton lookup from Vertex AI Feature Store
- runs the prediction request and gets back the result
for a number of requests and a latency between requests that you define (a rough sketch of this loop follows this cell). It will take roughly 17 minutes to run this cell.
End of explanation |
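simulate_prediction is a helper defined earlier in the notebook. Roughly, its core loop looks like the sketch below — the feature_order list, the sampled_user_ids placeholder and the instance formatting are assumptions here, and the feature ordering must match the order used at training time.
# Rough sketch of the lookup-and-predict step inside simulate_prediction (assumptions noted above).
import time

def lookup_and_predict(endpoint, user_id, feature_order):
    # Singleton lookup of the precomputed features for one user (low-latency online serving).
    features_df = user_entity_type.read(entity_ids=[user_id])
    row = features_df.iloc[0]
    # feature_order is a hypothetical list of column names in the training order.
    instance = [row[name] for name in feature_order]
    return endpoint.predict(instances=[instance])

# for user_id in sampled_user_ids:                       # sampled_user_ids: placeholder
#     print(lookup_and_predict(endpoint, user_id, feature_order))
#     time.sleep(1)                                      # latency between requests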
10,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Block logs
We'd like to make blocky, upscaled versions of logs.
Let's load a well from an LAS File using welly
Step1: We can block this log based on some cutoffs
Step2: But now we're not really dealing with regularly-sampled data anymore, we're dealing with 'intervals'. Striplog is sometimes a better option for representing this sort of data. So let's try using that instead
Step3: You can plot this
Step5: For a more natural visualization, it's a good idea to make a legend to display things. For example
Step6: Now the plot will look more geological
Step7: Let's simplify it a bit by removing beds thinner than 3 m, then 'annealing' over the gaps, then merging like neighbours (otherwise you'll likely have a lot of beds juxtaposed with similar beds)
Step9: The easiest way to plot with a curve is like
Step10: You can still make a blocky version from this Striplog
Step11: Note that this is a NumPy array, because striplog doesn't know about welly. But you could make a welly.Curve object
Step12: Let's add this Curve to the well object w — then we can make a multi-track plot with welly
Step13: Blocking another log using these intervals
Let's use the intervals we just created to block a different log from the same well.
Here's the RHOB log. We'll block it using the intervals from the GR.
Step14: Now we will 'extract' the RHOB data, using some reducing function — the default is to take the mean of the interval — into the intervals of the striplog s
Step15: Now each interval contains the mean RHOB value from that interval
Step16: We can now turn that data 'field' into a log, as we did before. This time, however, we can keep the data as the value of the log — i.e. instead of having 'bins' from the cutoff (like 1, 2, 3, etc), we want the actual value from an interval (i.e. a density in units of g/cm<sup>3</sup>). To get this, we pass bins=False.
Step17: We can plot this...
Step18: ...but it's more useful to make it into a Curve object and store it in the well object w
Step19: Now let's plot it next to the blocky GR to verify that the blocks are the same
Step21: Or we can plot everything together using welly (if we first extend the legend again to accommodate the new logs) | Python Code:
from welly import Well
w = Well.from_las('P-129_out.LAS')
w
gr = w.data['GR']
gr
Explanation: Block logs
We'd like to make blocky, upscaled versions of logs.
Let's load a well from an LAS File using welly:
End of explanation
gr_blocky = gr.block(cutoffs=[40, 100])
gr_blocky.plot()
Explanation: We can block this log based on some cutoffs:
End of explanation
from striplog import Striplog, Component
comps = [
Component(properties={'lithology': 'sandstone'}),
Component(properties={'lithology': 'siltstone'}),
Component(properties={'lithology': 'shale'}),
]
s = Striplog.from_log(gr, basis=gr.basis, cutoff=[40, 100], components=comps)
s
Explanation: But now we're not really dealing with regularly-sampled data anymore, we're dealing with 'intervals'. Striplog is sometimes a better option for representing this sort of data. So let's try using that instead:
Make a striplog instead of a blocked log
Striplog objects are potentially a bit more versatile than trying to use a log to represent intervals.
We have to do a little extra work though; for example, we have to tell Striplog what the intervals represent... the contents (lithologies or whatever) of an interval are called 'components'.
We also have to pass the depth separately, because Striplog doesn't know anything about welly's Curve objects.
End of explanation
s.plot(aspect=3)
Explanation: You can plot this:
End of explanation
from striplog import Legend
L = """comp lithology, colour, width, curve mnemonic
sandstone, #fdf43f, 3
siltstone, #cfbb8f, 2
shale, #c0d0c0, 1"""
legend = Legend.from_csv(text=L)
legend
Explanation: For a more natural visualization, it's a good idea to make a legend to display things. For example:
End of explanation
s.plot(legend=legend)
Explanation: Now the plot will look more geological:
End of explanation
s = s.prune(limit=3).anneal().merge_neighbours()
s.plot(legend=legend)
Explanation: Let's simplify it a bit by removing beds thinner than 3 m, then 'annealing' over the gaps, then merging like neighbours (otherwise you'll likely have a lot of beds juxtaposed with similar beds):
End of explanation
w.data['strip'] = s
tracks = ['MD', 'GR', 'strip']
C = """curve mnemonic, colour, width
GR, #ff0000, 1
GR-B, #ff8800, 1"""
curve_legend = Legend.from_csv(text=C)
big_legend = legend + curve_legend
w.plot(tracks=tracks, legend=big_legend)
Explanation: The easiest way to plot with a curve is like:
End of explanation
gr_blocky, depth, comps = s.to_log(return_meta=True)
import matplotlib.pyplot as plt
plt.figure(figsize=(2, 10))
plt.plot(gr_blocky, depth)
plt.ylim(2000, 0)
Explanation: You can still make a blocky version from this Striplog:
End of explanation
from welly import Curve
gr_blocky_curve = Curve(gr_blocky, index=depth, mnemonic='GR-B')
gr_blocky_curve.plot()
Explanation: Note that this is a NumPy array, because striplog doesn't know about welly. But you could make a welly.Curve object:
End of explanation
w.data['GR-B'] = gr_blocky_curve
tracks = ['MD', 'GR', 'GR-B', 'strip']
w.plot(tracks=tracks, legend=big_legend)
Explanation: Let's add this Curve to the well object w — then we can make a multi-track plot with welly:
End of explanation
rhob = w.data['RHOB']
rhob.plot()
Explanation: Blocking another log using these intervals
Let's use the intervals we just created to block a different log from the same well.
Here's the RHOB log. We'll block it using the intervals from the GR.
End of explanation
import numpy as np
s = s.extract(rhob, basis=rhob.basis, name='RHOB', function=np.median)
Explanation: Now we will 'extract' the RHOB data, using some reducing function — the default is to take the mean of the interval — into the intervals of the striplog s:
⚠️ Note that since v0.8.7 this returns a copy; before that, it worked in place.
End of explanation
s[0]
Explanation: Now each interval contains the median RHOB value from that interval (we passed np.median as the reducing function):
End of explanation
rhob_blocky, depth, _ = s.to_log(field='RHOB', bins=False, return_meta=True)
Explanation: We can now turn that data 'field' into a log, as we did before. This time, however, we can keep the data as the value of the log — i.e. instead of having 'bins' from the cutoff (like 1, 2, 3, etc), we want the actual value from an interval (i.e. a density in units of g/cm<sup>3</sup>). To get this, we pass bins=False.
End of explanation
plt.plot(rhob_blocky, depth)
Explanation: We can plot this...
End of explanation
rhob_blocky_curve = Curve(rhob_blocky, index=depth, mnemonic='RHOB-B', units='g/cm3')
w.data['RHOB-B'] = rhob_blocky_curve
rhob_blocky_curve.plot()
Explanation: ...but it's more useful to make it into a Curve object and store it in the well object w:
End of explanation
fig, axs = plt.subplots(ncols=2, figsize=(4, 10), sharey=True)
gr_blocky_curve.plot(ax=axs[0])
rhob_blocky_curve.plot(ax=axs[1])
Explanation: Now let's plot it next to the blocky GR to verify that the blocks are the same:
End of explanation
C = """curve mnemonic, colour, width
GR, #ff0000, 1
GR-B, #ff8800, 1
RHOB, #0000ff, 1
RHOB-B, #0088ff, 1"""
curve_legend = Legend.from_csv(text=C)
big_legend = legend + curve_legend
tracks = ['MD', 'GR', 'GR-B', 'strip', 'RHOB-B', 'RHOB', 'MD']
w.plot(tracks=tracks, legend=big_legend)
Explanation: Or we can plot everything together using welly (if we first extend the legend again to accommodate the new logs):
End of explanation |
10,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 2
Imports
Step2: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 2
Imports
End of explanation
def np_fact(n):
"""Compute n! = n*(n-1)*...*1 using Numpy."""
if n == 0:
return 1
else:
a = np.arange(1,n+1,1)
b = a.cumprod(0)
return b[n-1]
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
End of explanation
def loop_fact(n):
"""Compute n! using a Python for loop."""
if n == 0:
return 1
else:
factorial = 1
for i in range(1,n+1):
factorial *= i
return factorial
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Write a function that computes the factorial of small numbers using a Python loop.
End of explanation
%timeit -n1 -r1 np_fact(100)
%timeit -n1 -r1 loop_fact(100)
Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:
python
%timeit -n1 -r1 function_to_time()
End of explanation |
10,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare impact of frequency dependent $D_{min}$
Step1: Frequency dependence of $D_{min}$ predicted by Darendeli (2001)
Calculation
Step2: Plots
Step3: Site Response Calculation
Input
Step4: Two profiles are created. The first is a typical profile with the minimum damping computed at 1 Hz (default value). The second profile has the minimum damping computed at each frequency of the input motion.
Step5: Run the analyses and save the output.
Step6: Plot the results | Python Code:
import itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pysra
%matplotlib inline
plt.rcParams["figure.dpi"] = 150
Explanation: Compare impact of frequency dependent $D_{min}$
End of explanation
plast_indices = [0, 20, 50, 100]
stresses_mean = 101.3 * np.array([0.5, 1, 2])
ocrs = [1, 2, 4]
freqs = np.logspace(-1, 2, num=31)
df = pd.DataFrame(
itertools.product(freqs, stresses_mean, plast_indices, ocrs),
columns=["freq", "stress_mean", "plast_ind", "ocr"],
)
def calc_damp_min(row):
return pysra.site.DarendeliSoilType(
plas_index=row.plast_ind,
stress_mean=row.stress_mean,
ocr=row.ocr,
freq=row.freq,
)._calc_damping_min()
df["damp_min"] = df.apply(calc_damp_min, axis=1)
df.head()
Explanation: Frequency dependence of $D_{min}$ predicted by Darendeli (2001)
Calculation
End of explanation
centers = {"plast_ind": 20, "stress_mean": 101.3, "ocr": 1}
for key in centers:
# Only select the centers
mask = np.all([df[k].eq(v) for k, v in centers.items() if k != key], axis=0)
selected = df[mask]
fig, ax = plt.subplots()
for name, group in selected.groupby(key):
ax.plot(group["freq"], group["damp_min"], label=name)
ax.set(
xlabel="Frequency (Hz)",
xscale="log",
ylabel="Damping Min. (dec)",
ylim=(0, 0.05),
)
ax.legend(title=key)
Explanation: Plots
End of explanation
motion = pysra.motion.SourceTheoryRvtMotion(7.0, 30, "wna")
motion.calc_fourier_amps()
Explanation: Site Response Calculation
Input
End of explanation
profiles = [
# Frequency independent soil properties
pysra.site.Profile(
[
pysra.site.Layer(
pysra.site.DarendeliSoilType(
18.0, plas_index=30, ocr=1, stress_mean=200
),
30,
400,
),
pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200),
]
),
# Frequency dependent minimum damping
pysra.site.Profile(
[
pysra.site.Layer(
pysra.site.DarendeliSoilType(
18.0, plas_index=30, ocr=1, stress_mean=200, freq=motion.freqs
),
30,
400,
),
pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200),
]
),
]
profiles = [p.auto_discretize() for p in profiles]
calc_fdm = pysra.propagation.FrequencyDependentEqlCalculator(use_smooth_spectrum=False)
calc_eql = pysra.propagation.EquivalentLinearCalculator(strain_ratio=0.65)
freqs = np.logspace(-1, 2, num=500)
outputs = pysra.output.OutputCollection(
[
pysra.output.AccelTransferFunctionOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
),
pysra.output.ResponseSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
]
)
Explanation: Two profiles are created. The first is a typical profile with the minimum damping computed at 1 Hz (default value). The second profile has the minimum damping computed at each frequency of the input motion.
End of explanation
for name, profile in zip(
["FDM - Constant $D_{min}$", "FDM - Variable $D_{min}$"], profiles
):
calc_fdm(motion, profile, profile.location("outcrop", index=-1))
outputs(calc_fdm, name)
calc_eql(motion, profiles[0], profiles[0].location("outcrop", index=-1))
outputs(calc_eql, "EQL")
Explanation: Run the analyses and save the output.
End of explanation
for o in outputs:
o.plot(style="indiv")
Explanation: Plot the results
End of explanation |
10,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
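As a quick illustration of how such an ENUM property would actually be filled in (the value below is only an example taken from the valid choices listed above, not a statement about any particular model):
DOC.set_value("OMIP protocol")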
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
10,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Frequency and time-frequency sensors analysis
The objective is to show you how to explore the spectral content
of your data (frequency and time-frequency). Here we'll work on Epochs.
We will use this dataset
Step1: Set parameters
Step2: Frequency analysis
We start by exploring the frequence content of our epochs.
Let's first check out all channel types by averaging across epochs.
Step3: Now let's take a look at the spatial distributions of the PSD.
Step4: Alternatively, you can also create PSDs from Epochs objects with functions
that start with psd_ such as
Step5: Notably,
Step6: Lastly, we can also retrieve the unaggregated segments by passing
average=None to
Step7: Time-frequency analysis
Step8: Inspect power
<div class="alert alert-info"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click
on an image to visualize the data for one sensor.
You can also select a portion in the time-frequency plane to
obtain a topomap for a certain time-frequency region.</p></div>
Step9: Joint Plot
You can also create a joint plot showing both the aggregated TFR
across channels and topomaps at specific times and frequencies to obtain
a quick overview regarding oscillatory effects across time and space.
Step10: Inspect ITC | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Stefan Appelhoff <[email protected]>
# Richard Höchenberger <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet, psd_multitaper, psd_welch
from mne.datasets import somato
Explanation: Frequency and time-frequency sensors analysis
The objective is to show you how to explore the spectral content
of your data (frequency and time-frequency). Here we'll work on Epochs.
We will use this dataset: somato-dataset. It contains so-called event
related synchronizations (ERS) / desynchronizations (ERD) in the beta band.
End of explanation
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)
# Construct Epochs
event_id, tmin, tmax = 1, -1., 3.
baseline = (None, 0)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
epochs.resample(200., npad='auto') # resample to reduce computation time
Explanation: Set parameters
End of explanation
epochs.plot_psd(fmin=2., fmax=40., average=True, spatial_colors=False)
Explanation: Frequency analysis
We start by exploring the frequency content of our epochs.
Let's first check out all channel types by averaging across epochs.
End of explanation
epochs.plot_psd_topomap(ch_type='grad', normalize=True)
Explanation: Now let's take a look at the spatial distributions of the PSD.
End of explanation
f, ax = plt.subplots()
psds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)
psds = 10. * np.log10(psds)
psds_mean = psds.mean(0).mean(0)
psds_std = psds.mean(0).std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency (Hz)',
ylabel='Power Spectral Density (dB)')
plt.show()
Explanation: Alternatively, you can also create PSDs from Epochs objects with functions
that start with psd_ such as
:func:mne.time_frequency.psd_multitaper and
:func:mne.time_frequency.psd_welch.
End of explanation
# Estimate PSDs based on "mean" and "median" averaging for comparison.
kwargs = dict(fmin=2, fmax=40, n_jobs=1)
psds_welch_mean, freqs_mean = psd_welch(epochs, average='mean', **kwargs)
psds_welch_median, freqs_median = psd_welch(epochs, average='median', **kwargs)
# Convert power to dB scale.
psds_welch_mean = 10 * np.log10(psds_welch_mean)
psds_welch_median = 10 * np.log10(psds_welch_median)
# We will only plot the PSD for a single sensor in the first epoch.
ch_name = 'MEG 0122'
ch_idx = epochs.info['ch_names'].index(ch_name)
epo_idx = 0
_, ax = plt.subplots()
ax.plot(freqs_mean, psds_welch_mean[epo_idx, ch_idx, :], color='k',
ls='-', label='mean of segments')
ax.plot(freqs_median, psds_welch_median[epo_idx, ch_idx, :], color='k',
ls='--', label='median of segments')
ax.set(title='Welch PSD ({}, Epoch {})'.format(ch_name, epo_idx),
xlabel='Frequency (Hz)', ylabel='Power Spectral Density (dB)')
ax.legend(loc='upper right')
plt.show()
Explanation: Notably, :func:mne.time_frequency.psd_welch supports the keyword argument
average, which specifies how to estimate the PSD based on the individual
windowed segments. The default is average='mean', which simply calculates
the arithmetic mean across segments. Specifying average='median', in
contrast, returns the PSD based on the median of the segments (corrected for
bias relative to the mean), which is a more robust measure.
End of explanation
psds_welch_unagg, freqs_unagg = psd_welch(epochs, average=None, **kwargs)
print(psds_welch_unagg.shape)
Explanation: Lastly, we can also retrieve the unaggregated segments by passing
average=None to :func:mne.time_frequency.psd_welch. The dimensions of
the returned array are (n_epochs, n_sensors, n_freqs, n_segments).
End of explanation
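As a small sanity check, and only as a sketch built on the arrays already computed above, the per-segment PSDs can be re-aggregated by hand over the last axis; note that the plain median below lacks the bias correction that psd_welch applies internally:
psds_manual_mean = psds_welch_unagg.mean(axis=-1)          # compare to average='mean'
psds_manual_median = np.median(psds_welch_unagg, axis=-1)  # uncorrected median of segments
print(psds_manual_mean.shape)  # (n_epochs, n_sensors, n_freqs)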
# define frequencies of interest (log-spaced)
freqs = np.logspace(*np.log10([6, 35]), num=8)
n_cycles = freqs / 2. # different number of cycle per frequency
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,
return_itc=True, decim=3, n_jobs=1)
Explanation: Time-frequency analysis: power and inter-trial coherence
We now compute time-frequency representations (TFRs) from our Epochs.
We'll look at power and inter-trial coherence (ITC).
To do this we'll use the function :func:mne.time_frequency.tfr_morlet
but you can also use :func:mne.time_frequency.tfr_multitaper
or :func:mne.time_frequency.tfr_stockwell.
End of explanation
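Before plotting, it can help to eyeball the time resolution implied by the chosen n_cycles. A rough sketch, using the usual Morlet approximation sigma_t = n_cycles / (2*pi*f) and treating the numbers as indicative only:
sigma_t = n_cycles / (2. * np.pi * freqs)  # approximate temporal std (in s) of each wavelet
for f, s in zip(freqs, sigma_t):
    print('{:5.1f} Hz -> ~{:.3f} s'.format(f, s))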
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')
power.plot([82], baseline=(-0.5, 0), mode='logratio', title=power.ch_names[82])
fig, axis = plt.subplots(1, 2, figsize=(7, 4))
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
baseline=(-0.5, 0), mode='logratio', axes=axis[0],
title='Alpha', show=False)
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
baseline=(-0.5, 0), mode='logratio', axes=axis[1],
title='Beta', show=False)
mne.viz.tight_layout()
plt.show()
Explanation: Inspect power
<div class="alert alert-info"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click
on an image to visualize the data for one sensor.
You can also select a portion in the time-frequency plane to
obtain a topomap for a certain time-frequency region.</p></div>
End of explanation
power.plot_joint(baseline=(-0.5, 0), mode='mean', tmin=-.5, tmax=2,
timefreqs=[(.5, 10), (1.3, 8)])
Explanation: Joint Plot
You can also create a joint plot showing both the aggregated TFR
across channels and topomaps at specific times and frequencies to obtain
a quick overview regarding oscillatory effects across time and space.
End of explanation
itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')
Explanation: Inspect ITC
End of explanation |
10,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bokeh scatter plot introduction
Step1: <a id='index'></a>
Index
Back to top
1 Introduction
2 ScatterPlot components
2.1 The scatter plot marker
2.2 Internal structure
2.3 Data structures
2.3.1 Original data
2.3.2 Tooltip data
2.3.3 Mapper data
2.3.2 Output data
3 ScatterPlot interface
3.1 Data mapper
3.2 Tooltip selector
3.3 Colors in Hex format
4 Taking a snapshot of the current plot
5 Plotting pandas Panel and Panel4D
<a id='introduction'></a>
1 Introduction
Now that we are familiar with the framework's basics, we can start showing the full capabilities of Shaolin. In order to do that I will rewiev one of the most "simple" and widely used plots in data science
Step2: <a id='data_structures'></a>
2.3 Data structures
Back to top
The data contained in the blocks described in the above diagram gcan be accessed the following way
Step3: <a id='tooltip_data'></a>
2.3.2 Tooltip data
Back to top
Step4: <a id='mapper_data'></a>
2.3.3 Mapper data
Step5: <a id='output_data'></a>
2.3.4 output data
Step6: <a id='plot_interface'></a>
3 Scatter plot Interface
Back to top
The scatter plot Dashboard contains the bokeh scatter plot and a widget. That widget is a toggle menu that can display two Dashboards
Step7: A plot mapper has the following components
Step8: <a id='snapshot'></a>
4 Taking a snapshot of the current plot
Back to top
Although it is possible to save the bokeh plot with any of the standard methods that the bokeh library offers by accessing the plot attribute of the ScatterPlot, shaolin offers the possibility of saving an snapshot of the plot as a shaolin widget compatible with the framework, this way it can be included in a Dashboard for displaying purposes.
This process is done by accessing the snapshot attribute of the scatterPlot. This way the current plot is exported and we can keep working with the ScatterPlot Dashboard in case we need to make more plots. An snapshot is an HTML widget which value is an exported notebook_div of the plot.
Step9: <a id='plot_pandas'></a>
5 Plotting pandas Panel and Panel4D
Back to top
It is also possible to plot a pandas Panel or a Panel4d the same way as a DataFrame. The only resctriction for now is that the axis that will be used as index must be the major_axis in case of a Panel and the items axis in case of a Panel4D. The tooltips are disabled, custom tooltips will be available in the next release.
It would be nice to have feedback on how would you like to display and select the tooltips. | Python Code:
%%HTML
<style>
.container { width:100% !important; }
.input{ width:60% !important;
align: center;
}
.text_cell{ width:70% !important;
font-size: 16px;}
.title {align:center !important;}
</style>
Explanation: Bokeh scatter plot introduction
End of explanation
from IPython.display import Image #this is for displaying the widgets in the web version of the notebook
from shaolin.dashboards.bokeh import ScatterPlot
from bokeh.sampledata.iris import flowers
scplot = ScatterPlot(flowers)
Explanation: <a id='index'></a>
Index
Back to top
1 Introduction
2 ScatterPlot components
2.1 The scatter plot marker
2.2 Internal structure
2.3 Data structures
2.3.1 Original data
2.3.2 Tooltip data
2.3.3 Mapper data
2.3.2 Output data
3 ScatterPlot interface
3.1 Data mapper
3.2 Tooltip selector
3.3 Colors in Hex format
4 Taking a snapshot of the current plot
5 Plotting pandas Panel and Panel4D
<a id='introduction'></a>
1 Introduction
Now that we are familiar with the framework's basics, we can start showing the full capabilities of Shaolin. To do that, I will review one of the most "simple" and widely used plots in data science: the scatter plot. The dashboards section of the Shaolin framework provides several Dashboards suited for complex data processing, and the Bokeh ScatterPlot is the one this tutorial focuses on. All the individual components of this Dashboard will be explained in depth in further tutorials.
<a id='components'></a>
2 ScatterPlot components
<a id='scatter_marker'></a>
2.1 The scatter plot marker
Back to top
A scatter plot, as we all know, is a kind of plot in which we represent two data-point vectors (x and y) against each other and assign a marker (for now just a circle) to each pair of data points. Although the x and y coordinates of the marker are the only two compulsory parameters, it is also possible to customize the following parameters for a circle marker:
x: x coordinate of the marker.
y: y coordinate of the marker.
size: Marker size.
fill_alpha: Transparency value of the interior of the marker.
fill_color: Color of the interior of the marker.
line_color: Color of the marker's border.
line_alpha: Transparency of the marker's border.
line_width: Width of the marker's border.
It is possible to fully customize which data from the data structure will be mapped to a marker parameter and how that mapping will be performed. In order to assign values to a marker parameter we have to follow this process:
Select a chunk of data from your pandas data structure that has the correct shape. (In this case each parameter must be a datapoint vector)
Select how the data will be scaled to fit in the range of values that the marker parameter can have. (For example, for the line_width param all the values should be between 1 and 4.)
Select if the values of the parameter will be a mapping of the data or a default value that will be the same for all the data points.
This means that we could theoretically plot 8-dimensional data by mapping each parameter to a coordinate of a data point, but in practice it is sometimes more useful to map the same vector of data points to more than one parameter in order to emphasize some feature of the data we are plotting. For example, we could map the fill_color parameter and the fill_alpha parameter to the same feature so it would be easy to emphasize the higher values of the plotted vector.
<a id='internals'></a>
2.2 Internal structure
Back to top
The scatter plot is a Dashboard with the following attributes:
data: The pandas data structure that we will use for the plot.
widget: GUI for selecting the plot parameters.
plot: Bokeh plot where the data is displayed.
mapper: Dashboard in charge of mapping data to every marker parameter.
tooltip: Dashboard in charge of managing the information displayed on the tooltip.
output: DataFrame with all the information available to the plot.
bokeh_source: Bokeh DataSource that mimics the information contained in the source df.
In the following diagram you can see the process of how data is mapped into visual information.
<img src="scatter_data/structure.svg"></img>
For this example we will use the classic Iris dataset imported from the bokeh data samples.
End of explanation
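To make the scaling step described above concrete, here is a minimal, framework-independent sketch of what mapping a column onto the line_width range could look like. This is plain pandas/numpy for illustration, not the actual Shaolin DataScaler call:
col = flowers['petal_width']
line_width_vals = 1 + 3 * (col - col.min()) / (col.max() - col.min())  # rescale to [1, 4]
print(line_width_vals.describe())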
scplot.data.head()
Explanation: <a id='data_structures'></a>
2.3 Data structures
Back to top
The data contained in the blocks described in the above diagram can be accessed in the following way:
<a id='original_data'></a>
2.3.1 Original data
End of explanation
scplot.tooltip.output.head()
Explanation: <a id='tooltip_data'></a>
2.3.2 Tooltip data
Back to top
End of explanation
scplot.mapper.output.head()
Explanation: <a id='mapper_data'></a>
2.3.3 Mapper data
End of explanation
scplot.output.head()
Explanation: <a id='output_data'></a>
2.3.4 output data
End of explanation
mapper = scplot.mapper
mapper.buttons.widget.layout.border = "blue solid"
mapper.buttons.value = 'line_width'
mapper.line_width.data_scaler.widget.layout.border = 'yellow solid'
mapper.line_width.data_slicer.widget.layout.border = 'red solid 0.4em'
mapper.line_width.data_slicer.columns_slicer.widget.layout.border = 'green solid 0.4em'
mapper.line_width.data_slicer.index_slicer.widget.layout.border = 'green solid 0.4em'
mapper.line_width.default_value.widget.layout.border = 'purple solid 0.4em'
mapper.line_width.apply_row.widget.layout.border = "pink solid 0.4em"
scplot.widget
Image(filename='scatter_data/img_1.png')
Explanation: <a id='plot_interface'></a>
3 Scatter plot Interface
Back to top
The scatter plot Dashboard contains the bokeh scatter plot and a widget. That widget is a toggle menu that can display two Dashboards:
- Mapper: This dashboard is in charge of managing how the data is displayed.
- Tooltip: The BokehTooltip Dashboard allows to select what information will be displayed on the plot tooltips.
The complete plot interface can be displayed calling the function show.
As you will see, the interface layout has not yet been customized, so any suggestion regarding interface design will be appreciated.
<a id='data_mapper'></a>
3.1 Data Mapper
Back to top
This is the Dashboard that allows to customize how the data will be plotted. We will color each of its components so its easier to locate them. This is a good example of a complex Dashboard comprised of multiple Dashboards.
End of explanation
scplot.widget
Image(filename='scatter_data/img_2.png')
Explanation: A plot mapper has the following components:
- Marker parameter selector(Blue): A dropdown that allows to select which marker parameter that is going to be changed.
- Data slicer (Red): A dashboard in charge of selecting a data-point vector from a pandas data structure. We can slice each of the dimensions of the data structure thanks to an AxisSlicer (Green) Dashboard.
- Data scaler(Yellow): Dashboard in charge of scaling the data. Similar to the data scaler from the tutorials.
- Activate mapping(pink): If the value of the checkbox is True the value of the marker parameter will be the output of the scaler, otherwise it will be the default value(Purple) for every data point.
<a id='tooltip_selector'></a>
3.1 Tooltip selector
Back to top
It is possible to choose what information from the data attribute of the ScatterPlot will be shown when hovering above a marker.
In the above cell we click in the "tooltip" button of the toggleButtons in order to make the widget visible. As we can see there is a SelectMultiple widget for every column of the original DataFrame.
End of explanation
widget_plot = scplot.snapshot
widget_plot.widget
Image(filename='scatter_data/img_3.png')
Explanation: <a id='snapshot'></a>
4 Taking a snapshot of the current plot
Back to top
Although it is possible to save the bokeh plot with any of the standard methods that the bokeh library offers by accessing the plot attribute of the ScatterPlot, shaolin offers the possibility of saving a snapshot of the plot as a shaolin widget compatible with the framework, so it can be included in a Dashboard for displaying purposes.
This process is done by accessing the snapshot attribute of the ScatterPlot. This way the current plot is exported and we can keep working with the ScatterPlot Dashboard in case we need to make more plots. A snapshot is an HTML widget whose value is an exported notebook_div of the plot.
End of explanation
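For reference, the "standard Bokeh methods" mentioned above would look roughly like the sketch below, assuming scplot.plot is an ordinary Bokeh figure (this is illustrative and not part of the Shaolin API):
from bokeh.io import output_file, save
from bokeh.embed import components
output_file('scatter_snapshot.html')
save(scplot.plot)
script, div = components(scplot.plot)  # embeddable HTML fragments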
from pandas.io.data import DataReader# I know its deprecated but i can't make the pandas_datareader work :P
import datetime
symbols_list = ['ORCL', 'TSLA', 'IBM','YELP', 'MSFT']
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2013, 1, 27)
panel = DataReader( symbols_list, start=start, end=end,data_source='yahoo')
panel
sc_panel = ScatterPlot(panel)
#sc_panel.show()
Image(filename='scatter_data/img_4.png')
Explanation: <a id='plot_pandas'></a>
5 Plotting pandas Panel and Panel4D
Back to top
It is also possible to plot a pandas Panel or a Panel4D the same way as a DataFrame. The only restriction for now is that the axis used as the index must be the major_axis for a Panel and the items axis for a Panel4D. The tooltips are disabled; custom tooltips will be available in the next release.
It would be nice to have feedback on how you would like to display and select the tooltips.
End of explanation |
10,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parte 1
Step1: 2. Realizar y verificar la descomposición svd.
Step2: 3. Usar la descomposición para dar una aproximación de grado <code>k</code> de la imagen.</li>
4. Para alguna imagen de su elección, elegir distintos valores de aproximación a la imagen original.
Step3: Contestar, ¿qué tiene que ver este proyecto con compresión de imágenes?
En este proyecto se planteó una aplicación de la descomposición de una imagen mediante svd y se pudo comprender que esta técnica permite la compresión de información mediante reducción de componentes principales, es decir reducción de columnas en la matriz U, filas o columnas de la matriz S (componentes principales) y filas de la matriz $V^{-1} $
Ejercicio2
Funciones del ejercicio2
Step4: 1. Programar una función que dada cualquier matriz devuelva la pseudoinversa usando la descomposición SVD. Hacer otra función que resuelva cualquier sistema de ecuaciones de la forma Ax=b usando esta pseudoinversa.
Step6: 2. Jugar con el sistema Ax=b donde A=[[1,1],[0,0]] y b puede tomar distintos valores.
(a) Observar que pasa si b esta en la imagen de A (contestar cuál es la imagen) y si no está (ej. b = [1,1]).
(b) Contestar, ¿la solución resultante es única? Si hay más de una solución, investigar que carateriza a la solución devuelta.
(c) Repetir cambiando A=[[1,1],[0,1e-32]], ¿En este caso la solucíon es única? ¿Cambia el valor devuelto de x en cada posible valor de b del punto anterior?
Step7: Ejercicio3
1. Leer el archivo study_vs_sat.csv y almacenearlo como un data frame de pandas.
Step8: 2. Pleantear como un problema de optimización que intente hacer una aproximación de la forma sat_score ~ alpha + beta*study_hours minimizando la suma de los errores de predicción al cuadrado, ¿Cuál es el gradiente de la función que se quiere optimizar (hint
Step9: 3. Programar una función que reciba valores de alpha, beta y el vector study_hours y devuelva un vector array de numpy de predicciones alpha + beta*study_hours_i, con un valor por cada individuo
Step10: 4. Definan un numpy array X de dos columnas, la primera con unos en todas sus entradas y la segunda con la variable study_hours. Observen que X[alpha,beta] nos devuelve alpha + beta * study_hours_i en cada entrada y que entonces el problema se vuelve sat_score ~ X[alpha,beta]
Step11: 5. Calculen la pseudoinversa X^+ de X y computen (X^+)*sat_score para obtener alpha y beta soluciones.
Step12: 6. Comparen la solución anterior con la de la fórmula directa de solución exacta (alpha,beta)=(X^tX)^(-1)X^t*sat_score
Step13: La comparación de este método con el anterior es que ambos regresan el ajuste a mínimos cuadrados
7. (Avanzado) Usen la libreria matplotlib para visualizar las predicciones con alpha y beta solución contra los valores reales de sat_score. | Python Code:
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
#url = sys.argv[1]
url = 'Mario.png'
img = Image.open(url)
imggray = img.convert('LA')
Explanation: Part 1: Linear Algebra and Optimization Theory
1. Why is a matrix equivalent to a linear transformation between vector spaces?<br>
A: Because multiplying a vector by the matrix applies scalings, reductions, rotations and annihilations to its components, turning it into another vector with different proportions and dimensions; in other words, it maps the original vector from its vector space to another vector in another vector space.<br><br>
2. What is the effect, as a linear transformation, of a diagonal matrix and of an orthogonal matrix?<br>
A: A diagonal matrix applied to a column vector or a matrix scales each column vector, with the scaling determined by the position of the elements on the diagonal.
An orthogonal matrix applied to a column vector or a matrix is an isometric transformation, which can be of three kinds: translation, reflection or rotation. Its determinant is +-1.<br><br>
3. What is the singular value decomposition of a matrix?<br>
A: It is the factorization of a matrix into the product of three fundamental matrices: a matrix of singular vectors, the matrix of singular values and a transposed matrix of singular vectors. The decomposition is written as: $$ A = U \Sigma V^{T} $$
Here U = matrix of left singular vectors, $\Sigma$ = matrix of singular values (the square roots of the eigenvalues of $A^{T}A$) and $V^{T}$ = transpose of the matrix of right singular vectors.
The SVD has many applications in dimensionality reduction, principal component analysis (PCA), image compression, and least-squares approximation of systems of equations that have no unique solution. <br><br>
4. What does it mean to diagonalize a matrix, and what do the eigenvectors represent?<br>
A: Diagonalizing a matrix means factorizing it into the product of three basic matrices: the matrix of eigenvectors, the diagonal matrix of eigenvalues and the inverse of the matrix of eigenvectors, as follows: $$A = PDP^{-1}$$ where P = matrix of eigenvectors, D = diagonal matrix of eigenvalues and $P^{-1}$ = inverse of the matrix of eigenvectors.
The eigenvectors of a linear transformation are vectors for which applying the transformation (multiplying by the matrix) is equivalent to multiplying by a scalar called the eigenvalue: $$ Ax = \lambda x $$ This means that under the transformation these vectors do not change direction; they only change their length, or their sense if the eigenvalue is negative. <br><br>
5. Intuitively, what are the eigenvectors?<br>
A: They can be interpreted as the Cartesian axes of the linear transformation, satisfying the spectral theorem: $$ \tau(\vec v) = \lambda_{1}(\vec v_{1} \bullet \vec v)\vec v_{1} + \lambda_{2}(\vec v_{2} \bullet \vec v)\vec v_{2} + .... + \lambda_{n}(\vec v_{n} \bullet \vec v)\vec v_{n} $$ <br><br>
6. How do you interpret the singular value decomposition as a composition of three simple linear transformations?<br>
A: The three matrices of the SVD can be seen as simple transformations: an initial rotation, a scaling along the principal axes (the singular values) and a final rotation. <br><br>
7. What is the relationship between the singular value decomposition and diagonalization?<br>
A: The SVD is a generalization of diagonalization. When a matrix is not square it cannot be diagonalized, but it can still be decomposed with the SVD. Moreover, using the SVD it is possible to handle linear systems that do not have a unique solution: the returned solution is the least-squares fit (and if the system does have a solution, that solution is returned). <br><br>
8. How is the singular value decomposition used to give a lower-rank approximation of a matrix? <br>
A: The SVD of an mxn matrix yields three matrices whose product reproduces the complete original matrix. Thanks to the properties of the decomposition, one can keep only the most relevant principal components, discarding columns of $U_{mxr}$, rows or columns of the singular value matrix $\Sigma_{rxr}$ and rows of $V_{rxn}^{T}$. Their product still has the original mxn size, but with some error; the more principal components (columns and rows of U, S and VT) are kept, the better the reconstruction of the original matrix. This decomposition is very useful for compressing information and for principal component analysis (PCA).<br><br>
9. Describe the gradient descent minimization method<br>
A: It is an iterative, first-order minimization method for a function given its gradient. Knowing the function to optimize, the gradient (vector of partial derivatives) is computed, a random starting point is chosen and substituted into the gradient; one then identifies the direction in which the gradient is most negative and takes a small step in that direction, moving closer to a local minimum, and so on until convergence (a local minimum). Among its applications are finding local minima of functions, solving linear systems and solving nonlinear systems. <br><br>
10. Give 4 examples of optimization problems (two constrained and two unconstrained) that you find interesting as a data scientist<br>
A: Constrained: optimizing the energy efficiency of an electric motor design under geometric, thermodynamic, electromagnetic and other constraints. Design variables include the number of pole pairs, the ratio of the stator's inner to outer diameter, the fraction of the total rotor depth used by the central section, and more; these variables form a vector that feeds a set of objective functions, and the constraints are also modeled as equations. <br>
This is a multi-objective optimization problem, and Pareto-based methods are the most suitable approach.
Vector of optimal variables:
$ \vec x = [x_{1},x_{2},x_{3},....,x_{D} ], \vec x \in R^{D}$
Boundary constraints: $x_{i}^{L} \leq x_{i} \leq x_{i}^{U}, i=1, ...., D$
Equation constraints: $g_{j}(\vec x ) \leq 0, j=1,....,m$
Objective function: $ f( \vec x ) = [f_{1}( \vec x), f_{2}( \vec x), .... f_{k}( \vec x)]$
Reference: http://www.adept-itn.eu/images/Publications/Optimization_in_Design_of_Electric_Machines_Methodology_and_Workflow.pdf
B. Optimization of the smart electric grid (Smart Grid):
The smart grid is the evolution of today's electric grid, integrating new generation sources (renewables), home producers, new transmission and storage equipment, meters, sensors, communication and computing infrastructure, and energy-management algorithms. The smart grid has three main goals:
1. Improve the reliability of the electric service (fewer outages).
2. Reduce peak demand (keep consumption as stable as possible).
3. Reduce total energy consumption (keep consumption as low as possible). <br>
The first goal poses a maximization problem on service uptime and power quality.
The second goal is posed as a peak-demand minimization problem.
The third goal is also a minimization problem.
Reference: https://file.scirp.org/pdf/AJOR_2013013014361193.pdf <br>
Unconstrained:
C. Finding the maximum energy consumption of a factory in kWh from digital consumption records produced by smart electric meters.
D. Finding the smallest number of product units needed to satisfy a demand with a parabolic (logistic) shape.
1. Recibir el path de un archivo de una imagen y convertirlo en una matriz numérica que represente la versión en blanco y negro de la imagen
End of explanation
imggrayArray = np.array(list(imggray.getdata(band=0)), float)
imggrayArray.shape = (imggray.size[1], imggray.size[0])
imggrayArray = np.matrix(imggrayArray)
plt.imshow(imggray)
plt.show()
u, s, v = np.linalg.svd(imggrayArray)
print("La matriz U es: ")
print(u)
print("La matriz S es: ")
print(s)
print("La matriz V es: ")
print(v)
Explanation: 2. Realizar y verificar la descomposición svd.
End of explanation
for i in range(1,50,10):
reconstimg = np.matrix(u[:, :i]) * np.diag(s[:i]) * np.matrix(v[:i,:])
plt.imshow(reconstimg, cmap='gray')
plt.show()
Explanation: 3. Usar la descomposición para dar una aproximación de grado <code>k</code> de la imagen.</li>
4. Para alguna imagen de su elección, elegir distintos valores de aproximación a la imagen original.
End of explanation
def lin_solve_pseudo(A,b):
pseudoinv = pseudoinverse(A)
return np.matmul(pseudoinv,b)
def pseudoinverse(A):
u,s,v = np.linalg.svd(A)
diagonal = np.diag(s)
if v.shape[0] > diagonal.shape[1]:
print("Agregando columnas a la sigma")
vector = np.array([[0 for x in range(v.shape[0] - diagonal.shape[1])] for y in range(diagonal.shape[0])])
diagonal = np.concatenate((diagonal, vector), axis=1)
elif u.shape[1] > diagonal.shape[0]:
print("Agregando renglones a la sigma")
vector = np.array([[0 for x in range(diagonal.shape[0])] for y in range(u.shape[1]-diagonal.shape[0])])
diagonal = np.concatenate((diagonal, vector), axis=0)
for a in range(diagonal.shape[0]):
for b in range(diagonal.shape[1]):
if diagonal[a][b] != 0:
diagonal[a][b] = 1/diagonal[a][b]
resultante = np.dot(np.transpose(v),np.transpose(diagonal))
resultante = np.dot(resultante,np.transpose(u))
return resultante
Explanation: Contestar, ¿qué tiene que ver este proyecto con compresión de imágenes?
En este proyecto se planteó una aplicación de la descomposición de una imagen mediante svd y se pudo comprender que esta técnica permite la compresión de información mediante reducción de componentes principales, es decir reducción de columnas en la matriz U, filas o columnas de la matriz S (componentes principales) y filas de la matriz $V^{-1} $
Ejercicio2
Funciones del ejercicio2
End of explanation
A = np.array([[1,1,1],[1,1,3],[2,4,4]])
b = np.array([[18,30,68]])
solve = lin_solve_pseudo(A,np.transpose(b))
print(solve)
Explanation: 1. Programar una función que dada cualquier matriz devuelva la pseudoinversa usando la descomposición SVD. Hacer otra función que resuelva cualquier sistema de ecuaciones de la forma Ax=b usando esta pseudoinversa.
End of explanation
print("(a)")
print("La imagen de A es cualquier vector de dos coordenadas en donde la segunda componente siempre sea cero")
print("Vector b en imagen de A")
A = np.array([[1,1],[0,0]])
b = np.array([[12,0]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
print("Cuando b esta en la imagen, la funcion lin_solve_pseudo devuelve la solucion unica a su sistema")
print("Vector b no en imagen de A")
b = np.array([[12,8]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
print("Cuando b no esta en la imagen, devuelve la solucion mas aproximada a su sistema")
print("(c)")
A = np.array([[1,1],[0,1e-32]])
b = np.array([[12,9]])
solve = lin_solve_pseudo(A, np.transpose(b))
print(solve)
cadena = En este caso, la solucion devuelta siempre es el valor de la segunda coordenada del vector b por e+32\
y es el valor de ambas incognitas, solo que con signos contrarios ej(x1=-9.0e+32, x2=9.0e+32) \
esto debido a que cualquier numero entre un numero muy pequenio tiende a infinito, de manera que la \
coordenada dos del vector tiene mucho peso con referencia a la coordenada uno del vector
print(cadena)
Explanation: 2. Jugar con el sistema Ax=b donde A=[[1,1],[0,0]] y b puede tomar distintos valores.
(a) Observar que pasa si b esta en la imagen de A (contestar cuál es la imagen) y si no está (ej. b = [1,1]).
(b) Contestar, ¿la solución resultante es única? Si hay más de una solución, investigar que carateriza a la solución devuelta.
(c) Repetir cambiando A=[[1,1],[0,1e-32]], ¿En este caso la solucíon es única? ¿Cambia el valor devuelto de x en cada posible valor de b del punto anterior?
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv("./study_vs_sat.csv", sep=',')
print(data)
Explanation: Ejercicio3
1. Leer el archivo study_vs_sat.csv y almacenearlo como un data frame de pandas.
End of explanation
hrs_studio = np.array(data["study_hours"])
sat_score = np.array(data["sat_score"])
A = np.vstack([hrs_studio, np.ones(len(hrs_studio))]).T
m,c = np.linalg.lstsq(A,sat_score)[0]
print("Beta y alfa: ")
print(m,c)
Explanation: 2. Pose as an optimization problem an approximation of the form sat_score ~ alpha + beta*study_hours that minimizes the sum of squared prediction errors. What is the gradient of the function to optimize (hint: the variables we want to optimize are alpha and beta)?
End of explanation
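Regarding the gradient asked for above: for the loss L(alpha, beta) = sum_i (alpha + beta*x_i - y_i)^2 the partial derivatives are 2*sum(residuals) and 2*sum(residuals*x). A small sketch using the arrays already defined above:
def sse_gradient(alpha, beta, x, y):
    resid = alpha + beta * x - y
    return np.array([2 * resid.sum(), 2 * (resid * x).sum()])
print(sse_gradient(353.165, 25.326, hrs_studio, sat_score))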
def predict(alfa, beta, study_hours):
study_hours_i=[]
for a in range(len(study_hours)):
study_hours_i.append(alfa + beta*np.array(study_hours[a]))
return study_hours_i
print("prediccion")
print(predict(353.165, 25.326, hrs_studio))
Explanation: 3. Write a function that receives values of alpha, beta and the study_hours vector and returns a numpy array of predictions alpha + beta*study_hours_i, one value per individual.
End of explanation
unos = np.ones((len(hrs_studio),1))
hrs_studio = [hrs_studio]
hrs_studio = np.transpose(hrs_studio)
x = np.hstack((unos, hrs_studio))
print("La prediccion es: ")
print(np.matmul(x,np.array([[353.165],[25.326]])))
Explanation: 4. Define a numpy array X with two columns, the first filled with ones and the second with the study_hours variable. Note that X[alpha,beta] returns alpha + beta * study_hours_i in each entry, so the problem becomes sat_score ~ X[alpha,beta].
End of explanation
X_pseudo = pseudoinverse(x)
print("Las alfas y betas son: ")
print(np.matmul(X_pseudo,sat_score))
Explanation: 5. Compute the pseudoinverse X^+ of X and compute (X^+)*sat_score to obtain the solution alpha and beta.
End of explanation
def comparacion(X, sat_score):
x_transpose = np.transpose(X)
return np.matmul(np.linalg.inv(np.matmul(x_transpose,X)), np.matmul(x_transpose,sat_score))
#x = np.array([[1,1],[1,2],[1,3]])
#y = np.array([5,6,7])
x = np.hstack((unos, hrs_studio))
print(comparacion(x, sat_score))
Explanation: 6. Compare the previous solution with the direct exact-solution formula (alpha,beta)=(X^tX)^(-1)X^t*sat_score
End of explanation
plt.plot(hrs_studio, sat_score, 'o', label='Original data', markersize=10)
plt.plot(hrs_studio, m*hrs_studio + c, 'r', label='Fitted line')
plt.legend()
plt.show()
Explanation: Comparing this method with the previous one, both return the least-squares fit.
7. (Advanced) Use the matplotlib library to visualize the predictions with the solution alpha and beta against the real sat_score values.
End of explanation |
10,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Experiment and anlyse some features creation
Step3: Read and prepare the data
Step4: Cleaning dataset
Step5: Create dataset for learning
Step6: Create Target learning & analyse mechanics
Step7: Here we don't care about DataFrame Sort (timeserie). We only do some matching and ratio
get_summer_holiday
Create a bool (0 classic day / 1 holidays)
Step8: get_public_holiday
Count day before and after special holiday (like assomption on 15/08)
Step9: cluster_station_lyon
Step10: Cluster of station by ativite (mean on bike by hours of day)
You can find the process of clustering in file ../clustering-Lyon-Armand.ipynb
Step11: cluster_station_geo_lyon
Step12: Cluster of Lyon's station by geography position
You can find the process of clustering in file ../Clustering-Lyon-geo-Armand.ipynb
Step13: Binned hour
Step14: get_statio_ratio_open_by_time
Ratio of open station by time
Step15: get_statio_cluster_geo_ratio_open_by_time
Step16: Ratio of open station (on geography cluster) by hours
Step17: Now we sorting our DataFrame to create timeserie features ( /!\ order is important )
Step18: Step by step
Step19: Creating 'label' to shift probability to one hour.
probability at 2017-07-09 01
Step20: Merging label and data to create the learning dataset with target shifting
Step21: Creation of features
Step22: create_shift_features
Step23: create_cumul_trend_features
Step24: get_station_recently_closed
Step25: Sometime station are closed for maintenance, so they can't be use by users. Trying to catch this information to help the learning process
Step26: If station is open since 4 hours, was_recently_open has a value of 24 (4 * 6 (bin is egal to 10 min))
filling_bike_on_geo_cluster
Step27: Create a features on filling bike by geo station. This give information if some zone (cluster) are empty or full
Step28: get_paa_transformation
Step29: There is data leak in this features (PAA). At 09
Step30: As PAA, SAX transformation give data leak.
tranform signal with rolling mean
As PAA & SAX can give data leak, we will mean our probability on x bin (rolling mean)
Step31: Here there is no leak of information in the future. You only take past informations to give context to our algorithm
Rolling Standard Deviation
Sometime station's bike don't move too much, and sometime it's crazy time. By given this indicator, we want to help our algorithm with context awareness
Step32: Rolling on median
Step35: Create bool for pretty empty or full station
Step36: Split learning dataset on train test (avoid data leak feature)
The train test split
We split our dataset to create on date
Step37: Create KPI probability group on binned hour / month / day / is_open ==1
We need to create our binned hours mapping
Step38: Detect anomalie in probability du to some re stock by humain
Step39: In 10 min at 03
Step40: Weather feature
Exact weather
Step41: Forcast weather | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.dates as mdates
from matplotlib import pyplot as plt
import seaborn as sns
# Set random
np.random.seed(42)
import sys
sys.path.append('../')
from prediction import (datareader, complete_data, cleanup, bikes_probability,
time_resampling)
%load_ext watermark
%watermark -d -v -p numpy,pandas,matplotlib -g -m -w
def plot_features_station(result, station, features_to_plot='paa', nb_row=350, draw_type='-'):
    """Plot available bikes and bike stands for a given station"""
data = result[result.station == station].tail(nb_row).copy()
fig, ax = plt.subplots(figsize=(18,5))
plt.plot(data.index, data.probability, draw_type, label='probability', alpha=0.8)
plt.plot(data.index, data[features_to_plot], draw_type, label=features_to_plot, alpha=0.6)
ax = plt.gca()
# set major ticks location every day
ax.xaxis.set_major_locator(mdates.DayLocator())
# set major ticks format
ax.xaxis.set_major_formatter(mdates.DateFormatter('\n\n\n%a %d.%m.%Y'))
# set minor ticks location every one hours
ax.xaxis.set_minor_locator(mdates.HourLocator(interval=1))
# set minor ticks format
ax.xaxis.set_minor_formatter(mdates.DateFormatter('%H:%M'))
plt.setp(ax.xaxis.get_minorticklabels(), rotation=45)
plt.legend(loc='best')
def plot_features_station_train_test(train, test, station, features_to_plot='paa', nb_row=350, draw_type='-'):
    """Plot available bikes and bike stands for a given station for a train / test dataset"""
train = train[train.station == station].tail(nb_row).copy()
test = test[test.station == station].copy()
fig, ax = plt.subplots(figsize=(18,5))
plt.plot(train.index, train.probability, draw_type, label=' Train probability', alpha=0.8)
plt.plot(train.index, train[features_to_plot], draw_type, label='Train ' + features_to_plot, alpha=0.6)
plt.plot(test.index, test.probability, draw_type, label=' Test probability', alpha=0.8)
plt.plot(test.index, test[features_to_plot], draw_type, label='Test ' + features_to_plot, alpha=0.6)
ax = plt.gca()
# set major ticks location every day
ax.xaxis.set_major_locator(mdates.DayLocator())
# set major ticks format
ax.xaxis.set_major_formatter(mdates.DateFormatter('\n\n\n%a %d.%m.%Y'))
# set minor ticks location every one hours
ax.xaxis.set_minor_locator(mdates.HourLocator(interval=1))
# set minor ticks format
ax.xaxis.set_minor_formatter(mdates.DateFormatter('%H:%M'))
plt.setp(ax.xaxis.get_minorticklabels(), rotation=45)
plt.legend(loc='best')
Explanation: Experiment with and analyse some feature creation
End of explanation
DATAFILE = '../data/lyon.csv'
raw = datareader(DATAFILE)
Explanation: Read and prepare the data
End of explanation
df_clean = cleanup(raw)
df_clean.head()
Explanation: Cleaning dataset
End of explanation
df = (df_clean.pipe(time_resampling)
.pipe(complete_data)
.pipe(bikes_probability))
df.head()
df.shape
df.info()
Explanation: Create dataset for learning
End of explanation
# params of learning dataset creation
start = pd.Timestamp("2017-08-01T02:00:00") # Tuesday
predict_date = pd.Timestamp("2017-09-22T09:00:00") # wednesday
# predict the next 30 minutes
freq = '1H'
# number of predictions at 'predict_date'.
# Here, the next 30 minutes and the next hour (30 minutes + 30 minutes).
# If you want to predict the next 3 hours, every 30 minutes, thus set periods=6
periods = 1
Explanation: Create Target learning & analyse mechanics
End of explanation
from prediction import get_summer_holiday
df = get_summer_holiday(df.copy())
df.head(2)
df.tail(2)
Explanation: Here we don't care about the DataFrame order (time series); we only do some matching and ratio computations.
get_summer_holiday
Create a boolean flag (0 = regular day, 1 = summer holidays)
End of explanation
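get_summer_holiday lives in prediction.py; a hypothetical equivalent (the exact holiday dates below are assumptions for illustration, not the real implementation) might look like:
def tag_summer_holiday(ts):
    start, end = pd.Timestamp('2017-07-08'), pd.Timestamp('2017-09-03')
    return ts.between(start, end).astype(int)
# df['is_holiday'] = tag_summer_holiday(df['ts'])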
from prediction import get_public_holiday
df = get_public_holiday(df.copy(), count_day=5)
df[df.ts >='2017-08-14 23:50:00'].head()
Explanation: get_public_holiday
Count day before and after special holiday (like assomption on 15/08)
End of explanation
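A hypothetical sketch of the idea behind get_public_holiday, counting days relative to Assumption Day (2017-08-15) and clipping at +/- count_day; again, this is an assumed illustration rather than the real implementation:
assumption = pd.Timestamp('2017-08-15')
delta_days = (df['ts'].dt.normalize() - assumption).dt.days
public_holiday_sketch = delta_days.clip(-5, 5)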
from prediction import cluster_station_lyon
Explanation: cluster_station_lyon
End of explanation
df = cluster_station_lyon(df.copy(), path_file='../data/cluster_lyon_armand.csv')
df.head()
Explanation: Clusters of stations by activity (mean number of bikes by hour of the day)
You can find the process of clustering in file ../clustering-Lyon-Armand.ipynb
End of explanation
from prediction import cluster_station_geo_lyon
Explanation: cluster_station_geo_lyon
End of explanation
df = cluster_station_lyon(df.copy(), path_file='../data/station_cluster_geo_armand.csv')
df.head()
Explanation: Cluster Lyon's stations by geographic position
You can find the process of clustering in file ../Clustering-Lyon-geo-Armand.ipynb
End of explanation
from prediction import mapping_hours
df['hours_binned'] = df.hour.apply(mapping_hours)
df.head()
Explanation: Binned hour
End of explanation
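mapping_hours is imported from the prediction module; the bins below are purely illustrative and only show the idea of collapsing the 24 hours into a few coarse periods.
def mapping_hours_sketch(hour):
    # Illustrative bins: night / morning rush / daytime / evening
    if hour < 6:
        return "night"
    if hour < 10:
        return "morning"
    if hour < 17:
        return "day"
    return "evening"
# df['hours_binned'] = df.hour.apply(mapping_hours_sketch)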
from prediction import get_statio_ratio_open_by_time
df_temp_1 = get_statio_ratio_open_by_time(df.copy())
df_temp_1.head()
Explanation: get_statio_ratio_open_by_time
Ratio of open stations at each timestamp
End of explanation
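A minimal sketch of this ratio, assuming an 'is_open' flag per row (the output column name is an assumption):
def ratio_open_by_time_sketch(df):
    # Share of stations flagged open at each timestamp, broadcast back to every row
    df["ratio_station_open"] = df.groupby("ts")["is_open"].transform("mean")
    return df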
from prediction import get_statio_cluster_geo_ratio_open_by_time
Explanation: get_statio_cluster_geo_ratio_open_by_time
End of explanation
df_temp_2 = get_statio_cluster_geo_ratio_open_by_time(df.copy())
df_temp_2.head()
Explanation: Ratio of open stations (per geographic cluster) at each timestamp
End of explanation
data = df.sort_values(['station', 'ts']).set_index(["ts", "station"])
observation = 'probability'
label = data[observation].copy()
label.name = "future"
label = (label.reset_index(level=1)
.shift(-1, freq=freq)
.reset_index()
.set_index(["ts", "station"]))
result = data.merge(label, left_index=True, right_index=True)
result.reset_index(level=1, inplace=True)
if start is not None:
result = result[result.index >= start]
Explanation: Now we sort our DataFrame to create time-series features (/!\ the order is important)
End of explanation
data.head(15)
Explanation: Step by step:
The first step is to sort the dataset by station and time ('ts')
End of explanation
label[6:11]
Explanation: Create the 'label' by shifting the probability back one hour:
the probability at 2017-07-09 01:00:00 becomes the 'future' value at 2017-07-09 00:00:00
End of explanation
result[result.station == 1001][['station', 'bikes', 'stands', 'probability', 'future']].head(15)
Explanation: Merging label and data to create the learning dataset with target shifting
End of explanation
# Original learning dataset :
result.head()
Explanation: Creation of features
End of explanation
from prediction import create_shift_features
df_temp_3 = create_shift_features(result.copy(), features_name='bikes_shift_'+str(freq.replace('H', 'bin')), feature_to_shift='bikes',
features_grp='station', nb_shift=periods)
df_temp_3[['station', 'bikes', 'bikes_shift_1bin']].head(15)
Explanation: create_shift_features
End of explanation
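The lag feature itself boils down to a grouped shift; a sketch under the same argument names (the real prediction.create_shift_features may handle several lags at once):
def create_shift_features_sketch(df, features_name, feature_to_shift, features_grp, nb_shift):
    # Lag the feature by nb_shift bins inside each group, so only past values are used
    df[features_name] = df.groupby(features_grp)[feature_to_shift].shift(nb_shift)
    return df
# df_temp_3 = create_shift_features_sketch(result.copy(), 'bikes_shift_1bin', 'bikes', 'station', 1)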
from prediction import create_cumul_trend_features
# Need to use df_temp with 'bikes_shift_1bin' values
df_temp_4 = create_cumul_trend_features(df_temp_3, features_name='bikes_shift_'+str(freq.replace('H', 'bin')))
df_temp_4[df_temp_4.station == 1001][['station', 'bikes', 'bikes_shift_1bin',
'cumsum_trend_sup', 'cumsum_trend_inf', 'cumsum_trend_equal']].head(8)
Explanation: create_cumul_trend_features
End of explanation
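A sketch of the trend counters: for each row we check whether the number of bikes went up, down or stayed flat versus the lagged value, then count the length of the current run of identical moves (per-station grouping is omitted here for brevity, which is an assumption about the real implementation).
def cumul_trend_sketch(df, lagged="bikes_shift_1bin"):
    up = (df["bikes"] > df[lagged]).astype(int)
    down = (df["bikes"] < df[lagged]).astype(int)
    flat = (df["bikes"] == df[lagged]).astype(int)
    def run_length(flag):
        # Length of the current run of 1s (resets to 0 whenever the flag drops)
        block = (flag != flag.shift()).cumsum()
        return flag.groupby(block).cumsum()
    df["cumsum_trend_sup"] = run_length(up)
    df["cumsum_trend_inf"] = run_length(down)
    df["cumsum_trend_equal"] = run_length(flat)
    return df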
from prediction import get_station_recently_closed
Explanation: get_station_recently_closed
End of explanation
df[254350:254361][['station', 'ts', 'bikes', 'is_open', 'probability']]
df_temp_5 = get_station_recently_closed(result, nb_hours=4)
df_temp_5[['station', 'bikes', 'is_open', 'probability', 'was_recently_open']].tail()
Explanation: Sometimes stations are closed for maintenance, so they cannot be used. We try to capture this information to help the learning process
End of explanation
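One way to express this, sketched under the assumption of 10-minute bins and an 'is_open' flag (the real prediction.get_station_recently_closed may differ in its details):
def recently_open_sketch(df, nb_hours=4):
    # Number of 10-minute bins the station was open during the last nb_hours
    window = nb_hours * 6
    df["was_recently_open"] = (df.groupby("station")["is_open"]
                                 .transform(lambda s: s.rolling(window, min_periods=1).sum()))
    return df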
from prediction import filling_bike_on_geo_cluster
Explanation: If a station has been open for the last 4 hours, was_recently_open has a value of 24 (4 * 6, since one bin equals 10 minutes)
filling_bike_on_geo_cluster
End of explanation
df_temp_6 = filling_bike_on_geo_cluster(df_temp_3.copy(), features_name='bikes_shift_'+str(freq.replace('H', 'bin')))
df_temp_6.tail()
Explanation: Create a feature describing bike filling per geographic cluster. This indicates whether some zones (clusters) are empty or full
End of explanation
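A sketch of the idea, assuming a geographic-cluster column (here called 'station_cluster_geo', which is an assumption) and the lagged bike count computed above:
def filling_on_geo_cluster_sketch(df, features_name="bikes_shift_1bin"):
    # Mean lagged bike count over the whole geographic cluster at each timestamp,
    # so each station also "sees" whether its neighbourhood is emptying or filling up
    grp = df.groupby(["ts", "station_cluster_geo"])[features_name]
    df["filling_station_by_geo_cluster"] = grp.transform("mean")
    return df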
from prediction import get_paa_transformation
df_temp_7 = get_paa_transformation(result.copy(), features_to_compute='probability', segments=10)
df_temp_7[df_temp_7.station == 1005][['station', 'bikes', 'probability', 'future', 'paa']].tail(22)
plot_features_station(df_temp_7, station=1001, features_to_plot='paa', nb_row=120, draw_type='-o')
plot_features_station(df_temp_7, station=1001, features_to_plot='paa', nb_row=29, draw_type='-o')
df_temp_7[df_temp_7.station == 1001][['station', 'probability', 'future', 'paa']].tail(15)
df_temp_7[df_temp_7.station == 1001][['station', 'probability', 'future', 'paa']][-26:-16]
df_temp_7[df_temp_7.station == 1001].probability[-26:-16].mean()
Explanation: get_paa_transformation
End of explanation
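PAA (Piecewise Aggregate Approximation) replaces each block of consecutive points by the block mean. A minimal numpy sketch (the real get_paa_transformation works per station and directly on the DataFrame):
import numpy as np
def paa_sketch(values, segments=10):
    # Assumes len(values) >= segments; each block of points is replaced by its mean
    values = np.asarray(values, dtype=float)
    blocks = np.array_split(values, segments)
    return np.concatenate([np.full(len(b), b.mean()) for b in blocks])
Note that every block mean mixes past and future points of the block, which is exactly the leakage discussed in this section.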
from prediction import get_sax_transformation
df_temp_8 = get_sax_transformation(result.copy(), features_to_compute='probability', segments=10, symbols=8)
df_temp_8[df_temp_8.station == 1001].tail(22)
plot_features_station(df_temp_8, station=1001, features_to_plot='sax', nb_row=35, draw_type='-o')
Explanation: There is a data leak in this feature (PAA). At 09:40 the probability is 0.062 (the target one hour later is the same). But PAA averages over the next 9 values, so it sees the future increase (0.187 / 0.187 / 0.125 thirty minutes later) and comes out higher. The algorithm treats this as useful information, but it would not be available in production.
get_sax_transformation
End of explanation
# Original
result.head()
from prediction import create_rolling_mean_features
df_temp_9 = create_rolling_mean_features(result,
features_name='mean_6',
feature_to_mean='probability',
features_grp='station',
nb_shift=6)
df_temp_9[df_temp_9.station == 1001].tail(15)
plot_features_station(df_temp_9, station=1001, features_to_plot='mean_6', nb_row=40, draw_type='-o')
Explanation: Like PAA, the SAX transformation leaks data.
Transform the signal with a rolling mean
Since PAA and SAX can leak data, we instead average the probability over the last x bins (rolling mean)
End of explanation
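A leak-free version only ever looks backwards; a sketch of what create_rolling_mean_features might do (the exact window handling in the prediction module may differ):
def rolling_mean_sketch(df, features_name="mean_6", feature="probability",
                        features_grp="station", window=6):
    # Shift by one bin first so the window only contains past values (no leakage)
    df[features_name] = (df.groupby(features_grp)[feature]
                           .transform(lambda s: s.shift(1).rolling(window, min_periods=1).mean()))
    return df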
from prediction import create_rolling_std_features
df_temp_10 = create_rolling_std_features(result,
features_name='std_9',
feature_to_std='probability',
features_grp='station',
nb_shift=9)
plot_features_station(df_temp_10, station=4012, features_to_plot='std_9', nb_row=35, draw_type='-o')
Explanation: Here there is no leak of future information: only past values are used to give context to the algorithm
Rolling Standard Deviation
Sometimes a station's bikes barely move, and sometimes activity is hectic. This indicator gives the algorithm some awareness of that context
End of explanation
from prediction import create_rolling_median_features
df_temp_11 = create_rolling_median_features(result,
features_name='median_6',
feature_to_median='probability',
features_grp='station',
nb_shift=6)
plot_features_station(df_temp_11, station=4012, features_to_plot='median_6', nb_row=40, draw_type='-o')
Explanation: Rolling median
End of explanation
def create_bool_empty_full_station(df):
"""Create a boolean feature "warning_empty_full":
1 if bikes <= 2 (nearly empty) or probability >= 0.875 (nearly full), else 0
"""
df['warning_empty_full'] = 0
df.loc[df['bikes'] <= 2, 'warning_empty_full'] = 1
df.loc[df['probability'] >= 0.875, 'warning_empty_full'] = 1
return df
df_temp_12 = result.copy()
df_temp_12.head()
df_temp_12['warning_empty_full'] = 0
df_temp_12.loc[df_temp_12['bikes'] <= 2, 'warning_empty_full'] = 1
df_temp_12.loc[df_temp_12['probability'] >= 0.875, 'warning_empty_full'] = 1
feature_event_to_plot='warning_empty_full'
features_event_value=1
df_temp_12[df_temp_12[feature_event_to_plot] == features_event_value].head()
def plot_event_station(result, station, feature_event_to_plot='bike', features_event_value=1,
nb_row=350, point_type='*'):
"""Plot available bikes and bike stands for a given station"""
data = result[result.station == station].tail(nb_row).copy()
fig, ax = plt.subplots(figsize=(18,5))
plt.plot(data.index, data.probability, '-', label='probability', alpha=0.8)
plt.plot(data[data[feature_event_to_plot] == features_event_value].index,
data[data[feature_event_to_plot] == features_event_value].probability,
point_type, markerfacecolor='k',
label=feature_event_to_plot, alpha=0.6)
ax = plt.gca()
# set major ticks location every day
ax.xaxis.set_major_locator(mdates.DayLocator())
# set major ticks format
ax.xaxis.set_major_formatter(mdates.DateFormatter('\n\n\n%a %d.%m.%Y'))
# set minor ticks location every one hours
ax.xaxis.set_minor_locator(mdates.HourLocator(interval=1))
# set minor ticks format
ax.xaxis.set_minor_formatter(mdates.DateFormatter('%H:%M'))
plt.setp(ax.xaxis.get_minorticklabels(), rotation=45)
plt.legend(loc='best')
plot_event_station(df_temp_12, station=10101, feature_event_to_plot='warning_empty_full',
features_event_value=1, nb_row=350, point_type='*')
Explanation: Create a boolean flag for nearly empty or nearly full stations
End of explanation
# to have same value
date = predict_date
print ('date : ' + str(date))
cut = date - pd.Timedelta(freq.replace('T', 'm'))
stop = date + periods * pd.Timedelta(freq.replace('T', 'm'))
print ('cut : ' + str(cut))
print ('stop : ' + str(stop))
train = result[result.index <= cut].copy()
mask = np.logical_and(result.index >= date, result.index <= stop)
test = result[mask].copy()
print('train shape : ' + str(train.shape))
print('test shape : ' + str(test.shape))
train.head()
Explanation: Split the learning dataset into train and test sets (avoiding feature data leaks)
The train/test split
We split our dataset at a given date to create:
- A training dataset
- A test dataset
End of explanation
from prediction import create_mean_by_sta_day_binned_hours
train_temp_1, test_temp_1 = create_mean_by_sta_day_binned_hours(train.copy(), test.copy(),
features_name='proba_mean_by_sta_day_binned_hour',
feature_to_mean='probability',
features_grp=['station', 'day', 'hours_binned'])
plot_features_station_train_test(train_temp_1, test_temp_1, station=1036, features_to_plot='proba_mean_by_sta_day_binned_hour',
nb_row=450, draw_type='-o')
Explanation: Create a KPI: mean probability grouped by binned hour / month / day / is_open == 1
We need to create our binned hours mapping
End of explanation
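The key point is that the group means are computed on the training period only and then mapped onto both sets, so the test set never leaks into its own encoding. A sketch under that assumption:
def mean_by_group_sketch(train, test, feature="probability",
                         grp=("station", "day", "hours_binned"),
                         name="proba_mean_by_sta_day_binned_hour"):
    grp = list(grp)
    means = train.groupby(grp)[feature].mean().rename(name).reset_index()
    # Map the train-only means onto both datasets
    train = train.merge(means, on=grp, how="left")
    test = test.merge(means, on=grp, how="left")
    return train, test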
df_temp7 = result.copy()
df_temp7['ts'] = df_temp7.index
df_temp7 = df_temp7.sort_values(['station', 'ts'])
df_temp7.head()
df_temp7['prob_shift'] = df_temp7.groupby(['station'])['probability'].apply(lambda x: x.shift(1))
df_temp7['prob_diff'] = np.abs(df_temp7['prob_shift'] - df_temp7['probability'])
df_temp7.tail()
df_temp7[df_temp7.prob_diff >= 0.5].tail(6)
plot_features_station(df_temp7, station=1036, features_to_plot='prob_diff', nb_row=300, draw_type='-')
df_temp7[(df_temp7.station == 1036) & (df_temp7.ts >= '2017-09-26 02:30:00')].head(6)
Explanation: Detect anomalies in probability due to manual restocking
End of explanation
df_temp7['ano'] = 0
df_temp7.loc[df_temp7['prob_diff'] > 0.5, 'ano'] = 1
df_temp7[(df_temp7.station == 1036) & (df_temp7['ano']==1)][['prob_diff', 'day', 'hour', 'minute', 'bikes']].tail(50)
df_temp7[df_temp7['ano']==1].station.value_counts()
Explanation: Within 10 minutes at 03:00, 13 bikes were loaded here. This is impossible to predict, since it is the bike-share operator that restocks the station
End of explanation
lyon_meteo = pd.read_csv('../data/lyon_weather.csv', parse_dates=['date'])
lyon_meteo.rename(columns={'date':'ts'}, inplace=True)
lyon_meteo.head()
Explanation: Weather feature
Exact weather
End of explanation
lyon_forecast = pd.read_csv('../data/lyon_forecast.csv', parse_dates=['forecast_at', 'ts'])
lyon_forecast['delta'] = lyon_forecast['ts'] - lyon_forecast['forecast_at']
lyon_forecast.tail()
lyon_forecast[(lyon_forecast.rain_3h >= 1) & (lyon_forecast.delta == '1H')].tail()
lyon_forecast[(lyon_forecast.ts >= '2017-09-14 14:00:00') & (lyon_forecast.delta == '1H')].head(15)
Explanation: Forecast weather
End of explanation |
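The forecast is not joined to the features in this notebook; a minimal sketch of attaching the 1-hour-ahead forecast rows is given below (the selected columns and the merge key are assumptions, and result's index is assumed to be named 'ts').
import pandas as pd
one_hour_ahead = lyon_forecast[lyon_forecast.delta == pd.Timedelta("1H")]
weather_cols = ["ts", "rain_3h"]  # illustrative subset of the forecast columns
learning = result.reset_index().merge(one_hour_ahead[weather_cols], on="ts", how="left")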
10,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ToppGene & Pathway Visualization
Authors
Step1: Read in differential expression results as a Pandas data frame to get differentially expressed gene list
Step2: Translate Ensembl IDs to Gene Symbols and Entrez IDs using mygene.info API
Step3: Run ToppGene API
Include path for the input .xml file and path and name of the output .xml file.
Outputs all 17 features of ToppGene.
Step5: Parse ToppGene results into Pandas data frame
Step6: Display the dataframe of each ToppGene feature
Step7: Extract the KEGG pathway IDs from the ToppGene output (write to csv file)
Step8: Create dataframe that includes the KEGG IDs that correspond to the significant pathways outputted by ToppGene
Step9: Run Pathview to map and render user data on the pathway graphs outputted by ToppGene
Switch to R kernel here
Step10: Create matrix-like structure to contain entrez ID and log2FC for gene.data input
Step11: Create vector containing the KEGG IDs of all the significant target pathways
Step12: Display each of the signficant pathway colored overlay diagrams
Switch back to py27 kernel here
Step13: Weijun Luo and Cory Brouwer. Pathview | Python Code:
#Import Python modules
import os
import pandas
import qgrid
import mygene
#Change directory
os.chdir("/data/test")
Explanation: ToppGene & Pathway Visualization
Authors: N. Mouchamel, L. Huang, T. Nguyen, K. Fisch
Email: [email protected]
Date: June 2016
Goal: Create Jupyter notebook that runs an enrichment analysis in ToppGene through the API and runs Pathview to visualize the significant pathways outputted by ToppGene.
toppgene website: https://toppgene.cchmc.org/enrichment.jsp
Steps:
1. Read in differentially expressed gene list.
2. Convert differentially expressed gene list to xml file as input to ToppGene API.
3. Run enrichment analysis of DE genes through ToppGene API.
4. Parse ToppGene API results from xml to csv and Pandas data frame.
5. Display results in notebook.
6. Extract just the KEGG pathway IDs from the ToppGene output.
7. Manually switch from Python2 to R kernel.
8. Extract entrez ID and log2FC from the input DE genes.
9. Create vector of significant pathways from ToppGene.
10. Run Pathview (https://bioconductor.org/packages/release/bioc/html/pathview.html) in R to create colored pathway maps.
11. Manually switch from R kernel to Python2.
12. Display each of the significant pathway colored overlay diagrams in the jupyter notebook.
End of explanation
#Read in DESeq2 results
genes=pandas.read_csv("DE_genes.csv")
#View top of file
genes.head(10)
#Extract genes that are differentially expressed with a pvalue less than a certain cutoff (pvalue < 0.05 or padj < 0.05)
genes_DE_only = genes.loc[(genes.pvalue < 0.05)]
#View top of file
genes_DE_only.head(10)
#Check how many rows in original genes file
len(genes)
#Check how many rows in DE genes file
len(genes_DE_only)
Explanation: Read in differential expression results as a Pandas data frame to get differentially expressed gene list
End of explanation
#Extract list of DE genes (Check to make sure this code works, this was adapted from a different notebook)
de_list = genes_DE_only[genes_DE_only.columns[0]]
#Remove .* from end of Ensembl ID
de_list2 = de_list.replace("\.\d","",regex=True)
#Add new column with reformatted Ensembl IDs
genes_DE_only["Full_Ensembl"] = de_list2
#View top of file
genes_DE_only.head(10)
#Set up mygene.info API and query
mg = mygene.MyGeneInfo()
gene_ids = mg.getgenes(de_list2, 'name, symbol, entrezgene', as_dataframe=True)
gene_ids.index.name = "Ensembl"
gene_ids.reset_index(inplace=True)
#View top of file
gene_ids.head(10)
#Merge mygene.info query results with original DE genes list
DE_with_ids = genes_DE_only.merge(gene_ids, left_on="Full_Ensembl", right_on="Ensembl", how="outer")
#View top of file
DE_with_ids.head(10)
#Write results to file
DE_with_ids.to_csv("./DE_genes_converted.csv")
#Dataframe to only contain gene symbol
DE_with_ids=pandas.read_csv("./DE_genes_converted.csv")
cols = DE_with_ids.columns.tolist()
cols.insert(0, cols.pop(cols.index('symbol')))
for_xmlfile = DE_with_ids.reindex(columns= cols)
#Condense dataframe to contain only gene symbol
for_xmlfile.drop(for_xmlfile.columns[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,12,13,14]], axis=1, inplace=True)
#Exclude NaN values
for_xmlfile.dropna(axis=0, how='any', thresh=None, subset=None, inplace=True)
#View top of file
for_xmlfile.head(10)
#Write results to file
for_xmlfile.to_csv("./for_xmlfile.csv", index=False)
#.XML file generator from gene list in .csv file
import xml.etree.cElementTree as ET
import xml.etree.cElementTree as ElementTree
import lxml
#Root element of .xml "Tree"
root=ET.Element("requests")
#Title/identifier for the gene list inputted into ToppGene API
#Name it whatever you like
doc=ET.SubElement(root, "toppfun", id= "nicole's gene list")
config=ET.SubElement(doc, "enrichment-config")
gene_list=ET.SubElement(doc, "trainingset")
gene_list.set('accession-source','HGNC')
#For gene symbol in gene_list
#Parse through gene_list to create the .xml file
toppgene = pandas.read_csv("./for_xmlfile.csv")
for i in toppgene.ix[:,0]:
gene_symbol = i
gene = ET.SubElement(gene_list, "gene")
gene.text= gene_symbol
tree = ET.ElementTree(root)
#Function needed for proper indentation of the .xml file
def indent(elem, level=0):
i = "\n" + level*" "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
indent(elem, level+1)
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
indent(root)
import xml.dom.minidom
from lxml import etree
#File to write the .xml file to
#Include DOCTYPE
with open('/data/test/test.xml', 'w') as f:
f.write('<?xml version="1.0" encoding="UTF-8" ?><!DOCTYPE requests SYSTEM "https://toppgene.cchmc.org/toppgenereq.dtd">')
ElementTree.ElementTree(root).write(f, 'utf-8')
#Display .xml file
xml = xml.dom.minidom.parse('/data/test/test.xml')
pretty_xml_as_string = xml.toprettyxml()
print(pretty_xml_as_string)
Explanation: Translate Ensembl IDs to Gene Symbols and Entrez IDs using mygene.info API
End of explanation
!curl -v -H 'Content-Type: text/xml' --data @/data/test/test.xml -X POST https://toppgene.cchmc.org/api/44009585-27C5-41FD-8279-A5FE1C86C8DB > /data/test/testoutfile.xml
#Display output .xml file
import xml.dom.minidom
xml = xml.dom.minidom.parse("/data/test/testoutfile.xml")
pretty_xml_as_string = xml.toprettyxml()
print(pretty_xml_as_string)
Explanation: Run ToppGene API
Include path for the input .xml file and path and name of the output .xml file.
Outputs all 17 features of ToppGene.
End of explanation
import xml.dom.minidom
import pandas as pd
import numpy
#Parse through .xml file
def load_parse_xml(data_file):
Check if file exists. If file exists, load and parse the data file.
if os.path.isfile(data_file):
print "File exists. Parsing..."
data_parse = ET.ElementTree(file=data_file)
print "File parsed."
return data_parse
xmlfile = load_parse_xml("/data/test/testoutfile.xml")
#Generate array of annotation arrays for .csv file
root_tree = xmlfile.getroot()
gene_list=[]
for child in root_tree:
child.find("enrichment-results")
new_array = []
array_of_arrays=[]
for type in child.iter("enrichment-result"):
count = 0
for annotation in type.iter("annotation"):
array_of_arrays.append(new_array)
new_array = []
new_array.append(type.attrib['type'])
new_array.append(annotation.attrib['name'])
new_array.append(annotation.attrib['id'])
new_array.append(annotation.attrib['pvalue'])
new_array.append(annotation.attrib['genes-in-query'])
new_array.append(annotation.attrib['genes-in-term'])
new_array.append(annotation.attrib['source'])
for gene in annotation.iter("gene"):
gene_list.append(gene.attrib['symbol'])
new_array.append(gene_list)
gene_list =[]
count+= 1
print "Number of Annotations for ToppGene Feature - %s: " % type.attrib['type'] + str(count)
print "Total number of significant gene sets from ToppGene: " + str(len(array_of_arrays))
#print array_of_arrays
#Convert array of annotation arrays into .csv file (to be viewed as dataframe)
import pyexcel
data = array_of_arrays
pyexcel.save_as(array = data, dest_file_name = '/data/test/results.csv')
#Reading in the .csv ToppGene results
df=pandas.read_csv('/data/test/results.csv', header=None)
#Label dataframe columns
df.columns=['ToppGene Feature','Annotation Name','ID','pValue','Genes-in-Query','Genes-in-Term','Source','Genes']
Explanation: Parse ToppGene results into Pandas data frame
End of explanation
#Dataframe for GeneOntologyMolecularFunction
df.loc[df['ToppGene Feature'] == 'GeneOntologyMolecularFunction']
#Dataframe for GeneOntologyBiologicalProcess
df.loc[df['ToppGene Feature'] == 'GeneOntologyBiologicalProcess']
#Dataframe for GeneOntologyCellularComponent
df.loc[df['ToppGene Feature'] == 'GeneOntologyCellularComponent']
#Dataframe for Human Phenotype
df.loc[df['ToppGene Feature'] == 'HumanPheno']
#Dataframe for Mouse Phenotype
df.loc[df['ToppGene Feature'] == 'MousePheno']
#Dataframe for Domain
df.loc[df['ToppGene Feature'] == 'Domain']
#Dataframe for Pathways
df.loc[df['ToppGene Feature'] == 'Pathway']
#Dataframe for Pubmed
df.loc[df['ToppGene Feature'] == 'Pubmed']
#Dataframe for Interactions
df.loc[df['ToppGene Feature'] == 'Interaction']
#Dataframe for Cytobands
df.loc[df['ToppGene Feature'] == 'Cytoband']
#Dataframe for Transcription Factor Binding Sites
df.loc[df['ToppGene Feature'] == 'TranscriptionFactorBindingSite']
#Dataframe for Gene Family
df.loc[df['ToppGene Feature'] == 'GeneFamily']
#Dataframe for Coexpression
df.loc[df['ToppGene Feature'] == 'Coexpression']
#DataFrame for Coexpression Atlas
df.loc[df['ToppGene Feature'] == 'CoexpressionAtlas']
#Dataframe for Computational
df.loc[df['ToppGene Feature'] == 'Computational']
#Dataframe for MicroRNAs
df.loc[df['ToppGene Feature'] == 'MicroRNA']
#Dataframe for Drugs
df.loc[df['ToppGene Feature'] == 'Drug']
#Dataframe for Diseases
df.loc[df['ToppGene Feature'] == 'Disease']
Explanation: Display the dataframe of each ToppGene feature
End of explanation
#Number of significant KEGG pathways
total_KEGG_pathways = df.loc[df['Source'] == 'BioSystems: KEGG']
print "Number of significant KEGG pathways: " + str(len(total_KEGG_pathways.index))
df = df.loc[df['Source'] == 'BioSystems: KEGG']
df.to_csv('/data/test/keggpathways.csv', index=False)
mapping_df = pandas.read_csv('/data/test/KEGGmap.csv')
mapping_df = mapping_df.loc[mapping_df['Organism'] == 'Homo sapiens ']
mapping_df.head(10)
Explanation: Extract the KEGG pathway IDs from the ToppGene output (write to csv file)
End of explanation
#Create array of KEGG IDs that correspond to the significant pathways outputted by ToppGene
KEGG_ID_array = []
for ID in df.ix[:,2]:
x = int(ID)
for index,BSID in enumerate(mapping_df.ix[:,0]):
y = int(BSID)
if x == y:
KEGG_ID_array.append(mapping_df.get_value(index,1,takeable=True))
print KEGG_ID_array
#Transform array into KEGG ID dataframe
KEGG_IDs = pandas.DataFrame()
KEGG_IDs['KEGG ID'] = KEGG_ID_array
KEGG_IDs.to_csv('/data/test/keggidlist.csv', index=False)
no_KEGG_ID = pandas.read_csv('/data/test/keggpathways.csv')
KEGG_IDs = pandas.read_csv('/data/test/keggidlist.csv')
#Append KEGG ID dataframe to dataframe containing the significant pathways outputted by ToppGene
KEGG_ID_included = pd.concat([no_KEGG_ID, KEGG_IDs], axis = 1)
KEGG_ID_included.to_csv('/data/test/KEGG_ID_included.csv', index=False)
KEGG_ID_included
Explanation: Create dataframe that includes the KEGG IDs that correspond to the significant pathways outputted by ToppGene
End of explanation
#Set working directory
working_dir <- "/data/test"
setwd(working_dir)
date <- Sys.Date()
#Set R options
options(jupyter.plot_mimetypes = 'image/png')
options(useHTTPS=FALSE)
options(scipen=500)
#Load R packages from CRAN and Bioconductor
require(limma)
require(edgeR)
require(DESeq2)
require(RColorBrewer)
require(cluster)
library(gplots)
library(SPIA)
library(graphite)
library(PoiClaClu)
library(ggplot2)
library(pathview)
library(KEGG.db)
library(mygene)
library(splitstackshape)
library(reshape)
library(hwriter)
library(ReportingTools)
library("EnrichmentBrowser")
library(IRdisplay)
library(repr)
library(png)
Explanation: Run Pathview to map and render user data on the pathway graphs outputted by ToppGene
Switch to R kernel here
End of explanation
#Extract entrez ID and log2FC from the input DE genes
#Read in differential expression results as a Pandas data frame to get differentially expressed gene list
#Read in DE_genes_converted results (generated in jupyter notebook)
genes <- read.csv("DE_genes_converted.csv")[,c('entrezgene', 'log2FoldChange')]
#Remove NA values
genes<-genes[complete.cases(genes),]
head(genes,10)
#Transform data frame into matrix (gene.data in Pathview only takes in a matrix formatted data)
genedata<-matrix(c(genes[,2]),ncol=1,byrow=TRUE)
rownames(genedata)<-c(genes[,1])
colnames(genedata)<-c("log2FoldChange")
genedata <- as.matrix(genedata)
head(genedata,10)
Explanation: Create matrix-like structure to contain entrez ID and log2FC for gene.data input
End of explanation
#Read in pathways that you want to map to (from toppgene pathway results)
#Store as a vector
pathways <- read.csv("/data/test/keggidlist.csv")
head(pathways, 12)
pathways.vector<-as.vector(pathways$KEGG.ID)
pathways.vector
#Loop through all the pathways in pathways.vector
#Generate Pathview pathways for each one (native KEGG graphs)
i<-1
for (i in pathways.vector){
pv.out <- pathview(gene.data = genedata[, 1], pathway.id = i,
species = "hsa", out.suffix = "toppgene_native_kegg_graph", kegg.native = T)
#str(pv.out)
#head(pv.out$plot.data.gene)
}
#Loop through all the pathways in pathways.vector
#Generate Pathview pathways for each one (Graphviz layouts)
i<-1
for (i in pathways.vector){
pv.out <- pathview(gene.data = genedata[, 1], pathway.id = i,
species = "hsa", out.suffix = "toppgene_graphviz_layout", kegg.native = F)
str(pv.out)
head(pv.out$plot.data.gene)
#head(pv.out$plot.data.gene)
}
Explanation: Create vector containing the KEGG IDs of all the significant target pathways
End of explanation
#Display native KEGG graphs
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import pandas
%matplotlib inline
#for loop that iterates through the pathway images and displays them
pathways = pandas.read_csv("/data/test/keggidlist.csv")
pathways
for i in pathways.ix[:,0]:
image = i
address = "/data/test/%s.toppgene_native_kegg_graph.png" % image
img = mpimg.imread(address)
plt.imshow(img)
plt.gcf().set_size_inches(50,50)
print i
plt.show()
Explanation: Display each of the signficant pathway colored overlay diagrams
Switch back to py27 kernel here
End of explanation
#Import more python modules
import sys
#To access visJS_module and entrez_to_symbol module
sys.path.append(os.getcwd().replace('/data/test', '/data/CCBB_internal/interns/Lilith/PathwayViz'))
import visJS_module
from ensembl_to_entrez import entrez_to_symbol
import networkx as nx
import matplotlib.pyplot as plt
import pymongo
from itertools import islice
import requests
import math
import spectra
from bioservices.kegg import KEGG
import imp
imp.reload(visJS_module)
#Latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
% matplotlib inline
s = KEGG()
#Lowest p value pathway
#But you can change the first parameter in pathways.get_value to see different pathways in the pathways list!
pathway = pathways.get_value(0,0, takeable=True)
print pathway
address = "/data/test/%s.xml" % pathway
#Parse pathway's xml file and get the root of the xml file
tree = ET.parse(address)
root = tree.getroot()
res = s.parse_kgml_pathway(pathway)
print res['relations']
print res['entries']
G=nx.DiGraph()
#Add nodes to networkx graph
for entry in res['entries']:
G.add_node(entry['id'], entry )
print len(G.nodes(data=True))
#Get symbol of each node
temp_node_id_array = []
for node, data in G.nodes(data=True):
if data['type'] == 'gene':
if ' ' not in data['name']:
G.node[node]['symbol'] = data['gene_names'].split(',', 1)[0]
else:
result = data['name'].split("hsa:")
result = ''.join(result)
result = result.split()
for index, gene in enumerate(result):
if index == 0:
gene_symbol = str(entrez_to_symbol(gene))
else:
gene_symbol = gene_symbol + ', ' + str(entrez_to_symbol(gene))
G.node[node]['symbol'] = gene_symbol
elif data['type'] == 'compound':
gene_symbol = s.parse(s.get(data['name']))['NAME']
G.node[node]['gene_names'] = ' '.join(gene_symbol)
G.node[node]['symbol'] = gene_symbol[0].replace(';', '')
print G.nodes(data=True)
#Get x and y coordinates for each node
seen_coord = set()
coord_array = []
dupes_coord = []
for entry in root.findall('entry'):
node_id = entry.attrib['id']
graphics = entry.find('graphics')
if (graphics.attrib['x'], graphics.attrib['y']) in seen_coord:
G.node[node_id]['x'] = (int(graphics.attrib['x']) + .1) * 2.5
G.node[node_id]['y'] = (int(graphics.attrib['y']) + .1) * 2.5
seen_coord.add((G.node[node_id]['x'], G.node[node_id]['y']))
print node_id
else:
seen_coord.add((graphics.attrib['x'], graphics.attrib['y']))
G.node[node_id]['x'] = int(graphics.attrib['x']) * 2.5
G.node[node_id]['y'] = int(graphics.attrib['y']) * 2.5
print dupes_coord
print seen_coord
#Handle undefined nodes
comp_dict = dict()
node_to_comp = dict()
comp_array_total = [] #Array containing all component nodes
for entry in root.findall('entry'):
#Array to store components of undefined nodes
component_array = []
if entry.attrib['name'] == 'undefined':
node_id = entry.attrib['id']
#Find components
for index, component in enumerate(entry.iter('component')):
component_array.append(component.get('id'))
#Check to see which elements are components
comp_array_total.append(component.get('id'))
node_to_comp[component.get('id')] = node_id
#Store into node dictionary
G.node[node_id]['component'] = component_array
comp_dict[node_id] = component_array
#Store gene names
gene_name_array = []
for index, component_id in enumerate(component_array):
if index == 0:
gene_name_array.append(G.node[component_id]['gene_names'])
else:
gene_name_array.append('\n' + G.node[component_id]['gene_names'])
G.node[node_id]['gene_names'] = gene_name_array
#Store gene symbols
gene_symbol_array = []
for index, component_id in enumerate(component_array):
if index == 0:
gene_symbol_array.append(G.node[component_id]['symbol'])
else:
gene_symbol_array.append('\n' + G.node[component_id]['symbol'])
G.node[node_id]['symbol'] = gene_symbol_array
print G.node
edge_list = []
edge_pairs = []
#Add edges to networkx graph
#Redirect edges to point to undefined nodes containing components in order to connect graph
for edge in res['relations']:
source = edge['entry1']
dest = edge['entry2']
if (edge['entry1'] in comp_array_total) == True:
source = node_to_comp[edge['entry1']]
if (edge['entry2'] in comp_array_total) == True:
dest = node_to_comp[edge['entry2']]
edge_list.append((source, dest, edge))
edge_pairs.append((source,dest))
#Check for duplicates
if (source, dest) in G.edges():
name = []
value = []
link = []
name.append(G.edge[source][dest]['name'])
value.append(G.edge[source][dest]['value'])
link.append(G.edge[source][dest]['link'])
name.append(edge['name'])
value.append(edge['value'])
link.append(edge['link'])
G.edge[source][dest]['name'] = '\n'.join(name)
G.edge[source][dest]['value'] = '\n'.join(value)
G.edge[source][dest]['link'] = '\n'.join(link)
else:
G.add_edge(source, dest, edge)
print G.edges(data=True)
edge_to_name = dict()
for edge in G.edges():
edge_to_name[edge] = G.edge[edge[0]][edge[1]]['name']
print edge_to_name
#Set colors of edges
edge_to_color = dict()
for edge in G.edges():
if 'activation' in G.edge[edge[0]][edge[1]]['name']:
edge_to_color[edge] = 'green'
elif 'inhibition' in G.edge[edge[0]][edge[1]]['name']:
edge_to_color[edge] = 'red'
else:
edge_to_color[edge] = 'blue'
print edge_to_color
#Remove component nodes from graph
G.remove_nodes_from(comp_array_total)
#Get nodes in graph
nodes = G.nodes()
numnodes = len(nodes)
print numnodes
print G.node
#Get symbol of nodes
node_to_symbol = dict()
for node in G.node:
if G.node[node]['type'] == 'map':
node_to_symbol[node] = G.node[node]['gene_names']
else:
if 'symbol' in G.node[node]:
node_to_symbol[node] = G.node[node]['symbol']
elif 'gene_names'in G.node[node]:
node_to_symbol[node] = G.node[node]['gene_names']
else:
node_to_symbol[node] = G.node[node]['name']
#Get name of nodes
node_to_gene = dict()
for node in G.node:
node_to_gene[node] = G.node[node]['gene_names']
#Get x coord of nodes
node_to_x = dict()
for node in G.node:
node_to_x[node] = G.node[node]['x']
#Get y coord of nodes
node_to_y = dict()
for node in G.node:
node_to_y[node] = G.node[node]['y']
#Log2FoldChange
DE_genes_df = pandas.read_csv("/data/test/DE_genes_converted.csv")
DE_genes_df.head(10)
short_df = DE_genes_df[['_id', 'Ensembl', 'log2FoldChange']]
short_df.head(10)
short_df.to_dict('split')
#Remove NA values
gene_to_log2fold = dict()
for entry in short_df.to_dict('split')['data']:
if isinstance(entry[0], float):
if math.isnan(entry[0]):
gene_to_log2fold[entry[1]] = entry[2]
else:
gene_to_log2fold[entry[0]] = entry[2]
else:
gene_to_log2fold[entry[0]] = entry[2]
print gene_to_log2fold
#Create color scale with negative as green and positive as red
my_scale = spectra.scale([ "green", "#CCC", "red" ]).domain([ -4, 0, 4 ])
id_to_log2fold = dict()
for node in res['entries']:
log2fold_array = []
if node['name'] == 'undefined':
print 'node is undefined'
elif node['type'] == 'map':
print 'node is a pathway'
else:
#print node['name']
result = node['name'].split("hsa:")
result = ''.join(result)
result = result.split()
#print result
for item in result:
if item in gene_to_log2fold.keys():
log2fold_array.append(gene_to_log2fold[item])
if len(log2fold_array) > 0:
id_to_log2fold[node['id']] = log2fold_array
print id_to_log2fold
#Color nodes based on log2fold data
node_to_color = dict()
for node in G.nodes():
if node in id_to_log2fold:
node_to_color[node] = my_scale(id_to_log2fold[node][0]).hexcode
else:
node_to_color[node] = '#f1f1f1'
print node_to_color
#Get number of edges in graph
edges = G.edges()
numedges = len(edges)
print numedges
print G.edges(data=True)
#Change directory
os.chdir("/data/CCBB_internal/interns/Nicole/ToppGene")
#Map to indices for source/target in edges
node_map = dict(zip(nodes,range(numnodes)))
#Dictionaries that hold per node and per edge attributes
nodes_dict = [{"id":node_to_gene[n],"degree":G.degree(n),"color":node_to_color[n], "node_shape":"box",
"node_size":10,'border_width':1, "id_num":node_to_symbol[n], "x":node_to_x[n], "y":node_to_y[n]} for n in nodes]
edges_dict = [{"source":node_map[edges[i][0]], "target":node_map[edges[i][1]],
"color":edge_to_color[edges[i]], "id":edge_to_name[edges[i]], "edge_label":'',
"hidden":'false', "physics":'true'} for i in range(numedges)]
#HTML file label for first graph (must manually increment later)
time = 1700
#Make edges thicker
#Create and display the graph here
visJS_module.visjs_network(nodes_dict, edges_dict, time_stamp = time, node_label_field = "id_num",
edge_width = 3, border_color = "black", edge_arrow_to = True, edge_font_size = 15, edge_font_align= "top",
physics_enabled = False, graph_width = 1000, graph_height = 1000)
Explanation: Weijun Luo and Cory Brouwer. Pathview: an R/Bioconductor package for pathway-based data integration and visualization.
Bioinformatics, 29(14):1830-1831, 2013. doi: 10.1093/bioinformatics/btt285.
Implement KEGG_pathway_vis Jupyter Notebook (by L. Huang)
Only works for one pathway (first one)
End of explanation |
10,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 1
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
# Reference : Intro to Tensor Flow - Min-Max scaling for grayscale image data
a = 0
b = 1
grayscale_min = 0
grayscale_max = 255
return a + ( ( (x - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
from sklearn import preprocessing
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
# Reference: Intro to tensor flow - One Hot Encoding
# print (x)
# Create the encoder
lb = preprocessing.LabelBinarizer()
# Here the encoder finds the classes and assigns one-hot vectors
lb.fit([0,1,2,3,4,5,6,7,8,9])
# And finally, transform the labels into one-hot encoded vectors
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, [None, image_shape[0],image_shape[1],image_shape[2]] , name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, [None, n_classes], name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
print ("ConvMax In", x_tensor.get_shape())
x_depth = x_tensor.get_shape().as_list()[-1]
weight= tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_depth, conv_num_outputs],stddev=0.1))
bias = tf.Variable(tf.random_normal([conv_num_outputs]))
conv = tf.nn.conv2d(x_tensor, weight, [1, conv_strides[0], conv_strides[1], 1], 'SAME')
conv = tf.nn.bias_add(conv, bias)
conv = tf.nn.relu(conv)
conv = tf.nn.max_pool(conv,[1, pool_ksize[0], pool_ksize[1], 1],[1, pool_strides[0], pool_strides[1], 1],'SAME')
print ("ConvMax Out", conv.get_shape())
return conv
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
x_shape = x_tensor.get_shape().as_list()
print ("Flatten In", x_shape)
batch_size = x_shape[0] if x_shape[0] != None else -1
flattened_image_size = 1
for i in range(1, len(x_shape)):
flattened_image_size = flattened_image_size * x_shape[i]
ret = tf.reshape(x_tensor, (batch_size, flattened_image_size))
print ("Flatten Out", ret.get_shape())
return ret
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x_shape = x_tensor.get_shape().as_list()
print ("FullyConn In", x_shape)
weights = tf.Variable(tf.truncated_normal([x_shape[-1], num_outputs], stddev = 0.1))
bias = tf.Variable(tf.random_normal([num_outputs]))
ret_tf = tf.add(tf.matmul(x_tensor, weights), bias)
ret_tf = tf.nn.relu(ret_tf)
print ("FullyConn Out", ret_tf.get_shape())
return ret_tf
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x_shape = x_tensor.get_shape().as_list()
print ("Output In", x_shape)
weights = tf.Variable(tf.truncated_normal([x_shape[-1], num_outputs], stddev = 0.1))
bias = tf.Variable(tf.random_normal([num_outputs]))
ret_tf = tf.add(tf.matmul(x_tensor, weights), bias)
#ret_tf = tf.nn.relu(ret_tf)
print ("Output Out", ret_tf.get_shape())
return ret_tf
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
convmax = conv2d_maxpool(x, 64, (8,8), (2,2), (4,4), (1,1))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(convmax)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fullyconn = fully_conn(flat, 600)
drop = tf.nn.dropout(fullyconn, keep_prob)
fullyconn = fully_conn(drop, 80)
drop = tf.nn.dropout(fullyconn, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
return output(drop, 10)
# TODO: return output
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
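The build cell above also relies on helpers defined earlier in the notebook (conv2d_maxpool and the neural_net_*_input placeholder functions) that are not repeated here. Purely for orientation, minimal sketches consistent with how they are called might look like the following; the initialisation details and 'SAME' padding are assumptions, not necessarily the notebook's exact choices:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    # Convolution + bias + ReLU, followed by max pooling (illustrative sketch).
    depth = x_tensor.get_shape().as_list()[-1]
    weights = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], depth, conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros([conv_num_outputs]))
    conv = tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], 'SAME')
    conv = tf.nn.relu(tf.nn.bias_add(conv, bias))
    return tf.nn.max_pool(conv, [1, pool_ksize[0], pool_ksize[1], 1],
                          [1, pool_strides[0], pool_strides[1], 1], 'SAME')

def neural_net_image_input(image_shape):
    # Placeholder for a batch of images, named 'x' so it can be reloaded later.
    return tf.placeholder(tf.float32, [None] + list(image_shape), name='x')

def neural_net_label_input(n_classes):
    # Placeholder for one-hot labels, named 'y'.
    return tf.placeholder(tf.float32, [None, n_classes], name='y')

def neural_net_keep_prob_input():
    # Placeholder for the dropout keep probability, named 'keep_prob'.
    return tf.placeholder(tf.float32, name='keep_prob')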
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss = {0} and Validation Accuracy = {1}'.format(loss, valid_accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 45
batch_size = 512
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
print (epochs)
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
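The helper module used by the training loop above (and by the full training below) ships with the project and is not shown in this notebook. For orientation, the two functions relied on here behave roughly as follows; the pickle file naming is an assumption, so treat this as a sketch of the expected behaviour rather than the project's exact code:
import pickle

def batch_features_labels(features, labels, batch_size):
    # Yield the features and labels in mini-batches of at most batch_size.
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]

def load_preprocess_training_batch(batch_id, batch_size):
    # Load one preprocessed CIFAR-10 batch from disk and return it in mini-batches.
    filename = 'preprocess_batch_' + str(batch_id) + '.p'
    features, labels = pickle.load(open(filename, mode='rb'))
    return batch_features_labels(features, labels, batch_size)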
# DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """Test the saved model against the test dataset"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
10,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook contains the code from the original and adds a section to produce animations (which I believe was originally in there, but may have gone missing at some point).
DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science
Step1: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
Step8: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this
Step9: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
Step10: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
Step14: Let's load some image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works
Step16: Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an example of running the flower dream over the bigger image.
The DeepDream notebook contains code with many more options to explore. You can guide the dreaming towards a specific image, or repeat it endlessly to produce dreamier dreams. If you're very patient, you can even make videos. | Python Code:
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
Explanation: This notebook contains the code from the original and adds a section to produce animations (which I believe was originally in there, but may have gone missing at some point).
DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see GoogLeNet and VGG16 galleries)
embed TensorBoard graph visualizations into Jupyter notebooks
produce high-resolution images with tiled computation (example)
use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the GoogLeNet architecture, trained to classify images into one of 1000 categories of the ImageNet dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow to make these visualizations both efficient to generate and even beautiful. Impatient readers can start with exploring the full galleries of images generated by the method described here for GoogLeNet and VGG16 architectures.
End of explanation
# if you have wget installed:
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
# if not, perhaps you have curl
#!curl -O https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
Explanation: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network:
End of explanation
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
                tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
    iframe = """
        <iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
Explanation: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
End of explanation
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
Explanation: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
End of explanation
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
Explanation: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
End of explanation
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
Explanation: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
End of explanation
render_lapnorm(T(layer)[:,:,:,65])
Explanation: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
End of explanation
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
Explanation: Lower layers produce features of lower complexity.
End of explanation
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
Explanation: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
End of explanation
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
return img
Explanation: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
End of explanation
img0 = PIL.Image.open('deer.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
_ = render_deepdream(tf.square(T('mixed4c')), img0)
Explanation: Let's load some image and populate it with DogSlugs (in case you've missed them).
End of explanation
_ = render_deepdream(T(layer)[:,:,:,139], img0)
Explanation: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
End of explanation
frame = img0
h, w = frame.shape[:2]
s = 0.05 # scale coefficient
for i in range(100):
frame = render_deepdream(tf.square(T('mixed4c')), img0=frame)
img = PIL.Image.fromarray(np.uint8(np.clip(frame, 0, 255)))
img.save("dream-%04d.jpg"%i)
# Zoom in while maintaining size
img = img.resize(np.int32([w*(1+s), h*(1+s)]))
t, l = np.int32([h*(1+s) * s / 2, w*(1+s) * s / 2])
img = img.crop([l, t, w-l, h-t])
img.load()
# print (img.size)
frame = np.float32(img)
Explanation: Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an example of running the flower dream over the bigger image.
The DeepDream notebook contains code with many more options to explore. You can guide the dreaming towards a specific image, or repeat it endlessly to produce dreamier dreams. If you're very patient, you can even make videos.
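The loop above only writes numbered JPEG frames to disk; turning them into an actual video is left to an external tool. One possible way to do that (an illustration only, assuming ffmpeg is installed and using the dream-%04d.jpg naming from the loop; the frame rate and output name are arbitrary choices):
import subprocess
# Stitch the saved frames into an mp4 at 12 frames per second.
subprocess.call(["ffmpeg", "-y", "-framerate", "12", "-i", "dream-%04d.jpg",
                 "-c:v", "libx264", "-pix_fmt", "yuv420p", "deepdream.mp4"])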
End of explanation |
10,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Basic Model
In this example application it is shown how a simple time series model can be developed to simulate groundwater levels. The recharge (calculated as precipitation minus evaporation) is used as the explanatory time series.
Step1: 1. Importing the dependent time series data
In this codeblock a time series of groundwater levels is imported using the read_csv function of pandas. As pastas expects a pandas Series object, the data is squeezed. To check if you have the correct data type (a pandas Series object), you can use type(oseries) as shown below.
The following characteristics are important when importing and preparing the observed time series
Step2: 2. Import the independent time series
Two explanatory series are used
Step3: 3. Create the time series model
In this code block the actual time series model is created. First, an instance of the Model class is created (named ml here). Second, the different components of the time series model are created and added to the model. The imported time series are automatically checked for missing values and other inconsistencies. The keyword argument fillnan can be used to determine how missing values are handled. If any nan-values are found this will be reported by pastas.
Step4: 4. Solve the model
The next step is to compute the optimal model parameters. The default solver uses a non-linear least squares method for the optimization. The python package scipy is used (info on scipy's least_squares solver can be found here). Some standard optimization statistics are reported along with the optimized parameter values and correlations.
Step5: 5. Plot the results
The solution can be plotted after a solution has been obtained.
Step6: 6. Advanced plotting
There are many ways to further explore the time series model. pastas has some built-in functionalities that will provide the user with a quick overview of the model. The plots subpackage contains all the options. One of these is the method plots.results which provides a plot with more information.
Step7: 7. Statistics
The stats subpackage includes a number of statistical functions that may be applied to the model. One of them is the summary method, which gives a summary of the main statistics of the model.
Step8: 8. Improvement | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import pastas as ps
ps.show_versions()
Explanation: A Basic Model
In this example application it is shown how a simple time series model can be developed to simulate groundwater levels. The recharge (calculated as precipitation minus evaporation) is used as the explanatory time series.
End of explanation
# Import groundwater time series and squeeze to Series object
gwdata = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
print('The data type of the oseries is: %s' % type(gwdata))
# Plot the observed groundwater levels
gwdata.plot(style='.', figsize=(10, 4))
plt.ylabel('Head [m]');
plt.xlabel('Time [years]');
Explanation: 1. Importing the dependent time series data
In this codeblock a time series of groundwater levels is imported using the read_csv function of pandas. As pastas expects a pandas Series object, the data is squeezed. To check if you have the correct data type (a pandas Series object), you can use type(oseries) as shown below.
The following characteristics are important when importing and preparing the observed time series:
- The observed time series are stored as a pandas Series object.
- The time step can be irregular.
End of explanation
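A quick, optional way to confirm these points for the imported series (not part of the original notebook, just a sketch using plain pandas) is:
# Optional sanity checks on the imported head series.
print(type(gwdata))  # should be a pandas Series
print(gwdata.index.dtype)  # should be a datetime64 index
print('Number of missing values:', gwdata.isna().sum())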
# Import observed precipitation series
precip = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
print('The data type of the precip series is: %s' % type(precip))
# Import observed evaporation series
evap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
print('The data type of the evap series is: %s' % type(evap))
# Calculate the recharge to the groundwater
recharge = precip - evap
print('The data type of the recharge series is: %s' % type(recharge))
# Plot the time series of the precipitation and evaporation
plt.figure()
recharge.plot(label='Recharge', figsize=(10, 4))
plt.xlabel('Time [years]')
plt.ylabel('Recharge (m/year)');
Explanation: 2. Import the independent time series
Two explanatory series are used: the precipitation and the potential evaporation. These need to be pandas Series objects, as for the observed heads.
Important characteristics of these time series are:
- All series are stored as pandas Series objects.
- The series may have irregular time intervals, but then it will be converted to regular time intervals when creating the time series model later on.
- It is preferred to use the same length units as for the observed heads.
End of explanation
# Create a model object by passing it the observed series
ml = ps.Model(gwdata, name="GWL")
# Add the recharge data as explanatory variable
sm = ps.StressModel(recharge, ps.Gamma, name='recharge', settings="evap")
ml.add_stressmodel(sm)
Explanation: 3. Create the time series model
In this code block the actual time series model is created. First, an instance of the Model class is created (named ml here). Second, the different components of the time series model are created and added to the model. The imported time series are automatically checked for missing values and other inconsistencies. The keyword argument fillnan can be used to determine how missing values are handled. If any nan-values are found this will be reported by pastas.
End of explanation
ml.solve()
Explanation: 4. Solve the model
The next step is to compute the optimal model parameters. The default solver uses a non-linear least squares method for the optimization. The python package scipy is used (info on scipy's least_squares solver can be found here). Some standard optimization statistics are reported along with the optimized parameter values and correlations.
End of explanation
ml.plot()
Explanation: 5. Plot the results
The solution can be plotted after a solution has been obtained.
End of explanation
ml.plots.results(figsize=(10, 6))
Explanation: 6. Advanced plotting
There are many ways to further explore the time series model. pastas has some built-in functionalities that will provide the user with a quick overview of the model. The plots subpackage contains all the options. One of these is the method plots.results which provides a plot with more information.
End of explanation
ml.stats.summary()
Explanation: 7. Statistics
The stats subpackage includes a number of statistical functions that may be applied to the model. One of them is the summary method, which gives a summary of the main statistics of the model.
End of explanation
# Create a model object by passing it the observed series
ml2 = ps.Model(gwdata)
# Add the recharge data as explanatory variable
ts1 = ps.RechargeModel(precip, evap, ps.Gamma, name='rainevap',
recharge=ps.rch.Linear(), settings=("prec", "evap"))
ml2.add_stressmodel(ts1)
# Solve the model
ml2.solve()
# Plot the results
ml2.plot()
# Statistics
ml2.stats.summary()
Explanation: 8. Improvement: estimate evaporation factor
In the previous model, the recharge was estimated as precipitation minus potential evaporation. A better model is to estimate the actual evaporation as a factor (called the evaporation factor here) times the potential evaporation. First, a new model is created (called ml2 here so that the original model ml does not get overwritten). Second, the RechargeModel object with a Linear recharge model is created, which combines the precipitation and evaporation series and adds a parameter for the evaporation factor f. The RechargeModel object is added to the model, the model is solved, and the results and statistics are plotted to the screen. Note that the new model gives a better fit (lower root mean squared error and higher explained variance), but that the Akaike information criterion indicates that the addition of the additional parameter does not improve the model significantly (the Akaike criterion for model ml2 is higher than for model ml).
End of explanation |
10,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a notebook to aid in the development of the market simulator. One initial version was created as part of the Machine Learning for Trading course. It has to be adapted for use in the Capstone project.
Step1: To use the market simulator with the q-learning agent it must be possible to call it with custom data, stored in RAM. Let's try that.
Step2: That function has many of the desired characteristics, but doesn't follow the dynamics necessary for the interaction with the agent. The solution will be to implement a new class, called Portfolio, that will accept orders, keep track of the positions and return their values when asked for.
Step3: Let's test the Portfolio class
Step4: Let's add a leverage limit of 2
Step5: Let's buy less than the limit
Step6: Now, let's buy more than the limit
Step7: The last order wasn't executed because the leverage limit was reached. That's good.
Let's now go short on AAPL, but less than the limit
Step8: Now, the same, but this time let's pass the limit.
Step9: Nothing happened because the leverage limit was reached. That's ok.
Step10: Let's try to buy GOOG before it entered the market...
Step11: Ok, nothing happened. That's correct.
Now, let's add some years and try to buy GOOG again...
Step12: Good. This time GOOG was bought!
What about the leverage? | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
from utils import analysis
Explanation: This is a notebook to aid in the development of the market simulator. One initial version was created as part of the Machine Learning for Trading course. It has to be adapted for use in the Capstone project.
End of explanation
from utils import marketsim as msim
orders_path = '../../data/orders/orders-my-leverage.csv'
orders_df = pd.read_csv(orders_path, index_col='Date', parse_dates=True, na_values=['nan'])
orders_df
data_df = pd.read_pickle('../../data/data_df.pkl')
port_vals_df, values = msim.simulate_orders(orders_df, data_df)
port_vals_df.plot()
values
analysis.value_eval(port_vals_df, graph=True, verbose=True, data_df=data_df)
Explanation: To use the market simulator with the q-learning agent it must be possible to call it with custom data, stored in RAM. Let's try that.
End of explanation
'AAPL' in data_df.columns.tolist()
data_df.index.get_level_values(0)[0]
symbols = data_df.columns.get_level_values(0).tolist()
symbols.append('CASH')
positions_df = pd.DataFrame(index=symbols, columns=['shares', 'value'])
positions_df
close_df = data_df.xs('Close', level='feature')
close_df.head()
current_date = close_df.index[-1]
current_date
positions_df['shares'] = np.zeros(positions_df.shape[0])
positions_df.loc['CASH','shares'] = 1000
positions_df
SHARES = 'shares'
VALUE = 'value'
CASH = 'CASH'
prices = close_df.loc[current_date]
prices[CASH] = 1.0
positions_df[VALUE] = positions_df[SHARES] * prices
positions_df
ORDER_SYMBOL = 'symbol'
ORDER_ORDER = 'order'
ORDER_SHARES = 'shares'
BUY = 'BUY'
SELL = 'SELL'
NOTHING = 'NOTHING'
order = pd.Series(['AAPL', BUY, 200], index=[ORDER_SYMBOL, ORDER_ORDER, ORDER_SHARES])
order
if order[ORDER_ORDER] == 'BUY':
positions_df.loc[order[ORDER_SYMBOL], SHARES] += order[ORDER_SHARES]
positions_df.loc[CASH, SHARES] -= order[ORDER_SHARES] * close_df.loc[current_date, order[ORDER_SYMBOL]]
if order[ORDER_ORDER] == 'SELL':
positions_df.loc[order[ORDER_SYMBOL], SHARES] -= order[ORDER_SHARES]
positions_df.loc[CASH, SHARES] += order[ORDER_SHARES] * close_df.loc[current_date, order[ORDER_SYMBOL]]
positions_df[VALUE] = positions_df[SHARES] * prices
positions_df.loc['AAPL']
positions_df.loc[CASH]
close_df.loc[current_date, 'AAPL']
116*200
positions_df[VALUE].iloc[:-1]
values = positions_df[VALUE]
leverage = np.sum(np.abs(values.iloc[:-1])) / (np.sum(values))
leverage
Explanation: That function has many of the desired characteristics, but doesn't follow the dynamics necessary for the interaction with the agent. The solution will be to implement a new class, called Portfolio, that will accept orders, keep track of the positions and return their values when asked for.
End of explanation
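The Portfolio and Order classes used below live in the recommender package and are not reproduced in this notebook. Purely as an illustration of the interface exercised in the following cells (execute_order, add_market_days, get_positions, the leverage limit, and so on), a rough sketch, inferred from those calls and from the prototype cells above and reusing the SHARES/VALUE/CASH and ORDER_* constants defined earlier, could look like this; the initial cash and default arguments are assumptions, so treat it as a sketch rather than the actual implementation:
def Order(values):
    # Minimal stand-in: an order is just a labelled Series, as in the prototype above.
    return pd.Series(values, index=[ORDER_SYMBOL, ORDER_ORDER, ORDER_SHARES])

class Portfolio(object):
    """Illustrative sketch only; the real class lives in recommender/portfolio.py."""
    def __init__(self, data_df, initial_cash=1000, leverage_limit=None):
        self.close_df = data_df.xs('Close', level='feature')
        self.current_date = self.close_df.index[0]
        self.leverage_limit = leverage_limit
        symbols = list(self.close_df.columns) + [CASH]
        self.positions_df = pd.DataFrame(0.0, index=symbols, columns=[SHARES, VALUE])
        self.positions_df.loc[CASH, SHARES] = initial_cash
        self.update_values()

    def update_values(self):
        prices = self.close_df.loc[self.current_date].copy()
        prices[CASH] = 1.0
        self.positions_df[VALUE] = self.positions_df[SHARES] * prices

    def get_positions(self):
        self.update_values()
        return self.positions_df[self.positions_df[SHARES] != 0]

    def get_leverage(self):
        self.update_values()
        values = self.positions_df[VALUE]
        return np.sum(np.abs(values.drop(CASH))) / np.sum(values)

    def my_leverage_reached(self):
        return self.leverage_limit is not None and self.get_leverage() >= self.leverage_limit

    def add_market_days(self, n):
        dates = self.close_df.index
        i = min(max(dates.get_loc(self.current_date) + n, 0), len(dates) - 1)
        self.current_date = dates[i]
        self.update_values()

    def execute_order(self, order):
        price = self.close_df.loc[self.current_date, order[ORDER_SYMBOL]]
        if pd.isnull(price):  # the symbol is not trading on this date
            return
        sign = 1 if order[ORDER_ORDER] == BUY else -1
        backup = self.positions_df.copy()
        self.positions_df.loc[order[ORDER_SYMBOL], SHARES] += sign * order[ORDER_SHARES]
        self.positions_df.loc[CASH, SHARES] -= sign * order[ORDER_SHARES] * price
        self.update_values()
        if self.my_leverage_reached():  # roll back orders that would break the limit
            self.positions_df = backup
            self.update_values()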
from recommender.portfolio import Portfolio
p = Portfolio(data_df)
from recommender.order import Order
o1 = Order(['AAPL', BUY, 150])
print(o1)
p.positions_df
p.positions_df.loc['AAPL']
p.execute_order(o1)
p.positions_df.loc[['AAPL','CASH']]
p.add_market_days(1)
p.current_date
p.positions_df.loc[['AAPL', CASH]]
p.add_market_days(1)
p.current_date
p.positions_df.loc[['AAPL', CASH]]
p.positions_df[VALUE].sum()
p.execute_order(Order(['AAPL',SELL,100]))
p.positions_df[p.positions_df[SHARES] != 0]
Explanation: Let's test the Portfolio class
End of explanation
p.execute_order(Order(['MSFT',BUY,120]))
p.get_positions()
p.leverage_limit = 2
Explanation: Let's add a leverage limit of 2
End of explanation
p.execute_order(Order(['AAPL',BUY, 10]))
p.get_positions()
Explanation: Let's buy less than the limit
End of explanation
p.execute_order(Order(['AAPL',BUY, 5000]))
p.get_positions()
Explanation: Now, let's buy more than the limit
End of explanation
p.execute_order(Order(['AAPL',SELL, 300]))
p.get_positions()
Explanation: The last order wasn't executed because the leverage limit was reached. That's good.
Let's now go short on AAPL, but less than the limit
End of explanation
p.execute_order(Order(['AAPL',SELL, 3000]))
p.get_positions()
Explanation: Now, the same, but this time let's pass the limit.
End of explanation
pos = p.get_positions()
pos[VALUE].sum()
p.add_market_days(1000)
p.get_positions()
p.add_market_days(6000)
p.get_positions()
p.get_positions()[VALUE].sum()
p.add_market_days(-7000) # Back in time...
p.get_positions()
p.current_date
Explanation: Nothing happened because the leverage limit was reached. That's ok.
End of explanation
p.close_df.loc[p.current_date, 'GOOG']
p.execute_order(Order(['GOOG', BUY, 100]))
p.get_positions()
Explanation: Let's try to buy GOOG before it entered the market...
End of explanation
# I need to add some cash, because I lost a lot of money shorting AAPL in the last 20 years, and I need to meet the leverage limits.
p.positions_df.loc[CASH, SHARES] = 100000
p.update_values()
p.add_market_days(7200)
p.execute_order(Order(['GOOG', BUY, 100]))
p.get_positions()
Explanation: Ok, nothing happened. That's correct.
Now, let's add some years and try to buy GOOG again...
End of explanation
p.leverage_limit
p.my_leverage_reached()
p.get_leverage()
Explanation: Good. This time GOOG was bought!
What about the leverage?
End of explanation |
10,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forecasting I
Step1: Intro to Pyro's forecasting framework
Pyro's forecasting framework consists of
Step2: Let's start with a simple log-linear regression model, with no trend or seasonality. Note that while this example is univariate, Pyro's forecasting framework is multivariate, so we'll often need to reshape using .unsqueeze(-1), .expand([1]), and .to_event(1).
Step3: We can now train this model by creating a Forecaster object. We'll split the data into [T0,T1) for training and [T1,T2) for testing.
Step4: Next we can evaluate by drawing posterior samples from the forecaster, passing in full covariates but only partial data. We'll use Pyro's quantile() function to plot median and an 80% confidence interval. To evaluate fit we'll use eval_crps() to compute Continuous Ranked Probability Score; this is a good metric to assess distributional fit of a heavy-tailed distribution.
Step5: Zooming in to just the forecasted region, we see this model ignores seasonal behavior.
Step6: We could add a yearly seasonal component simply by adding new covariates (note we've already taken care in the model to handle feature_dim > 1).
Step7: Time-local random variables
Step8: Heavy-tailed noise
Our final univariate model will generalize from Gaussian noise to heavy-tailed Stable noise. The only difference is the noise_dist which now takes two new parameters
Step9: Backtesting
To compare our Gaussian Model2 and Stable Model3 we'll use a simple backtesting() helper. This helper by default evaluates three metrics | Python Code:
import torch
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from pyro.contrib.examples.bart import load_bart_od
from pyro.contrib.forecast import ForecastingModel, Forecaster, backtest, eval_crps
from pyro.infer.reparam import LocScaleReparam, StableReparam
from pyro.ops.tensor_utils import periodic_cumsum, periodic_repeat, periodic_features
from pyro.ops.stats import quantile
import matplotlib.pyplot as plt
%matplotlib inline
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(20200221)
dataset = load_bart_od()
print(dataset.keys())
print(dataset["counts"].shape)
print(" ".join(dataset["stations"]))
Explanation: Forecasting I: univariate, heavy tailed
This tutorial introduces the pyro.contrib.forecast module, a framework for forecasting with Pyro models. This tutorial covers only univariate models and simple likelihoods. This tutorial assumes the reader is already familiar with SVI and tensor shapes.
See also:
Forecasting II: state space models
Forecasting III: hierarchical models
Summary
To create a forecasting model:
Create a subclass of the ForecastingModel class.
Implement the .model(zero_data, covariates) method using standard Pyro syntax.
Sample all time-local variables inside the self.time_plate context.
Finally call the .predict(noise_dist, prediction) method.
To train a forecasting model, create a Forecaster object.
Training can be flaky, you'll need to tune hyperparameters and randomly restart.
Reparameterization can help learning, e.g. LocScaleReparam.
To forecast the future, draw samples from a Forecaster object conditioned on data and covariates.
To model seasonality, use helpers periodic_features(), periodic_repeat(), and periodic_cumsum().
To model heavy-tailed data, use Stable distributions and StableReparam.
To evaluate results, use the backtest() helper or low-level loss functions.
End of explanation
T, O, D = dataset["counts"].shape
data = dataset["counts"][:T // (24 * 7) * 24 * 7].reshape(T // (24 * 7), -1).sum(-1).log()
data = data.unsqueeze(-1)
plt.figure(figsize=(9, 3))
plt.plot(data)
plt.title("Total weekly ridership")
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, len(data));
Explanation: Intro to Pyro's forecasting framework
Pyro's forecasting framework consists of:
- a ForecastingModel base class, whose .model() method can be implemented for custom forecasting models,
- a Forecaster class that trains and forecasts using ForecastingModels, and
- a backtest() helper to evaluate models on a number of metrics.
Consider a simple univariate dataset, say weekly BART train ridership aggregated over all stations in the network. This data is roughly logarithmic, so we log-transform for modeling.
End of explanation
# First we need some boilerplate to create a class and define a .model() method.
class Model1(ForecastingModel):
# We then implement the .model() method. Since this is a generative model, it shouldn't
# look at data; however it is convenient to see the shape of data we're supposed to
# generate, so this inputs a zeros_like(data) tensor instead of the actual data.
def model(self, zero_data, covariates):
data_dim = zero_data.size(-1) # Should be 1 in this univariate tutorial.
feature_dim = covariates.size(-1)
# The first part of the model is a probabilistic program to create a prediction.
# We use the zero_data as a template for the shape of the prediction.
bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
prediction = bias + (weight * covariates).sum(-1, keepdim=True)
# The prediction should have the same shape as zero_data (duration, obs_dim),
# but may have additional sample dimensions on the left.
assert prediction.shape[-2:] == zero_data.shape
# The next part of the model creates a likelihood or noise distribution.
# Again we'll be Bayesian and write this as a probabilistic program with
# priors over parameters.
noise_scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
noise_dist = dist.Normal(0, noise_scale)
# The final step is to call the .predict() method.
self.predict(noise_dist, prediction)
Explanation: Let's start with a simple log-linear regression model, with no trend or seasonality. Note that while this example is univariate, Pyro's forecasting framework is multivariate, so we'll often need to reshape using .unsqueeze(-1), .expand([1]), and .to_event(1).
End of explanation
T0 = 0 # beginning
T2 = data.size(-2) # end
T1 = T2 - 52 # train/test split
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = torch.stack([time], dim=-1)
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
Explanation: We can now train this model by creating a Forecaster object. We'll split the data into [T0,T1) for training and [T1,T2) for testing.
End of explanation
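The summary above notes that training can be flaky. One simple mitigation, sketched here with the objects already defined plus eval_crps from the imports, is to train a few times under different seeds and keep the run that scores best on a held-out window; the number of seeds and samples are arbitrary, and in practice you would reserve a separate validation window rather than the test window used for illustration here:
best_crps, best_forecaster = float("inf"), None
for seed in range(3):
    pyro.set_rng_seed(seed)
    pyro.clear_param_store()
    candidate = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
    crps = eval_crps(candidate(data[:T1], covariates, num_samples=100), data[T1:])
    if crps < best_crps:
        best_crps, best_forecaster = crps, candidate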
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
print(samples.shape, p10.shape)
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
Explanation: Next we can evaluate by drawing posterior samples from the forecaster, passing in full covariates but only partial data. We'll use Pyro's quantile() function to plot median and an 80% confidence interval. To evaluate fit we'll use eval_crps() to compute Continuous Ranked Probability Score; this is a good metric to assess distributional fit of a heavy-tailed distribution.
End of explanation
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
Explanation: Zooming in to just the forecasted region, we see this model ignores seasonal behavior.
End of explanation
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = torch.cat([time.unsqueeze(-1),
periodic_features(T2, 365.25 / 7)], dim=-1)
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
Explanation: We could add a yearly seasonal component simply by adding new covariates (note we've already taken care in the model to handle feature_dim > 1).
End of explanation
class Model2(ForecastingModel):
def model(self, zero_data, covariates):
data_dim = zero_data.size(-1)
feature_dim = covariates.size(-1)
bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
# We'll sample a time-global scale parameter outside the time plate,
# then time-local iid noise inside the time plate.
drift_scale = pyro.sample("drift_scale",
dist.LogNormal(-20, 5).expand([1]).to_event(1))
with self.time_plate:
# We'll use a reparameterizer to improve variational fit. The model would still be
# correct if you removed this context manager, but the fit appears to be worse.
with poutine.reparam(config={"drift": LocScaleReparam()}):
drift = pyro.sample("drift", dist.Normal(zero_data, drift_scale).to_event(1))
# After we sample the iid "drift" noise we can combine it in any time-dependent way.
# It is important to keep everything inside the plate independent and apply dependent
# transforms outside the plate.
motion = drift.cumsum(-2) # A Brownian motion.
# The prediction now includes three terms.
prediction = motion + bias + (weight * covariates).sum(-1, keepdim=True)
assert prediction.shape[-2:] == zero_data.shape
# Construct the noise distribution and predict.
noise_scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
noise_dist = dist.Normal(0, noise_scale)
self.predict(noise_dist, prediction)
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = periodic_features(T2, 365.25 / 7)
forecaster = Forecaster(Model2(), data[:T1], covariates[:T1], learning_rate=0.1,
time_reparam="dct",
)
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
Explanation: Time-local random variables: self.time_plate
So far we've seen the ForecastingModel.model() method and self.predict(). The last piece of forecasting-specific syntax is the self.time_plate context for time-local variables. To see how this works, consider changing our global linear trend model above to a local level model. Note the poutine.reparam() handler is a general Pyro inference trick, not specific to forecasting.
End of explanation
class Model3(ForecastingModel):
def model(self, zero_data, covariates):
data_dim = zero_data.size(-1)
feature_dim = covariates.size(-1)
bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
drift_scale = pyro.sample("drift_scale", dist.LogNormal(-20, 5).expand([1]).to_event(1))
with self.time_plate:
with poutine.reparam(config={"drift": LocScaleReparam()}):
drift = pyro.sample("drift", dist.Normal(zero_data, drift_scale).to_event(1))
motion = drift.cumsum(-2) # A Brownian motion.
prediction = motion + bias + (weight * covariates).sum(-1, keepdim=True)
assert prediction.shape[-2:] == zero_data.shape
# The next part of the model creates a likelihood or noise distribution.
# Again we'll be Bayesian and write this as a probabilistic program with
# priors over parameters.
stability = pyro.sample("noise_stability", dist.Uniform(1, 2).expand([1]).to_event(1))
skew = pyro.sample("noise_skew", dist.Uniform(-1, 1).expand([1]).to_event(1))
scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
noise_dist = dist.Stable(stability, skew, scale)
# We need to use a reparameterizer to handle the Stable distribution.
# Note "residual" is the name of Pyro's internal sample site in self.predict().
with poutine.reparam(config={"residual": StableReparam()}):
self.predict(noise_dist, prediction)
%%time
pyro.set_rng_seed(2)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = periodic_features(T2, 365.25 / 7)
forecaster = Forecaster(Model3(), data[:T1], covariates[:T1], learning_rate=0.1,
time_reparam="dct")
for name, value in forecaster.guide.median().items():
if value.numel() == 1:
print("{} = {:0.4g}".format(name, value.item()))
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
Explanation: Heavy-tailed noise
Our final univariate model will generalize from Gaussian noise to heavy-tailed Stable noise. The only difference is the noise_dist which now takes two new parameters: stability determines tail weight and skew determines the relative size of positive versus negative spikes.
The Stable distribution is a natural heavy-tailed generalization of the Normal distribution, but it is difficult to work with due to its intractable density function. Pyro implements auxiliary variable methods for working with Stable distributions. To tell Pyro to use those auxiliary variable methods, we wrap the final line in a poutine.reparam() effect handler that applies the StableReparam transform to the implicit observe site named "residual". You can use Stable distributions for other sites by specifying config={"my_site_name": StableReparam()}.
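As a minimal illustration (not part of the original tutorial, and assuming the tutorial's imports of pyro, dist, poutine and StableReparam), the key in the config dict simply has to match the sample site name; "my_noise" below is a placeholder:
def my_model():
    # Hypothetical sketch: "my_noise" must match the name passed to pyro.sample().
    # During inference the handler substitutes auxiliary-variable sampling for the
    # intractable Stable density.
    with poutine.reparam(config={"my_noise": StableReparam()}):
        return pyro.sample("my_noise", dist.Stable(1.9, 0.0, 1.0))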
End of explanation
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
windows2 = backtest(data, covariates, Model2,
min_train_window=104, test_window=52, stride=26,
forecaster_options={"learning_rate": 0.1, "time_reparam": "dct",
"log_every": 1000, "warm_start": True})
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
windows3 = backtest(data, covariates, Model3,
min_train_window=104, test_window=52, stride=26,
forecaster_options={"learning_rate": 0.1, "time_reparam": "dct",
"log_every": 1000, "warm_start": True})
fig, axes = plt.subplots(3, figsize=(8, 6), sharex=True)
axes[0].set_title("Gaussian versus Stable accuracy over {} windows".format(len(windows2)))
axes[0].plot([w["crps"] for w in windows2], "b<", label="Gaussian")
axes[0].plot([w["crps"] for w in windows3], "r>", label="Stable")
axes[0].set_ylabel("CRPS")
axes[1].plot([w["mae"] for w in windows2], "b<", label="Gaussian")
axes[1].plot([w["mae"] for w in windows3], "r>", label="Stable")
axes[1].set_ylabel("MAE")
axes[2].plot([w["rmse"] for w in windows2], "b<", label="Gaussian")
axes[2].plot([w["rmse"] for w in windows3], "r>", label="Stable")
axes[2].set_ylabel("RMSE")
axes[0].legend(loc="best")
plt.tight_layout()
Explanation: Backtesting
To compare our Gaussian Model2 and Stable Model3 we'll use the backtest() helper. By default this helper evaluates three metrics: CRPS assesses distributional accuracy of heavy-tailed data, MAE assesses point accuracy of heavy-tailed data, and RMSE assesses accuracy of Normal-tailed data. The one nuance here is to set warm_start=True to reduce the need for random restarts.
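As a small follow-up sketch (assuming the windows2 and windows3 lists produced above), the per-window metrics can be averaged into a single headline number per model:
# Hedged sketch: summarise each model's backtest windows by their mean CRPS.
for label, windows in [("Gaussian", windows2), ("Stable", windows3)]:
    mean_crps = sum(w["crps"] for w in windows) / len(windows)
    print("{}: mean CRPS = {:0.3g}".format(label, mean_crps))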
End of explanation |
10,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create A Temporary File
Step2: Write To The Temp File
Step3: View The Tmp File's Name
Step4: Read The File
Step5: Close (And Thus Delete) The File | Python Code:
from tempfile import NamedTemporaryFile
Explanation: Title: Create A Temporary File
Slug: create_a_temporary_file
Summary: Create A Temporary File Using Python.
Date: 2017-02-02 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Preliminaries
End of explanation
f = NamedTemporaryFile('w+t')
Explanation: Create A Temporary File
End of explanation
# Write to the file, the output is the number of characters
f.write('Nobody lived on Deadweather but us and the pirates. It wasn’t hard to understand why.')
Explanation: Write To The Temp File
End of explanation
f.name
Explanation: View The Tmp File's Name
End of explanation
# Go to the top of the file
f.seek(0)
# Read the file
f.read()
Explanation: Read The File
End of explanation
f.close()
Explanation: Close (And Thus Delete) The File
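A related sketch (not in the original notebook): NamedTemporaryFile also works as a context manager, which closes and therefore deletes the file automatically on exit.
# Minimal sketch: the file is removed as soon as the with-block exits.
with NamedTemporaryFile('w+t') as tmp:
    tmp.write('scratch data')
    tmp.seek(0)
    print(tmp.read())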
End of explanation |
10,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
2. Mathematical Groundwork
Previous
Step1: Import section specific modules
Step3: 2.8. The Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT)<a id='math
Step5: Although this would produce the correct result, this way of implementing the DFT is going to be incredibly slow. The DFT can be implemented in matrix form. Convince yourself that a vectorised implementation of this operation can be achieved with
$$ X = K x $$
where $K$ is the kernel matrix, which stores the values $K_{kn} = e^{\frac{-\imath 2 \pi k n}{N}}$. This is implemented numerically as follows
Step6: This function will be much faster than the previous implementation. We should check that they both return the same result
Step7: Just to be sure our DFT really works, let's also compare the output of our function to numpy's built in DFT function (note numpy automatically implements a faster version of the DFT called the FFT, see the discussion below)
Step8: Great! Our function is returning the correct result. Next we do an example to demonstrate the duality between the spectral (frequency domain) and temporal (time domain) representations of a function. As the following example shows, the Fourier transform of a time series returns the frequencies contained in the signal.
The following code simulates a signal of the form
$$ y = \sin(2\pi f_1 t) + \sin(2\pi f_2 t) + \sin(2\pi f_3 t), $$
takes the DFT and plots the amplitude and phase of the resulting components $Y_k$.
Step9: Figure 2.8.1
Step10: Figure 2.8.2
Step11: That is almost a factor of ten difference. Lets compare this to numpy's built in FFT
Step13: That seems amazing! The numpy FFT is about 1000 times faster than our vectorised implementation. But how does numpy achieve this speed up? Well, by using the fast Fourier transform of course.
2.8.5. Fast Fourier transforms<a id='math
Step14: Let's confirm that this function returns the correct result by comparing with numpy's FFT. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
2. Mathematical Groundwork
Previous: 2.7 Fourier Theorems
Next: 2.9 Sampling Theory
Import standard modules:
End of explanation
from IPython.display import HTML
from ipywidgets import interact
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
def loop_DFT(x):
    """Implementing the DFT in a double loop.
    Input: x = the vector we want to find the DFT of
    """
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
X = np.zeros(N,dtype=complex)
for k in range(N):
for n in range(N):
X[k] += np.exp(-1j*2.0*np.pi*k*n/N)*x[n]
return X
Explanation: 2.8. The Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT)<a id='math:sec:the_discrete_fourier_transform_and_the_fast_fourier_transform'></a>
The continuous version of the Fourier transform can only be computed when the integrals involved can be evaluated analytically, something which is not always possible in real life applications. This is true for a number of reasons, the most relevant of which are:
We don't always have the parametrisation of the signal that we want to find the Fourier transform of.
Signals are measured and recorded at a finite number of points.
Measured signals are contaminated by noise.
In such cases the discrete equivalent of the Fourier transform, called the discrete Fourier transform (DFT), is very useful. In fact, where the scale of the problem necessitates using a computer to perform calculations, the Fourier transform can only be implemented as the discrete equivalent. There are some subtleties we should be aware of when implementing the DFT. These mainly arise because it is very difficult to capture the full information present in a continuous signal with a finite number of samples. In this chapter we review the DFT and extend some of the most useful identities derived in the previous sections to the case where we only have access to a finite number of samples. The subtleties that arise due to limited sampling will be discussed in the next section.
2.8.1 The discrete time Fourier transform (DTFT): definition<a id='math:sec:the_discrete_time_fourier_transform_definition'></a>
We start by introducing the discrete time Fourier transform (DTFT). The DTFT of a set $\left\{y_n \in \mathbb{C}\right\}_{n ~ \in ~ \mathbb{Z}}$ results in a Fourier series (see $\S$ 2.3 ➞) of the form
<a id='math:eq:8_001'></a><!--\label{math:eq:8_001}-->$$
Y_{2\pi}(\omega) = \sum_{n\,=\,-\infty}^{\infty} y_n\,e^{-\imath \omega n} \quad \mbox{where} \quad n \in \mathbb{Z}.
$$
The resulting function is a periodic function of the frequency variable $\omega$. In the above definition we assume that $\omega$ is expressed in normalised units of radians/sample so that the periodicity is $2\pi$. In terms of the usual time frequency variable $f$, where $\omega = 2\pi f$, we would define it as
<a id='math:eq:8_002'></a><!--\label{math:eq:8_002}-->$$
Y_{f_s}(f) = \sum_{n\,=\,-\infty}^{\infty} y_n\,e^{-2\pi\imath f t_n},
$$
where $t_n$ is a time coordinate and the subscript $f_s$ denotes the period of $Y_{f_s}(f)$. As we will see in $\S$ 2.9 ➞ the DTFT (more correctly the DFT introduced below) arises naturally when we take the Fourier transform of a sampled continuous function.
As with the continuous Fourier transform, it is only possible to compute the DTFT analytically in a limited number of cases (eg. when the limit of the infinite series is known analytically or when the signal is band limited i.e. the signal contains only frequencies below a certain threshold). For what follows we will find it useful to review the concept of periodic summation and the Poisson summation formula. Note that the DTFT is defined over the entire field of complex numbers and that there are an infinite number of components involved in the definition.
2.8.1.1 Periodic summation and the DTFT <a id='math:sec:Periodic_summation'></a>
The idea behind periodic summation is to construct a periodic function, $g_{\tau}(t)$ say, from a continuous function $g(t)$. Consider the following construction
$$ g_\tau(t) = \sum_{n=-\infty}^{\infty} g(t + n\tau) = \sum_{n=-\infty}^{\infty} g(t - n\tau). $$
Clearly $g_\tau(t)$ has period $\tau$ and looks like an infinite number of copies of the function $g(t)$ for $t$ in the interval $0 \leq t \leq \tau$. We call $g_\tau(t)$ a periodic summation of $g(t)$. Note that we recover $g(t)$ when $n = 0$ and that a similar construction is obviously possible in the frequency domain. Actually the DTFT naturally results in a periodic function of the form
$$Y_{f_s}(f) = \sum_{k = -\infty}^{\infty} Y(f - k f_s), $$
such that $Y_{f_s}(f)$ is the periodic summation of $Y(f)$. As we will see later, the period $f_s$ is set by the number of samples $N$ at which we have the signal. In $\S$ 2.9 ➞ we will find it useful to think of $Y(f)$ as the spectrum of a bandlimited signal, $y(t)$ say. When the maximum frequency present in the signal is below a certain threshold the $Y_{f_s}(f)$ with $k \neq 0$ are exact copies of $Y(f)$ which we call aliases. This will become clearer after we have proved the Nyquist-Shannon sampling theorem.
2.8.1.2 Poisson summation formula <a id='math:sec:Poisson_summation'></a>
The Poisson summation formula is a result from analysis which is very important in Fourier theory. A general proof of this result will not add much to the current discussion. Instead we will simply point out its implications for Fourier theory as this will result in a particularly transparent proof of the Nyquist-Shannon sampling theorem.
Basically the Poisson summation formula can be used to relate the Fourier series coefficients of a periodic summation of a function to values which are proportional to the function's continuous Fourier transform. The Poisson summation formula states that, if $Y(f)$ is the Fourier transform of the (Schwartz) function $y(t)$, then
<a id='math:eq:8_003'></a><!--\label{math:eq:8_003}-->$$
\sum_{n = -\infty}^{\infty} \Delta t ~ y(\Delta t n) e^{-2\pi\imath f \Delta t n} = \sum_{k = -\infty}^{\infty} Y(f - \frac{k}{\Delta t}) = \sum_{k = -\infty}^{\infty} Y(f - kf_s) = Y_{f_s}(f). $$
This shows that the series $y_n = \Delta t y(\Delta t n)$ is sufficient to construct a periodic summation of $Y(f)$. The utility of this construction will become apparent a bit later. For now simply note that it is possible to construct $Y_{f_s}(f)$ as a Fourier series with coefficients $y_n = \Delta t \ y(n\Delta t)$.
The above discussion will mainly serve as a theoretical tool. It does not provide an obvious way to perform the Fourier transform in practice because it still requires an infinite number of components $y_n$. Before illustrating its utility we should construct a practical way to implement the Fourier transform.
2.8.2. The discrete Fourier transform: definition<a id='math:sec:the_discrete_fourier_transform_definition'></a>
Let $y = \left\{y_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ be a finite set of complex numbers. Then the discrete Fourier transform (DFT) of $y$, denoted $\mathscr{F}_{\rm D}\{y\}$, is defined as
<a id='math:eq:8_004'></a><!--\label{math:eq:8_004}-->$$
\mathscr{F}_{\rm D}: \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1} \rightarrow \left\{Y_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1}\\
\mathscr{F}_{\rm D}\{y\} = \left\{Y_k\in\mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1} \quad \mbox{where} \quad
Y_k = \sum_{n\,=\,0}^{N-1} y_n\,e^{-2\pi\imath f_k t_n} = \sum_{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}.
$$
In the above definition $f_k$ is the $k$-th frequency sample and $t_n$ is the $n$-th sampling instant. When the samples are spaced at uniform intervals $\Delta t$ apart these are given by
$$ t_n = t_0 + n\Delta t \quad \mbox{and} \quad f_k = \frac{kf_s}{N} \quad \mbox{where} \quad f_s = \frac{1}{\Delta t}. $$
Most of the proofs shown below are easiest to establish when thinking of the DFT in terms of the actual indices $k$ and $n$. This definition also has the advantage that the samples do not have to be uniformly spaced apart. In this section we use the notation
$$ \mathscr{F}_{\rm D}\{y\}_k = Y_k = \sum_{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}, $$
where the subscript $k$ on the LHS denotes the index not involved in the summation. Variables such as $Y_k$ and $y_n$ which are related as in the above expression are sometimes referred to as Fourier pairs or Fourier duals.
The number of Fourier transformed components $Y_k$ is the same as the number of samples of $y_n$. Denoting the set of Fourier transformed components by $Y = \left\{Y_k \in \mathbb{C}\right\}_{k = 0, \ldots, N-1}$, we can define the inverse discrete Fourier transform of $Y$, denoted $\mathscr{F}_{\rm D}^{-1}\{Y\}$, as
<a id='math:eq:8_005'></a><!--\label{math:eq:8_005}-->$$
\mathscr{F}_{\rm D}^{-1}: \left\{Y_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1} \rightarrow \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1}\\
\mathscr{F}_{\rm D}^{-1}\{Y\} = \left\{y_n\in\mathbb{C}\right\}_{n = 0, \ldots, N-1}
\quad \mbox{where} \quad y_n = \frac{1}{N} \sum_{k \,=\, 0}^{N-1} Y_k e^{\imath 2\pi \frac{nk}{N}} \ ,
$$
or in the abbreviated notation
$$ \mathscr{F}_{\rm D}^{-1}\{Y\}_n = y_n = \frac{1}{N} \sum_{k\,=\,0}^{N-1} Y_k\,e^{\imath 2\pi \frac{nk}{N}}. $$
The factor of $\frac{1}{N}$ appearing in the definition of the inverse DFT is a normalisation factor. We should mention that this normalisation is sometimes implemented differently by including a factor of $\sqrt{\frac{1}{N}}$ in the definition of both the forward and the inverse DFT. Some texts even omit it completely. We will follow the above convention throughout the course. The inverse DFT is the inverse operation with respect to the discrete Fourier transform (restricted to the original domain). This can be shown as follows:<br><br>
<a id='math:eq:8_006'></a><!--\label{math:eq:8_006}-->$$
\begin{align}
\mathscr{F}_{\rm D}^{-1}\left\{\mathscr{F}_{\rm D}\left\{y\right\}\right\}_{n^\prime} \,&=\, \frac{1}{N}\sum_{k\,=\,0}^{N-1} \left(\sum_{n\,=\,0}^{N-1} y_n e^{-\imath 2\pi\frac{kn}{N}}\right)e^{\imath 2\pi\frac{kn^\prime}{N}}\\
&=\,\frac{1}{N}\sum_{k\,=\,0}^{N-1} \sum_{n\,=\,0}^{N-1} \left( y_n e^{-\imath 2\pi\frac{kn}{N}}e^{\imath 2\pi\frac{kn^\prime}{N}}\right)\\
&=\,\frac{1}{N}\left(\sum_{k\,=\,0}^{N-1} y_{n^\prime}+\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} \sum_{k\,=\,0}^{N-1} y_n e^{-\imath 2\pi\frac{kn}{N}}e^{\imath 2\pi\frac{kn^\prime}{N}}\right)\\
&=\,\frac{1}{N}\left(\sum_{k\,=\,0}^{N-1} y_{n^\prime}+\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} \sum_{k\,=\,0}^{N-1} y_n e^{\imath 2\pi\frac{k(n^\prime-n)}{N}}\right)\\
&=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \sum_{k\,=\,0}^{N-1} \left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)^k\\
&=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \frac{1-\left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)^N}{1-e^{\imath 2\pi\frac{(n^\prime-n)}{N}}}\\
&=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \frac{1-e^{\imath 2\pi(n^\prime-n)}}{1-e^{\imath 2\pi\frac{(n^\prime-n)}{N}}}\\
&\underset{n,n^\prime \in \mathbb{N}}{=}\,y_{n^\prime},
\end{align}
$$
where we made use of the identity $\sum_{n\,=\,0}^{N-1}x^n \,=\, \frac{1-x^N}{1-x}$ and used the orthogonality of the sinusoids in the last step.
Clearly both the DFT and its inverse are periodic with period $N$
<a id='math:eq:8_007'></a><!--\label{math:eq:8_007}-->$$
\begin{align}
\mathscr{F}_{\rm D}\{y\}_k \,&=\,\mathscr{F}_{\rm D}\{y\}_{k \pm N} \\
\mathscr{F}_{\rm D}^{-1}\{Y\}_{n} \,&=\,\mathscr{F}_{\rm D}^{-1}\{Y\}_{n \pm N}.
\end{align}
$$
As is the case for the continuous Fourier transform, the inverse DFT can be expressed in terms of the forward DFT (without proof, but it's straightforward)
<a id='math:eq:8_008'></a><!--\label{math:eq:8_008}-->$$
\begin{align}
\mathscr{F}_{\rm D}^{-1}\{Y\}_n \,&=\, \frac{1}{N} \mathscr{F}_{\rm D}\{Y\}_{-n} \\
&=\,\frac{1}{N} \mathscr{F}_{\rm D}\{Y\}_{N-n}.
\end{align}
$$
The DFT of a real-valued set of numbers $y = \left\{y_n \in \mathbb{R}\right\}_{n\,=\,0, \ldots, \,N-1}$ is Hermitian (and vice versa)
<a id='math:eq:8_009'></a><!--\label{math:eq:8_009}-->$$
\begin{split}
\mathscr{F}_{\rm D}\{y\}_k\,&=\, \left(\mathscr{F}_{\rm D}\{y\}_{-k}\right)^*\\
&=\, \left(\mathscr{F}_{\rm D}\{y\}_{N-k}\right)^* \ .
\end{split}
$$
2.8.3. The Discrete convolution: definition and discrete convolution theorem<a id='math:sec:the_discrete_convolution_definition_and_discrete_convolution_theorem'></a>
For two sets of complex numbers $y = \left\{y_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ and $z = \left\{z_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ the discrete convolution is, in analogy to the analytic convolution, defined as
<a id='math:eq:8_010'></a><!--\label{math:eq:8_010}-->$$
\circ: \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1}\times \left\{z_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1} \rightarrow \left\{r_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1}\\
(y\circ z)_k = r_k = \sum_{n\,=\,0}^{N-1} y_n z_{k-n}.
$$
However there is a bit of a subtlety in this definition. We have to take into account that if $n > k$ the index $k-n$ will be negative. Since we have defined our indices as being strictly positive, this requires introducing what is sometimes referred to as the "wraparound" convention. Recall that complex numbers $r_k = e^{\frac{\imath 2\pi k}{N}}$ have the property that $r_{k \pm mN} = r_k$, where $m \in \mathbb{Z}$ is an integer. In the "wraparound" convention we map indices lying outside the range $0, \cdots , N-1$ into this range using the modulo operator. In other words we amend the definition as follows
$$ (y\circ z)_k = r_k = \sum_{n\,=\,0}^{N-1} y_n z_{(k-n) \, \text{mod} \, N}, $$
where mod denotes the modulo operation. Just like the ordinary convolution, the discrete convolution is commutative. One important effect evident from this equation is that if the two series are "broad" enough, the convolution will be continued at the beginning of the series, an effect called aliasing.
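A minimal numpy sketch of this wraparound (cyclic) convolution, assuming numpy is imported as above, could look as follows; it can be cross-checked against the convolution theorem below via np.fft.ifft(np.fft.fft(y) * np.fft.fft(z)).
def cyclic_convolution(y, z):
    # Direct implementation of the wraparound convention: indices outside
    # 0..N-1 are mapped back into range with the modulo operator.
    N = y.size
    r = np.zeros(N, dtype=complex)
    for k in range(N):
        for n in range(N):
            r[k] += y[n] * z[(k - n) % N]
    return r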
The convolution theorem (i.e. that convolution in one domain is the pointwise product in the other domain) is also valid for the DFT and the discrete convolution operator. We state the theorem here without proof (it is similar to the proof for the continuous case). Let $(y \odot z)_n \underset{def}{=} y_n ~ z_n$ (this is the Hadamard or component-wise product, we will encounter it again in $\S$ 2.10 ➞). Then, for Fourier pairs $Y_k$ and $y_n$, and $Z_k$ and $z_n$, we have
<a id='math:eq:8_011'></a><!--\label{math:eq:8_011}-->$$
\forall N\,\in\, \mathbb{N}\
\begin{align}
y \,&=\, \left\{y_n \in \mathbb{C}\right\}_{n\,=\,0, \ldots, \,N-1}\\
z \,&=\, \left\{z_n \in \mathbb{C}\right\}_{n\,=\,0, \ldots, \,N-1}\\
Y \,&=\, \left\{Y_k \in \mathbb{C}\right\}_{k\,=\,0, \ldots, \,N-1}\\
Z \,&=\, \left\{Z_k \in \mathbb{C}\right\}_{k\,=\,0, \ldots, \,N-1}\\
\end{align}\
\begin{split}
\mathscr{F}_{\rm D}\{y\odot z\}\,&=\,\frac{1}{N}\mathscr{F}_{\rm D}\{y\}\circ \mathscr{F}_{\rm D}\{z\}\\
\mathscr{F}_{\rm D}^{-1}\{Y\odot Z\}\,&=\,\mathscr{F}_{\rm D}\{Y\}\circ \mathscr{F}_{\rm D}\{Z\}\\
\mathscr{F}_{\rm D}\{y\circ z\}\,&=\,\mathscr{F}_{\rm D}\{y\} \odot \mathscr{F}_{\rm D}\{z\}\\
\mathscr{F}_{\rm D}^{-1}\{Y\circ Z\}\,&=\,\frac{1}{N}\mathscr{F}_{\rm D}\{Y\} \odot \mathscr{F}_{\rm D}\{Z\}\\
\end{split}
$$
2.8.4. Numerically implementing the DFT <a id='math:sec:numerical_DFT'></a>
We now turn to how the DFT is implemented numerically. The most direct way to do this is to sum the components in a double loop of the form
End of explanation
def matrix_DFT(x):
    """Implementing the DFT in vectorised form.
    Input: x = the vector we want to find the DFT of
    """
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
n = np.arange(N)
k = n.reshape((N,1))
K = np.exp(-1j*2.0*np.pi*k*n/N)
return K.dot(x)
Explanation: Although this would produce the correct result, this way of implementing the DFT is going to be incredibly slow. The DFT can be implemented in matrix form. Convince yourself that a vectorised implementation of this operation can be achieved with
$$ X = K x $$
where $K$ is the kernel matrix, which stores the values $K_{kn} = e^{\frac{-\imath 2 \pi k n}{N}}$. This is implemented numerically as follows
End of explanation
x = np.random.random(256) #create random vector to take the DFT of
np.allclose(loop_DFT(x),matrix_DFT(x)) #compare the result using numpy's built in function
Explanation: This function will be much faster than the previous implementation. We should check that they both return the same result
End of explanation
x = np.random.random(256) #create random vector to take the DFT of
np.allclose(np.fft.fft(x),matrix_DFT(x)) #compare the result using numpy's built in function
Explanation: Just to be sure our DFT really works, let's also compare the output of our function to numpy's built in DFT function (note numpy automatically implements a faster version of the DFT called the FFT, see the discussion below)
End of explanation
#First we simulate a time series as the sum of a number of sinusoids each with a different frequency
N = 512 #The number of samples of the time series
tmin = -10 #The minimum value of the time coordinate
tmax = 10 #The maximum value of the time coordinate
t = np.linspace(tmin,tmax,N) #The time coordinate
f1 = 1.0 #The frequency of the first sinusoid
f2 = 2.0 #The frequency of the second sinusoid
f3 = 3.0 #The frequency of the third sinusoid
#Generate the signal
y = np.sin(2.0*np.pi*f1*t) + np.sin(2.0*np.pi*f2*t) + np.sin(2.0*np.pi*f3*t)
#Take the DFT
Y = matrix_DFT(y)
#Plot the absolute value, real and imaginary parts
plt.figure(figsize=(15, 6))
plt.subplot(121)
plt.stem(abs(Y))
plt.xlabel('$k$',fontsize=18)
plt.ylabel(r'$|Y_k|$',fontsize=18)
plt.subplot(122)
plt.stem(np.angle(Y))
plt.xlabel('$k$',fontsize=18)
plt.ylabel(r'phase$(Y_k)$',fontsize=18)
Explanation: Great! Our function is returning the correct result. Next we do an example to demonstrate the duality between the spectral (frequency domain) and temporal (time domain) representations of a function. As the following example shows, the Fourier transform of a time series returns the frequencies contained in the signal.
The following code simulates a signal of the form
$$ y = \sin(2\pi f_1 t) + \sin(2\pi f_2 t) + \sin(2\pi f_3 t), $$
takes the DFT and plots the amplitude and phase of the resulting components $Y_k$.
End of explanation
#Get the sampling frequency
delt = t[1] - t[0]
fs = 1.0/delt
k = np.arange(N)
fk = k*fs/N
plt.figure(figsize=(15, 6))
plt.subplot(121)
plt.stem(fk,abs(Y))
plt.xlabel('$f_k$',fontsize=18)
plt.ylabel(r'$|Y_k|$',fontsize=18)
plt.subplot(122)
plt.stem(fk,np.angle(Y))
plt.xlabel('$f_k$',fontsize=18)
plt.ylabel(r'phase$(Y_k)$',fontsize=18)
Explanation: Figure 2.8.1: Amplitude and phase plots of the Fourier transform of a signal composed of 3 different tones
It is not immediately obvious that these are the frequencies contained in the signal. However, recall, from the definition given at the outset, that the frequencies are related to the index $k$ via
$$ f_k = \frac{k f_s}{N}, $$
where $f_s$ is the sampling frequency (i.e. one divided by the sampling period). Let's see what happens if we plot the $X_k$ against the $f_k$ using the following bit of code
End of explanation
%timeit loop_DFT(x)
%timeit matrix_DFT(x)
Explanation: Figure 2.8.2: The Fourier transformed signal labelled by frequency
Here we see that the three main peaks correspond to the frequencies contained in the input signal viz. $f_1 = 1$Hz, $f_2 = 2$Hz and $f_3 = 3$Hz. But what do the other peaks mean? The additional frequency peaks are a consequence of the following facts:
the DFT of a real valued signal is Hermitian (see Hermitian property of real valued signals ⤵<!--\ref{math:eq:8_009}-->) so that $Y_{-k} = Y_k^*$,
the DFT is periodic with period $N$ (see Periodicity of the DFT ⤵<!--\ref{math:eq:8_007}-->) so that $Y_{k} = Y_{k+N}$. <br>
When used together the above facts imply that $Y_{N-k} = Y_k^*$. This will be important in $\S$ 2.9 ➞ when we discuss aliasing. Note that these additional frequency peaks contain no new information. For this reason it is only necessary to store the first $\frac{N}{2} + 1$ samples when taking the DFT of a real valued signal.
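A quick check of this point (using the real-valued signal y and length N defined above): numpy's rfft routine keeps exactly those first N/2 + 1 components.
# For a real input of even length N, rfft returns N//2 + 1 components.
Y_half = np.fft.rfft(y)
print(Y_half.size, N // 2 + 1)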
We have not explained some of the features of the signal viz.
Why are there non-zero components of $Y_k$ at frequencies that are not present in the input signal?
Why do the three main peaks not contain the same amount of power? This is a bit unexpected since all three components of the input signal have the same amplitude.
As we will see in $\S$ 2.9 ➞, these features result from the imperfect sampling of the signal. This is unavoidable in any practical application involving the DFT and will be a recurring theme throughout this course. You are encouraged to play with the parameters (eg. the minimum $t_{min}$ and maximum $t_{max}$ values of the time coordinate, the number of samples $N$ (do not use $N > 10^5$ points or you might be here for a while), the frequencies of the input components etc.) to get a feel for what does and does not work. In particular try setting the number of samples to $N = 32$ and see if you can explain the output. It might also be a good exercise to implement the inverse DFT.
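One possible solution sketch for that exercise, mirroring matrix_DFT above (note the positive exponent and the 1/N normalisation):
def matrix_iDFT(X):
    # Inverse DFT in matrix form; matrix_iDFT(matrix_DFT(x)) should recover x.
    N = X.size
    n = np.arange(N)
    k = n.reshape((N, 1))
    K = np.exp(1j * 2.0 * np.pi * k * n / N)
    return K.dot(X) / N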
We already mentioned that the vectorised version of the DFT above will be much faster than the loop version. We can see exactly how much faster with the following commands
End of explanation
%timeit np.fft.fft(x)
Explanation: That is almost a factor of ten difference. Let's compare this to numpy's built-in FFT
End of explanation
def one_layer_FFT(x):
    """An implementation of the 1D Cooley-Tukey FFT using one layer."""
N = x.size
if N%2>0:
print("Warning: length of x in not a power of two, returning DFT")
return matrix_DFT(x)
else:
X_even = matrix_DFT(x[::2])
X_odd = matrix_DFT(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([X_even + factor[:N // 2] * X_odd,X_even + factor[N // 2:] * X_odd])
Explanation: That seems amazing! The numpy FFT is about 1000 times faster than our vectorised implementation. But how does numpy achieve this speed up? Well, by using the fast Fourier transform of course.
2.8.5. Fast Fourier transforms<a id='math:sec:fast_fourier_tranforms'></a>
The DFT is a computationally expensive operation. As evidenced by the double loop required to implement the DFT the computational complexity of a naive implementation such as ours scales like $\mathcal{O}(N^2)$ where $N$ is the number of data points. Even a vectorised version of the DFT will scale like $\mathcal{O}(N^2)$ since, in the end, there are still the same number of complex exponentiations and multiplications involved.
By exploiting the symmetries of the DFT, it is not difficult to identify potential ways to save computing time. Looking at the definition of the discrete Fourier transform ⤵<!--\ref{math:eq:8_004}-->, one can see that, under certain circumstances, the same summands occur multiple times. Recall that the DFT is periodic i.e. $Y_k = Y_{N+k}$, where $N$ is the number of data points. Now suppose that $N = 8$. In calculating the component $Y_2$ we would have to compute the quantity $y_2\,e^{-2{\pi}\imath\frac{2 \cdot 2}{8}}$ i.e. when $n = 2$. However, using the periodicity of the kernel $e^{-2\pi\imath \frac{kn}{N}} = e^{-2\pi\imath \frac{k(n+N)}{N}}$, we can see that this same quantity will also have to be computed when calculating the component $Y_6$ since $y_2\,e^{-2{\pi}\imath\frac{2\cdot2}{8}}=y_2e^{-2{\pi}\imath\frac{6\cdot2}{8}} = y_2e^{-2{\pi}\imath\frac{12}{8}}$. If we were calculating the DFT by hand, it would be a waste of time to calculate this summand twice. To see how we can exploit this, let's first split the DFT into its odd and even $n$ indices as follows
\begin{eqnarray}
Y_{k} &=& \sum_{n = 0}^{N-1} y_n e^{-2\pi\imath \frac{kn}{N}}\
&=& \sum_{m = 0}^{N/2-1} y_{2m} e^{-2\pi\imath \frac{k(2m)}{N}} + \sum_{m = 0}^{N/2-1} y_{2m+1} e^{-2\pi\imath \frac{k(2m+1)}{N}}\
&=& \sum_{m = 0}^{N/2-1} y_{2m} e^{-2\pi\imath \frac{km}{N/2}} + e^{-2\pi\imath \frac{k}{N}}\sum_{m = 0}^{N/2-1} y_{2m+1} e^{-2\pi\imath \frac{km}{N/2}}
\end{eqnarray}
Notice that we have split the DFT into two terms which look very much like DFT's of length $N/2$, only with a slight adjustment on the indices. Importantly the form of the kernel (i.e. $e^{-2\pi\imath \frac{km}{N/2}}$) looks the same for both the odd and the even $n$ indices. Now, while $k$ is in the range $0, \cdots , N-1$, $n$ only ranges through $0,\cdots,N/2 - 1$. The DFT written in the above form will therefore be periodic with period $N/2$ and we can exploit this periodic property to compute the DFT with half the number of computations. See the code below for an explicit implementation.
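For reference, a fully recursive radix-2 version might look like the sketch below (an addition, assuming the length of x is a power of two); recursing all the way down is what gives the $\mathcal{O}(N\log N)$ scaling.
def recursive_FFT(x):
    # Cooley-Tukey FFT: split into even and odd samples and recurse.
    N = x.size
    if N <= 2:
        return matrix_DFT(x)
    X_even = recursive_FFT(x[::2])
    X_odd = recursive_FFT(x[1::2])
    factor = np.exp(-2j * np.pi * np.arange(N) / N)
    return np.concatenate([X_even + factor[:N // 2] * X_odd,
                           X_even + factor[N // 2:] * X_odd])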
End of explanation
np.allclose(np.fft.fft(x),one_layer_FFT(x))
Explanation: Let's confirm that this function returns the correct result by comparing with numpy's FFT.
End of explanation |
10,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 3
Imports
Step2: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
Step3: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
Step4: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
Step5: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
Step7: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation
Step8: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 3
Imports
End of explanation
def brownian(maxt, n):
    """Return one realization of a Brownian (Wiener) process with n steps and a max time of maxt."""
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W
Explanation: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
End of explanation
t,W = brownian(1.0, 1000)
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
End of explanation
plt.plot(t,W)
plt.xlabel("$t$")
plt.ylabel("$W(t)$")
assert True # this is for grading
Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
End of explanation
dW = np.diff(W)
dW.mean(), dW.std()
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
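As a quick sanity check (an addition to the exercise): for a Wiener process the increments should have mean close to 0 and standard deviation close to sqrt(h), where h is the time step.
h = t[1] - t[0]
print(dW.mean(), dW.std(), np.sqrt(h))  # dW.std() should be close to sqrt(h)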
End of explanation
def geo_brownian(t, W, X0, mu, sigma):
    """Return X(t) for geometric Brownian motion with drift mu, volatility sigma."""
    exponent = (mu - 0.5 * sigma**2) * t + sigma * W
return X0 * np.exp(exponent)
assert True # leave this for grading
Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function.
End of explanation
plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3))
plt.xlabel("$t$")
plt.ylabel("$X(t)$")
assert True # leave this for grading
Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
End of explanation |
10,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Possible/Extant/All pattern
Another form of joining made possible by the util module is very powerful. Here is an example reusing the chan_div_2 and chan_div_3 from the previous chapter
Step1: What did that do? If you think carefully about the values produced by the two channels, you will deduce that for each key, it selects the rightmost value with that key. In other words, the values from both channels make it into the final channel, but if they both provide values for a particular key, the rightmost channel wins. (Note, in this case, that 6 and 12 are not in the list.)
Why is that helpful? That pattern works very well for creating the Possible/Extant/All pattern that is one main raisons d' etre for the flowz framework in the first place. Here's how it works...
Suppose you have an expensive derivation function. For giggles and grins, let's say it takes 10 minutes to compute.
Step2: That would have taken 100 minutes. If your process died 85 minutes in, you would be bummed to have to do it all again. So, you would like to have a way to write out the results and, in the case of a crash, pick up where you left off. That's where the pattern comes in.
The channel just defined represents all of the possible values, so let's call it that.
Step3: Suppose that we have already written out our data to durable storage. We can represent that with an array of previously written values (as though we had run for 85 minutes)
Step4: Now, we need an ExtantArtifact that gets this data out of storage
Step5: Most durable storage mechanisms allow you to determine the keys of your stored items in sorted order, so lets do that and create an extant channel with that order
Step6: Great! Now we can combine these two channels into our all channel, preferring the items in the extant channel
Step7: OK. Something happened there, but it's not clear exactly what, since it looks like the unadulterated output of the possible channel. Let's turn logging back on, buid everything again (since all the artifacts have already been derived once, teeing won't be illustrative) and see what happens
Step8: Boom! Notice how the expensive_deriver() calls ("DerivedArtifact<expensive> running deriver.") are only called twice at the end. Our code did not have to consciously figure out how much had already been done and carefully make sure that we only call the deriver for the remaining ones. The lazy evaluation did it all.
There is yet one more performance improvement to make here, though. If we have already written out 8 of our expensively derived data sets, not only do we no longer need to derive and write them out, but we don't even need to read them in! flowz and the ExtantArtifact class allows to optimize things by ensuring each of the items in the channel, rather than getting them.
Step9: ensure on an ExtantArtifact is essentially a no-op just returning True, but it calls the get method on a DerivedArtifact. So we have done the minimal amount needed to get up to date
Step10: That's odd. It shows four deviver calls. Notice that only two of them, however, have "<expensive>" in the log. It turns out the transform() uses a DerivedArtifact under the covers, too.
Step11: Yes! Our storage has been updated. Now, if we run yet again, nothing should be done.
Step12: QED | Python Code:
from flowz.util import merge_keyed_channels
chan_div_2 = IterChannel(KeyedArtifact(i, i) for i in range(1, 13) if i % 2 == 0)
chan_div_3 = IterChannel(KeyedArtifact(i, i*10) for i in range(1, 13) if i % 3 == 0)
merged = merge_keyed_channels(chan_div_2, chan_div_3)
print_chans(merged)
Explanation: Possible/Extant/All pattern
Another form of joining made possible by the util module is very powerful. Here is an example reusing the chan_div_2 and chan_div_3 from the previous chapter:
End of explanation
def expensive_deriver(num):
# 10 minutes pass...
return num * 100
chan = IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i)) for i in range(10))
print_chans(chan.tee())
Explanation: What did that do? If you think carefully about the values produced by the two channels, you will deduce that for each key, it selects the rightmost value with that key. In other words, the values from both channels make it into the final channel, but if they both provide values for a particular key, the rightmost channel wins. (Note, in this case, that 6 and 12 are not in the list.)
Why is that helpful? That pattern works very well for creating the Possible/Extant/All pattern that is one of the main raisons d'être for the flowz framework in the first place. Here's how it works...
Suppose you have an expensive derivation function. For giggles and grins, let's say it takes 10 minutes to compute.
End of explanation
possible = chan
Explanation: That would have taken 100 minutes. If your process died 85 minutes in, you would be bummed to have to do it all again. So, you would like to have a way to write out the results and, in the case of a crash, pick up where you left off. That's where the pattern comes in.
The channel just defined represents all of the possible values, so let's call it that.
End of explanation
storage = {num: expensive_deriver(num) for num in range(8)}
print(storage)
Explanation: Suppose that we have already written out our data to durable storage. We can represent that with an array of previously written values (as though we had run for 85 minutes):
End of explanation
class ExampleExtantArtifact(ExtantArtifact):
def __init__(self, num):
super(ExampleExtantArtifact, self).__init__(self.get_me, name='ExampleExtantArtifact')
self.num = num
@gen.coroutine
def get_me(self):
raise gen.Return(storage[self.num])
Explanation: Now, we need an ExtantArtifact that gets this data out of storage:
End of explanation
keys = sorted(storage.keys())
print(keys)
extant = IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in sorted(storage.keys()))
print_chans(extant.tee())
Explanation: Most durable storage mechanisms allow you to determine the keys of your stored items in sorted order, so let's do that and create an extant channel with that order:
End of explanation
all_ = merge_keyed_channels(possible.tee(), extant.tee())
print_chans(all_.tee())
Explanation: Great! Now we can combine these two channels into our all channel, preferring the items in the extant channel:
End of explanation
config_logging('INFO')
possible = IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i, name='expensive')) for i in range(10))
extant = IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in keys)
all_ = merge_keyed_channels(possible, extant)
print_chans(all_.tee())
config_logging('WARN')
Explanation: OK. Something happened there, but it's not clear exactly what, since it looks like the unadulterated output of the possible channel. Let's turn logging back on, build everything again (since all the artifacts have already been derived once, teeing won't be illustrative) and see what happens:
End of explanation
config_logging('INFO')
possible = IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i, name='expensive')) for i in range(10))
extant = IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in sorted(storage.keys()))
all_ = merge_keyed_channels(possible, extant)
print_chans(all_.tee(), mode='ensure', func=lambda a: a)
config_logging('WARN')
Explanation: Boom! Notice that the expensive_deriver() calls ("DerivedArtifact<expensive> running deriver.") appear only twice, at the end. Our code did not have to consciously figure out how much had already been done and carefully make sure that we only call the deriver for the remaining ones. The lazy evaluation did it all.
There is yet one more performance improvement to make here, though. If we have already written out 8 of our expensively derived data sets, not only do we no longer need to derive and write them out, but we don't even need to read them in! flowz and the ExtantArtifact class allow us to optimize things by ensuring each of the items in the channel rather than getting them.
End of explanation
# A function to write the data, to be passed to a transform() call
def data_writing_transform(key, value):
storage[key] = value
return value
# recreate the storage and turn on logging
storage = {num: expensive_deriver(num) for num in range(8)}
config_logging('INFO')
# Run as though we failed after 85 minutes and are picking up again
possible = IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i, name='expensive')).transform(data_writing_transform, i) for i in range(10))
extant = IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in sorted(storage.keys()))
all_ = merge_keyed_channels(possible, extant)
print_chans(all_.tee(), mode='ensure', func=lambda a: a)
Explanation: ensure on an ExtantArtifact is essentially a no-op just returning True, but it calls the get method on a DerivedArtifact. So we have done the minimal amount needed to get up to date:
1. A fast operation (getting the keys) to figure out what has already been written
2. The expensive operations for the items remaining to be written
All that remains now is that we haven't written this new data. Let's try that now.
End of explanation
print(storage)
Explanation: That's odd. It shows four deriver calls. Notice that only two of them, however, have "<expensive>" in the log. It turns out the transform() uses a DerivedArtifact under the covers, too.
End of explanation
possible = IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i, name='expensive')).transform(data_writing_transform, i) for i in range(10))
extant = IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in sorted(storage.keys()))
all = merge_keyed_channels(possible, extant)
print_chans(all.tee(), mode='ensure', func=lambda a: a)
Explanation: Yes! Our storage has been updated. Now, if we run yet again, nothing should be done.
End of explanation
# recreate the storage to not mess up other parts of the notebook when run out of order, and turn off logging
storage = {num: expensive_deriver(num) for num in range(8)}
config_logging('WARN')
Explanation: QED
End of explanation |
10,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Built-in Time Functions
Field profiles are defined as functions of time. A base rabi_freq is multiplied by a time function rabi_freq_t_func and related arguments rabi_freq_t_args. For example, a Gaussian pulse with a peak of $\Omega_0 = 2\pi \cdot 0.001 \mathrm{~ MHz}$ and a full-width at half-maximum (FWHM) of $1 \mathrm{~ \mu s}$ arriving at the start of the medium at $t = 0 \mathrm{~ \mu s}$ can be specified in JSON with "rabi_freq"
Step1: Gaussian
The Gaussian profile is defined as
$$
\Omega_0 \exp \left[ -4 \log 2 \left( \frac{t - t_0}{t_w}
\right)^2 \right]
$$
where $t_0$ (centre) is the point at which the function reaches its peak amplitude
$\Omega_0$ (ampl). The width $t_w$ (fwhm) is
the full width at half maximum (FWHM) of a Gaussian.
Step2: Note
Why are these args written like ampl_1 here instead of ampl? In the t_funcs module, each built-in time profile is specified in a functor that takes an index as argument and returns a function whose t_args are suffixed with that index. This is because when we have multiple fields, MaxwellBloch needs to be able to distinguish the arguments of each function. When specifying MaxwellBloch problems, you won't need to worry about this.
Square
The square profile just needs an amplitude ampl and switch on and off times.
Step3: Ramp On
The ramp on and off functions use a Gaussian profile to reach peak amplitude. For example, the ramp_on function is
$$
\Omega(t) =
\begin{cases}
\Omega_0 \exp \left[ -4 \log 2 \left( \frac{t - t_0}{t_w}
\right)^2 \right] & t < t_0\
\Omega_0 & t \ge t_0
\end{cases}
$$
where $t_0$ (centre_1) is the point at which the function reaches its peak amplitude
$\Omega_0$ (ampl_1). The duration of the ramp-on is governed by $t_w$ (fwhm_1), which is
the full width at half maximum (FWHM) of a Gaussian. The ramp_off, ramp_onoff and ramp_offon functions behave in the same way.
Step4: Ramp Off
Step5: Ramp On and Off
Step6: Ramp Off and On
Step7: Sech
The hyperbolic secant (sech) function is defined by
$$
\Omega_0 \textrm{sech}\left(\frac{t - t_0}{t_w}\right)
$$
where $t_0$ (centre_1) is the point at which the function reaches its peak amplitude
$\Omega_0$ (ampl_1). The width is governed by $t_w$ (width_1).
Step8: Sinc
The cardinal sine (sinc) function is defined by
$$
\Omega_0 \textrm{sinc} \left( \frac{w t}{\sqrt{\pi/2}} \right)
$$
where $w$ (width_1) is a width parameter and $\Omega_0$ (ampl_1) is the peak amplitude of the function.
Step9: Combining Time Functions
It is possible to create your own time functions, or combine the built-in time functions. To do this you have to pass the function into the Field object directly, it's not possible to specify in JSON. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from maxwellbloch import t_funcs
tlist = np.linspace(0., 1., 201)
Explanation: Built-in Time Functions
Field profiles are defined as functions of time. A base rabi_freq is multiplied by a time function rabi_freq_t_func and related arguments rabi_freq_t_args. For example, a Gaussian pulse with a peak of $\Omega_0 = 2\pi \cdot 0.001 \mathrm{~ MHz}$ and a full-width at half-maximum (FWHM) of $1 \mathrm{~ \mu s}$ arriving at the start of the medium at $t = 0 \mathrm{~ \mu s}$ can be specified in JSON with "rabi_freq": 1.0e-3, "rabi_freq_t_func": "gaussian", and "rabi_freq_t_args": {"ampl": 1.0, "centre": 0.0, "fwhm": 1.0}.
Here we'll show all of the built-in t_funcs you can use. It is also possible to write your own.
End of explanation
plt.plot(tlist, t_funcs.gaussian(1)(tlist, args={ 'ampl_1': 1.0, 'fwhm_1': 0.1, 'centre_1': 0.6}));
Explanation: Gaussian
The Gaussian profile is defined as
$$
\Omega_0 \exp \left[ -4 \log 2 \left( \frac{t - t_0}{t_w}
\right)^2 \right]
$$
where $t_0$ (centre) is the point at which the function reaches its peak amplitude
$\Omega_0$ (ampl). The width $t_w$ (fwhm) is
the full width at half maximum (FWHM) of a Gaussian.
End of explanation
plt.plot(tlist, t_funcs.square(1)(tlist, args={ 'ampl_1': 1.0, 'on_1': 0.2, 'off_1': 0.8}));
Explanation: Note
Why are these args written like ampl_1 here instead of ampl? In the t_funcs module, each built-in time profile is specified in a functor that takes an index as argument and returns a function whose t_args are suffixed with that index. This is because when we have multiple fields, MaxwellBloch needs to be able to distinguish the arguments of each function. When specifying MaxwellBloch problems, you won't need to worry about this.
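For example (following the indexing convention described above), a profile built with index 2 expects its arguments suffixed with _2:
plt.plot(tlist, t_funcs.gaussian(2)(tlist, args={'ampl_2': 0.5, 'fwhm_2': 0.1, 'centre_2': 0.3}));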
Square
The square profile just needs an amplitude ampl and switch on and off times.
End of explanation
plt.plot(tlist, t_funcs.ramp_on(1)(tlist, args={ 'ampl_1': 1.0, 'fwhm_1': 0.1, 'on_1': 0.6}));
Explanation: Ramp On
The ramp on and off functions use a Gaussian profile to reach peak amplitude. For example, the ramp_on function is
$$
\Omega(t) =
\begin{cases}
\Omega_0 \exp \left[ -4 \log 2 \left( \frac{t - t_0}{t_w}
\right)^2 \right] & t < t_0\
\Omega_0 & t \ge t_0
\end{cases}
$$
where $t_0$ (centre_1) is the point at which the function reaches its peak amplitude
$\Omega_0$ (ampl_1). The duration of the ramp-on is governed by $t_w$ (fwhm_1), which is
the full width at half maximum (FWHM) of a Gaussian. The ramp_off, ramp_onoff and ramp_offon functions behave in the same way.
End of explanation
plt.plot(tlist, t_funcs.ramp_off(1)(tlist, args={ 'ampl_1': 1.0, 'fwhm_1': 0.1, 'off_1': 0.6}));
Explanation: Ramp Off
End of explanation
plt.plot(tlist, t_funcs.ramp_onoff(1)(tlist, args={ 'ampl_1': 1.0, 'fwhm_1': 0.1, 'on_1': 0.4, 'off_1':0.6}));
Explanation: Ramp On and Off
End of explanation
plt.plot(tlist, t_funcs.ramp_offon(1)(tlist, args={ 'ampl_1': 1.0, 'fwhm_1': 0.1, 'off_1': 0.2, 'on_1':0.8}));
Explanation: Ramp Off and On
End of explanation
plt.plot(tlist, t_funcs.sech(1)(tlist, args={ 'ampl_1': 1.0, 'width_1': 0.1, 'centre_1': 0.5}));
Explanation: Sech
The hyperbolic secant (sech) function is defined by
$$
\Omega_0 \textrm{sech}\left(\frac{t - t_0}{t_w}\right)
$$
where $t_0$ (centre_1) is the point at which the function reaches its peak amplitude
$\Omega_0$ (ampl_1). The width is governed by $t_w$ (width_1).
End of explanation
plt.plot(tlist, t_funcs.sinc(1)(tlist, args={ 'ampl_1': 1.0, 'width_1': 10.}));
Explanation: Sinc
The cardinal sine (sinc) function is defined by
$$
\Omega_0 \textrm{sinc} \left( \frac{w t}{\sqrt{\pi/2}} \right)
$$
where $w$ (width_1) is a width parameter and $\Omega_0$ (ampl_1) is the peak amplitude of the function.
End of explanation
f = lambda t, args: t_funcs.gaussian(1)(t,args) + t_funcs.ramp_onoff(2)(t, args)
plt.plot(tlist, f(tlist, args={'ampl_1': 1.0, 'fwhm_1': 0.1, 'centre_1': 0.2,
'ampl_2': 0.6, 'fwhm_2': 0.1, 'on_2':0.6, 'off_2':0.8}));
g = lambda t, args: (t_funcs.gaussian(1)(t,args) + t_funcs.gaussian(2)(t, args) + t_funcs.gaussian(3)(t, args) +
t_funcs.gaussian(4)(t, args))
plt.plot(tlist, g(tlist, args={'ampl_1': 1.0, 'fwhm_1': 0.05, 'centre_1': 0.2,
'ampl_2': 0.8, 'fwhm_2': 0.05, 'centre_2': 0.4,
'ampl_3': 0.6, 'fwhm_3': 0.05, 'centre_3': 0.6,
'ampl_4': 0.4, 'fwhm_4': 0.05, 'centre_4': 0.8,}));
Explanation: Combining Time Functions
It is possible to create your own time functions, or combine the built-in time functions. To do this you have to pass the function into the Field object directly, it's not possible to specify in JSON.
End of explanation |
10,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Composites simulation
Step1: We need to import the data here, modify them if needed, and proceed
Step2: Now let's study the evolution of the concentration | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from simmit import smartplus as sim
from simmit import identify as iden
import os
import itertools
dir = os.path.dirname(os.path.realpath('__file__'))
Explanation: Composites simulation : perform parametric analyses
End of explanation
umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments
nstatev = 0
nphases = 2 #The number of phases
num_file = 0 #The num of the file that contains the subphases
int1 = 50
int2 = 50
n_matrix = 0
props = np.array([nphases, num_file, int1, int2, n_matrix])
NPhases_file = dir + '/keys/Nellipsoids0.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
NPhases[::]
path_data = dir + '/data'
path_keys = dir + '/keys'
pathfile = 'path.txt'
outputfile = 'results_PLN.txt'
nparams = 4
param_list = iden.read_parameters(nparams)
psi_rve = 0.
theta_rve = 0.
phi_rve = 0.
alpha = np.arange(0.,91.,1)
param_list[1].value = 100
param_list[2].value = 0.4
param_list[3].value = 1.0 - param_list[2].value
E_L = np.zeros(len(alpha))
fig = plt.figure()
umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments
for i, x in enumerate (alpha):
param_list[0].value = x
iden.copy_parameters(param_list, path_keys, path_data)
iden.apply_parameters(param_list, path_data)
L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)
p = sim.L_ortho_props(L)
E_L[i] = p[0]
plt.plot(alpha,E_L, c='black')
np.savetxt('E_L-angle_MT.txt', np.transpose([alpha,E_L]), fmt='%1.8e')
umat_name = 'MISCN' #This is the 5 character code for the self-consistent homogenization scheme for composites with a matrix and ellipsoidal reinforcements
for i, x in enumerate (alpha):
param_list[0].value = x
iden.copy_parameters(param_list, path_keys, path_data)
iden.apply_parameters(param_list, path_data)
L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)
p = sim.L_ortho_props(L)
E_L[i] = p[0]
plt.plot(alpha,E_L, c='red')
np.savetxt('E_L-angle_SC.txt', np.transpose([alpha,E_L]), fmt='%1.8e')
plt.show()
Explanation: We need to import the data here, modify them if needed, and proceed
End of explanation
param_list[0].value = 0.0
param_list[1].value = 100
c = np.arange(0.,1.01,0.01)
E_L = np.zeros(len(c))
E_T = np.zeros(len(c))
umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments
for i, x in enumerate (c):
param_list[3].value = x
param_list[2].value = 1.0 - param_list[3].value
iden.copy_parameters(param_list, path_keys, path_data)
iden.apply_parameters(param_list, path_data)
L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)
p = sim.L_ortho_props(L)
E_L[i] = p[0]
E_T[i] = p[1]
fig = plt.figure()
np.savetxt('E-concentration_MT.txt', np.transpose([c,E_L,E_T]), fmt='%1.8e')
plt.plot(c,E_L, c='black')
plt.plot(c,E_T, c='black', label='Mori-Tanaka')
umat_name = 'MISCN' #This is the 5 character code for the self-consistent homogenization scheme for composites with a matrix and ellipsoidal reinforcements
for i, x in enumerate (c):
param_list[3].value = x
param_list[2].value = 1.0 - param_list[3].value
iden.copy_parameters(param_list, path_keys, path_data)
iden.apply_parameters(param_list, path_data)
L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)
p = sim.L_ortho_props(L)
E_L[i] = p[0]
E_T[i] = p[1]
np.savetxt('E-concentration_SC.txt', np.transpose([c,E_L,E_T]), fmt='%1.8e')
plt.plot(c,E_L, c='red')
plt.plot(c,E_T, c='red', label='self-consistent')
plt.xlabel('volume fraction $c$', size=12)
plt.ylabel('Young modulus', size=12)
plt.show()
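# A possible follow-up sketch (an addition, assuming the two text files written
# above are present): reload the saved Mori-Tanaka and self-consistent results
# and compare them directly.
c_mt, EL_mt, ET_mt = np.loadtxt('E-concentration_MT.txt', unpack=True)
c_sc, EL_sc, ET_sc = np.loadtxt('E-concentration_SC.txt', unpack=True)
plt.plot(c_mt, EL_sc - EL_mt, label='$E_L$: SC - MT')
plt.plot(c_mt, ET_sc - ET_mt, label='$E_T$: SC - MT')
plt.xlabel('volume fraction $c$', size=12)
plt.ylabel('difference in Young modulus', size=12)
plt.legend()
plt.show()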
Explanation: Now let's study how the effective Young's moduli evolve with the reinforcement volume fraction (concentration)
End of explanation |
10,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Visualise PSF and CTF at one vertex for sLORETA.
Step1: Visualize
PSF
Step2: CTF | Python Code:
# Authors: Olaf Hauk <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import (make_inverse_resolution_matrix, get_cross_talk,
get_point_spread)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# forward operator with fixed source orientations
mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True, copy=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution
# free source orientation
inverse_operator = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'MNE' # can be 'MNE' or 'sLORETA'
# compute resolution matrix for sLORETA
rm_lor = make_inverse_resolution_matrix(forward, inverse_operator,
method='sLORETA', lambda2=lambda2)
# get PSF and CTF for sLORETA at one vertex
sources = [1000]
stc_psf = get_point_spread(rm_lor, forward['src'], sources, norm=True)
stc_ctf = get_cross_talk(rm_lor, forward['src'], sources, norm=True)
Explanation: Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Visualise PSF and CTF at one vertex for sLORETA.
End of explanation
# Which vertex corresponds to selected source
vertno_lh = forward['src'][0]['vertno']
verttrue = [vertno_lh[sources[0]]] # just one vertex
# find vertices with maxima in PSF and CTF
vert_max_psf = vertno_lh[stc_psf.data.argmax()]
vert_max_ctf = vertno_lh[stc_ctf.data.argmax()]
brain_psf = stc_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir)
brain_psf.show_view('ventral')
brain_psf.add_text(0.1, 0.9, 'sLORETA PSF', 'title', font_size=16)
# True source location for PSF
brain_psf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',
color='green')
# Maximum of PSF
brain_psf.add_foci(vert_max_psf, coords_as_verts=True, scale_factor=1.,
hemi='lh', color='black')
Explanation: Visualize
PSF:
End of explanation
brain_ctf = stc_ctf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir)
brain_ctf.add_text(0.1, 0.9, 'sLORETA CTF', 'title', font_size=16)
brain_ctf.show_view('ventral')
brain_ctf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',
color='green')
# Maximum of CTF
brain_ctf.add_foci(vert_max_ctf, coords_as_verts=True, scale_factor=1.,
hemi='lh', color='black')
Explanation: CTF:
End of explanation |
10,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Global-scale MODIS NDVI time series analysis (with interpolation)
A material for the presentation in FOSS4G-Hokkaido on 1st July 2017.
Copyright © 2017 Naru. Tsutsumida ([email protected])
Summary
Using Google Earth Engine, annually averaged MODIS MOD13Q1 NDVI data (250 m spatial resolution, 16-day intervals) are summarised at a global scale for 2001-2016, and the time-series trend is then estimated with a Mann-Kendall analysis.
Cloud masking and interpolation are not applied here, but the code can be shown.
Prerequisite
python environments are installed.
Your Google account is accessible to Google Earth Engine. https
Step1: 2. Functions for MODIS MOD13Q1 NDVI
MODIS QA masks are used to filter out low-quality pixels. Several masking approaches can be applied, but only the maskSummaryQA function is used here.
Step2: 3. Pre-processing Input Data (MODIS MOD13Q1 NDVI in 2009)
3.1 Input data
In this demonstration the quality assurance (QA) mask is applied: only the best-quality pixels (QA=0) are kept.
Step3: 3.2 see detailed information
toList(1, X) with X=0,1,... selects a single image: the first image for X=0, the second for X=1, and so on.
Check the info of the first image.
Step4: 3.3 Smoothing data
reference
Step5: See a map of whole mean NDVI in 2001-2016.
Note that all data is in GEE server, not in this running environment.
Step6: 4. Trend analysis of annual average of NDVI in 2001-2016
After the annually averaged NDVI datasets for 2001-2016 are calculated, the Mann-Kendall trend test is applied.
Keep in mind that p-values are not computed on GEE at this time (1st July 2017),
so the statistical significance test is not available.
4.1 parameters settings
Step7: 4.2 Calculate annual NDVI average
Step8: 4.3 Mann-Kendall trend test
Step9: 4.4 Display the results
Step10: 4.5 Export Geotiff
Export the outputs to Google Drive.
It takes some time.
see | Python Code:
from IPython.display import Image, display, HTML
%matplotlib inline
from pylab import *
import datetime
import math
import time
import ee
ee.Initialize()
Explanation: Global-scale MODIS NDVI time series analysis (with interpolation)
A material for the presentation in FOSS4G-Hokkaido on 1st July 2017.
Copyright © 2017 Naru. Tsutsumida ([email protected])
Summary
Using Google Earth Engine, annually averaged MODIS MOD13Q1 NDVI data (250 m spatial resolution, 16-day intervals) are summarised at a global scale for 2001-2016, and the time-series trend is then estimated with a Mann-Kendall analysis.
Cloud masking and interpolation are not applied here, but the code can be shown.
Prerequisite
python environments are installed.
Your Google account is accessible to Google Earth Engine. https://code.earthengine.google.com
The 'ee' python package for google earth engine api needs to be installed in advance.
see: https://developers.google.com/earth-engine/python_install
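A minimal setup sketch (an assumption of this note, not part of the original material; the exact authentication flow depends on the earthengine-api version):
pip install earthengine-api   # shell
earthengine authenticate      # shell: grant access to your Google account once
import ee
ee.Initialize()               # should now succeed in Python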
1. Initial setting
End of explanation
def getQABits(image, start, end, newName):
#Compute the bits we need to extract.
p = 0
for i in range(start,(end+1)):
p += math.pow(2, i)
# Return a single band image of the extracted QA bits, giving the band
# a new name.
return image.select([0], [newName])\
.bitwiseAnd(p)\
.rightShift(start)
#A function to mask out cloudy pixels.
def maskClouds(img):
# Select the QA band.
QA = img.select('DetailedQA')
# Get the MOD_LAND_QA bits
internalCloud = getQABits(QA, 0, 1, 'MOD_LAND_QA')
# Return an image masking out cloudy areas.
return img.mask(internalCloud.eq(0))
##originally function for landsat
#https://groups.google.com/forum/#!searchin/google-earth-engine-developers/python$20bitwiseAnd%7Csort:relevance/google-earth-engine-developers/OYuUMjFr0Gg/GGtYWh4CAwAJ
def maskBadData(image):
invalid = image.select('DetailedQA').bitwiseAnd(0x6).neq(0)
clean = image.mask(invalid.Not())
return(clean)
def maskSummaryQA(img):
QA = img.select('SummaryQA').eq(0)
best = img.mask(QA)
return(best)
# function to add system time band
def addTimeBand(image):
return image.addBands(image.metadata('system:time_start').rename(["time"]))
Explanation: 2. Functions for MODIS MOD13Q1 NDVI
MODIS QA masks are used to filter out low-quality pixels. Several masking approaches are defined below, but only the maskSummaryQA function is used here.
End of explanation
modisNDVI = ee.ImageCollection('MODIS/006/MOD13Q1') \
.select(['NDVI', "SummaryQA"]) \
.filterDate('2001-01-01', '2016-12-31') \
.sort('system:time_start')
count = modisNDVI.size().getInfo() ## total number of selected images
print('count:', count)
filteredMODIS = modisNDVI \
.map(maskSummaryQA) \
.select('NDVI')
Explanation: 3. Pre-processing Input Data (MODIS MOD13Q1 NDVI in 2009)
3.1 Input data
In this demonstration the quality assurance (QA) mask is applied: only the best-quality pixels (QA=0) are kept.
End of explanation
img1 = ee.Image(filteredMODIS.toList(1,0).get(0))
scale = img1.projection().nominalScale().getInfo()
props = img1.getInfo()['properties']
date = props['system:time_start']
system_time = datetime.datetime.fromtimestamp((date / 1000) - 3600)
date_str = system_time.strftime("%Y_%m_%d")
img1 = img1.set('bands', date_str)
##check metadata
print('scale:', scale) ##spatial resolution
print('DATE:', date_str) ##first date
Explanation: 3.2 see detailed information
toList(1, X) with X=0,1,... selects a single image: the first image for X=0, the second for X=1, and so on.
Check the info of the first image.
End of explanation
## This field contains UNIX time in milliseconds.
timeField = 'system:time_start'
join = ee.Join.saveAll('images')
interval = 72 ##72 days
##ee.Filter.maxDifference:
##Creates a unary or binary filter that passes if the left and right operands, both numbers, are within a given maximum difference. If used as a join condition, this numeric difference is used as a join measure.
diffFilter = ee.Filter.maxDifference(difference = (1000 * 60 * 60 * 24) * interval,
leftField = timeField,
rightField = timeField)
NeighborJoin = join.apply(primary = filteredMODIS,
secondary = filteredMODIS,
condition = diffFilter)
def smooth_func(image):
collection = ee.ImageCollection.fromImages(image.get('images'))
return ee.Image(image).addBands(collection.mean().rename(['smooth']))
smoothed = ee.ImageCollection(NeighborJoin.map(smooth_func))
Explanation: 3.3 Smoothing data
Each image is joined with all images acquired within ±72 days of it, and their mean is added as a 'smooth' band, i.e. a moving-window temporal average.
reference: https://code.earthengine.google.com/a675608eb96f135024b0b2185f3889ee
End of explanation
regionfilter = ee.Geometry.Polygon([-170, 80, 0, 80, 170, 80, 170, -80, 10, -80, -170, -80]).toGeoJSON()
ndvi_palette = 'FFFFFF, CE7E45, DF923D, F1B555, FCD163, 99B718, 74A901, 66A000, 529400, 3E8601, 207401, 056201, 004C00, 023B01, 012E01, 011D01, 011301'
vizParams = {'min': -2000,
'max': 10000,
'region':regionfilter,
'palette': ndvi_palette}
img = smoothed.select('smooth').mean()
%config InlineBackend.figure_format = 'retina'
print(img.getThumbUrl(vizParams))
Image(url=img.getThumbUrl(vizParams), width=900, unconfined=True)
Explanation: See a map of whole mean NDVI in 2001-2016.
Note that all data is in GEE server, not in this running environment.
End of explanation
#create name list
yr = range(2001, 2017)
yr = map(str, yr)
num=len(range(2001, 2017))
yy = np.array(["Y"]*num)
years = np.core.defchararray.add(yy, yr)
st = np.array(["-01-01"]*num)
ed = np.array(["-12-31"]*num)
starts = np.core.defchararray.add(yr, st)
ends = np.core.defchararray.add(yr, ed)
Explanation: 4. Trend analysis of annual average of NDVI in 2001-2016
After the annually averaged NDVI datasets for 2001-2016 are calculated, the Mann-Kendall trend test is applied.
Keep in mind that p-values are not computed on GEE at this time (1st July 2017),
so the statistical significance test is not available.
4.1 parameters settings
End of explanation
y = 0
MODcoll = smoothed \
.filterDate(starts[y], ends[y]) \
.sort('system:time_start') \
.select('smooth')
start = starts[y]
end = ends[y]
avg = MODcoll.mean().rename([years[y]])
for y in range(1, 16):
MODcoll = smoothed \
.filterDate(starts[y], ends[y]) \
.sort('system:time_start') \
.select('smooth')
start = starts[y]
end = ends[y]
average = MODcoll.mean()
avg = avg.addBands(average.rename([years[y]]))
info_bands = avg.getInfo()['bands']
#print('Dimensions:', info_bands[0]['dimensions'])
print('Number of bands:', len(info_bands))
##see band names
for ids in range(0,len(info_bands),1):
print(info_bands[ids]['id'])
Explanation: 4.2 Calculate annual NDVI average
End of explanation
mk_ans = avg.reduce(ee.Reducer.kendallsCorrelation(1))
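# Added note (not in the original): GEE returns Kendall's tau here but no p-value.
# As a rough offline check, one could export a pixel's yearly values to a numpy
# array (hypothetical `ndvi_values`, one value per year) and use scipy:
#   from scipy.stats import kendalltau
#   tau, p_value = kendalltau(np.arange(2001, 2017), ndvi_values)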
info_bands = mk_ans.getInfo()['bands']
#print('Dimensions:', info_bands[0]['dimensions'])
print('Number of bands:', len(info_bands))
##see bands
for ids in range(0,len(info_bands),1):
print(info_bands[ids]['id'])
Explanation: 4.3 Mann-Kendall trend test
End of explanation
RdBu_palette = '#B2182B, #D6604D, #F4A582, #FDDBC7, #F7F7F7, #D1E5F0, #92C5DE, #4393C3, #2166AC'
mk_tau = ee.Image(mk_ans.select('tau')).multiply(10000).int16()
url = mk_tau.getThumbUrl({
'min':-10000,
'max':10000,
'region':regionfilter,
'crs': 'EPSG:4326',
'palette': RdBu_palette
})
%config InlineBackend.figure_format = 'retina'
print(url)
Image(url=url, width=900, unconfined=True)
Explanation: 4.4 Display the results
End of explanation
globalfilter = ee.Geometry.Polygon([-170, 80, 0, 80, 170, 80, 170, -80, 10, -80, -170, -80])
globalfilter = globalfilter['coordinates'][0]
task_config = {
'description': 'imageToDriveExample',
'scale': 231.65635826395828,
'region': globalfilter,
'maxPixels': 5e10
}
task = ee.batch.Export.image(mk_tau, 'kenn_0116', task_config)
task.start()
time.sleep(10)
while task.status()['state'] == 'RUNNING':
print('Running...')
time.sleep(100)
print('Done.', task.status())
Explanation: 4.5 Export Geotiff
Export the outputs to Google Drive.
It takes some time.
see: https://stackoverflow.com/questions/39219705/how-to-download-images-using-google-earth-engines-python-api
End of explanation |
10,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connect to the database
Log in to Firebase with our credentials. The fake-looking credentials are working credentials. Non-authenticated users cannot read or write data. This function must be executed before firebasePush().
Step1: Analyse already evaluated components | Python Code:
firebase = pyrebase.initialize_app(config)
auth = firebase.auth()
uid = ""
password = ""
user = auth.sign_in_with_email_and_password(uid, password)
db = firebase.database() # reference to the database service
def firebaseRefresh():
global user
user = auth.refresh(user['refreshToken'])
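# Hypothetical sketch of the push helper mentioned in the description (firebasePush);
# the path and payload names are illustrative, not from the original notebook.
def firebasePush(path, payload):
    firebaseRefresh()                              # refresh the ID token before writing
    db.child(path).push(payload, user['idToken'])  # authenticated write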
Explanation: Connect to the database
Log in to Firebase with our credentials. The fake-looking credentials are working credentials. Non-authenticated users cannot read or write data. This function must be executed before firebasePush().
End of explanation
import unidecode
import numpy as np
import matplotlib.pyplot as plt
def plot_polarity_subjectivity(listed_name_on_database):
pol = []
sub = []
articles_of_a_newspaper = db.child(str("articles/" + listed_name_on_database)).get()
articles = articles_of_a_newspaper.val()
for article_no in range(len(articles)):
data = list(articles.items())[article_no][1]
pol.append(abs(float(data["polarity"])))
sub.append(float(data["subjectivity"]))
plt.scatter(pol,sub,[80/np.sqrt(len(pol))]*len(sub), alpha=0.7, label = listed_name_on_database)
return np.column_stack((pol, sub))
plt.clf()
plt.figure(figsize=(12, 10))
plt.title("Scatter Plot (Articles)")
websites = ["wwwchannelnewsasiacom","wwwstraitstimescom","wwwtnpsg","wwwtodayonlinecom",
"sgnewsyahoocom","sgfinanceyahoocom","stompstraitstimescom","mothershipsg",
"thehearttruthscom","wwwtremerituscom","yawningbreadwordpresscom",
"wwwtheonlinecitizencom","wwwallsingaporestuffcom","alvinologycom","berthahensonwordpresscom"]
centroid ={}
for website in websites:
data = plot_polarity_subjectivity(website)
time.sleep(0.2)
centroid[website] = np.mean(data, axis=0)
plt.legend(loc=4)
plt.xlabel("Polarity")
plt.ylabel("Subjectivity")
plt.show()
plt.clf()
plt.figure(figsize=(12, 10))
plt.title("Centroids (Sources)")
mothershipsg = centroid["wwwchannelnewsasiacom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwchannelnewsasiacom")
#plt.annotate("wwwchannelnewsasiacom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["wwwstraitstimescom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwstraitstimescom")
#plt.annotate("wwwstraitstimescom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["wwwtnpsg"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwtnpsg")
#plt.annotate("wwwtnpsg",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["wwwtodayonlinecom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwtodayonlinecom")
#plt.annotate("wwwtodayonlinecom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["mothershipsg"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="mothership")
#plt.annotate("mothership",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["sgnewsyahoocom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="sgnewsyahoocom")
#plt.annotate("sgnewsyahoocom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["sgfinanceyahoocom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="sgfinanceyahoocom")
#plt.annotate("sgfinanceyahoocom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["stompstraitstimescom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="stompstraitstimescom")
#plt.annotate("stompstraitstimescom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["alvinologycom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="alvinologycom")
#plt.annotate("alvinologycom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["wwwallsingaporestuffcom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwallsingaporestuffcom")
#plt.annotate("wwwallsingaporestuffcom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["wwwtheonlinecitizencom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwtheonlinecitizencom")
#plt.annotate("wwwtheonlinecitizencom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["wwwtremerituscom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="wwwtremerituscom")
#plt.annotate("wwwtremerituscom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["thehearttruthscom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="thehearttruthscom")
#plt.annotate("thehearttruthscom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["berthahensonwordpresscom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="berthahensonwordpresscom")
#plt.annotate("berthahensonwordpresscom",(mothershipsg[0],mothershipsg[1]))
mothershipsg = centroid["yawningbreadwordpresscom"]
plt.scatter(mothershipsg[0],mothershipsg[1],label="yawningbreadwordpresscom")
#plt.annotate("yawningbreadwordpresscom",(mothershipsg[0],mothershipsg[1]))
plt.xlabel("Polarity")
plt.ylabel("Subjectivity")
plt.legend(loc=4)
plt.show()
Explanation: Analyse already evaluated components
End of explanation |
10,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample from the Gaussian Process by use of the Cholesky decomposition of the Kernel matrix
Step1: Sample from the posterior given points at (0.1, 0.0), (0.5, 1.0) | Python Code:
n_sample = 50000
u = np.random.randn(N, n_sample)
X = L.dot(u)
_ = plt.plot(X[:, np.random.permutation(n_sample)[:500]], c='k', alpha=0.05)
_ = plt.plot(X.mean(axis=1), c='k', linewidth=2)
_ = plt.plot(2*X.std(axis=1), c='r', linewidth=2)
_ = plt.plot(-2*X.std(axis=1), c='r', linewidth=2)
Explanation: Sample from the Gaussian Process by use of the Cholesky decomposition of the Kernel matrix
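This works because, for $u\sim\mathcal{N}(0,I)$ and $K=LL^T$, the sample $x=Lu$ has covariance $L\,\mathbb{E}[uu^T]\,L^T = LL^T = K$. A quick sanity check (assuming the kernel matrix factorised earlier in this notebook is named K):
assert np.allclose(L.dot(L.T), K)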
End of explanation
_ = plt.plot(x, X[:, (np.abs(X[np.where(x == 0.1)[0][0], :] - 0.0) < 0.05) &
(np.abs(X[np.where(x == 0.5)[0][0], :] -1) < 0.05)],
c='k', alpha=0.25)
Explanation: Sample from the posterior given points at (0.1, 0.0), (0.5, 1.0). Strictly speaking, this keeps only the prior samples that pass within ±0.05 of those two values, i.e. a crude rejection-sampling approximation of the GP posterior rather than the exact conditional.
End of explanation |
10,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_raw_objects
The
Step1: Continuous data is stored in objects of type
Step2: Information about the channels contained in the
Step3: You can also pass an index directly to the
Step4: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
Step5: Notice the different scalings of these types
Step6: You can restrict the data to a specific time range
Step7: And drop channels by name
Step8: Concatenating | Python Code:
from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
Explanation: .. _tut_raw_objects
The :class:Raw <mne.io.RawFIF> data structure: continuous data
End of explanation
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.RawFIF(data_path, preload=True, verbose=False)
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
Explanation: Continuous data is stored in objects of type :class:Raw <mne.io.RawFIF>.
The core data structure is simply a 2D numpy array (channels × samples,
._data) combined with an :class:Info <mne.io.meas_info.Info> object
(.info) (:ref:tut_info_objects.
The most common way to load continuous data is from a .fif file. For more
information on :ref:loading data from other formats <ch_raw>, or creating
it :ref:from scratch <tut_creating_data_structures>.
Loading continuous data
End of explanation
print('Shape of data array:', raw._data.shape)
array_data = raw._data[0, :1000]
_ = plt.plot(array_data)
Explanation: Information about the channels contained in the :class:Raw <mne.io.RawFIF>
object is contained in the :class:Info <mne.io.meas_info.Info> attribute.
This is essentially a dictionary with a number of relevant fields (see
:ref:tut_info_objects).
Indexing data
There are two ways to access the data stored within :class:Raw
<mne.io.RawFIF> objects. One is by accessing the underlying data array, and
the other is to index the :class:Raw <mne.io.RawFIF> object directly.
To access the data array of :class:Raw <mne.io.Raw> objects, use the
_data attribute. Note that this is only present if preload==True.
End of explanation
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
Explanation: You can also pass an index directly to the :class:Raw <mne.io.RawFIF>
object. This will return an array of times, as well as the data representing
those timepoints. This may be used even if the data is not preloaded:
End of explanation
# Pull all MEG gradiometer channels:
# Make sure to use copy==True or it will overwrite the data
meg_only = raw.pick_types(meg=True, copy=True)
eeg_only = raw.pick_types(meg=False, eeg=True, copy=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.pick_types(meg='grad', copy=True)
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.pick_channels(pick_chans, copy=True)
print(meg_only, eeg_only, grad_only, specific_chans, sep='\n')
Explanation: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
End of explanation
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
Explanation: Notice the different scalings of these types
End of explanation
restricted = raw.crop(5, 7) # in seconds
print('New time range from', restricted.times.min(), 's to',
restricted.times.max(), 's')
Explanation: You can restrict the data to a specific time range
End of explanation
restricted = restricted.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', raw.info['nchan'], 'to',
restricted.info['nchan'])
Explanation: And drop channels by name
End of explanation
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 100)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
Explanation: Concatenating :class:Raw <mne.io.RawFIF> objects
:class:Raw <mne.io.RawFIF> objects can be concatenated in time by using the
:func:append <mne.io.RawFIF.append> function. For this to work, they must
have the same number of channels and their :class:Info
<mne.io.meas_info.Info> structures should be compatible.
End of explanation |
10,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everithing is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
All the measurement data is in the d variable. We can print it
Step10: Or check the measurements duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step14: Donor Leakage fit
Half-Sample Mode
Fit the peak using the mode computed with the half-sample algorithm (Bickel 2005).
Step15: Gaussian Fit
Fit the histogram with a gaussian
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables | Python Code:
ph_sel_name = "all-ph"
data_id = "12d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:34:19 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
def hsm_mode(s):
Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
Explanation: Donor Leakage fit
Half-Sample Mode
Fit the peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
Explanation: KDE maximum
End of explanation
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
Explanation: Leakage summary
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst size distribution
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
10,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Les bases de la dynamique des populations
À voir
Step1: Ainsi
lorsque $\mu>\lambda$ la population croît exponentiellement
lorsque $\lambda<\mu$ la population tend exponentiellement vers 0.
On parlera de croissance (ou décroissance) malthusienne.
Lorsque $\lambda<\mu$ la population décroît exponentiellement vite vers 0 mais à tout instant fini cette population est strictement positive, pourtant si $M=10^3$ et si $x(t)$ descend en dessous de $10^{-3}$ alors $x(t)$ représentera moins d'un individu. Ce point n'est pas cohérent avec l'hypothèse de population grande et donc limite l'intérêt de ce modèle pour les petites tailles de population.
Croissance logistique
\begin{align}
\ce{X -> 2X}\hskip1em \textrm{avec inhibition}
\end{align}
En 1838,
Pierre François Verhulst (1804-1849)
proposa un modèle de croissance dont le taux de croissance diminue linéairement en fonction de la taille de la population rendant ainsi compte de la capacité maximale d'accueil du milieu.
$$
\dot x(t) = r\times\left(1-\frac{x(t)}{K}\right)\,x(t)\,,\ x(0)=x_0
$$
admet l'unique solution
Step2: Modèle de Lotka-Volterra
\begin{align}
\ce{A -> 2A} && \textrm{reproduction des proies} \
\ce{A + B -> B + \gamma B} && \textrm{prédation}\
\ce{B -> }\emptyset && \textrm{disparition des prédateurs}
\end{align}
matrice de Petersen
Step3: j'intègre l'EDO
Step4: Espace des phases
Au lieu de tracer $t\to x_1(t)$ et $t\to x_2(t)$, on trace les points $(x_1(t),x_2(t))$ lorsque $t$ varie, donc le temps n'apparait plus, il s'agit d'une courbe dans l'espace des phases. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
t0, t1 = 0, 10
temps = np.linspace(t0,t1,200, endpoint=True)
population = lambda t: x0*np.exp((rb-rd)*t)
legende = []
for x0, rb, rd in zip([1, 1, 1], [1, 1, 0.9], [0.9, 1, 1]):
plt.plot(temps, population(temps))
legende = legende + [r'$\lambda=$'+str(rb)+r', $\mu=$'+str(rd),]
plt.legend(legende, loc='upper left')
plt.show()
Explanation: Les bases de la dynamique des populations
À voir:
[Bacaer 2009] très intéressante perspective historique
[Boularas et al 2009] présentation très vivante et accessible des modèles différentiels
[Otto et Dray2007] très complet et tourné vers les biologistes
Modèles différentiels
Nous voulons modéliser l'évolution de la taille d'une population composée d'une seule espèce. Notons $n(t)$ la taille de cette population à l'instant $t$, il s'agit d'une quantité entière. Nous allons modéliser l'évolution de cette taille à des instants $t_{k}$, que nous supposerons pour simplifier équirépartis, i.e. $t_{k}=k\,h$ avec $h>0$:
<img src="./images/schema_pop.png" alt="schema_pop" style="width: 450px;"/>
Modéliser l'évolution de la taille de population consiste à définir la variation $\Delta n(t_{k})$ de cette taille entre les instants $t_{k}$ et $t_{k+1}$:
$$
n(t_{k+1})=n(t_{k})+\Delta n(t_{k})\,.
$$
On suppose donc que ces accroissements dépendent de la taille courante de la population. Il est pertinent de s'intéresse à la variation de la taille de la population par unité de temps:
$$
\frac{n(t_{k+1})-n(t_{k})}{h}=\frac{\Delta n(t_{k})}{h}\,.
$$
On fait l'hypothèse que les instants $t_{k}$ sont rapprochés, i.e. $h$ petit.
Dans l'équation précédente on fait tendre $h$ vers 0 et $k$ vers l'infini de telle sorte que $t_{k}\to t$ pour $t$ donné. On suppose aussi que $\Delta n(t_{k})$ tend vers l'infini de telle sorte que le rapport $\Delta n(t_{k})/h$ tende vers un certain $F(n(t))$:
\begin{align}\label{eqNt}
\dot n(t)=F(n(t))\,.
\end{align}
Enfin, la taille $n(t)$ de la population est supposée très grande et nous faisons le changement d'échelle suivant:
$$
x(t) := \frac{n(t)}{M}
$$
Ce changement de variable peut s'interpréter de différentes façons. Par exemple pour une population de bactéries:
$M$ peut être vu comme l'inverse de la masse d'une bactérie, alors $x(t)$ désigne la {biomasse} de la population;
$M$ peut être le volume dans lequel vit cette population, $x(t)$ est alors une densité de population;
$M$ peut être simplement un changement d'échelle, si la taille de la population est de l'ordre de $10^{9}$ individus et si $M=10^{3}$ alors $x(t)$ désignera la taille de la population de méta-individus (1 méta-individu = $10^3$ individus).
L'équation \eqref{eqNt}
devient:
$$
\frac{\dot n(t)}{M}=\frac{1}{M}\,F\Bigl(M\,\frac{n(t)}{M}\Bigr)
$$
et en posant $f(x) := \frac{1}{M}\,F(M\,x)$ on obtient l'équation différentielle ordinaire (EDO):
$$
\dot x(t)=f(x(t))\,,\ x(0)=x_{0}
$$
et son état $x(t)$ peut donc désigner la taille d'une population, sa biomasse, sa densité (nombre d'individus par unité de volume), ou bien encore sa concentration (massique ou molaire); pour simplifier nous dirons que $x(t)$ ``est'' la population; $x_{0}$ désigne la population initiale, supposée connue.
Dans beaucoup d'exemples de dynamique de population $f$ est de la forme:
$$
f(x)=r(x)\,x
$$
où $r(x)$ s'interprète comme un taux de croissance per capita (par individu). En effet si $x(t+h)=x(t)+f(x(t))$ ($h=1$ unité de temps) et si par exemple $f(x(t))=5$ il y alors eu un accroissement de 5 individus (dans l'échelle $x$) sur la période de temps $h$: est-ce grand ou petit ? Cela est relatif à la taille $x(t)$ de la population, c'est donc le rapport $\frac{f(x(t))}{x(t)}=r(x(t))$ qui importe.
Croissance exponentielle
Division céllulaire pouvant être vue comme un modèle d'ordre 1:
$\require{mhchem}$
\begin{align}
\ce{X -> 2X}
\end{align}
La première étape consiste à appréhender la croissance géométique (temps discret) et exponentielle (temps continu).
On considère une population dont la taille évolue de la façon suivante:
$$
n(t_{k+1})
= n(t_{k})
+\lambda\,n(t_{k})\,h
-\mu\,n(t_{k})\,h
$$
où $\lambda$ est le taux de naissance et $\mu$ celui de mort. Il est nécessaire
ici que l'intervalle de temps $[t_{k},t_{k+1}]$ soit suffisamment petit pour que
$n(t_{k})$ évolue peu, mais aussi suffisamment grand pour que des
événements de naissance et mort surviennent. Après changement d'échelle,
l'équation précédente devient:
$$
\dot x(t) = (\lambda-\mu)\,x(t)\,,\ x(0)=x_0
$$
taux de naissance $\lambda>0$, taux de mort $\mu>0$.
qui admet la solution explicite suivante:
$$
x(t) = x_{0}\,e^{(\lambda-\mu)\,t}\,,\quad t\geq 0\,.
$$
End of explanation
t0, t1 = 0, 10
temps = np.linspace(t0,t1,300, endpoint=True)
population = lambda t: K*1/(1+ (K/x0-1) * np.exp(-r*t))
x0, K = 1, 5
legende = []
for r in [2, 1, 0.5]:
plt.plot(temps, population(temps))
legende = legende + [r'$r=$'+str(r),]
plt.ylim([0,K*1.2])
plt.legend(legende, loc='lower right',title=r'taux de croissance $r$')
plt.plot([t0, t1], [K, K], color="k", linestyle='--')
plt.text((t1-t0)/50, K, r"$K$ (capacité d'accueil)",
verticalalignment='bottom', horizontalalignment='left')
plt.xlabel(r'temps $t$')
plt.ylabel(r'taille $x(t)$ de la population')
plt.show()
from ipywidgets import interact, fixed
def pltlogistique(x0,K,r):
population2 = K*1/(1+ (K/x0-1) * np.exp(-r*temps))
plt.plot(temps, population2)
plt.ylim([0,6])
plt.plot([t0, t1], [K, K], color="k", linestyle='--')
plt.show()
interact(pltlogistique, x0=(0.01,6,0.1), K=(0.01,6,0.1), r=(0.1,20,0.1))
plt.show()
Explanation: Ainsi
lorsque $\mu>\lambda$ la population croît exponentiellement
lorsque $\lambda<\mu$ la population tend exponentiellement vers 0.
On parlera de croissance (ou décroissance) malthusienne.
Lorsque $\lambda<\mu$ la population décroît exponentiellement vite vers 0 mais à tout instant fini cette population est strictement positive, pourtant si $M=10^3$ et si $x(t)$ descend en dessous de $10^{-3}$ alors $x(t)$ représentera moins d'un individu. Ce point n'est pas cohérent avec l'hypothèse de population grande et donc limite l'intérêt de ce modèle pour les petites tailles de population.
Croissance logistique
\begin{align}
\ce{X -> 2X}\hskip1em \textrm{avec inhibition}
\end{align}
En 1838,
Pierre François Verhulst (1804-1849)
proposa un modèle de croissance dont le taux de croissance diminue linéairement en fonction de la taille de la population rendant ainsi compte de la capacité maximale d'accueil du milieu.
$$
\dot x(t) = r\times\left(1-\frac{x(t)}{K}\right)\,x(t)\,,\ x(0)=x_0
$$
admet l'unique solution:
$$
x(t)
= K \,\frac{1}{1+\left(\frac {K}{x_{0}} - 1\right) \,e^{-r\,t}}
= \frac{1}{\frac{x_0}{K}\,\left(e^{rt} - 1\right)+1}\; x_0\,e^{r\,t}
$$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Explanation: Modèle de Lotka-Volterra
\begin{align}
\ce{A -> 2A} && \textrm{reproduction des proies} \
\ce{A + B -> B + \gamma B} && \textrm{prédation}\
\ce{B -> }\emptyset && \textrm{disparition des prédateurs}
\end{align}
matrice de Petersen:
| réaction |ordre | A | B | taux de réaction |
| --------------------------- | ---- | -- | -- | -------------------------------- |
| reproduction des proies | 1 | +1 | 0 | $k_1 [\ce A]$ |
| prédation | 2 | -1 | $\gamma$ | $k_2 [\ce A][\ce B]$ |
| disparition des prédateurs | 1 | 0 | -1 | $k_3 [\ce {B}]$ |
\begin{align}
\frac{{\rm d} [\ce A]}{{\rm d}t}&= k_1[\ce A] - k_2[\ce A][\ce B]
\
\frac{{\rm d} [\ce B]}{{\rm d}t}&= \gamma\,k_2[\ce A][\ce B]-k_3[B]
\end{align}
[1kg d'herbe ne fait pas 1kg de vache]
Il existe de nombreuses présentations de ce modèles, pour un résumé mathématique précis voir par exemple ce document PDF.
Le modèle de Lotka-Volterra représente deux populations en interaction:
des proies, de taille $x_1(t)$, ayant accès à une ressource ilimitée (non modélisée)
et des prédateurs, de taille $x_2(t)$, se nourissant de proies.
On suppose que:
en l'absence de prédateurs, la population de proies croit de façon exponentielle selon un taux $r_1$;
en l'abscence de proies, la population de prédateurs décroit de façon exponentielle selon un taux $r_2$.
On suppose que $r_1$ dépend de $x_2(t)$ et que $r_2$ dépend de $x_1(t)$:
$r_1=a-b\,x_2(t)$, où $a$ est le taux de naissance des proies en l'absence de prédateurs et $b\,x_2(t)$ est le taux de prédation que l'on suppose linéaire en $x_2(t)$;
$r_2=c\,x_1(t)-d$, où $d$ est le taux de mort des prédateurs en l'absence de proies et $c\,x_1(t)$ est le taux de naissance des prédateurs que l'on suppose linéaire en $x_1(t)$.
On obtient donc un système de deux équations différentielles couplées:
\begin{align}
\dot x_1(t) &= [a-b\,x_2(t)]\,x_1(t) \
\dot x_2(t) &= [c\,x_1(t)-d]\,x_2(t)
\end{align}
ce système n'admet pas de solution explicite, on doit faire appel à une méthode numérique.
La solution est périodique de période $\sqrt{a\,c}$.
Voir par exemple dans le SciPy Cookbook.
ici kkkkk
je fais appel aux librairies
End of explanation
a, b, c, d = 0.4, 0.002, 0.001, 0.7
def f(x, t):
x1, x2 = x
return [a * x1 - b * x1 * x2,
c * x1 * x2 - d * x2]
x0 = [600, 400]
t = np.linspace(0, 50, 250)
x_t = odeint(f, x0, t)
%matplotlib inline
plt.plot(t, x_t[:,0], label=r"proies $x_1(t)$")
plt.plot(t, x_t[:,1], label=r"prédateurs $x_2(t)$")
plt.xlabel("temps")
plt.ylabel("nombre d'animaux")
plt.legend()
plt.show()
Explanation: I integrate the ODE numerically
End of explanation
plt.plot(x_t[:,0], x_t[:,1])
plt.xlabel(r"nombre de proies $x_1(t)$")
plt.ylabel(r"nombre de prédateurs $x_2(t)$")
marker_style = dict(linestyle=':', markersize=10)
equilibre = [d/c,a/b]
plt.plot(equilibre[0], equilibre[1], marker='.', color="k")
plt.text(1.05*equilibre[0], 1.05*equilibre[1], r'$(d/c,a/b)$')
plt.xlim(300, 1300)
plt.ylim(0, 500)
plt.axis('equal') # les échelles en x et y sont égales
plt.show()
echelle = np.linspace(0.3, 0.9, 5)
couleurs = plt.cm.winter(np.linspace(0.3, 1., len(echelle)))
for v, col in zip(echelle, couleurs):
val_ini = np.multiply(v,equilibre)
X = odeint( f, val_ini, t)
plt.plot( X[:,0], X[:,1], lw=1, color=col,
label=r'$(%.f, %.f)$' % tuple(val_ini) )
x1max = plt.xlim(xmin=0)[1]
x2max = plt.ylim(ymin=0)[1]
nb_points = 20
x1 = np.linspace(0, x1max, nb_points)
x2 = np.linspace(0, x2max, nb_points)
X1 , X2 = np.meshgrid(x1, x2)
DX1, DX2 = f([X1, X2],0)
vecteurs = np.hypot(DX1, DX2) # norme du taux de croissance
vecteurs[ vecteurs == 0] = 1. # éviter la division par 0
DX1 /= vecteurs # normalisation de chaque vecteur
DX2 /= vecteurs
plt.quiver(X1, X2, DX1, DX2, vecteurs, pivot='mid', cmap=plt.cm.hot)
plt.xlabel(r"nombre de proies $x_1(t)$")
plt.ylabel(r"nombre de prédateurs $x_2(t)$")
plt.legend(title="condition initiale")
plt.grid()
plt.xlim(0, x1max)
plt.ylim(0, x2max)
plt.show()
Explanation: Phase space
Instead of plotting $t\to x_1(t)$ and $t\to x_2(t)$, we plot the points $(x_1(t),x_2(t))$ as $t$ varies; time no longer appears explicitly, and the result is a curve in the phase space.
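An added note: along each trajectory the quantity $H(x_1,x_2)= c\,x_1 - d\ln x_1 + b\,x_2 - a\ln x_2$ is conserved, which is why the orbits in the phase plane are closed curves around the equilibrium $(d/c, a/b)$.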
End of explanation |
10,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_creating_data_structures
Step1: Creating
Step2: You can also supply more extensive metadata
Step3: .. note
Step4: Creating
Step5: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape(n_events, 3) where the first column is the index
of the event, the second column is the length of the event, and the third
column is the event type.
Step6: More information about the event codes
Step7: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
Step8: Now we can create the
Step9: Creating | Python Code:
from __future__ import print_function
import mne
import numpy as np
Explanation: .. _tut_creating_data_structures:
Creating MNE-Python's data structures from scratch
End of explanation
# Create some dummy metadata
n_channels = 32
sampling_rate = 200
info = mne.create_info(32, sampling_rate)
print(info)
Explanation: Creating :class:Info <mne.Info> objects
.. note:: for full documentation on the Info object, see
:ref:tut_info_objects. See also
:ref:sphx_glr_auto_examples_io_plot_objects_from_arrays.py.
Normally, :class:mne.Info objects are created by the various
:ref:data import functions <ch_convert>.
However, if you wish to create one from scratch, you can use the
:func:mne.create_info function to initialize the minimally required
fields. Further fields can be assigned later as one would with a regular
dictionary.
The following creates the absolute minimum info structure:
End of explanation
# Names for each channel
channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']
# The type (mag, grad, eeg, eog, misc, ...) of each channel
channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']
# The sampling rate of the recording
sfreq = 1000 # in Hertz
# The EEG channels use the standard naming strategy.
# By supplying the 'montage' parameter, approximate locations
# will be added for them
montage = 'standard_1005'
# Initialize required fields
info = mne.create_info(channel_names, sfreq, channel_types, montage)
# Add some more information
info['description'] = 'My custom dataset'
info['bads'] = ['Pz'] # Names of bad channels
print(info)
Explanation: You can also supply more extensive metadata:
End of explanation
# Generate some random data
data = np.random.randn(5, 1000)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=100
)
custom_raw = mne.io.RawArray(data, info)
print(custom_raw)
Explanation: .. note:: When assigning new values to the fields of an
:class:mne.Info object, it is important that the
fields are consistent:
- The length of the channel information field `chs` must be
`nchan`.
- The length of the `ch_names` field must be `nchan`.
- The `ch_names` field should be consistent with the `name` field
of the channel information contained in `chs`.
Creating :class:Raw <mne.io.Raw> objects
To create a :class:mne.io.Raw object from scratch, you can use the
:class:mne.io.RawArray class, which implements raw data that is backed by a
numpy array. Its constructor simply takes the data matrix and
:class:mne.Info object:
End of explanation
# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch
sfreq = 100
data = np.random.randn(10, 5, sfreq * 2)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=sfreq
)
Explanation: Creating :class:Epochs <mne.Epochs> objects
To create an :class:mne.Epochs object from scratch, you can use the
:class:mne.EpochsArray class, which uses a numpy array directly without
wrapping a raw object. The array must be of shape(n_epochs, n_chans,
n_times)
End of explanation
# Create an event matrix: 10 events with a duration of 1 sample, alternating
# event codes
events = np.array([
[0, 1, 1],
[1, 1, 2],
[2, 1, 1],
[3, 1, 2],
[4, 1, 1],
[5, 1, 2],
[6, 1, 1],
[7, 1, 2],
[8, 1, 1],
[9, 1, 2],
])
Explanation: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape(n_events, 3) where the first column is the index
of the event, the second column is the length of the event, and the third
column is the event type.
End of explanation
event_id = dict(smiling=1, frowning=2)
Explanation: More information about the event codes: subject was either smiling or
frowning
End of explanation
# Trials were cut from -0.1 to 1.0 seconds
tmin = -0.1
Explanation: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
End of explanation
custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)
print(custom_epochs)
# We can treat the epochs object as we would any other
_ = custom_epochs['smiling'].average().plot()
Explanation: Now we can create the :class:mne.EpochsArray object
End of explanation
# The averaged data
data_evoked = data.mean(0)
# The number of epochs that were averaged
nave = data.shape[0]
# A comment to describe to evoked (usually the condition name)
comment = "Smiley faces"
# Create the Evoked object
evoked_array = mne.EvokedArray(data_evoked, info, tmin,
comment=comment, nave=nave)
print(evoked_array)
_ = evoked_array.plot()
Explanation: Creating :class:Evoked <mne.Evoked> Objects
If you already have data that is collapsed across trials, you may also
directly create an evoked array. Its constructor accepts an array of
shape(n_chans, n_times) in addition to some bookkeeping parameters.
End of explanation |
10,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
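For reference, a completed property cell pairs the set_id call with a set_value call. The description string below is a purely hypothetical example value, not taken from any real model documentation; only DOC.set_id and DOC.set_value come from the template above.
# Hypothetical filled-in example (illustrative value only)
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
DOC.set_value("Modal aerosol scheme with interactive emissions, coupled to the radiation and cloud schemes.")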
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and any possible conflicts with parameterization level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
10,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook will demonstrate how to do basic SuperDARN data plotting.
Step1: Remote File RTI Plots
Step2: Local File RTI Plot
You can also plot data stored in a local file. Just change the variables in the cell below.
Step3: Fan Plots
Right now we don't have plotFan setup to accept local files. But, we will add that in shortly.
Geographic Coordinates
Step4: Magnetic Coordinates
Magnetic coordinates still need a little work. For instance, high latitude continent lines don't always plot. Also, we are working on getting Simon's new AACGM system in place (http://dx.doi.org/doi/10.1002/2014JA020264). Not there yet...
Step5: Convection Plotting | Python Code:
%pylab inline
import datetime
import os
import matplotlib.pyplot as plt
from davitpy import pydarn
sTime = datetime.datetime(2008,2,22)
eTime = datetime.datetime(2008,2,23)
radar = 'bks'
beam = 7
Explanation: This notebook will demonstrate how to do basic SuperDARN data plotting.
End of explanation
#The following command will print the docstring for the plotRti routine:
#pydarn.plotting.rti.plotRti?
fig = plt.figure(figsize=(14,12)) #Define a figure with a custom size.
pydarn.plotting.rti.plotRti(sTime, radar, eTime=eTime, bmnum=beam, figure=fig)
plt.show()
#Now save as a PNG to your home folder...
home = os.getenv('HOME')
filename = os.path.join(home,'rti.png')
fig.savefig(filename)
fig.clear() #Clear the figure from memory.
Explanation: Remote File RTI Plots
End of explanation
fileName = '/tmp/sd/20080222.000000.20080223.000000.bks.fitex'
fileType = 'fitex'
radar = 'bks'
beam = 7
sTime = datetime.datetime(2008,2,22)
eTime = datetime.datetime(2008,2,23)
fig = plt.figure(figsize=(14,12)) #Define a figure with a custom size.
pydarn.plotting.rti.plotRti(sTime, radar, eTime=eTime, bmnum=beam, figure=fig, fileName=fileName,fileType=fileType)
plt.show()
fig.clear() #Clear the figure from memory.
Explanation: Local File RTI Plot
You can also plot data stored in a local file. Just change the variables in the cell below.
End of explanation
import datetime
import os
import matplotlib.pyplot as plt
from davitpy import pydarn
pydarn.plotting.fan.plotFan(datetime.datetime(2013,3,16,16,30),['fhe','fhw'],param='power',gsct=False)
Explanation: Fan Plots
Right now we don't have plotFan setup to accept local files. But, we will add that in shortly.
Geographic Coordinates
End of explanation
import datetime
import os
import matplotlib.pyplot as plt
from davitpy import pydarn
pydarn.plotting.fan.plotFan(datetime.datetime(2013,3,16,16,30),['fhe','fhw'],param='power',gsct=False,coords='mag')
Explanation: Magnetic Coordinates
Magnetic coordinates still need a little work. For instance, high latitude continent lines don't always plot. Also, we are working on getting Simon's new AACGM system in place (http://dx.doi.org/doi/10.1002/2014JA020264). Not there yet...
End of explanation
import datetime
import matplotlib.pyplot as plt
import davitpy.pydarn.plotting.plotMapGrd
from davitpy.utils import *
fig = plt.figure(figsize=(15,15))
ax = fig.add_subplot(111)
sdate = datetime.datetime(2011,4,3,4,0)
mObj = plotUtils.mapObj(boundinglat=50.,gridLabels=True, coords='mag')
mapDatObj = davitpy.pydarn.plotting.plotMapGrd.MapConv(sdate, mObj, ax)
mapDatObj.overlayMapFitVel()
mapDatObj.overlayCnvCntrs()
mapDatObj.overlayHMB()
Explanation: Convection Plotting
End of explanation |
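If you want to keep the convection plot, the same savefig pattern used for the RTI plot earlier in this notebook applies; the output path below is only an example.
# Save the convection figure to disk (example path)
import os
home = os.getenv('HOME')
fig.savefig(os.path.join(home, 'convection.png'))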
10,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
from tensorflow.python.layers.core import Dense
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, image_size], name='inputs')
targets_ = tf.placeholder(tf.float32, [None, image_size], name='targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='outputs')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.01).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
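The compressed codes are fetched above (the compressed variable) but never displayed. A minimal added sketch for visualising them, assuming the 32-unit encoding layer defined earlier; note it has to run before sess.close().
# Show the 32-dimensional codes as small 4x8 images (run before sess.close())
fig, axes = plt.subplots(nrows=1, ncols=10, figsize=(20, 2))
for code, ax in zip(compressed, axes):
    ax.imshow(code.reshape((4, 8)), cmap='Greys_r')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)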
10,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bias Evaluation for TF Javascript Model
Based on the FAT* Tutorial Measuring Unintended Bias in Text Classification Models with Real Data.
Copyright 2019 Google LLC.
SPDX-License-Identifier: Apache-2.0
Step2: Score test set with our text classification model
Using our new model, we can score the set of test comments for toxicity.
Step6: Evaluate the overall ROC-AUC
This calculates the model's performance on the entire test set using the ROC-AUC metric.
Step7: Plot a heatmap of bias metrics
Plot a heatmap of the bias metrics. Higher scores indicate better results.
* Subgroup AUC measures the ability to separate toxic and non-toxic comments for this identity.
* Negative cross AUC measures the ability to separate non-toxic comments for this identity from toxic comments from the background distribution.
* Positive cross AUC measures the ability to separate toxic comments for this identity from non-toxic comments from the background distribution. | Python Code:
!pip3 install --quiet "tensorflow>=1.11"
!pip3 install --quiet sentencepiece
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import sentencepiece
from google.colab import auth
from IPython.display import HTML, display
from sklearn import metrics
%matplotlib inline
# autoreload makes it easier to interactively work on code in imported libraries
%load_ext autoreload
%autoreload 2
# Set pandas display options so we can read more of the comment text.
pd.set_option('max_colwidth', 300)
# Seed for Pandas sampling, to get consistent sampling results
RANDOM_STATE = 123456789
auth.authenticate_user()
!mkdir -p tfjs_model
!gsutil -m cp -R gs://conversationai-public/public_models/tfjs/v1/* tfjs_model
test_df = pd.read_csv(
'https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/master/unintended_ml_bias/new_madlibber/output_data/English/intersectional_madlibs.csv')
print('test data has %d rows' % len(test_df))
madlibs_words = pd.read_csv(
'https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/master/unintended_ml_bias/new_madlibber/input_data/English/words.csv')
identity_columns = madlibs_words[madlibs_words.type=='identity'].word.tolist()
for term in identity_columns:
test_df[term] = test_df['phrase'].apply(
lambda x: bool(re.search(r'\b{}\b'.format(term), x,
flags=re.UNICODE|re.IGNORECASE)))
Explanation: Bias Evaluation for TF Javascript Model
Based on the FAT* Tutorial Measuring Unintended Bias in Text Classification Models with Real Data.
Copyright 2019 Google LLC.
SPDX-License-Identifier: Apache-2.0
End of explanation
TOXICITY_COLUMN = 'toxicity'
TEXT_COLUMN = 'phrase'
predict_fn = tf.contrib.predictor.from_saved_model(
'tfjs_model', signature_def_key='predict')
sp = sentencepiece.SentencePieceProcessor()
sp.Load('tfjs_model/assets/universal_encoder_8k_spm.model')
def progress(value, max=100):
    return HTML("""
    <progress
        value='{value}'
        max='{max}',
        style='width: 100%'
    >
        {value}
    </progress>
    """.format(value=value, max=max))
tox_scores = []
nrows = test_df.shape[0]
out = display(progress(0, nrows), display_id=True)
for offset in range(0, nrows):
out.update(progress(offset, nrows))
values = sp.EncodeAsIds(test_df[TEXT_COLUMN][offset])
tox_scores.append(predict_fn({
'values': values,
'indices': [(0, i) for i in range(len(values))],
'dense_shape': [1, len(values)]})['toxicity/probabilities'][0,1])
MODEL_NAME = 'tfjs_model'
test_df[MODEL_NAME] = tox_scores
Explanation: Score test set with our text classification model
Using our new model, we can score the set of test comments for toxicity.
End of explanation
SUBGROUP_AUC = 'subgroup_auc'
BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC = 'background_positive_subgroup_negative_auc'
BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC = 'background_negative_subgroup_positive_auc'
def compute_auc(y_true, y_pred):
try:
return metrics.roc_auc_score(y_true, y_pred)
except ValueError:
return np.nan
def compute_subgroup_auc(df, subgroup, label, model_name):
subgroup_examples = df[df[subgroup]]
return compute_auc(subgroup_examples[label], subgroup_examples[model_name])
def compute_background_positive_subgroup_negative_auc(df, subgroup, label, model_name):
    """Computes the AUC of the within-subgroup negative examples and the background positive examples."""
index = df[label] == 'toxic'
subgroup_negative_examples = df[df[subgroup] & ~index]
non_subgroup_positive_examples = df[~df[subgroup] & index]
examples = subgroup_negative_examples.append(non_subgroup_positive_examples)
return compute_auc(examples[label], examples[model_name])
def compute_background_negative_subgroup_positive_auc(df, subgroup, label, model_name):
    """Computes the AUC of the within-subgroup positive examples and the background negative examples."""
index = df[label] == 'toxic'
subgroup_positive_examples = df[df[subgroup] & index]
non_subgroup_negative_examples = df[~df[subgroup] & ~index]
examples = subgroup_positive_examples.append(non_subgroup_negative_examples)
return compute_auc(examples[label], examples[model_name])
def compute_bias_metrics_for_model(dataset,
subgroups,
model,
label_col,
include_asegs=False):
    """Computes per-subgroup metrics for all subgroups and one model."""
records = []
for subgroup in subgroups:
record = {
'subgroup': subgroup,
'subgroup_size': len(dataset[dataset[subgroup]])
}
record[SUBGROUP_AUC] = compute_subgroup_auc(
dataset, subgroup, label_col, model)
record[BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC] = compute_background_positive_subgroup_negative_auc(
dataset, subgroup, label_col, model)
record[BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC] = compute_background_negative_subgroup_positive_auc(
dataset, subgroup, label_col, model)
records.append(record)
return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True)
bias_metrics_df = compute_bias_metrics_for_model(test_df, identity_columns, MODEL_NAME, TOXICITY_COLUMN)
Explanation: Evaluate the overall ROC-AUC
This calculates the model's performance on the entire test set using the ROC-AUC metric.
End of explanation
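The cell above computes the per-subgroup bias metrics; the overall ROC-AUC mentioned in this section can be obtained with a one-liner. This added sketch assumes the toxicity column holds the string labels 'toxic'/'nontoxic', the same convention the helper functions above rely on.
# Overall ROC-AUC on the full madlibs test set (sketch)
overall_auc = metrics.roc_auc_score(test_df[TOXICITY_COLUMN] == 'toxic',
                                    test_df[MODEL_NAME])
print('Overall ROC-AUC: %.3f' % overall_auc)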
def plot_auc_heatmap(bias_metrics_results, models):
metrics_list = [SUBGROUP_AUC, BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC, BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC]
df = bias_metrics_results.set_index('subgroup')
columns = []
vlines = [i * len(models) for i in range(len(metrics_list))]
for metric in metrics_list:
for model in models:
columns.append(metric)
num_rows = len(df)
num_columns = len(columns)
fig = plt.figure(figsize=(num_columns, 0.5 * num_rows))
ax = sns.heatmap(df[columns], annot=True, fmt='.2', cbar=True, cmap='Reds_r',
vmin=0.5, vmax=1.0)
ax.xaxis.tick_top()
plt.xticks(rotation=90)
ax.vlines(vlines, *ax.get_ylim())
return ax
plot_auc_heatmap(bias_metrics_df, [MODEL_NAME])
Explanation: Plot a heatmap of bias metrics
Plot a heatmap of the bias metrics. Higher scores indicate better results.
* Subgroup AUC measures the ability to separate toxic and non-toxic comments for this identity.
* Negative cross AUC measures the ability to separate non-toxic comments for this identity from toxic comments from the background distribution.
* Positive cross AUC measures the ability to separate toxic comments for this identity from non-toxic comments from the background distribution.
End of explanation |
10,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Merge Sort
Step1: The function mergeSort is called with 4 arguments.
- The first parameter $\texttt{L}$ is the list that is to be sorted.
However, the task of $\texttt{mergeSort}$ is not to sort the entire list $\texttt{L}$ but only
the part of $\texttt{L}$ that is given as
$$\texttt{L[start:end]}$$
Step2: The function merge takes five arguments.
- L is a list,
- start is an integer such that $\texttt{start} \in {0, \cdots, \texttt{len}(L)-1 }$,
- middle is an integer such that $\texttt{middle} \in {0, \cdots, \texttt{len}(L)-1 }$,
- end is an integer such that $\texttt{end} \in {0, \cdots, \texttt{len}(L)-1 }$,
- A is a list of the same length as L.
Furthermore, the indices start, middle, and end have to satisfy the following
Step3: Testing
We import the module random in order to be able to create lists of random numbers that are then sorted.
Step4: We import the class Counter from the module collections. This module provides us with a dictionary that keeps count of
how many times an item occurs in a list.
Step5: The function isOrdered(L) checks that the list L is sorted in ascending order.
Step6: The function sameElements(L, S) checks that the lists L and S contain the same elements and, furthermore, that each
element $x$ occurring in L occurs in S the same number of times it occurs in L.
Step7: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
Step8: The predefined function sorted is a lot quicker | Python Code:
def sort(L):
A = L[:]
mergeSort(L, 0, len(L), A)
Explanation: Merge Sort: A Recursive, Array Based Implementation
The function $\texttt{sort}(L)$ sorts the list $L$ in place using <em style="color:blue">merge sort</em>.
It takes advantage of the fact that, in Python, lists are stored internally as arrays.
The function sort is a wrapper for the function mergeSort. Its sole purpose is to allocate the auxiliary array A,
which has the same size as the array holding L.
End of explanation
def mergeSort(L, start, end, A):
if end - start < 2:
return
middle = (start + end) // 2
mergeSort(L, start, middle, A)
mergeSort(L, middle, end , A)
merge(L, start, middle, end, A)
Explanation: The function mergeSort is called with 4 arguments.
- The first parameter $\texttt{L}$ is the list that is to be sorted.
However, the task of $\texttt{mergeSort}$ is not to sort the entire list $\texttt{L}$ but only
the part of $\texttt{L}$ that is given as
$$\texttt{L[start:end]}$$
- Hence, the parameters $\texttt{start}$ and $\texttt{end}$ are indices specifying the
subarray that needs to be sorted.
- The final parameter $\texttt{A}$ is used as an auxiliary array. This array is needed
as <em style="color:blue">temporary storage</em> and is required to have the same size as the list $\texttt{L}$.
End of explanation
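Because start and end delimit the region to be sorted, mergeSort can also sort just a slice of a list in place, as this small added example shows.
# Sort only L[2:5]; the rest of the list is left untouched
L = [9, 8, 3, 1, 2, 7, 0]
A = L[:]
mergeSort(L, 2, 5, A)
L            # expected: [9, 8, 1, 2, 3, 7, 0]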
def merge(L, start, middle, end, A):
A[start:end] = L[start:end]
idx1 = start
idx2 = middle
i = start
while idx1 < middle and idx2 < end:
if A[idx1] <= A[idx2]:
L[i] = A[idx1]
idx1 += 1
else:
L[i] = A[idx2]
idx2 += 1
i += 1
if idx1 < middle:
L[i:end] = A[idx1:middle]
if idx2 < end:
L[i:end] = A[idx2:end]
L = [7, 8, 11, 12, 2, 5, 3, 7, 9, 3, 2]
sort(L)
L
Explanation: The function merge takes five arguments.
- L is a list,
- start is an integer such that $\texttt{start} \in {0, \cdots, \texttt{len}(L)-1 }$,
- middle is an integer such that $\texttt{middle} \in {0, \cdots, \texttt{len}(L)-1 }$,
- end is an integer such that $\texttt{end} \in {0, \cdots, \texttt{len}(L)-1 }$,
- A is a list of the same length as L.
Furthermore, the indices start, middle, and end have to satisfy the following:
$$ 0 \leq \texttt{start} < \texttt{middle} < \texttt{end} \leq \texttt{len}(L) $$
The function assumes that the sublists L[start:middle] and L[middle:end] are already sorted.
The function merges these sublists so that when the call returns the sublist L[start:end]
is sorted. The last argument A is used as auxiliary memory.
End of explanation
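A small added illustration of these preconditions: both halves of L below are already sorted, so a single call to merge yields a fully sorted sublist.
# L[0:3] and L[3:6] are each sorted; merge combines them in place
L = [1, 4, 7, 2, 3, 9]
A = L[:]
merge(L, 0, 3, 6, A)
L            # expected: [1, 2, 3, 4, 7, 9]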
import random as rnd
Explanation: Testing
We import the module random in order to be able to create lists of random numbers that are then sorted.
End of explanation
from collections import Counter
Counter(['a', 'b', 'a', 'b', 'c', 'a'])
def demo():
L = [ rnd.randrange(1, 99+1) for n in range(1, 19+1) ]
print("L = ", L)
S = L[:]
sort(S)
print("S = ", S)
print(Counter(L))
print(Counter(S))
print(Counter(L) == Counter(S))
demo()
Explanation: We import the class Counter from the module collections. This module provides us with a dictionary that keeps count of
how many times an item occurs in a list.
End of explanation
def isOrdered(L):
for i in range(len(L) - 1):
assert L[i] <= L[i+1], f'{L} not sorted at index {i}'
Explanation: The function isOrdered(L) checks that the list L is sorted in ascending order.
End of explanation
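A quick added check that the assertion fires for an unsorted input:
# isOrdered raises an AssertionError for an unsorted list
try:
    isOrdered([3, 1, 2])
except AssertionError as e:
    print(e)     # [3, 1, 2] not sorted at index 0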
def sameElements(L, S):
assert Counter(L) == Counter(S), f'{Counter(L)} != {Counter(S)}'
Explanation: The function sameElements(L, S) checks that the lists L and S contain the same elements and, furthermore, that each
element $x$ occurring in L occurs in S the same number of times it occurs in L.
End of explanation
def testSort(n, k):
for i in range(n):
L = [ rnd.randrange(2*k) for x in range(k) ]
oldL = L[:]
sort(L)
isOrdered(L)
sameElements(oldL, L)
print('.', end='')
print()
print("All tests successful!")
%%time
testSort(100, 20000)
%%timeit
k = 1_000_000
L = [ rnd.randrange(2*k) for x in range(k) ]
sort(L)
Explanation: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
End of explanation
%%timeit
k = 1_000_000
L = [ rnd.randrange(2*k) for x in range(k) ]
S = sorted(L)
Explanation: The predefined function sorted is a lot quicker:
End of explanation |
10,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interface to statsmodels
Step1: ARMA errors
We assume that the observed data $y(t)$ follows
$$y(t)= f(t; \theta) + \epsilon(t),$$
where $f(t; \theta)$ is the logistic model solution.
Under the ARMA(1,1) noise model, the error terms $\epsilon(t)$ have 1 moving average term and 1 autoregressive term. Therefore,
$$\epsilon(t) = \rho \epsilon(t-1) + \nu(t) + \phi \nu(t-1),$$
where the white noise term $\nu(t) \overset{i.i.d.}{\sim} \mathcal{N}\left(0, \sigma \sqrt{(1 - \rho^2) / (1 + 2 \rho \phi + \phi^2)}\right)$. The noise process standard deviation is such that the marginal distribution of $\epsilon$ is,
$$\epsilon\sim\mathcal{N}(0, \sigma).$$
The ARMA(1,1) noise model is available in Pints using pints.ARMA11LogLikelihood. As before, the code below shows how to generate a time series with ARMA(1,1) noise and perform Bayesian inference using the Kalman filter provided by the statsmodels ARIMA module.
Note that, whilst we do not show how to do this, it is possible to use the score function of the statsmodels package to calculate sensitivities of the log-likelihood.
Step2: Perform Bayesian inference using statsmodels' ARIMA Kalman filter
Here, we fit an ARMA(1,1) model in a Bayesian framework. Note, this is different from the fit functionality in the statsmodels package, which estimates maximum likelihood parameter values.
Step3: Look at results.
Step4: Look at results. Note that 'sigma' will be different to the value used to generate the data, due to a different definition. | Python Code:
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
Explanation: Interface to statsmodels: ARIMA time series models
This notebook provides a short exposition of how it is possible to interface with the cornucopia of time series models provided by the statsmodels package. In this notebook, we illustrate how to fit the logistic ODE model, where the errors are described by ARIMA models.
End of explanation
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 10
rho = 0.9
phi = 0.95
## makes sigma comparable with estimate from statsmodel
errors = pints.noise.arma11(rho, phi, noise / np.sqrt((1-rho**2) / (1 + 2 * rho * phi + phi**2)), len(org_values))
values = org_values + errors
# Show the noisy data
plt.figure()
plt.plot(times, org_values)
plt.plot(times, values)
plt.xlabel('time')
plt.ylabel('y')
plt.legend(['true', 'observed'])
plt.show()
Explanation: ARMA errors
We assume that the observed data $y(t)$ follows
$$y(t)= f(t; \theta) + \epsilon(t),$$
where $f(t; \theta)$ is the logistic model solution.
Under the ARMA(1,1) noise model, the error terms $\epsilon(t)$ have 1 moving average term and 1 autoregressive term. Therefore,
$$\epsilon(t) = \rho \epsilon(t-1) + \nu(t) + \phi \nu(t-1),$$
where the white noise term $\nu(t) \overset{i.i.d.}{\sim} \mathcal{N}\left(0, \sigma \sqrt{(1 - \rho^2) / (1 + 2 \rho \phi + \phi^2)}\right)$. The noise process standard deviation is such that the marginal distribution of $\epsilon$ is,
$$\epsilon\sim\mathcal{N}(0, \sigma).$$
The ARMA(1,1) noise model is available in Pints using pints.ARMA11LogLikelihood. As before, the code below shows how to generate a time series with ARMA(1,1) noise and perform Bayesian inference using the Kalman filter provided by the statsmodels ARIMA module.
Note that, whilst we do not show how to do this, it is possible to use the score function of the statsmodels package to calculate sensitivities of the log-likelihood.
End of explanation
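As a cross-check of the noise scaling used above, here is a small pure-numpy sketch of ARMA(1,1) errors with the same parametrization; it only illustrates the variance formula and is not necessarily how pints.noise.arma11 is implemented internally.
def arma11_noise_sketch(rho, phi, sigma, n, seed=1):
    # Choose the white-noise std so that the marginal std of eps is approximately sigma
    rng = np.random.default_rng(seed)
    nu_std = sigma * np.sqrt((1 - rho**2) / (1 + 2 * rho * phi + phi**2))
    nu = rng.normal(0.0, nu_std, n + 1)
    eps = np.zeros(n)
    eps[0] = nu[1] + phi * nu[0]
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + nu[t + 1] + phi * nu[t]
    return eps

print(np.std(arma11_noise_sketch(0.9, 0.95, 10.0, 100_000)))  # close to 10 after a short burn-in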
from statsmodels.tsa.arima.model import ARIMA
model = toy.LogisticModel()
class ARIMALogLikelihood(pints.ProblemLogLikelihood):
def __init__(self, problem, arima_order):
super(ARIMALogLikelihood, self).__init__(problem)
self._nt = len(self._times) - 1
self._no = problem.n_outputs()
if len(arima_order) != 3:
raise ValueError("ARIMA (p, d, q) orders must be tuple of length 3.")
self._arima_order = arima_order
p = arima_order[0]
d = arima_order[1]
q = arima_order[2]
self._p = p
self._q = q
self._d = d
self._n_parameters = problem.n_parameters() + (p + q + 1) * self._no
self._m = (self._p + self._q + 1) * self._no
def __call__(self, x):
# convert x to list to make it easier to append
# nuisance params
x = x.tolist()
# p AR params; q MA params
m = self._m
# extract noise model params
parameters = x[-m:]
sol = self._problem.evaluate(x[:-m])
model = ARIMA(endog=self._values,
order=self._arima_order,
exog=sol)
# in statsmodels, parameters are variances
# rather than std. deviations, so square
sigma2 = parameters[-1]**2
parameters = parameters[:-1] + [sigma2]
# first param is trend (if model not differenced),
# second is coefficient on ODE soln
# see model.param_names
if self._d == 0:
full_params = [0, 1] + parameters
else:
full_params = [1] + parameters
return model.loglike(full_params)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = ARIMALogLikelihood(problem, arima_order=(1, 0, 1))
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, 0, 0, noise * 0.1],
[0.02, 600, 1, 1, noise * 100],
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters = np.array(real_parameters + [rho, phi, 10])
xs = [
real_parameters * 1.05,
real_parameters * 1,
real_parameters * 1.025
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
Explanation: Perform Bayesian inference using statsmodels' ARIMA Kalman filter
Here, we fit an ARMA(1,1) model in a Bayesian framework. Note, this is different from the fit functionality in the statsmodels package, which estimates maximum likelihood parameter values.
End of explanation
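The description also notes that the statsmodels score function could provide gradients of the log-likelihood. A hedged sketch of how it might be queried for the ARIMA parameters (reusing the objects built in __call__ above; gradients with respect to the ODE parameters would additionally require sensitivities of the model solution, and the exact parameter ordering should be checked against the statsmodels documentation):
def arima_loglike_score_sketch(values, sol, arima_order, full_params):
    # Gradient of the ARIMA log-likelihood with respect to its own parameter vector
    m = ARIMA(endog=values, order=arima_order, exog=sol)
    return m.score(full_params)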
# Show traces and histograms
pints.plot.trace(chains,
ref_parameters=real_parameters,
parameter_names=[r'$r$', r'$k$', r'$\rho$', r'$\phi$', r'$\sigma$'])
# Discard warm up
chains = chains[:, 2000:, :]
# Look at distribution in chain 0
pints.plot.pairwise(chains[0],
kde=False,
ref_parameters=real_parameters,
parameter_names=[r'$r$', r'$k$', r'$\rho$', r'$\phi$', r'$\sigma$'])
# Show graphs
plt.show()
Explanation: Look at results.
End of explanation
results = pints.MCMCSummary(chains=chains,
parameter_names=["r", "k", "rho", "phi", "sigma"])
print(results)
Explanation: Look at results. Note that 'sigma' will be different to the value used to generate the data, due to a different definition.
End of explanation |
10,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alignment
The align function projects 2 or more datasets with different coordinate systems into a common space. By default it uses the hyperalignment algorithm (Haxby et al, 2011), but also provides the option to use the Shared Response Model (SRM) for alignment, if preferred, via the Brain Imaging Analysis Kit (brainiak).
Alignment can be particularly useful in exploring statistical properties and/or similarities of datasets that are not in the same coordinate system (such as fMRI data from visual areas of participants watching a movie, and the movie data itself).
Alignment algorithms use linear transformations to rotate and scale your datasets so they match as best as possible. For example, take these three distinct datasets. Each has a similar shape (an S), but are scaled and rotated differently. Aligning these datasets finds the transformation that minimizes the distance between them.
<img src="https
Step1: Load your data
First, we'll load one of the sample datasets. This dataset is a list of 2 numpy arrays, each containing average brain activity (fMRI) from 18 subjects listening to the same story, fit using Hierarchical Topographic Factor Analysis (HTFA) with 100 nodes. The rows are timepoints and the columns are fMRI components.
See the full dataset or the HTFA article for more info on the data and HTFA, respectively.
Step2: Visualize unaligned data
First, we can see how the first hundred data points from two arrays in the weights data look when plotted together.
Step3: Aligning data with Hyperalignment
Next, we can align the two datasets (using hyperalignment) and visualize the aligned data. Note that the two datasets are now much more similar to each other.
Step4: Aligning data with the Shared Response Model
You may use the Shared Response Model for alignment by setting align to 'SRM'. | Python Code:
import hypertools as hyp
import numpy as np
%matplotlib inline
Explanation: Alignment
The align function projects 2 or more datasets with different coordinate systems into a common space. By default it uses the hyperalignment algorithm (Haxby et al, 2011), but also provides the option to use the Shared Response Model (SRM) for alignment, if preferred, via the Brain Imaging Analysis Kit (brainiak).
Alignment can be particularly useful in exploring statistical properties and/or similarities of datasets that are not in the same coordinate system (such as fMRI data from visual areas of participants watching a movie, and the movie data itself).
Alignment algorithms use linear transformations to rotate and scale your datasets so they match as best as possible. For example, take these three distinct datasets. Each has a similar shape (an S), but are scaled and rotated differently. Aligning these datasets finds the transformation that minimizes the distance between them.
<img src="https://github.com/ContextLab/hypertools/raw/master/docs/tutorials/img/alignment.png" width=600>
Import Hypertools
End of explanation
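Hyperalignment itself is more elaborate, but the basic idea of "find a linear transformation that makes two datasets match" can be illustrated with an ordinary Procrustes alignment; this is purely illustrative and is not what hyp.align does internally.
from scipy.spatial import procrustes

rng = np.random.RandomState(0)
pts = rng.randn(50, 2)                              # a small 2D point cloud
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
pts_transformed = 3.0 * pts.dot(R) + 1.5            # rotated, scaled and shifted copy
m1, m2, disparity = procrustes(pts, pts_transformed)
print(disparity)                                    # ~0: the clouds coincide after alignment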
data = hyp.load('weights').get_data()
Explanation: Load your data
First, we'll load one of the sample datasets. This dataset is a list of 2 numpy arrays, each containing average brain activity (fMRI) from 18 subjects listening to the same story, fit using Hierarchical Topographic Factor Analysis (HTFA) with 100 nodes. The rows are timepoints and the columns are fMRI components.
See the full dataset or the HTFA article for more info on the data and HTFA, respectively.
End of explanation
# average into two groups
group1 = np.mean(data[:17], 0)
group2 = np.mean(data[18:], 0)
# plot
geo = hyp.plot([group1[:100, :], group2[:100, :]])
Explanation: Visualize unaligned data
First, we can see how the first hundred data points from two arrays in the weights data look when plotted together.
End of explanation
aligned_data = hyp.align(data)
# average into two groups
group1 = np.mean(aligned_data[:17], 0)
group2 = np.mean(aligned_data[18:], 0)
# plot
geo = hyp.plot([group1[:100, :], group2[:100, :]])
Explanation: Aligning data with Hyperalignment
Next, we can align the two datasets (using hyperalignment) and visualize the aligned data. Note that the two datasets are now much more similar to each other.
End of explanation
aligned_data = hyp.align(data, align='SRM')
# average into two groups
group1 = np.mean(aligned_data[:17], 0)
group2 = np.mean(aligned_data[18:], 0)
# plot
geo = hyp.plot([group1[:100, :], group2[:100, :]])
Explanation: Aligning data with the Shared Response Model
You may use the Shared Response Model for alignment by setting align to 'SRM'.
End of explanation |
10,759 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Sklearn SVR - Training a SVM Regression Model with Python
| Python Code::
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error
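# The arrays X_train, X_test, y_train, y_test are assumed to already exist.
# A minimal placeholder setup for trying this snippet end-to-end (illustrative data only):
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)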
# initialise & fit model
model = SVR(C=1.5, kernel='linear')
model.fit(X_train, y_train)
# make prediction for test data
y_pred = model.predict(X_test)
# evaluate performance
print('RMSE:',mean_squared_error(y_test, y_pred, squared = False))
print('MAE:',mean_absolute_error(y_test, y_pred))
|
10,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sightline gridding
We demonstrate the gridding of selected sightlines with cygrid. This can be particularly useful if you have some high-resolution data such as QSO absorption spectra and want to get accurate foreground values from a dataset with lower angular resolution.
We start by adjusting the notebook settings.
Step1: We attempt to limit our dependencies as much as possible, but astropy and healpy need to be available on your machine if you want to re-run the calculations. We can highly recommend anaconda as a scientific python platform.
Step2: Create dummy data
The properties of the map are given by the ordering and the nside of the map. For more details, check the paper by Gorski et al. (2005).
Step3: The data are just random draws from the standard normal distribution. For the weights, we choose uniform weighting. The coordinates can be easily calculated with healpy.
Step4: The pixel size for this NPIX is
Step5: A quick look confirms that our data looks just as expected.
Step6: Gridding
We are now interested in the values of this map at a couple of given positions. It wouldn't make sense to use cygrid at all, if we were just interested in the values of the map at the given positions. Even when the positions are not exactly aligned with the HEALPix pixel centers, employing some interpolation routine would do a good job.
But let's assume that we would want to compare the values with another data set, whose angular resolution is much worse. Then it is reasonable to down-sample (i.e., lower the angular resolution by smoothing with a Gaussian kernel) our HEALPix map before extracting the sight-line values. With cygrid's sight-line gridder, this is done only for the vicinity of the requested positions, which can save a lot of computing time (only for large NSIDE, because healpy's smoothing function is very fast for small and moderate NSIDE due to the use of FFTs). cygrid would be at true advantage for most other projections, though.
In order to compare the results with healpy's smoothing routine (see below), we will use HEALPix pixel center coordinates without loss of generality.
Step7: We initiate the gridder by specifying the target sightlines.
Step8: The gridding kernel is of key importance for the entire gridding process. cygrid allows you to specify the shape of the kernel (e.g. elliptical Gaussian or tapered sinc) and its size.
In our example, we use a symmetrical Gaussian (i.e. the major and minor axis of the kernel are identical). In that case, we need to furthermore specify kernelsize_sigma, the sphere_radius up to which the kernel will be computed, and the maximum acceptable healpix resolution for which we recommend kernelsize_sigma/2.
We refer to section 3.5 of the paper ('a minimal example') for a short discussion of the kernel parameters.
Step9: After the kernel has been set, we perform the actual gridding by calling grid() with the coordinates and the data.
Step10: To get the gridded data, we simply call get_datacube().
Step11: Finally, we get a list of our gridded sightlines within the chosen aperture.
We can compare this with the healpy smoothing operation | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Explanation: Sightline gridding
We demonstrate the gridding of selected sightlines with cygrid. This can be particularly useful if you have some high-resolution data such as QSO absorption spectra and want to get accurate foreground values from a dataset with lower angular resolution.
We start by adjusting the notebook settings.
End of explanation
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import healpy as hp
from astropy.io import fits
from astropy.utils.misc import NumpyRNGContext
import cygrid
Explanation: We attempt to limit our dependencies as much as possible, but astropy and healpy need to be available on your machine if you want to re-run the calculations. We can highly recommend anaconda as a scientific python platform.
End of explanation
NSIDE = 128
NPIX = hp.nside2npix(NSIDE)
Explanation: Create dummy data
The properties of the map are given by the ordering and the nside of the map. For more details, check the paper by Gorski et al. (2005).
End of explanation
# data and weights
with NumpyRNGContext(0):
# make sure to have "predictable" random numbers
input_data = np.random.randn(NPIX)
# coordinates
theta, phi = hp.pix2ang(NSIDE, np.arange(NPIX))
lons = np.rad2deg(phi)
lats = 90. - np.rad2deg(theta)
Explanation: The data are just random draws from the standard normal distribution. For the weights, we choose uniform weighting. The coordinates can be easily calculated with healpy.
End of explanation
print('pixel size: {:.1f}"'.format(3600 * np.degrees(hp.nside2resol(NSIDE))))  # nside2resol returns radians; convert to arcsec
Explanation: The pixel size for this NPIX is:
End of explanation
hp.mollview(input_data, xsize=300)
Explanation: A quick look confirms that our data looks just as expected.
End of explanation
with NumpyRNGContext(0):
target_hpx_indices = np.random.randint(0, NPIX, 5)
theta, phi = hp.pix2ang(NSIDE,target_hpx_indices)
target_lons = np.rad2deg(phi)
target_lats = 90. - np.rad2deg(theta)
print('{:>8s} {:>8s}'.format('glon', 'glat'))
for glon, glat in zip(target_lons, target_lats):
print('{:8.4f} {:8.4f}'.format(glon, glat))
Explanation: Gridding
We are now interested in the values of this map at a couple of given positions. It wouldn't make sense to use cygrid at all, if we were just interested in the values of the map at the given positions. Even when the positions are not exactly aligned with the HEALPix pixel centers, employing some interpolation routine would do a good job.
But let's assume that we would want to compare the values with another data set, whose angular resolution is much worse. Then it is reasonable to down-sample (i.e., lower the angular resolution by smoothing with a Gaussian kernel) our HEALPix map before extracting the sight-line values. With cygrid's sight-line gridder, this is done only for the vicinity of the requested positions, which can save a lot of computing time (only for large NSIDE, because healpy's smoothing function is very fast for small and moderate NSIDE due to the use of FFTs). cygrid would be at true advantage for most other projections, though.
In order to compare the results with healpy's smoothing routine (see below), we will use HEALPix pixel center coordinates without loss of generality.
End of explanation
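For reference, simply reading off the un-smoothed map at the requested positions needs no gridding at all; a minimal nearest-pixel lookup with healpy (shown only for comparison with the smoothed sight-lines computed below) is:
# Nearest-pixel values of the *unsmoothed* map at the target positions
theta_t = np.radians(90. - target_lats)
phi_t = np.radians(target_lons)
raw_values = input_data[hp.ang2pix(NSIDE, theta_t, phi_t)]
print(raw_values)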
gridder = cygrid.SlGrid(target_lons, target_lats)
Explanation: We initiate the gridder by specifying the target sightlines.
End of explanation
kernelsize_fwhm = 1. # 1 degree
# see https://en.wikipedia.org/wiki/Full_width_at_half_maximum
kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2))
sphere_radius = 4. * kernelsize_sigma
gridder.set_kernel(
'gauss1d',
(kernelsize_sigma,),
sphere_radius,
kernelsize_sigma / 2.
)
Explanation: The gridding kernel is of key importance for the entire gridding process. cygrid allows you to specify the shape of the kernel (e.g. elliptical Gaussian or tapered sinc) and its size.
In our example, we use a symmetrical Gaussian (i.e. the major and minor axis of the kernel are identical). In that case, we need to furthermore specify kernelsize_sigma, the sphere_radius up to which the kernel will be computed, and the maximum acceptable healpix resolution for which we recommend kernelsize_sigma/2.
We refer to section 3.5 of the paper ('a minimal example') for a short discussion of the kernel parameters.
End of explanation
gridder.grid(lons, lats, input_data)
Explanation: After the kernel has been set, we perform the actual gridding by calling grid() with the coordinates and the data.
End of explanation
sightlines = gridder.get_datacube()
Explanation: To get the gridded data, we simply call get_datacube().
End of explanation
smoothed_map = hp.sphtfunc.smoothing(
input_data,
fwhm=np.radians(kernelsize_fwhm),
)
smoothed_data = smoothed_map[target_hpx_indices]
print('{:>8s} {:>8s} {:>10s} {:>10s}'.format(
'glon', 'glat', 'cygrid', 'healpy')
)
for t in zip(
target_lons, target_lats,
sightlines, smoothed_data,
):
print('{:8.4f} {:8.4f} {:10.6f} {:10.6f}'.format(*t))
Explanation: Finally, we get a list of our gridded sightlines within the chosen aperture.
We can compare this with the healpy smoothing operation:
End of explanation |
10,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Paramz Tutorial
A simple introduction into Paramz based gradient based optimization of parameterized models.
Paramz is a python based parameterized modelling framework, that handles parameterization, printing, randomizing and many other parameter based operations to be done to a parameterized model.
In this example we will make use of the rosenbrock function of scipy. We will write a paramz model calling the scipy rosen function as an objective function and its gradients and use it to show the features of Paramz.
Step1: The starting position of the rosen function is set to be
$$ x_0 = [-1,1] $$
Step2: For paramz to understand your model there is three steps involved
Step3: The class created above only holds the information about the parameters, we still have to implement the objective function to optimize over. For now the class can be instantiated but is not functional yet.
Step4: Step Two
Step5: Step Three
Step6: Model Usage
Having implemented a paramz model with its necessary functions, the whole set of functionality of paramz is available for us. We will instantiate a rosen model class to play around with.
Step7: This rosen model is a fully working parameterized model for gradient based optimization of the rosen function of scipy.
Printing and Naming
All Parameterized and Param objects are named and can be accessed by name. This ensures a cleaner model creation and printing, when big models are created. In our simple example we only have a position and the model name itself
Step8: Or use the notebook representation
Step9: Note the model just printing the shape (in the value column) of the parameters, as parameters can be any sized arrays or matrices (with arbitrary numbers of dimensions).
We can print the actual values of the parameters directly, either by programmatically assigned variable
Step10: Or by name
Step11: We can redefine the name freely, as long as it does not exist already
Step12: Now r.position will not be accessible anymore!
Step13: Setting Parameters and Automated Updates
Param objects represent the parameters for the model. We told the model in the initialization that the position parameter (re-)named pos is a parameter of the model. Thus the model will listen to changes of the parameter values and update on any changes. We will set one element of the parameter and see what happens to the model
Step14: Note that we never actually told the model to update. It listened to changes to any of its parameters and updated accordingly. This update chain is based on the hierarchy of the model structure. Specific values of parameters can be accessed through indexing, just like indexing numpy arrays. In fact Param is a derivative of ndarray and inherits all its traits. Thus, Param can be used in any calculation involved with numpy. Importantly, when using a Param parameter inside a computation, it will be returning a normal numpy array. This prevents unwanted side effects and pointer errors.
Step15: Optimization
The optimization routine for the model can be accessed by the optimize() function. A call to optimize will setup the optimizer, do the iteration through getting and setting the parameters in an optimal 'in memory' fashion. By supplying messages=1 as an optional parameter we can print the progress of the optimization itself.
Step16: To show the values of the positions itself, we directly print the Param object
Step17: We could also randomize the model by using the convenience function randomize(), on the part we want to randomize. It can be any part of the model, also the whole model can be randomized
Step18: Gradient Checking
Importantly when implementing gradient based optimization is to make sure, that the gradients implemented match the numerical gradients of the objective function. This can be achieved using the checkgrad() function in paramz. It does a triangle numerical gradient estimate around the current position of the parameter. The verbosity of the gradient checker can be adjusted using the verbose option. If verbose is False, only one bool will be returned, specifying whether the gradients check the numerical gradients or not. The option of verbose=True returns a full list of every parameter, checking each parameter individually. This can be called on each subpart of the model again.
Here we can either directly call it on the parameter
Step19: Or on the whole model (verbose or not)
Step20: Or on individual parameters, note that numpy indexing is used
Step21: Constraining Parameter Spaces
In many optimization scenarios it is necessary to constrain parameters to only take on certain ranges of values, may it be bounded in a region (between two numbers), fixed or constrained to only be positive or negative numbers. This can be achieved in paramz by applying a transformation to a parameter. For convenience the most common constraints are placed in specific functions, found by r.constrain_<tab>
Step22: The printing will contain the constraints, either directly on the object, or it lists the constraints contained within a parameter. If a parameter has multiple constraints spread across the Param object all constraints contained in the whole Param object are indicated with {<partial constraint>}
Step23: To show the individual constraints, we look at the Param object of interest directly
Step24: The constraints (and other indexed properties) are held by each parameter as a dictionary with the name. For example the constraints are held in a constraints dictionary, where the keys are the constraints, and the values are the indices this constraint refers to. You can either ask for the constraints of the whole model
Step25: Or the constraints of individual Parameterized objects
Step26: The constraints of subparts of the model are only views into the actual constaints held by the root of the model hierarchy.
Models Inside Models
The hierarchy of a Paramz model is a tree, where the nodes of the tree are Parameterized objects and the leaves are Param objects. The Model class is Parameterized itself and, thus can serve as a child itself. This opens the possibility for combining models together in a bigger model. As a simple example, we will just add two rosen models together into a single model
Step27: The keen eyed will have noticed, that we did not set any gradients in the above definition. That is because the underlying rosen models handle their gradients directly!
Step28: All options listed above are availible for this model now. No additional steps need to be taken!
Step29: To show the different ways of how constraints are displayed, we constrain different parts of the model and fix parts of it too
Step30: First, we can see, that because two models with the same name were added to dr, the framework renamed the second model to have a unique name. This only happens when two childs of one parameter share the same name. If the two childs not under the same parameter share names, it is just fine, as you can see in the name of x in both models
Step31: Or print only one model
Step32: We can showcase that constraints are mapped to each parameter directly. We can either access the constraints of the whole model directly
Step33: Or for parameters directly
Step34: Note, that the constraints are remapped to directly index the parameters locally. This directly leeds up to the in memory handling of parameters. The root node of the hierarchy holds one parameter array param_array comprising all parameters. The same goes for the gradient gradient
Step35: Each child parameter (and subsequent parameters) have their own view into the memory of the root node
Step36: When changing the param_array of a parameter it directly edits the memory of the root node. This is a big part of the optimization of paramz, as getting and setting parameters works directly in memory and does not need any python routines (such as loops or traversal) functionality.
The constraints as described above, directly index the param_array of their Parameterized or Param object. That is why the remapping exists.
This param_array has its counterpart for the optimizer, which holds the remapped parameters by the constraints. The constraints are transformation mappings, which transform model parameters param_array into optimizer parameters optimizer_array. This optimizer array is presented to the optimizer and the constraints framework handles the mapping directly.
Step37: Note, that the optimizer array does only contain three values. This is because the first element of the the first rosen model is fixed and is not presented to the optimizer. The transformed gradients can be computed by the root node directly | Python Code:
import paramz, numpy as np
from scipy.optimize import rosen_der, rosen
Explanation: Paramz Tutorial
A simple introduction into Paramz based gradient based optimization of parameterized models.
Paramz is a python based parameterized modelling framework, that handles parameterization, printing, randomizing and many other parameter based operations to be done to a parameterized model.
In this example we will make use of the rosenbrock function of scipy. We will write a paramz model calling the scipy rosen function as an objective function and its gradients and use it to show the features of Paramz.
End of explanation
x = np.array([-1,1])
Explanation: The starting position of the rosen function is set to be
$$ x_0 = [-1,1] $$
End of explanation
class Rosen(paramz.Model): # Inherit from paramz.Model to ensure all model functionality.
def __init__(self, x, name='rosen'): # Initialize the Rosen model with a numpy array `x` and name `name`.
super(Rosen, self).__init__(name=name) # Call to super to make sure the structure is set up.
self.x = paramz.Param('position', x) # setup a Param object for the position parameter.
self.link_parameter(self.x) # Tell the model that the parameter `x` exists.
Explanation: For paramz to understand your model there are three steps involved:
Step One: Initialization of the Model
Initialize your model using the __init__() function. The init function contains a call to the super class to make sure paramz can setup the model structure. Then we setup the parameters contained for this model and lastly we tell the model that we have those parameters by linking them to self.
End of explanation
r = Rosen(x)
try:
print(r)
except NotImplementedError as e:
print(e)
Explanation: The class created above only holds the information about the parameters, we still have to implement the objective function to optimize over. For now the class can be instantiated but is not functional yet.
End of explanation
class Rosen(paramz.Model):
def __init__(self, x, name='rosen'):
super(Rosen, self).__init__(name=name)
self.x = paramz.Param('position', x)
self.link_parameter(self.x)
def objective_function(self): # The function to overwrite for the framework to know about the objective to optimize
return rosen(self.x) # Call the rosenbrock function of scipy as objective function.
Explanation: Step Two: Adding the Objective Function
The optimization of a gradient based mathematical model is based on an objective function to optimize over. The paramz framework expects the objective_function to be overridden, returning the current objective of the model. It can make use of all parameters inside the model and you can rely on the parameters to be updated when the objective function is called. This function does not take any parameters.
End of explanation
class Rosen(paramz.Model):
def __init__(self, x, name='rosen'):
super(Rosen, self).__init__(name=name)
self.x = paramz.Param('position', x)
self.link_parameter(self.x)
def objective_function(self):
return self._obj
def parameters_changed(self): # Overwrite the parameters_changed function for model updates
self._obj = rosen(self.x) # Lazy evaluation of the rosen function only when there is an update
        self.x.gradient[:] = rosen_der(self.x) # Computation and storing of the gradients for the position parameter
Explanation: Step Three: Adding Update Routine for Parameter Changes
This model is now functional, except optimization. The gradients are not initialized and an optimization will stagnate, as there are no gradients to consider. The optimization of parameters requires the gradients of the parameters to be updated. For this, we provide an inversion of control based approach, in which to update parameters and set gradients of parameters. The gradients for parameters are saved in the gradient of the parameter itself. The model handles the distribution and collection of correct gradients to the optimizer itself.
To implement the parameters_changed(self) function we overwrite the function on the class. This function has the expensive bits of computation in it, as it is only being called if an update is absolutely necessary. We also compute the objective for the current parameter set and store it as a variable, so that a call to objective_function() can be done in a lazy fashion, preventing computational overhead:
End of explanation
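As a quick sanity check of this pattern (a sketch only; the expected values follow from scipy's rosen and rosen_der, and the update is triggered by assigning to the parameter, as discussed further below):
m = Rosen(np.array([0., 0.]))
m.x[:] = [-1., 1.]                      # assigning to the parameter triggers parameters_changed
print(m.objective_function())           # rosen([-1, 1]) == 4.0
print(np.allclose(m.x.gradient, rosen_der(np.array([-1., 1.]))))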
r = Rosen(x)
Explanation: Model Usage
Having implemented a paramz model with its necessary functions, the whole set of functionality of paramz is available for us. We will instantiate a rosen model class to play around with.
End of explanation
print(r)
Explanation: This rosen model is a fully working parameterized model for gradient based optimization of the rosen function of scipy.
Printing and Naming
All Parameterized and Param objects are named and can be accessed by name. This ensures a cleaner model creation and printing, when big models are created. In our simple example we only have a position and the model name itself: rosen.
End of explanation
r
Explanation: Or use the notebook representation:
End of explanation
r.x
Explanation: Note the model just printing the shape (in the value column) of the parameters, as parameters can be any sized arrays or matrices (with arbitrary numbers of dimensions).
We can print the actual values of the parameters directly, either by programmatically assigned variable
End of explanation
r.position
Explanation: Or by name:
End of explanation
r.x.name = 'pos'
r
Explanation: We can redefine the name freely, as long as it does not exist already:
End of explanation
try:
r.position
except AttributeError as v:
print("Attribute Error: " + str(v))
Explanation: Now r.position will not be accessible anymore!
End of explanation
print("Objective before change: {}".format(r._obj))
r.x[0] = 1
print("Objective after change: {}".format(r._obj))
Explanation: Setting Parameters and Automated Updates
Param objects represent the parameters for the model. We told the model in the initialization that the position parameter (re-)named pos is a parameter of the model. Thus the model will listen to changes of the parameter values and update on any changes. We will set one element of the parameter and see what happens to the model:
End of explanation
2 * r.x
Explanation: Note that we never actually told the model to update. It listened to changes to any of its parameters and updated accordingly. This update chain is based on the hierarchy of the model structure. Specific values of parameters can be accessed through indexing, just like indexing numpy arrays. In fact Param is a derivative of ndarray and inherits all its traits. Thus, Param can be used in any calculation involved with numpy. Importantly, when using a Param parameter inside a computation, it will be returning a normal numpy array. This prevents unwanted side effects and pointer errors.
End of explanation
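A tiny illustration of that last point (assuming the behaviour described above, where computations with a Param return plain arrays):
print(type(r.x))       # paramz Param object (an ndarray subclass)
print(type(2 * r.x))   # plain numpy array, as noted above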
r.x[:] = [100,5] # Set to a difficult starting position to show the messages of the optimization.
r.optimize(messages=1) # Call the optimization and show the progress.
Explanation: Optimization
The optimization routine for the model can be accessed by the optimize() function. A call to optimize will setup the optimizer, do the iteration through getting and setting the parameters in an optimal 'in memory' fashion. By supplying messages=1 as an optional parameter we can print the progress of the optimization itself.
End of explanation
r.x
Explanation: To show the values of the positions itself, we directly print the Param object:
End of explanation
np.random.seed(100)
r.randomize()
r.x
r.x.randomize()
r.x
Explanation: We could also randomize the model by using the convenience function randomize(), on the part we want to randomize. It can be any part of the model, also the whole model can be randomized:
End of explanation
r.x.checkgrad(verbose=1)
Explanation: Gradient Checking
When implementing gradient based optimization, it is important to make sure that the implemented gradients match the numerical gradients of the objective function. This can be achieved using the checkgrad() function in paramz. It does a triangle numerical gradient estimate around the current position of the parameter. The verbosity of the gradient checker can be adjusted using the verbose option. If verbose is False, only one bool will be returned, specifying whether the gradients match the numerical gradients or not. The option of verbose=True returns a full list of every parameter, checking each parameter individually. This can be called on each subpart of the model again.
Here we can either directly call it on the parameter:
End of explanation
r.checkgrad()
r.checkgrad(verbose=1)
Explanation: Or on the whole model (verbose or not):
End of explanation
r.x[[0]].checkgrad(verbose=1)
Explanation: Or on individual parameters, note that numpy indexing is used:
End of explanation
r.x[[0]].constrain_bounded(-10,-1)
r.x[[1]].constrain_positive()
Explanation: Constraining Parameter Spaces
In many optimization scenarios it is necessary to constrain parameters to only take on certain ranges of values, may it be bounded in a region (between two numbers), fixed or constrained to only be positive or negative numbers. This can be achieved in paramz by applying a transformation to a parameter. For convenience the most common constraints are placed in specific functions, found by r.constrain_<tab>:
Each parameter can be constrained individually, by subindexing the Param object or Parameterized objects as a whole. Note that indexing works like numpy indexing, so we need to make sure to keep the array structure when indexing singular elements. Next we bound $x_0$ to be constrained between $-10$ and $-1$ and $x_1$ to be constrained to only positive values:
End of explanation
r
Explanation: The printing will contain the constraints, either directly on the object, or it lists the constraints contained within a parameter. If a parameter has multiple constraints spread across the Param object all constraints contained in the whole Param object are indicated with {<partial constraint>}:
End of explanation
r.x
Explanation: To show the individual constraints, we look at the Param object of interest directly:
End of explanation
list(r.constraints.items())
Explanation: The constraints (and other indexed properties) are held by each parameter as a dictionary with the name. For example the constraints are held in a constraints dictionary, where the keys are the constraints, and the values are the indices this constraint refers to. You can either ask for the constraints of the whole model:
End of explanation
list(r.x.constraints.items())
Explanation: Or the constraints of individual Parameterized objects:
End of explanation
class DoubleRosen(paramz.Model):
def __init__(self, x1, x2, name='silly_double'):
super(DoubleRosen, self).__init__(name=name) # Call super to initiate the structure of the model
self.r1 = Rosen(x1) # Instantiate the underlying Rosen classes
self.r2 = Rosen(x2)
# Tell this model, which parameters it has. Models are just the same as parameters:
self.link_parameters(self.r1, self.r2)
def objective_function(self):
return self._obj # Lazy evaluation of the objective
def parameters_changed(self):
self._obj = self.r1._obj + self.r2._obj # Just add both objectives together to optimize both models.
Explanation: The constraints of subparts of the model are only views into the actual constraints held by the root of the model hierarchy.
Models Inside Models
The hierarchy of a Paramz model is a tree, where the nodes of the tree are Parameterized objects and the leaves are Param objects. The Model class is Parameterized itself and, thus can serve as a child itself. This opens the possibility for combining models together in a bigger model. As a simple example, we will just add two rosen models together into a single model:
End of explanation
dr = DoubleRosen(np.random.normal(size=2), np.random.normal(size=2))
Explanation: The keen eyed will have noticed, that we did not set any gradients in the above definition. That is because the underlying rosen models handle their gradients directly!
End of explanation
dr.checkgrad(verbose=1)
Explanation: All options listed above are available for this model now. No additional steps need to be taken!
End of explanation
dr.r1.constrain_negative()
dr.r1.x[[0]].fix()
dr.r2.x[[1]].constrain_bounded(-30, 5)
dr.r2.x[[0]].constrain_positive()
dr
Explanation: To show the different ways of how constraints are displayed, we constrain different parts of the model and fix parts of it too:
End of explanation
dr.r2.checkgrad(verbose=1)
Explanation: First, we can see that because two models with the same name were added to dr, the framework renamed the second model to have a unique name. This only happens when two children of one parameter share the same name. If two children not under the same parameter share names, that is just fine, as you can see in the name of x in both models: position.
Second, the constraints are displayed in curly brackets {} if they do not span all underlying parameters. If a constraint, however, spans all parameters, it is shown without curly brackets, such as -ve for the first rosen model.
We can now just like before perform all actions paramz support on this model, as well as on sub models. For example we can check the gradients of only one part of the model:
End of explanation
dr.r1
Explanation: Or print only one model:
End of explanation
print(dr.constraints)
Explanation: We can showcase that constraints are mapped to each parameter directly. We can either access the constraints of the whole model directly:
End of explanation
print(dr.r2.constraints)
Explanation: Or for parameters directly:
End of explanation
dr.param_array
Explanation: Note that the constraints are remapped to directly index the parameters locally. This leads directly to the in-memory handling of parameters. The root node of the hierarchy holds one parameter array param_array comprising all parameters. The same goes for the gradient gradient:
End of explanation
dr.r2.param_array
Explanation: Each child parameter (and subsequent parameters) have their own view into the memory of the root node:
End of explanation
print(dr.param_array)
print(dr.optimizer_array)
Explanation: When changing the param_array of a parameter it directly edits the memory of the root node. This is a big part of the optimization of paramz, as getting and setting parameters works directly in memory and does not need any python routines (such as loops or traversal).
The constraints as described above, directly index the param_array of their Parameterized or Param object. That is why the remapping exists.
This param_array has its counterpart for the optimizer, which holds the remapped parameters by the constraints. The constraints are transformation mappings, which transform model parameters param_array into optimizer parameters optimizer_array. This optimizer array is presented to the optimizer and the constraints framework handles the mapping directly.
End of explanation
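Conceptually, each constraint is just an invertible mapping between a model value and an unconstrained optimizer value; for example a positive constraint could be represented by a softplus-style transform. This is only an illustration -- the transformations actually used by paramz (e.g. its logexp transform) may differ in detail.
# Illustration only: unconstrained optimizer value <-> positive model value
def to_model_space(opt_value):
    return np.log(1. + np.exp(opt_value))      # softplus, always > 0

def to_optimizer_space(model_value):
    return np.log(np.exp(model_value) - 1.)    # inverse of the softplus above

v = 0.37
print(np.isclose(to_model_space(to_optimizer_space(v)), v))   # round trip recovers the value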
dr._transform_gradients(dr.gradient)
Explanation: Note that the optimizer array only contains three values. This is because the first element of the first rosen model is fixed and is not presented to the optimizer. The transformed gradients can be computed by the root node directly:
End of explanation |
10,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Community Detection Lab (week 2
Step1: Task 5.1. Apply Girvan-Newman method
Apply Girvan-Newman algorithm
Step2: Apply available Girvan-Newman algorithm and compare results | Python Code:
# Import python-igraph library
import igraph
from IPython.display import Image
# Note: email graph is too large for the fast execution of the Girvan-Newman method, so we use karate graph,
# which is available on github and was taken from http://www.cise.ufl.edu/research/sparse/matrices/Newman/karate.html
gname = 'karate'
with open(gname + '.ncol', 'r') as finp:
g = igraph.Graph.Read_Ncol(finp, weights=False, directed=False)
igraph.summary(g)
# Visualize the input graph into karate.png
gimg = gname + '.png'
igraph.plot(g, target=gimg)
# Show the visualization
Image(filename=gimg)
Explanation: Community Detection Lab (week 2: modularity-based detection)
Import of python-igraph library and graph loading
End of explanation
# Cut dendogram at the level, which maximizes modularity (evaluated automatically)
vdr = g.community_edge_betweenness()
# Get clusters from the dendogram
vcs = vdr.as_clustering()
def printCommuns(vcs, aname):
'''Print resulting communities
vcs - communities as the VertexClustering object
aname - name of the algorithm
'''
# Evaluate the number of detected communities (clusters) and their sizes
csizes = vcs.sizes()
# Evaluate resulting modularity
Q = vcs.recalculate_modularity()
# Show results
print("Using {} clustering '{}' graph has modularity Q={}and contains {} communities of sizes: {}"
.format(gname, aname, Q, len(csizes), ', '.join([str(sz) for sz in csizes])))
def visualizeCommuns(g, vcs, aname):
'''Visualize communities
g - the graph to be visualized
vcs - communities as the VertexClustering object
aname - name of the algorithm
return - visualization of communities on the graph
'''
# Define distinct colors for the communities
colors = ['red', 'yellow', 'blue', 'green', 'purple', 'cyan', 'black']
# Assign colors to each vertex according to the cluster
for icl, cl in enumerate(vcs):
for v in cl:
g.vs[v]['color'] = colors[icl]
# Transform algorithm name to the file prefix
fpref = '_'.join(aname.lower().split('-'))
# Visualize detected communities on the input graph
cgnimg = fpref + '_' + gname + ".png"
print(cgnimg)
igraph.plot(g, target=cgnimg) # , vertex_size=6
return Image(cgnimg)
# Show results
aname = 'Girvan-Newman'
printCommuns(vcs, aname)
visualizeCommuns(g, vcs, aname)
Explanation: Task 5.1. Apply Girvan-Newman method
Apply Girvan-Newman algorithm
End of explanation
# Get communities (clusters) corresponding to the best modularity (top level of the built hierarchy)
vcs = g.community_multilevel()
# Show results
aname = 'Louvain'
printCommuns(vcs, aname)
visualizeCommuns(g, vcs, aname)
Explanation: Apply available Girvan-Newman algorithm and compare results
End of explanation |
10,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Single amino-acid / physico-chemical properties
Step1: Linear modeling, subsampling the negative set ~20 times
Step2: Charge can predict TAD with AUC=0.88 <br> aminoacid composition with AUC=0.93 (which includes the charges and the information from hydrophobic and aromatic residues)<br>
Dipeptides
Step3: Flip the coefficients and test performance to assign importance of dipeptide 'DW'. How does that affect performance?
Step4: What if we flip the whole heatmap on the diagonal line (lower left to upper right)?
Step5: And shuffling?
Step6: Try flipping weights of every other dipeptides, one at a time and compare?
Step7: Try assigning a zero to each coefficient.
Step8: Conclutions from dipeptide segment
Step9: xgboost with single amino acid frequencies | Python Code:
# create one numpy_map array for positives and 12 for negatives
idx = positives_train
p = get_aa_frequencies(positives[idx,0])
p_train, p_filename = store_data_numpy(np.hstack(p).T, float)
# set the positive validation array
idx = positives_validation
p_valid = get_aa_frequencies(positives[idx,0])
p_valid = np.hstack(p_valid).T
# negatives. SQL indexes start with 1 and not 0
N = divisors[-1]
idxs = np.array(negatives_train)
idxs = np.vstack(np.split(idxs, N))
n_filenames = np.empty(N, dtype='O')
n_train_shape = tuple(np.insert(idxs.shape, 2, 20))
n_train = np.zeros(shape=n_train_shape, dtype=np.float)
for i in range(N):
n = get_aa_frequencies(negatives[idxs[i],0])
n_train[i], n_filenames[i] = store_data_numpy(np.hstack(n).T, float)
# set the negative validation array
idx = negatives_validation
n_valid = get_aa_frequencies(negatives[idx,0])
n_valid = np.hstack(n_valid).T
# set a proper validation set with negatives and positives
X_valid = np.vstack([n_valid, p_valid])
y_valid = np.hstack([np.zeros(n_valid.shape[0]), np.ones(p_valid.shape[0])])
Explanation: Single amino-acid / physico-chemical properties
End of explanation
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, make_scorer
aminoacid_frequencies = []
for i in range(20):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
model = LogisticRegressionCV(Cs=np.linspace(1e-4, 1e4, 50), scoring=make_scorer(roc_auc_score)).fit(X,y)
performance = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
aminoacid_frequencies.append(performance)
print('amioacid frequencies\nauc = {:.2f} +- {:.5f}'.format(np.mean(aminoacid_frequencies), np.std(aminoacid_frequencies)))
cols = ['b']*3 + ['r']*2 + ['g']*12 + ['y']*3
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.title('coefficients', fontsize=16)
plt.bar(aa, model.coef_[0], color=cols)
plt.grid(axis='y')
# frequencies
n = np.where(y==0)
p = np.where(y==1)
freqs = np.mean(X[p], axis=0) / np.mean(X[n], axis=0)
freqs = ((freqs - np.min(freqs)) / (np.max(freqs) - np.min(freqs))) - 0.5
plt.subplot(1,2,2)
plt.title('frequencies', fontsize=16)
plt.bar(aa, freqs, color=cols)
plt.grid();
# test all other physical properties
other_props = []
for prop in physical_props.columns:
#X_valid_prop = np.dot(X_valid, physical_props[prop].values).reshape(-1,1)
tmp = []
for i in range(20):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
# there is a problem with the index of physical_properties
Pidx = [i+' ' for i in aa]
X = np.dot(X, physical_props.loc[Pidx, prop].values)
performance = 1 - roc_auc_score(y, X)
tmp.append(performance)
other_props.append(tmp)
# print results
pd.DataFrame(np.array(other_props).mean(axis=1), index = physical_props.columns, columns=['roc_auc'])
# remove the temporary numpy files of aminoacid frequencies
import subprocess
for i in n_filenames + p_filename:
subprocess.call(["rm",i])
Explanation: Linear modeling, subsampling the negative set ~20 times
End of explanation
# create one numpy_map array for positives and 12 for negatives
idx = positives_train
p = get_dipeptide_frequencies(positives[idx,0])
p_train, p_filename = store_data_numpy(np.vstack(p), float)
# set the positive validation array
idx = positives_validation
p_valid = get_dipeptide_frequencies(positives[idx,0])
p_valid = np.vstack(p_valid)
# negatives. SQL indexes start with 1 and not 0
N = divisors[-1]
idxs = np.array(negatives_train)
idxs = np.vstack(np.split(idxs, N))
n_filenames = np.empty(N, dtype='O')
n_train_shape = tuple(np.insert(idxs.shape, 2, 400))
n_train = np.zeros(shape=n_train_shape, dtype=np.float)
for i in range(N):
n = get_dipeptide_frequencies(negatives[idxs[i],0])
n_train[i], n_filenames[i] = store_data_numpy(np.vstack(n), float)
# set the negative validation array
idx = negatives_validation
n_valid = get_dipeptide_frequencies(negatives[idx,0])
n_valid = np.vstack(n_valid)
# set a proper validation set with negatives and positives
X_valid = np.vstack([n_valid, p_valid])
y_valid = np.hstack([np.zeros(n_valid.shape[0]), np.ones(p_valid.shape[0])])
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, make_scorer
dipeptide_frequencies, coefficients = [], []
for i in range(20):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
model = LogisticRegressionCV(Cs=np.linspace(1e-4, 1e4, 50), scoring=make_scorer(roc_auc_score)).fit(X,y)
performance = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
dipeptide_frequencies.append(performance)
coefficients.append(model.coef_[0])
# Summarize dipeptides linear model
x = np.mean(dipeptide_frequencies)
s = np.std(dipeptide_frequencies)
print('{:.4f} +- {:.4f}'.format(x,s))
Explanation: Charge can predict TAD with AUC=0.88 <br> aminoacid composition with AUC=0.93 (which includes the charges and the information from hydrophobic and aromatic residues)<br>
Dipeptides
End of explanation
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# plot coefficients
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))
# correct orientation
im = ax1.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
# flip coefficients
reverse = lambda dipept: ''.join([i for i in reversed(dipept)]) # reverse dipeptide fx
fwd = ['DW','EW','DL','EL','DF','EF','DI','FI']
rev = [reverse(i) for i in fwd]
fwd = [np.where(dipeptides==i)[0][0] for i in fwd]
rev = [np.where(dipeptides==i)[0][0] for i in rev]
coef_fwd = [model.coef_[0][i] for i in fwd]
coef_rev = [model.coef_[0][i] for i in rev]
# perform the flipping
for i,j,k,l in zip(fwd, coef_rev, rev, coef_fwd):
model.coef_[0][i] = j
model.coef_[0][k] = l
# plot the flipped orientation
im = ax2.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax2.set_xticks(np.arange(20)+0.5)
ax2.set_yticks(np.arange(20)+0.5)
ax2.set_xticklabels(aa)
ax2.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax2);
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
print('dipeptides model auc = {:.3f}\nflipped model auc = {:.3f}\ndecrease in performance= {:.1f}%'.format(
fwd_weights, rev_weights, (1-(rev_weights/fwd_weights))*200))
Explanation: Flip the coefficients and test performance to assign importance of dipeptide 'DW'. How does that affect performance?
End of explanation
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# plot coefficients
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))
# correct orientation
im = ax1.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
# flip coefficients
reverse = lambda dipept: ''.join([i for i in reversed(dipept)]) # reverse dipeptide fx
fwd = dipeptides
rev = [reverse(i) for i in fwd]
fwd = [np.where(dipeptides==i)[0][0] for i in fwd]
rev = [np.where(dipeptides==i)[0][0] for i in rev]
coef_fwd = [model.coef_[0][i] for i in fwd]
coef_rev = [model.coef_[0][i] for i in rev]
# perform the flipping
for i,j,k,l in zip(fwd, coef_rev, rev, coef_fwd):
model.coef_[0][i] = j
model.coef_[0][k] = l
# plot the flipped orientation
im = ax2.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax2.set_xticks(np.arange(20)+0.5)
ax2.set_yticks(np.arange(20)+0.5)
ax2.set_xticklabels(aa)
ax2.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax2);
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
print('dipeptides model auc = {:.3f}\nflipped model auc = {:.3f}\ndecrease in performance= {:.1f}%'.format(
fwd_weights, rev_weights, (1-(rev_weights/fwd_weights))*200))
Explanation: What if we flip the whole heatmap on the diagonal line (lower left to upper right)?
End of explanation
# In case this cell is run more than once, reset coefficients to their original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# plot coefficients
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))
# correct orientation
im = ax1.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
# shuffle coefficients
np.random.shuffle(model.coef_[0])
# plot the shuffled coefficients
im = ax2.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax2.set_xticks(np.arange(20)+0.5)
ax2.set_yticks(np.arange(20)+0.5)
ax2.set_xticklabels(aa)
ax2.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax2);
# get performance using the shuffled coefficients
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
print('dipeptides model auc = {:.3f}\nshuffled model auc = {:.3f}\ndecrease in performance= {:.1f}%'.format(
    fwd_weights, rev_weights, (1-(rev_weights/fwd_weights))*200))
Explanation: And shuffling?
End of explanation
# store the impact on performance in this list
flipping_performances = []
for idx in dipeptides:
    # In case this cell is run more than once, reset coefficients to their original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# flip coefficients
reverse = lambda dipept: ''.join([i for i in reversed(dipept)]) # reverse dipeptide fx
fwd = [idx]
rev = [reverse(i) for i in fwd]
fwd = [np.where(dipeptides==i)[0][0] for i in fwd]
rev = [np.where(dipeptides==i)[0][0] for i in rev]
coef_fwd = [model.coef_[0][i] for i in fwd]
coef_rev = [model.coef_[0][i] for i in rev]
# perform the flipping
for i,j,k,l in zip(fwd, coef_rev, rev, coef_fwd):
model.coef_[0][i] = j
model.coef_[0][k] = l
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
flipping_performances.append(rev_weights / fwd_weights)
# convert flipping performances into performance losses
flipping_performances = (1-np.array(flipping_performances))*200
# plot coefficients
f, ax1 = plt.subplots(1, figsize=(6,5))
# correct orientation
im = ax1.pcolor(np.array(flipping_performances).reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
ax1.set_title('Impact of flipping the dipeptide on\nperformance of linear model', fontsize=16);
Explanation: Try flipping the weights of every dipeptide, one at a time, and compare the impact.
End of explanation
# store the impact on performance in this list
nulling_performances = []
for idx in range(len(dipeptides)):
    # In case this cell is run more than once, reset coefficients to their original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# zeroing the coefficient
model.coef_[0][idx] = 0
    # get performance using the zeroed coefficient
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
nulling_performances.append(rev_weights / fwd_weights)
# convert data into performance losses
nulling_performances = (1 - np.array(nulling_performances))*200
# plot coefficients
f, ax1 = plt.subplots(1, figsize=(6,5))
# correct orientation
im = ax1.pcolor(np.array(nulling_performances).reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
ax1.set_title('Impact of zeroing the dipeptide on\nperformance of linear model', fontsize=16);
Explanation: Try assigning a zero to each coefficient.
End of explanation
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import roc_auc_score, make_scorer
from xgboost import XGBClassifier
# booster parameters
param = {'max_depth': 3,
'eta': 1,
'silent': 1,
'learning_rate': 0.1,
'objective': 'binary:logistic',
'eval_metric': 'auc',
'nthread':4,
'subsample':0.8,
'booster':'gbtree',
'n_estimators': 100}
performances, models = [], []
for i in range(10):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
# Monitoring training performance
eval_set = [(X,y), (X_valid, y_valid)] # should change this to X_set
eval_metric = ["auc"] #, "logloss", "error"]
early_stopping_rounds=10
# fit model no training data
model = XGBClassifier(**param)
model.fit(X,y, eval_set=eval_set,
eval_metric=eval_metric,
early_stopping_rounds=early_stopping_rounds,
verbose=False)
y_pred = model.predict(X_valid)
# Am I overfitting? How was performance in the training data?
y_pred2 = model.predict(X)
# evaluate predictions
on_training = roc_auc_score(y, y_pred2)
on_validation = roc_auc_score(y_valid, y_pred)
performances.append([on_training, on_validation])
models.append(model)
l = len(models[0].evals_result()['validation_0']['auc'])
plt.plot(np.arange(l), models[0].evals_result()['validation_0']['auc'], label='training')
plt.plot(np.arange(l), models[0].evals_result()['validation_1']['auc'], label='validation')
plt.legend()
f, ax = plt.subplots(1, figsize=(6,5))
im = plt.pcolor(models[9].feature_importances_.reshape(20,20), cmap='hot_r')
ax.set_xticks(np.arange(20)+0.5)
ax.set_yticks(np.arange(20)+0.5)
ax.set_xticklabels(aa)
ax.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax)
Explanation: Conclusions from the dipeptide segment:
The dipeptides 'DW','EW','DL','EL','DF','EF','DI','FI' have a small impact on the overall performance.
Flipping the whole heatmap, leaving only the same-amino-acid dipeptides intact, has less effect on performance than flipping the dipeptides above.
As a control, shuffling the coefficients brings performance down by ~80%.
Flipping the dipeptides one at a time has a bigger (~2x) impact than zeroing them, especially for DV, DL, DI, EV, SV.
Only DV, DL, DI, DF, DW and, to a lesser extent, the E-X dipeptides show up to a ~0.5% decrease in performance compared to the original model.
<br><br>
Using dipeptides in an ensemble-of-trees model
End of explanation
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import roc_auc_score, make_scorer
from xgboost import XGBClassifier
# booster parameters
param = {'max_depth': 3,
'eta': 1,
'silent': 1,
'learning_rate': 0.1,
'objective': 'binary:logistic',
'eval_metric': 'auc',
'nthread':4,
'subsample':0.8,
'booster':'gbtree',
'n_estimators': 100}
performances, models = [], []
for i in range(10):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
# Monitoring training performance
eval_set = [(X,y), (X_valid, y_valid)] # should change this to X_set
eval_metric = ["auc"] #, "logloss", "error"]
early_stopping_rounds=10
# fit model no training data
model = XGBClassifier(**param)
model.fit(X,y, eval_set=eval_set,
eval_metric=eval_metric,
early_stopping_rounds=early_stopping_rounds,
verbose=False)
y_pred = model.predict(X_valid)
# Am I overfitting? How was performance in the training data?
y_pred2 = model.predict(X)
# evaluate predictions
on_training = roc_auc_score(y, y_pred2)
on_validation = roc_auc_score(y_valid, y_pred)
performances.append([on_training, on_validation])
models.append(model)
import matplotlib.gridspec as gridspec
plt.figure(figsize=(15,5))
gs = gridspec.GridSpec(1,2,
width_ratios = [1,2])
plt.subplot(gs[0])
l = len(models[0].evals_result()['validation_0']['auc'])
plt.plot(np.arange(l), models[0].evals_result()['validation_0']['auc'], label='training')
plt.plot(np.arange(l), models[0].evals_result()['validation_1']['auc'], label='validation')
plt.legend()
plt.subplot(gs[1])
plt.bar(aa, models[9].feature_importances_)
Explanation: xgboost with single amino acid frequencies
End of explanation |
10,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSE 6040, Fall 2015 [28]
Step1: Read in data
Step2: Fast implementation of the distance matrix computation
The idea is that $$||(x - c)||^2 = ||x||^2 - 2\langle x, c \rangle + ||c||^2 $$
This has many advantages.
1. The centers are fixed (during a single iteration), so only needs to compute once
2. Data points are usually sparse, but centers are not
3. If implement cleverly, we don't need to use for loops
Step3: Let's see the different in running time of the two implementations.
Step4: K-means implementation in Scipy
Actually, Python has a builtin K-means implementation in Scipy.
Scipy is a superset of Numpy, and is installed by default in many Python distributions.
Step5: Elbow method to determine a good k
Elbow method is a general rule of thumb when selecting parameters.
The idea is to that one should choose a number of clusters so that adding another cluster doesn't give much better modeling of the data
Step6: You can see that at $k=2$, there is a sharper angle.
Exercise | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: CSE 6040, Fall 2015 [28]: K-means Clustering, Part 2
Last time, we implemented the basic version of K-means. In this lecture we will explore some advanced techniques
to improve the performance of K-means.
End of explanation
df = pd.read_csv ('http://vuduc.org/cse6040/logreg_points_train.csv')
points = df.as_matrix (['x_1', 'x_2'])
labels = df['label'].as_matrix ()
n = points.shape[0]
d = points.shape[1]
k = 2
df.head()
def init_centers(X, k):
sampling = np.random.randint(0, n, k)
return X[sampling, :]
Explanation: Read in data
End of explanation
def compute_d2(X, centers):
D = np.empty((n, k))
for i in range(n):
D[i, :] = np.linalg.norm(X[i,:] - centers, axis=1) ** 2
return D
def compute_d2_fast(X, centers):
# @YOUSE: compute a length-n array, where each entry is the square of norm of a point
    first_term = np.sum(X**2, axis=1)
# @YOUSE: compute a (n * k) matrix, where entry (i,j) is the two times of inner product of row i of X and row j of centers
    second_term = 2 * X.dot(centers.T)
# @YOUSE: compute a length-k array, where each entry is the square of norm of a center
    third_term = np.sum(centers**2, axis=1)
D = np.tile(first_term, (centers.shape[0], 1)).T - second_term + np.tile(third_term, (n,1))
D[D < 0] = 0
return D
Explanation: Fast implementation of the distance matrix computation
The idea is that $$||(x - c)||^2 = ||x||^2 - 2\langle x, c \rangle + ||c||^2 $$
This has many advantages.
1. The centers are fixed (during a single iteration), so their norms only need to be computed once
2. Data points are usually sparse, but centers are not
3. If implemented cleverly, we don't need to use for loops
End of explanation
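Before timing the two implementations, a quick sanity check (a sketch, assuming compute_d2_fast above has been completed) that they agree on the same random centers:
centers_check = init_centers(points, k)
np.allclose(compute_d2(points, centers_check), compute_d2_fast(points, centers_check))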
centers = init_centers(points, k)
%timeit D = compute_d2(points, centers)
%timeit D = compute_d2_fast(points, centers)
def cluster_points(D):
return np.argmin(D, axis=1)
def update_centers(X, clustering):
centers = np.empty((k, d))
for i in range(k):
members = (clustering == i)
if any(members):
centers[i, :] = np.mean(X[members, :], axis=0)
return centers
def WCSS(D):
min_val = np.amin(D, axis=1)
return np.sum(min_val)
def has_converged(old_centers, centers):
return set([tuple(x) for x in old_centers]) == set([tuple(x) for x in centers])
def kmeans_basic(X, k):
old_centers = init_centers(X, k)
centers = init_centers(X, k)
i = 1
while not has_converged(old_centers, centers):
old_centers = centers
D = compute_d2_fast(X, centers)
clustering = cluster_points(D)
centers = update_centers(X, clustering)
print "iteration", i, "WCSS = ", WCSS(D)
i += 1
return centers, clustering
centers, clustering = kmeans_basic(points, k)
def plot_clustering_k2(centers, clustering):
df['clustering'] = clustering
sns.lmplot(data=df, x="x_1", y="x_2", hue="clustering", fit_reg=False,)
if df['clustering'][0] == 0:
colors = ['b', 'g']
else:
colors = ['g', 'b']
plt.scatter(centers[:,0], centers[:,1], s=500, c=colors, marker=u'*' )
plot_clustering_k2(centers, clustering)
Explanation: Let's see the difference in running time of the two implementations.
End of explanation
from scipy.cluster.vq import kmeans,vq
# distortion is the same as WCSS.
# It is called distortion in the Scipy document, since clustering can be used in compression.
centers, distortion = kmeans(points, k)
# vq return the clustering (assignment of group for each point)
# based on the centers obtained by the kmeans function.
# _ here means ignore the second return value
clustering, _ = vq(points, centers)
plot_clustering_k2(centers, clustering)
Explanation: K-means implementation in Scipy
Actually, Python has a builtin K-means implementation in Scipy.
Scipy builds on top of Numpy, and is installed by default in many Python distributions.
End of explanation
df_kcurve = pd.DataFrame(columns = ['k', 'distortion'])
for i in range(1,10):
_, distortion = kmeans(points, i)
df_kcurve.loc[i] = [i, distortion]
df_kcurve.plot(x="k", y="distortion")
Explanation: Elbow method to determine a good k
Elbow method is a general rule of thumb when selecting parameters.
The idea is that one should choose a number of clusters such that adding another cluster doesn't give much better modeling of the data.
End of explanation
def init_centers_kplusplus(X, k):
# @YOUSE: implement the initialization step in k-means++
# return centers: (k * d) matrix
pass
def kmeans_kplusplus(X, k):
    old_centers = init_centers(X, k)
    centers = init_centers_kplusplus(X, k)
i = 1
while not has_converged(old_centers, centers):
old_centers = centers
D = compute_d2_fast(X, centers)
clustering = cluster_points(D)
centers = update_centers(X, clustering)
print "iteration", i, "WCSS = ", WCSS(D)
i += 1
return centers, clustering
centers, clustering = kmeans_kplusplus(points, k)
plot_clustering_k2(centers, clustering)
Explanation: You can see that at $k=2$, there is a sharper angle.
Exercise: implement K-means++
K-means++ differs from K-means only in the initialization step.
In K-means, we randomly select k data points as the centers all at once.
One may have bad luck and get poor initializations where all k points are concentrated in one area.
This could lead to a bad local optimum or take a long time to converge.
The idea of K-means++ is to select more spread-out centers.
In particular, K-means++ selects k centers iteratively, one at a time.
In the first iteration, it randomly chooses a single point as the 1st center.
In the second iteration, we calculate the square distance between each point and the 1st center,
and randomly choose the 2nd center with a probability distribution proportional to this square distance.
Now suppose we have chosen $m<k$ centers; the $(m+1)$-th center is then randomly chosen
with a probability distribution proportional to the square distance from each point to its nearest chosen center.
The initialization step finishes when all k centers are chosen.
End of explanation |
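One possible completion of the initialization step (a sketch only, not the official course solution; it assumes the completed compute_d2_fast and the global n defined above): pick the first center uniformly at random, then repeatedly draw the next center with probability proportional to each point's squared distance to its nearest chosen center.
def init_centers_kplusplus(X, k):
    # start from one uniformly random data point
    centers = X[np.random.randint(0, n, 1), :]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.amin(compute_d2_fast(X, centers), axis=1)
        # draw the next center with probability proportional to d2
        next_idx = np.random.choice(n, p=d2 / d2.sum())
        centers = np.vstack([centers, X[next_idx, :]])
    return centers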
10,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions and Methods Homework
Complete the following questions
Step1: Write a function that checks whether a number is in a given range (Inclusive of high and low)
Step2: If you only wanted to return a boolean
Step3: Write a Python function that accepts a string and calculate the number of upper case letters and lower case letters.
Sample String
Step4: Write a Python function that takes a list and returns a new list with unique elements of the first list.
Sample List
Step5: Write a Python function to multiply all the numbers in a list.
Sample List
Step6: Write a Python function that checks whether a passed string is palindrome or not.
Note
Step7: Hard | Python Code:
import math
def vol(rad):
    return 4/3*math.pi*rad**3
vol(5)
l_vol = lambda rad: 4/3*math.pi*rad**3
l_vol(5)
Explanation: Functions and Methods Homework
Complete the following questions:
Write a function that computes the volume of a sphere given its radius.
End of explanation
def ran_check(num,low,high):
    if low <= num <= high:
print("it's in range!")
else:
print('it\'s out of range!')
ran_check(11,10,20)
Explanation: Write a function that checks whether a number is in a given range (Inclusive of high and low)
End of explanation
def ran_bool(num,low,high):
    return low <= num <= high
ran_bool(3,1,10)
Explanation: If you only wanted to return a boolean:
End of explanation
def up_low(s):
upper = 0
lower = 0
for i in s:
        if i.isupper():
            upper += 1
        elif i.islower():
            lower += 1
    print("No. of Upper case characters : {}".format(upper))
    print("No. of Lower case Characters : {}".format(lower))
up_low('Hello Mr. Rogers, how are you this fine Tuesday?')
Explanation: Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.
Sample String : 'Hello Mr. Rogers, how are you this fine Tuesday?'
Expected Output :
No. of Upper case characters : 4
No. of Lower case Characters : 33
If you feel ambitious, explore the Collections module to solve this problem!
End of explanation
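The explanation above suggests exploring the Collections module; a minimal sketch of that alternative (the name up_low_counter is illustrative, not part of the assignment):
from collections import Counter

def up_low_counter(s):
    # label each character and count the labels; non-letters fall into "other"
    counts = Counter("upper" if c.isupper() else "lower" if c.islower() else "other" for c in s)
    print("No. of Upper case characters : {}".format(counts["upper"]))
    print("No. of Lower case Characters : {}".format(counts["lower"]))

up_low_counter('Hello Mr. Rogers, how are you this fine Tuesday?')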
def unique_list(l):
u = []
for i in l:
if i not in u:
u.append(i)
    return u
unique_list([1,1,1,1,2,2,3,3,3,3,4,5])
Explanation: Write a Python function that takes a list and returns a new list with unique elements of the first list.
Sample List : [1,1,1,1,2,2,3,3,3,3,4,5]
Unique List : [1, 2, 3, 4, 5]
End of explanation
def multiply(numbers):
    product = numbers[0]
    for i in numbers[1:]:
        product = i * product
    return product
multiply([1,2,3,-4])
Explanation: Write a Python function to multiply all the numbers in a list.
Sample List : [1, 2, 3, -4]
Expected Output : -24
End of explanation
def palindrome(s):
    s = s.replace(' ','')
return s == s[::-1]
def palindrome(s):
x = 0
y = s[::-1]
for i in s:
if s[x] != y[x]:
return False
x += 1
return True
palindrome('hel leh')
palindrome('this is cool')
Explanation: Write a Python function that checks whether a passed string is a palindrome or not.
Note: A palindrome is a word, phrase, or sequence that reads the same backward as forward, e.g., madam or nurses run.
End of explanation
import string
def ispangram(str1, alphabet=string.ascii_lowercase):
    # lowercase the input so the check is case-insensitive
    str1 = str1.lower()
    for i in alphabet:
        if i not in str1:
            return False
    return True
ispangram("The quick brown fox jumps over the lazy dog")
ispangram('i still dont know')
string.ascii_lowercase
b = {'a','b','c','c'}
b
Explanation: Hard:
Write a Python function to check whether a string is a pangram or not.
Note : Pangrams are words or sentences containing every letter of the alphabet at least once.
For example : "The quick brown fox jumps over the lazy dog"
Hint: Look at the string module
End of explanation |
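A set-based alternative, which the stray set experiment above hints at (a sketch; ispangram_set is an illustrative name, not part of the assignment):
import string

def ispangram_set(str1, alphabet=string.ascii_lowercase):
    # a pangram must contain every letter of the alphabet at least once
    return set(alphabet) <= set(str1.lower())

ispangram_set("The quick brown fox jumps over the lazy dog")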
10,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
View in Colaboratory
Step1: Variables
TensorFlow variables are useful to store the state in your program. They are integrated with other parts of the API (taking gradients, checkpointing, graph functions).
Step2: Layers
Step3: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),
Conv2D, LSTM, BatchNormalization, Dropout, and many others.
Models
Step4: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential
Step5: Exercise!
Make a simple convolutional neural network model, useful for things such as MNIST which don't need too many parameters. A sequence of two or three convolutions with small output channels (say, 32 and 64) plus one or two fully connected layers is probably enough.
The input shape should be [batch_size, 28, 28, 1].
Step13: Stop here for now
Training
When eager execution is enabled, you can write Pythonic training loops. Simply
load your data into a tf.data.Dataset, which lets you construct functional pipelines for processing, shuffling, and batching your data,
iterate over the dataset using a Python for loop, and
perform an optimization step in the body of your for loop.
This workflow is exemplified in the following exercise.
Exercise!
In this exercise, you'll train the convolutional model you implemented for the previous exercise on the MNIST dataset.
Step14: Fill in the implementation of train_one_epoch below and run the cell to train your model.
Step15: Run the below cell to qualitatively evaluate your model. Note how eager execution interoperates seamlessly with matplotlib.
Step16: Profiling
If you want to drill down into the performance characteristics of your code, you can use native Python profilers like cProfile. In the next exercise, you'll do just that.
Exercise!
This exercise does not require coding. If you have not completed the training exercise, replace train_one_epoch below with _train_one_epoch.
Run the below cell and inspect the printed profiles. What parts of the code appear to be hotspots or
bottlenecks? How does sorting the profile by total time compare to sorting it
by cumulative time? | Python Code:
import tensorflow as tf
tf.enable_eager_execution()
tfe = tf.contrib.eager
Explanation: View in Colaboratory
End of explanation
# Creating variables
v = tfe.Variable(1.0)
v
v.assign_add(1.0)
v
Explanation: Variables
TensorFlow variables are useful to store the state in your program. They are integrated with other parts of the API (taking gradients, checkpointing, graph functions).
End of explanation
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
layer(tf.zeros([2, 2]))
layer.variables
Explanation: Layers: common sets of useful operations
Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers either from scratch or as the composition of existing layers.
TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
End of explanation
class ResnetIdentityBlock(tf.keras.Model):
def __init__(self, kernel_size, filters):
super(ResnetIdentityBlock, self).__init__(name='')
filters1, filters2, filters3 = filters
self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
self.bn2a = tf.keras.layers.BatchNormalization()
self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
self.bn2b = tf.keras.layers.BatchNormalization()
self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
self.bn2c = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv2a(input_tensor)
x = self.bn2a(x, training=training)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = self.bn2b(x, training=training)
x = tf.nn.relu(x)
x = self.conv2c(x)
x = self.bn2c(x, training=training)
x += input_tensor
return tf.nn.relu(x)
block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.variables])
Explanation: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),
Conv2D, LSTM, BatchNormalization, Dropout, and many others.
Models: composing layers
Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut.
The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.
End of explanation
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(2, 1,
padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(3, (1, 1)),
tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
Explanation: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential
End of explanation
# TODO: Implement a convolutional model as described above, and assign it to
# model.
model = tf.keras.Sequential([
])
#@title Click to see the answer
max_pool = tf.keras.layers.MaxPooling2D(
(2, 2), (2, 2), padding='same')
# The model consists of a sequential chain of layers, so tf.keras.Sequential
# (a subclass of tf.keras.Model) makes for a compact description.
model = tf.keras.Sequential(
[
tf.keras.layers.Conv2D(
32,
5,
padding='same',
activation=tf.nn.relu),
max_pool,
tf.keras.layers.Conv2D(
64,
5,
padding='same',
activation=tf.nn.relu),
max_pool,
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Dense(10)
])
model(tf.zeros([1, 28, 28, 1]))
Explanation: Exercise!
Make a simple convolutional neural network model, useful for things such as MNIST which don't need too many parameters. A sequence of two or three convolutions with small output channels (say, 32 and 64) plus one or two fully connected layers is probably enough.
The input shape should be [batch_size, 28, 28, 1].
End of explanation
#@title Utilities for downloading MNIST data (double-click to show code)
import gzip
import os
import tempfile
from six.moves import urllib
import shutil
import numpy as np
def read32(bytestream):
  """Read 4 bytes from bytestream as an unsigned 32-bit integer."""
dt = np.dtype(np.uint32).newbyteorder('>')
return np.frombuffer(bytestream.read(4), dtype=dt)[0]
def check_image_file_header(filename):
  """Validate that filename corresponds to images for the MNIST dataset."""
with tf.gfile.Open(filename, 'rb') as f:
magic = read32(f)
read32(f) # num_images, unused
rows = read32(f)
cols = read32(f)
if magic != 2051:
raise ValueError('Invalid magic number %d in MNIST file %s' % (magic,
f.name))
if rows != 28 or cols != 28:
raise ValueError(
'Invalid MNIST file %s: Expected 28x28 images, found %dx%d' %
(f.name, rows, cols))
def check_labels_file_header(filename):
  """Validate that filename corresponds to labels for the MNIST dataset."""
with tf.gfile.Open(filename, 'rb') as f:
magic = read32(f)
read32(f) # num_items, unused
if magic != 2049:
raise ValueError('Invalid magic number %d in MNIST file %s' % (magic,
f.name))
def download(directory, filename):
  """Download (and unzip) a file from the MNIST dataset if not already done."""
filepath = os.path.join(directory, filename)
if tf.gfile.Exists(filepath):
return filepath
if not tf.gfile.Exists(directory):
tf.gfile.MakeDirs(directory)
# CVDF mirror of http://yann.lecun.com/exdb/mnist/
url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz'
_, zipped_filepath = tempfile.mkstemp(suffix='.gz')
print('Downloading %s to %s' % (url, zipped_filepath))
urllib.request.urlretrieve(url, zipped_filepath)
with gzip.open(zipped_filepath, 'rb') as f_in, \
tf.gfile.Open(filepath, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
os.remove(zipped_filepath)
return filepath
def dataset(directory, images_file, labels_file):
Download and parse MNIST dataset.
images_file = download(directory, images_file)
labels_file = download(directory, labels_file)
check_image_file_header(images_file)
check_labels_file_header(labels_file)
def decode_image(image):
# Normalize from [0, 255] to [0.0, 1.0]
image = tf.decode_raw(image, tf.uint8)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [28, 28, 1])
return image / 255.0
def decode_label(label):
label = tf.decode_raw(label, tf.uint8) # tf.string -> [tf.uint8]
label = tf.reshape(label, []) # label is a scalar
return tf.to_int32(label)
images = tf.data.FixedLengthRecordDataset(
images_file, 28 * 28, header_bytes=16).map(decode_image)
labels = tf.data.FixedLengthRecordDataset(
labels_file, 1, header_bytes=8).map(decode_label)
return tf.data.Dataset.zip((images, labels))
def get_training_data(directory):
  """tf.data.Dataset object for MNIST training data."""
return dataset(directory, 'train-images-idx3-ubyte',
'train-labels-idx1-ubyte').take(1024)
def get_test_data(directory):
  """tf.data.Dataset object for MNIST test data."""
return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte')
# Don't forget to run the cell above!
training_data = get_training_data("/tmp/mnist/train")
test_data = get_test_data("/tmp/mnist/test")
Explanation: Stop here for now
Training
When eager execution is enabled, you can write Pythonic training loops. Simply
load your data into a tf.data.Dataset, which lets you construct functional pipelines for processing, shuffling, and batching your data,
iterate over the dataset using a Python for loop, and
perform an optimization step in the body of your for loop.
This workflow is exemplified in the following exercise.
Exercise!
In this exercise, you'll train the convolutional model you implemented for the previous exercise on the MNIST dataset.
End of explanation
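Before the exercise, a minimal sketch of the generic pattern just described (not the exercise solution; it assumes a model, an optimizer, a loss_fn and a tf.data.Dataset already exist):
def run_one_epoch_sketch(dataset, model, optimizer, loss_fn):
  # 1. iterate over a tf.data.Dataset with a plain Python for loop
  for x, y in dataset.shuffle(buffer_size=1000).batch(32):
    # 2. record the forward pass so gradients can be taken
    with tf.GradientTape() as tape:
      loss = loss_fn(model(x, training=True), y)
    # 3. one optimization step on the model's variables
    grads = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(zip(grads, model.variables))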
EPOCHS = 5
optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.5)
def loss_fn(logits, labels):
return tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=tf.squeeze(logits), labels=labels))
def train_one_epoch(model, training_data, optimizer):
# TODO: Implement an optimization step and return the average loss.
#
# Hint: Use `tf.GradientTape` to compute the gradient of the loss, and use
# `optimizer.apply_gradients` to update the model's variables, which are
# accessible as `model.variables`
average_loss = tfe.metrics.Mean('loss')
for images, labels in training_data.shuffle(buffer_size=10000).batch(64):
pass
return average_loss.result()
for epoch in range(EPOCHS):
loss = train_one_epoch(model, training_data, optimizer)
print("Average loss after epoch %d: %.4f" % (epoch, loss))
#@title Double-click to see a solution.
EPOCHS = 5
optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.5)
def _loss_fn(logits, labels):
return tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=tf.squeeze(logits), labels=labels))
def _train_one_epoch(model, training_data):
average_loss = tfe.metrics.Mean("loss")
for images, labels in training_data.shuffle(buffer_size=10000).batch(64):
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = _loss_fn(logits, labels)
average_loss(loss)
gradients = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(gradients, model.variables))
return average_loss.result()
for epoch in range(EPOCHS):
loss = _train_one_epoch(model, training_data)
print("Average loss after epoch %d: %.4f" % (epoch, loss))
Explanation: Fill in the implementation of train_one_epoch below and run the cell to train your model.
End of explanation
import matplotlib.pyplot as plt
sampled_data = test_data.batch(1).shuffle(buffer_size=10000).take(5)
for image, label in sampled_data:
plt.figure()
plt.imshow(tf.reshape(image, (28, 28)))
plt.show()
logits = model(image, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int64)
print("Prediction: %d" % prediction)
Explanation: Run the below cell to qualitatively evaluate your model. Note how eager execution interoperates seamlessly with matplotlib.
End of explanation
import cProfile
import pstats
cProfile.run("train_one_epoch(model, training_data, optimizer)", "training_profile")
stats = pstats.Stats("training_profile").strip_dirs().sort_stats("tottime")
stats.print_stats(10)
stats.sort_stats("cumtime").print_stats(10)
Explanation: Profiling
If you want to drill down into the performance characteristics of your code, you can use native Python profilers like cProfile. In the next exercise, you'll do just that.
Exercise!
This exercise does not require coding. If you have not completed the training exercise, replace train_one_epoch below with _train_one_epoch.
Run the below cell and inspect the printed profiles. What parts of the code appear to be hotspots or
bottlenecks? How does sorting the profile by total time compare to sorting it
by cumulative time?
End of explanation |
10,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Churn Predictive Analytics using Amazon SageMaker and Snowflake
Background
The purpose of this lab is to demonstrate the basics of building an advanced analytics solution using Amazon SageMaker on data stored in Snowflake. In this notebook we will create a customer churn analytics solution by training an XGBoost churn model, and batching churn prediction scores into a data warehouse.
(Need to update) This notebook extends one of the example tutorial notebooks
Step1: Now let's set the S3 bucket and prefix that you want to use for training and model data. This bucket should be created within the same region as the Notebook Instance, training, and hosting.
Replace <<'REPLACE WITH YOUR BUCKET NAME'>> with the name of your bucket.
Step2: Data
Mobile operators have historical records on which customers ultimately ended up churning and which continued using the service. We can use this historical information to construct an ML model of one mobile operator’s churn using a process called training. After training the model, we can pass the profile information of an arbitrary customer (the same profile information that we used to train the model) to the model, and have the model predict whether this customer is going to churn. Of course, we expect the model to make mistakes–after all, predicting the future is tricky business! But I’ll also show how to deal with prediction errors.
The dataset we use is publicly available and was mentioned in the book Discovering Knowledge in Data by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. In the previous steps, this dataset was loaded into the CUSTOMER_CHURN table in your Snowflake instance.
Provide the connection and credentials required to connect to your Snowflake account. You'll need to modify the cell below with the appropriate ACCOUNT for your Snowflake trial. If you followed the lab guide instructions, the username and password below will work.
NOTE
Step4: Explore
Now we can run queries against your database.
However, in practice, the data table will often contain more data than what is practical to operate on within a notebook instance, or relevant attributes are spread across multiple tables. Being able to run SQL queries and loading the data into a pandas dataframe will be helpful during the initial stages of development. Check out the Spark integration for a fully scalable solution. Snowflake Connector for Spark
Step5: By modern standards, it’s a relatively small dataset, with only 3,333 records, where each record uses 21 attributes to describe the profile of a customer of an unknown US mobile operator. The attributes are
Step6: We can see immediately that
Step7: Next let's look at the relationship between each of the features and our target variable.
Step8: Interestingly we see that churners appear
Step9: We see several features that essentially have 100% correlation with one another. Including these feature pairs in some machine learning algorithms can create catastrophic problems, while in others it will only introduce minor redundancy and bias. Let's remove one feature from each of the highly correlated pairs
Step10: Now that we've cleaned up our dataset, let's determine which algorithm to use. As mentioned above, there appear to be some variables where both high and low (but not intermediate) values are predictive of churn. In order to accommodate this in an algorithm like linear regression, we'd need to generate polynomial (or bucketed) terms. Instead, let's attempt to model this problem using gradient boosted trees. Amazon SageMaker provides an XGBoost container that we can use to train in a managed, distributed setting, and then host as a real-time prediction endpoint. XGBoost uses gradient boosted trees which naturally account for non-linear relationships between features and the target variable, as well as accommodating complex interactions between features.
Amazon SageMaker XGBoost can train on data in either a CSV or LibSVM format. For this example, we'll stick with CSV. It should
Step11: And now let's split the data into training, validation, and test sets. This will help prevent us from overfitting the model, and allow us to test the models accuracy on data it hasn't already seen.
Step12: Now we'll upload these files to S3.
Step13: Train
Moving onto training, first we'll need to specify the locations of the XGBoost algorithm containers.
Step14: Then, because we're training with the CSV file format, we'll create s3_inputs that our training function can use as a pointer to the files in S3.
Step15: Now, we can specify a few parameters like what type of training instances we'd like to use and how many, as well as our XGBoost hyperparameters. A few key hyperparameters are
Step16: Compile
Amazon SageMaker Neo optimizes models to run up to twice as fast, with no loss in accuracy. When calling compile_model() function, we specify the target instance family (c5) as well as the S3 bucket to which the compiled model would be stored.
Step17: Batch Inference
Next we're going to evaluate our model by using a Batch Transform to generate churn scores in batch from our model_data.
First, we upload the model data to S3. SageMaker Batch Transform is designed to run asynchronously and ingest input data from S3. This differs from SageMaker's real-time inference endpoints, which receive input data from synchronous HTTP requests.
For large scale deployments the data set will be retrieved from Snwoflake using SQL and an External Stage to S3.
Batch Transform is often the ideal option for advanced analytics use case for serveral reasons
Step18: Batch transform jobs run asynchronously, and are non-blocking by default. Run the command below to block until the batch job completes.
Step19: There are many ways to compare the performance of a machine learning model, but let's start by simply by comparing actual to predicted values. In this case, we're simply predicting whether the customer churned (1) or not (0), which produces a simple confusion matrix.
Step20: Upload Churn Score to Snowflake
To be able to allow multiple business users and dashboards simple access to the churn scores we will upload it to Snowflake by using a Snowflake internal stage. | Python Code:
import boto3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import io
import os
import sys
import time
import json
from IPython.display import display
from time import strftime, gmtime
import sagemaker
from sagemaker.predictor import csv_serializer
from sagemaker import get_execution_role
sess = sagemaker.Session()
role = get_execution_role()
region = boto3.Session().region_name
print("IAM role ARN: {}".format(role))
Explanation: Churn Predictive Analytics using Amazon SageMaker and Snowflake
Background
The purpose of this lab is to demonstrate the basics of building an advanced analytics solution using Amazon SageMaker on data stored in Snowflake. In this notebook we will create a customer churn analytics solution by training an XGBoost churn model, and batching churn prediction scores into a data warehouse.
(Need to update) This notebook extends one of the example tutorial notebooks: Customer Churn Prediction with XGBoost. The extended learning objectives are highlighted in bold below.
Learning Objectives
Learn how to query ground truth data from our data warehouse into a pandas dataframe for exploration and feature engineering.
Train an XGBoost model to perform churn prediction.
Learn how to run a Batch Transform job to calculate churn scores in batch.
Optimize your model using SageMaker Neo.
Upload the Churn Score results back to Snowflake to perform basic analysis.
Prerequisites
In summary:
- You've built the lab environment using this CloudFormation template. This template installs the Snowflake python connector within your Jupyter instance.
- You've taken note of the Snowflake credentials in the lab guide.
- This notebook should be running in your default VPC.
- Snowflake traffic uses port 443.
Setup
Run the cell below to import Python libraries required by this notebook.
The IAM role arn used to give training and hosting access to your data. By default, we'll use the IAM permissions that have been allocated to your notebook instance. The role should have the permissions to access your S3 bucket, and full execution permissions on Amazon SageMaker. In practice, you could minimize the scope of requried permissions.
End of explanation
#bucket = 'snowflake-sagemaker-workshop'
bucket = '<REPLACE WITH YOUR BUCKET NAME>'
prefix = 'churn-analytics-lab'
Explanation: Now let's set the S3 bucket and prefix that you want to use for training and model data. This bucket should be created within the same region as the Notebook Instance, training, and hosting.
Replace <<'REPLACE WITH YOUR BUCKET NAME'>> with the name of your bucket.
End of explanation
import snowflake.connector
# Connecting to Snowflake using the default authenticator
ctx = snowflake.connector.connect(
user='sagemaker',
password='AWSSF123',
account='<ACCOUNT>',
warehouse='SAGEMAKER_WH',
database='ML_WORKSHOP',
schema='PUBLIC'
)
Explanation: Data
Mobile operators have historical records on which customers ultimately ended up churning and which continued using the service. We can use this historical information to construct an ML model of one mobile operator’s churn using a process called training. After training the model, we can pass the profile information of an arbitrary customer (the same profile information that we used to train the model) to the model, and have the model predict whether this customer is going to churn. Of course, we expect the model to make mistakes–after all, predicting the future is tricky business! But I’ll also show how to deal with prediction errors.
The dataset we use is publicly available and was mentioned in the book Discovering Knowledge in Data by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. In the previous steps, this dataset was loaded into the CUSTOMER_CHURN table in your Snowflake instance.
Provide the connection and credentials required to connect to your Snowflake account. You'll need to modify the cell below with the appropriate ACCOUNT for your Snowflake trial. If you followed the lab guide instructions, the username and password below will work.
NOTE: For Snowflake accounts in regions other than US WEST add the Region ID after a period <ACCOUNT>.<REGION ID> i.e. XYZ123456.US-EAST-1.
In practice, security standards might prohibit you from providing credentials in clear text. As a best practice in production, you should utilize a service like AWS Secrets Manager to manage your database credentials.
End of explanation
# Query Snowflake Data
cs=ctx.cursor()
allrows=cs.execute(select Cust_ID,STATE,ACCOUNT_LENGTH,AREA_CODE,PHONE,INTL_PLAN,VMAIL_PLAN,VMAIL_MESSAGE,
DAY_MINS,DAY_CALLS,DAY_CHARGE,EVE_MINS,EVE_CALLS,EVE_CHARGE,NIGHT_MINS,NIGHT_CALLS,
NIGHT_CHARGE,INTL_MINS,INTL_CALLS,INTL_CHARGE,CUSTSERV_CALLS,
CHURN from CUSTOMER_CHURN ).fetchall()
churn = pd.DataFrame(allrows)
churn.columns=['Cust_id','State','Account Length','Area Code','Phone','Intl Plan', 'VMail Plan', 'VMail Message','Day Mins',
'Day Calls', 'Day Charge', 'Eve Mins', 'Eve Calls', 'Eve Charge', 'Night Mins', 'Night Calls','Night Charge',
'Intl Mins','Intl Calls','Intl Charge','CustServ Calls', 'Churn?']
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 10) # Keep the output on one page
churn
Explanation: Explore
Now we can run queries against your database.
However, in practice, the data table will often contain more data than what is practical to operate on within a notebook instance, or relevant attributes are spread across multiple tables. Being able to run SQL queries and loading the data into a pandas dataframe will be helpful during the initial stages of development. Check out the Spark integration for a fully scalable solution. Snowflake Connector for Spark
End of explanation
# Frequency tables for each categorical feature
for column in churn.select_dtypes(include=['object']).columns:
display(pd.crosstab(index=churn[column], columns='% observations', normalize='columns'))
# Histograms for each numeric features
display(churn.describe())
%matplotlib inline
hist = churn.hist(bins=30, sharey=True, figsize=(10, 10))
Explanation: By modern standards, it’s a relatively small dataset, with only 3,333 records, where each record uses 21 attributes to describe the profile of a customer of an unknown US mobile operator. The attributes are:
State: the US state in which the customer resides, indicated by a two-letter abbreviation; for example, OH or NJ
Account Length: the number of days that this account has been active
Area Code: the three-digit area code of the corresponding customer’s phone number
Phone: the remaining seven-digit phone number
Int’l Plan: whether the customer has an international calling plan: yes/no
VMail Plan: whether the customer has a voice mail feature: yes/no
VMail Message: presumably the average number of voice mail messages per month
Day Mins: the total number of calling minutes used during the day
Day Calls: the total number of calls placed during the day
Day Charge: the billed cost of daytime calls
Eve Mins, Eve Calls, Eve Charge: the billed cost for calls placed during the evening
Night Mins, Night Calls, Night Charge: the billed cost for calls placed during nighttime
Intl Mins, Intl Calls, Intl Charge: the billed cost for international calls
CustServ Calls: the number of calls placed to Customer Service
Churn?: whether the customer left the service: true/false
The last attribute, Churn?, is known as the target attribute–the attribute that we want the ML model to predict. Because the target attribute is binary, our model will be performing binary prediction, also known as binary classification.
Let's begin exploring the data:
End of explanation
churn = churn.drop('Phone', axis=1)
churn['Area Code'] = churn['Area Code'].astype(object)
Explanation: We can see immediately that:
- State appears to be quite evenly distributed
- Phone takes on too many unique values to be of any practical use. It's possible parsing out the prefix could have some value, but without more context on how these are allocated, we should avoid using it.
- Only 14% of customers churned, so there is some class imabalance, but nothing extreme.
- Most of the numeric features are surprisingly nicely distributed, with many showing bell-like gaussianity. VMail Message being a notable exception (and Area Code showing up as a feature we should convert to non-numeric).
End of explanation
for column in churn.select_dtypes(include=['object']).columns:
if column != 'Churn?':
display(pd.crosstab(index=churn[column], columns=churn['Churn?'], normalize='columns'))
for column in churn.select_dtypes(exclude=['object']).columns:
print(column)
hist = churn[[column, 'Churn?']].hist(by='Churn?', bins=30)
plt.show()
Explanation: Next let's look at the relationship between each of the features and our target variable.
End of explanation
display(churn.corr())
pd.plotting.scatter_matrix(churn, figsize=(18, 18))
plt.show()
Explanation: Interestingly we see that churners appear:
- Fairly evenly distributed geographically
- More likely to have an international plan
- Less likely to have a voicemail plan
- To exhibit some bimodality in daily minutes (either higher or lower than the average for non-churners)
- To have a larger number of customer service calls (which makes sense as we'd expect customers who experience lots of problems may be more likely to churn)
In addition, we see that churners take on very similar distributions for features like Day Mins and Day Charge. That's not surprising as we'd expect minutes spent talking to correlate with charges. Let's dig deeper into the relationships between our features.
End of explanation
churn = churn.drop(['Day Charge', 'Eve Charge', 'Night Charge', 'Intl Charge'], axis=1)
Explanation: We see several features that essentially have 100% correlation with one another. Including these feature pairs in some machine learning algorithms can create catastrophic problems, while in others it will only introduce minor redundancy and bias. Let's remove one feature from each of the highly correlated pairs: Day Charge from the pair with Day Mins, Night Charge from the pair with Night Mins, Intl Charge from the pair with Intl Mins:
End of explanation
model_data = pd.get_dummies(churn)
model_data = pd.concat([model_data['Churn?_True.'], model_data.drop(['Churn?_False.', 'Churn?_True.'], axis=1)], axis=1)
to_split_data = model_data.drop(['Cust_id'], axis=1)
Explanation: Now that we've cleaned up our dataset, let's determine which algorithm to use. As mentioned above, there appear to be some variables where both high and low (but not intermediate) values are predictive of churn. In order to accommodate this in an algorithm like linear regression, we'd need to generate polynomial (or bucketed) terms. Instead, let's attempt to model this problem using gradient boosted trees. Amazon SageMaker provides an XGBoost container that we can use to train in a managed, distributed setting, and then host as a real-time prediction endpoint. XGBoost uses gradient boosted trees which naturally account for non-linear relationships between features and the target variable, as well as accommodating complex interactions between features.
Amazon SageMaker XGBoost can train on data in either a CSV or LibSVM format. For this example, we'll stick with CSV. It should:
- Have the predictor variable in the first column
- Not have a header row
But first, let's convert our categorical features into numeric features.
End of explanation
train_data, validation_data, test_data = np.split(to_split_data.sample(frac=1, random_state=1729), [int(0.7 * len(to_split_data)), int(0.9 * len(to_split_data))])
train_data.to_csv('train.csv', header=False, index=False)
validation_data.to_csv('validation.csv', header=False, index=False)
pd.set_option('display.max_columns', 100)
pd.set_option('display.width', 1000)
display(train_data)
Explanation: And now let's split the data into training, validation, and test sets. This will help prevent us from overfitting the model, and allow us to test the models accuracy on data it hasn't already seen.
End of explanation
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
Explanation: Now we'll upload these files to S3.
End of explanation
from sagemaker.amazon.amazon_estimator import get_image_uri
xgb_training_container = get_image_uri(boto3.Session().region_name, 'xgboost', '0.90-1')
Explanation: Train
Moving onto training, first we'll need to specify the locations of the XGBoost algorithm containers.
End of explanation
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')
Explanation: Then, because we're training with the CSV file format, we'll create s3_inputs that our training function can use as a pointer to the files in S3.
End of explanation
xgb = sagemaker.estimator.Estimator(xgb_training_container,
role,
train_instance_count=1,
train_instance_type='ml.m5.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sess)
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
num_round=100)
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
Explanation: Now, we can specify a few parameters like what type of training instances we'd like to use and how many, as well as our XGBoost hyperparameters. A few key hyperparameters are:
- max_depth controls how deep each tree within the algorithm can be built. Deeper trees can lead to better fit, but are more computationally expensive and can lead to overfitting. There is typically some trade-off in model performance that needs to be explored between a large number of shallow trees and a smaller number of deeper trees.
- subsample controls sampling of the training data. This technique can help reduce overfitting, but setting it too low can also starve the model of data.
- num_round controls the number of boosting rounds. This is essentially the subsequent models that are trained using the residuals of previous iterations. Again, more rounds should produce a better fit on the training data, but can be computationally expensive or lead to overfitting.
- eta controls how aggressive each round of boosting is. Larger values lead to more conservative boosting.
- gamma controls how aggressively trees are grown. Larger values lead to more conservative models.
More detail on XGBoost's hyperparmeters can be found on their GitHub page.
End of explanation
compiled_model = xgb
#try:
# xgb.create_model()._neo_image_account(boto3.Session().region_name)
#except:
# print('Neo is not currently supported in', boto3.Session().region_name)
#else:
# output_path = '/'.join(xgb.output_path.split('/')[:-1])
# compiled_model = xgb.compile_model(target_instance_family='ml_c5',
# input_shape={'data':[1, 69]},
# role=role,
# framework='xgboost',
# framework_version='0.7',
# output_path=output_path)
# compiled_model.name = 'deployed-xgboost-customer-churn-c5'
# compiled_model.image = get_image_uri(sess.boto_region_name, 'xgboost-neo', repo_version='latest')
Explanation: Compile
Amazon SageMaker Neo optimizes models to run up to twice as fast, with no loss in accuracy. When calling compile_model() function, we specify the target instance family (c5) as well as the S3 bucket to which the compiled model would be stored.
End of explanation
batch_input = model_data.iloc[:,1:]
batch_input.to_csv('model.csv', header=False, index=False)
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'model/model.csv')).upload_file('model.csv')
s3uri_batch_input ='s3://{}/{}/model'.format(bucket, prefix)
print('Batch Transform input S3 uri: {}'.format(s3uri_batch_input))
s3uri_batch_output= 's3://{}/{}/out'.format(bucket, prefix)
print('Batch Transform output S3 uri: {}'.format(s3uri_batch_output))
from sagemaker.transformer import Transformer
BATCH_INSTANCE_TYPE = 'ml.c5.xlarge'
transformer = compiled_model.transformer(instance_count=1,
strategy='SingleRecord',
assemble_with='Line',
instance_type= BATCH_INSTANCE_TYPE,
accept = 'text/csv',
output_path=s3uri_batch_output)
transformer.transform(s3uri_batch_input,
split_type= 'Line',
content_type= 'text/csv',
input_filter = "$[1:]",
join_source = "Input",
output_filter = "$[0,-1,-2]")
Explanation: Batch Inference
Next we're going to evaluate our model by using a Batch Transform to generate churn scores in batch from our model_data.
First, we upload the model data to S3. SageMaker Batch Transform is designed to run asynchronously and ingest input data from S3. This differs from SageMaker's real-time inference endpoints, which receive input data from synchronous HTTP requests.
For large-scale deployments the data set will be retrieved from Snowflake using SQL and an External Stage to S3.
Batch Transform is often the ideal option for advanced analytics use cases for several reasons:
Batch Transform is better optimized for throughput in comparison with real-time inference endpoints. Thus, Batch Transform is ideal for processing large volumes of data for analytics.
Offline asynchronous processing is acceptable for most analytics use cases.
Batch Transform is more cost efficient when real-time inference isn't necessary. You only need to pay for resources used during batch processing. There is no need to pay for ongoing resources like a hosted endpoint for real-time inference.
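For contrast, a real-time endpoint (not used in this notebook) would be deployed roughly as follows -- a hedged sketch only, with the instance type chosen purely for illustration:
# Sketch: a persistent HTTPS endpoint, billed for as long as it stays running.
# xgb_predictor = compiled_model.deploy(initial_instance_count=1,
#                                       instance_type='ml.m4.xlarge')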
End of explanation
transformer.wait()
Explanation: Batch transform jobs run asynchronously, and are non-blocking by default. Run the command below to block until the batch job completes.
End of explanation
batched_churn_scores = pd.read_csv(s3uri_batch_output+'/model.csv.out', usecols=[0,1], names=['id','scores'])
gt_df = pd.DataFrame(model_data['Churn?_True.']).reset_index(drop=True)
results_df = pd.concat([gt_df, batched_churn_scores], axis=1).reindex(gt_df.index)  # join_axes was removed in newer pandas versions
pd.crosstab(index=results_df['Churn?_True.'], columns=np.round(results_df['scores']), rownames=['actual'], colnames=['predictions'])
Explanation: There are many ways to compare the performance of a machine learning model, but let's start simply by comparing actual to predicted values. In this case, we're simply predicting whether the customer churned (1) or not (0), which produces a simple confusion matrix.
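If you also want summary metrics alongside the confusion matrix, a small sketch like the following works on the results_df built above (it assumes scikit-learn is available in the notebook environment):
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
y_true = results_df['Churn?_True.']
y_pred = np.round(results_df['scores'])
print('Accuracy : {:.3f}'.format(accuracy_score(y_true, y_pred)))
print('Precision: {:.3f}'.format(precision_score(y_true, y_pred)))
print('Recall   : {:.3f}'.format(recall_score(y_true, y_pred)))
print('AUC      : {:.3f}'.format(roc_auc_score(y_true, results_df['scores'])))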
End of explanation
results_df.to_csv('results.csv', header=False, index=False)
cs.execute("PUT file://results.csv @ml_results")
Explanation: Upload Churn Score to Snowflake
To give multiple business users and dashboards simple access to the churn scores, we upload them to Snowflake using a Snowflake internal stage.
End of explanation |
10,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: Create some dictionaries with parameters for cell, synapse and extracellular electrode
Step2: Then, create the cell, synapse and electrode objects using the
LFPy.Cell, LFPy.Synapse, LFPy.RecExtElectrode classes.
Step3: Run the simulation using cell.simulate() probing the extracellular potential with
the additional keyword argument probes=[electrode]
Step4: Then plot the somatic potential and the prediction obtained using the RecExtElectrode instance
(now accessible as electrode.data) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import LFPy
Explanation: Example 2: Extracellular response of synaptic input
This is an example of LFPy running in a Jupyter notebook. To run through this example code and produce output, press <shift-Enter> in each code block below.
First step is to import LFPy and other packages for analysis and plotting:
End of explanation
cellParameters = {
'morphology': 'morphologies/L5_Mainen96_LFPy.hoc',
'tstart': -50,
'tstop': 100,
'dt': 2**-4,
'passive': True,
}
synapseParameters = {
'syntype': 'Exp2Syn',
'e': 0.,
'tau1': 0.5,
'tau2': 2.0,
'weight': 0.005,
'record_current': True,
}
z = np.mgrid[-400:1201:100]
electrodeParameters = {
'x': np.zeros(z.size),
'y': np.zeros(z.size),
'z': z,
'sigma': 0.3,
}
Explanation: Create some dictionaries with parameters for cell, synapse and extracellular electrode:
End of explanation
cell = LFPy.Cell(**cellParameters)
cell.set_pos(x=-10, y=0, z=0)
cell.set_rotation(x=4.98919, y=-4.33261, z=np.pi)
synapse = LFPy.Synapse(cell,
idx = cell.get_closest_idx(z=800),
**synapseParameters)
synapse.set_spike_times(np.array([10, 30, 50]))
electrode = LFPy.RecExtElectrode(cell, **electrodeParameters)
Explanation: Then, create the cell, synapse and electrode objects using the
LFPy.Cell, LFPy.Synapse, LFPy.RecExtElectrode classes.
End of explanation
cell.simulate(probes=[electrode])
Explanation: Run the simulation using cell.simulate() probing the extracellular potential with
the additional keyword argument probes=[electrode]
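After the call returns, the somatic voltage is stored on the cell object and the extracellular potentials on the electrode object; a quick sanity check (just a sketch) could be:
# Sketch: inspect the traces recorded by cell.simulate().
print(cell.somav.shape)      # somatic membrane potential, one sample per time step
print(electrode.data.shape)  # extracellular potentials, shape (n_contacts, n_time_steps)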
End of explanation
fig = plt.figure(figsize=(12, 6))
gs = GridSpec(2, 3)
ax0 = fig.add_subplot(gs[:, 0])
ax0.plot(cell.x.T, cell.z.T, 'k')
ax0.plot(synapse.x, synapse.z,
color='r', marker='o', markersize=10,
label='synapse')
ax0.plot(electrode.x, electrode.z, '.', color='g',
label='electrode')
ax0.axis([-500, 500, -450, 1250])
ax0.legend()
ax0.set_xlabel('x (um)')
ax0.set_ylabel('z (um)')
ax0.set_title('morphology')
ax1 = fig.add_subplot(gs[0, 1])
ax1.plot(cell.tvec, synapse.i, 'r')
ax1.set_title('synaptic current (pA)')
plt.setp(ax1.get_xticklabels(), visible=False)
ax2 = fig.add_subplot(gs[1, 1], sharex=ax1)
ax2.plot(cell.tvec, cell.somav, 'k')
ax2.set_title('somatic voltage (mV)')
ax3 = fig.add_subplot(gs[:, 2], sharey=ax0, sharex=ax1)
im = ax3.pcolormesh(cell.tvec, electrode.z, electrode.data,
vmin=-abs(electrode.data).max(), vmax=abs(electrode.data).max(),
shading='auto')
plt.colorbar(im)
ax3.set_title('LFP (mV)')
ax3.set_xlabel('time (ms)')
#savefig('LFPy-example-02.pdf', dpi=300)
Explanation: Then plot the somatic potential and the prediction obtained using the RecExtElectrode instance
(now accessible as electrode.data):
End of explanation |
10,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to FermiLib
Note that all the examples below must be run sequentially within a section.
Initializing the FermionOperator data structure
Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators, $a^\dagger_k$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $\sigma^+_k$ and $\sigma^-_k$ but are distinguished by the canonical fermionic anticommutation relations, $\{a^\dagger_i, a^\dagger_j\} = \{a_i, a_j\} = 0$ and $\{a_i, a_j^\dagger\} = \delta_{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in FermiLib. The following are examples of valid FermionOperators
Step1: The preferred way to specify the coefficient in FermiLib is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All inplace operands (such as +=) modify classes whereas binary operands such as + create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initializes identity. The empty initializer FermionOperator() initializes the zero operator.
Step2: Note that FermionOperator has only one attribute
Step3: Manipulating the FermionOperator data structure
So far we have explained how to initialize a single FermionOperator such as $-1.7 \, a^\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1$. To do this, just add together two FermionOperators! We demonstrate below.
Step4: The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the inplace method +=, which is more efficient. This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including, str(), repr(), =, , /, /=, +, +=, -, -=, - and **. Note that instead of supporting != and ==, we have the method .isclose(), since FermionOperators involve floats. We demonstrate some of these methods below.
Step5: Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
Step6: The QubitOperator data structure
The QubitOperator data structure is another essential part of FermiLib. While the QubitOperator was originally developed for FermiLib, it is now part of the core ProjectQ library so that it can be interpreted by the ProjectQ compiler using the TimeEvolution gate. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0, \textrm{"X"}), (3, \textrm{"Z"}), (4, \textrm{"Y"}))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.
Step7: Jordan-Wigner and Bravyi-Kitaev
FermiLib provides functions for mapping FermionOperators to QubitOperators.
Step8: We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.
Step9: Sparse matrices and the Hubbard model
Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both fermilib.transforms and fermilib.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There are numerous functions in fermilib.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", ect. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the fermilib.utils module which constructs lattice models of fermions such as Hubbard models.
Step10: Hamiltonians in the plane wave basis
A user can write plugins to FermiLib which allow for the use of, e.g., third-party electronic structure package to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc using Gaussian basis sets. We may provide scripts which interface between such packages and FermiLib in future but do not discuss them in this tutorial.
When using simpler basis sets such as plane waves, these packages are not needed. FermiLib comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless), one must the specify dimension in which the calculation is performed (n_dimensions, usually 3), one must specify how many plane waves are in each dimension (grid_length) and one must specify the length scale of the plane wave harmonics in each dimension (length_scale) and also the locations and charges of the nuclei. One can generate these models with plane_wave_hamiltonian() found in fermilib.utils. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without effecting the spectrum of the operator.
Step11: Basics of MolecularData class
Data from electronic structure calculations can be saved in a FermiLib data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (e.g. one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF intergrals) can be looked up on-the-fly if the user has computed them in the past. FermiLib supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container.
The MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class's .save() method. This automatically saves the instance in a data folder specified during FermiLib installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file.
When electronic structure calculations are run, the data files for the molecule can be automatically updated. If one wishes to later use that data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method.
Basis functions are provided to initialization using a string such as "6-31g". Geometries can be specified using a simple txt input file (see geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in angstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations.
Step12: If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in Fermilib's MolecularData objects. There may be plugins available in future. For the purposes of this example, we will load data that ships with FermiLib to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in fermilib.utils.
Step13: InteractionOperator and InteractionRDM for efficient numerical representations
Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $\rho_{pq} = \left \langle p \mid a^\dagger_p a_q \mid q \right \rangle$ and $\rho_{pqrs} = \left \langle pq \mid a^\dagger_p a^\dagger_q a_r a_s \mid rs \right \rangle$, respectively.
Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $\rho_{pq}$) and $h_{pqrs}$ (or $\rho_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() (found in fermilib.utils) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.
These classes inherit from the same base class, InteractionTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator
Step14: Simulating a variational quantum eigensolver using ProjectQ
We now demonstrate how one can use both FermiLib and ProjectQ to run a simple VQE example using a Unitary Coupled Cluster ansatz. It demonstrates a simple way to evaluate the energy, optimize the energy with respect to the ansatz and build the corresponding compiled quantum circuit. It utilizes ProjectQ to build and simulate the circuit.
Step15: Here we load $\textrm{H}_2$ from a precomputed molecule file found in the test data directory, and initialize the ProjectQ circuit compiler to a standard setting that uses a first-order Trotter decomposition to break up the exponentials of non-commuting operators.
Step17: The Variational Quantum Eigensolver (or VQE), works by parameterizing a wavefunction $| \Psi(\theta) \rangle$ through some quantum circuit, and minimzing the energy with respect to that angle, which is defined by
\begin{align}
E(\theta) = \langle \Psi(\theta)| H | \Psi(\theta) \rangle
\end{align}
To perform the VQE loop with a simple molecule, it helps to wrap the evaluation of the energy into a simple objective function that takes the parameters of the circuit and returns the energy. Here we define that function using ProjectQ to handle the qubits and the simulation.
Step18: While we could plug this objective function into any optimizer, SciPy offers a convenient framework within the Python ecosystem. We'll choose as starting amplitudes the classical CCSD values that can be loaded from the molecule if desired. The optimal energy is found and compared to the exact values to verify that our simulation was successful.
Step19: As we can see, the optimization terminates extremely quickly because the classical coupled cluster amplitudes were (for this molecule) already optimal. We can now use ProjectQ to compile this simulation circuit to a set of two-body quanutm gates. | Python Code:
from fermilib.ops import FermionOperator
my_term = FermionOperator(((3, 1), (1, 0)))
print(my_term)
my_term = FermionOperator('3^ 1')
print(my_term)
Explanation: Introduction to FermiLib
Note that all the examples below must be run sequentially within a section.
Initializing the FermionOperator data structure
Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators, $a^\dagger_k$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $\sigma^+_k$ and $\sigma^-_k$ but are distinguished by the canonical fermionic anticommutation relations, $\{a^\dagger_i, a^\dagger_j\} = \{a_i, a_j\} = 0$ and $\{a_i, a_j^\dagger\} = \delta_{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in FermiLib. The following are examples of valid FermionOperators:
$$
\begin{align}
& a_1 \nonumber \\
& 1.7 a^\dagger_3 \nonumber \\
&-1.7 \, a^\dagger_3 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1 \nonumber
\end{align}
$$
The FermionOperator class is contained in $\textrm{ops/_fermion_operators.py}$. In order to support fast addition of FermionOperator instances, the class is implemented as a hash table (Python dictionary). The keys of the dictionary encode the strings of ladder operators and the values of the dictionary store the coefficients. The strings of ladder operators are encoded as a tuple of 2-tuples which we refer to as the "terms tuple". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the tensor factor on which the ladder operator acts. The second element of the 2-tuple is a Boolean: 1 represents raising and 0 represents lowering. For instance, $a^\dagger_8$ is represented in a 2-tuple as $(8, 1)$. Note that indices start at 0 and the identity operator is an empty list. Below we give some examples of operators and their terms tuple:
$$
\begin{align}
I & \mapsto () \nonumber \\
a_1 & \mapsto ((1, 0),) \nonumber \\
a^\dagger_3 & \mapsto ((3, 1),) \nonumber \\
a^\dagger_3 a_1 & \mapsto ((3, 1), (1, 0)) \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \nonumber
\end{align}
$$
Note that when initializing a single ladder operator one should be careful to add the comma after the inner pair. This is because in python ((1, 2)) is just (1, 2), whereas ((1, 2),) is a tuple containing the single element (1, 2). The "terms tuple" is usually convenient when one wishes to initialize a term as part of a coded routine. However, the terms tuple is not particularly intuitive. Accordingly, FermiLib also supports another, user-friendly string notation, shown below. This representation is rendered when calling "print" on a FermionOperator.
$$
\begin{align}
I & \mapsto \textrm{""} \nonumber \\
a_1 & \mapsto \textrm{"1"} \nonumber \\
a^\dagger_3 & \mapsto \textrm{"3^"} \nonumber \\
a^\dagger_3 a_1 & \mapsto \textrm{"3^}\;\textrm{1"} \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto \textrm{"4^}\;\textrm{3^}\;\textrm{9}\;\textrm{1"} \nonumber
\end{align}
$$
Let's initialize our first term! We do it two different ways below.
End of explanation
good_way_to_initialize = FermionOperator('3^ 1', -1.7)
print(good_way_to_initialize)
bad_way_to_initialize = -1.7 * FermionOperator('3^ 1')
print(bad_way_to_initialize)
identity = FermionOperator('')
print(identity)
zero_operator = FermionOperator()
print(zero_operator)
Explanation: The preferred way to specify the coefficient in FermiLib is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All in-place operators (such as +=) modify the existing object, whereas binary operators such as + create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initialize the identity, while the empty initializer FermionOperator() initializes the zero operator.
End of explanation
my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j)
print(my_operator)
print(my_operator.terms)
Explanation: Note that FermionOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples.
End of explanation
from fermilib.ops import FermionOperator
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 + term_2
print(my_operator)
my_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator += term_2
print('')
print(my_operator)
Explanation: Manipulating the FermionOperator data structure
So far we have explained how to initialize a single FermionOperator such as $-1.7 \, a^\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1$. To do this, just add together two FermionOperators! We demonstrate below.
End of explanation
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 - 33. * term_2
print(my_operator)
my_operator *= 3.17 * (term_2 + term_1) ** 2
print('')
print(my_operator)
print('')
print(term_2 ** 3)
print('')
print(term_1.isclose(2.*term_1 - term_1))
print(term_1.isclose(my_operator))
Explanation: The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the in-place method +=, which is more efficient. This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including str(), repr(), *=, *, /, /=, +, +=, -, -=, - and **. Note that instead of supporting != and ==, we have the method .isclose(), since FermionOperators involve floats. We demonstrate some of these methods below.
End of explanation
from fermilib.ops import hermitian_conjugated, normal_ordered
from fermilib.utils import commutator, count_qubits
# Get the Hermitian conjugate of a FermionOperator, count its qubit, check if it is normal-ordered.
term_1 = FermionOperator('4^ 3 3^', 1. + 2.j)
print(hermitian_conjugated(term_1))
print(term_1.is_normal_ordered())
print(count_qubits(term_1))
# Normal order the term.
term_2 = normal_ordered(term_1)
print('')
print(term_2)
print(term_2.is_normal_ordered())
# Compute a commutator of the terms.
print('')
print(commutator(term_1, term_2))
Explanation: Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
End of explanation
from projectq.ops import QubitOperator
my_first_qubit_operator = QubitOperator('X1 Y2 Z3')
print(my_first_qubit_operator)
print(my_first_qubit_operator.terms)
operator_2 = QubitOperator('X3 Z4', 3.17)
operator_2 -= 77. * my_first_qubit_operator
print('')
print(operator_2)
Explanation: The QubitOperator data structure
The QubitOperator data structure is another essential part of FermiLib. While the QubitOperator was originally developed for FermiLib, it is now part of the core ProjectQ library so that it can be interpreted by the ProjectQ compiler using the TimeEvolution gate. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0, \textrm{"X"}), (3, \textrm{"Z"}), (4, \textrm{"Y"}))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.
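As a small sketch of the terms-tuple form described above, the operator $X_0 Z_3 Y_4$ can equally be built directly from its tuple representation (the string form used above is usually more readable):
# Sketch: same operator, initialized from the terms tuple rather than a string.
x0_z3_y4 = QubitOperator(((0, 'X'), (3, 'Z'), (4, 'Y')), 1.0)
print(x0_z3_y4)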
End of explanation
from fermilib.ops import FermionOperator, hermitian_conjugated
from fermilib.transforms import jordan_wigner, bravyi_kitaev
from fermilib.utils import eigenspectrum
# Initialize an operator.
fermion_operator = FermionOperator('2^ 0', 3.17)
fermion_operator += hermitian_conjugated(fermion_operator)
print(fermion_operator)
# Transform to qubits under the Jordan-Wigner transformation and print its spectrum.
jw_operator = jordan_wigner(fermion_operator)
print('')
print(jw_operator)
jw_spectrum = eigenspectrum(jw_operator)
print(jw_spectrum)
# Transform to qubits under the Bravyi-Kitaev transformation and print its spectrum.
bk_operator = bravyi_kitaev(fermion_operator)
print('')
print(bk_operator)
bk_spectrum = eigenspectrum(bk_operator)
print(bk_spectrum)
Explanation: Jordan-Wigner and Bravyi-Kitaev
FermiLib provides functions for mapping FermionOperators to QubitOperators.
End of explanation
from fermilib.transforms import reverse_jordan_wigner
# Initialize QubitOperator.
my_operator = QubitOperator('X0 Y1 Z2', 88.)
my_operator += QubitOperator('Z1 Z4', 3.17)
print(my_operator)
# Map QubitOperator to a FermionOperator.
mapped_operator = reverse_jordan_wigner(my_operator)
print('')
print(mapped_operator)
# Map the operator back to qubits and make sure it is the same.
back_to_normal = jordan_wigner(mapped_operator)
back_to_normal.compress()
print('')
print(back_to_normal)
Explanation: We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.
End of explanation
from fermilib.transforms import get_sparse_operator, jordan_wigner
from fermilib.utils import fermi_hubbard, get_ground_state
# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
coulomb = 1.
magnetic_field = 0.5
chemical_potential = 0.25
periodic = 1
spinless = 1
# Get fermion operator.
hubbard_model = fermi_hubbard(
x_dimension, y_dimension, tunneling, coulomb, chemical_potential,
magnetic_field, periodic, spinless)
print(hubbard_model)
# Get qubit operator under Jordan-Wigner.
jw_hamiltonian = jordan_wigner(hubbard_model)
jw_hamiltonian.compress()
print('')
print(jw_hamiltonian)
# Get scipy.sparse.csc representation.
sparse_operator = get_sparse_operator(hubbard_model)
print('')
print(sparse_operator)
print('\nEnergy of the model is {} in units of T and J.'.format(
get_ground_state(sparse_operator)[0]))
Explanation: Sparse matrices and the Hubbard model
Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both fermilib.transforms and fermilib.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There are numerous functions in fermilib.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the fermilib.utils module which constructs lattice models of fermions such as Hubbard models.
End of explanation
from fermilib.utils import eigenspectrum, fourier_transform, jellium_model, Grid
from fermilib.transforms import jordan_wigner
# Let's look at a very small model of jellium in 1D.
grid = Grid(dimensions=1, length=3, scale=1.0)
spinless = True
# Get the momentum Hamiltonian.
momentum_hamiltonian = jellium_model(grid, spinless)
momentum_qubit_operator = jordan_wigner(momentum_hamiltonian)
momentum_qubit_operator.compress()
print(momentum_qubit_operator)
# Fourier transform the Hamiltonian to the position basis.
position_hamiltonian = fourier_transform(momentum_hamiltonian, grid, spinless)
position_qubit_operator = jordan_wigner(position_hamiltonian)
position_qubit_operator.compress()
print('')
print (position_qubit_operator)
# Check the spectra to make sure these representations are iso-spectral.
spectral_difference = eigenspectrum(momentum_qubit_operator) - eigenspectrum(position_qubit_operator)
print('')
print(spectral_difference)
Explanation: Hamiltonians in the plane wave basis
A user can write plugins to FermiLib which allow for the use of, e.g., third-party electronic structure package to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc using Gaussian basis sets. We may provide scripts which interface between such packages and FermiLib in future but do not discuss them in this tutorial.
When using simpler basis sets such as plane waves, these packages are not needed. FermiLib comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless), one must then specify the dimension in which the calculation is performed (n_dimensions, usually 3), one must specify how many plane waves are in each dimension (grid_length) and one must specify the length scale of the plane wave harmonics in each dimension (length_scale) and also the locations and charges of the nuclei. One can generate these models with plane_wave_hamiltonian() found in fermilib.utils. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without affecting the spectrum of the operator.
End of explanation
from fermilib.utils import MolecularData
# Set parameters to make a simple molecule.
diatomic_bond_length = .7414
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
charge = 0
description = str(diatomic_bond_length)
# Make molecule and print out a few interesting facts about it.
molecule = MolecularData(geometry, basis, multiplicity,
charge, description)
print('Molecule has automatically generated name {}'.format(
molecule.name))
print('Information about this molecule would be saved at:\n{}\n'.format(
molecule.filename))
print('This molecule has {} atoms and {} electrons.'.format(
molecule.n_atoms, molecule.n_electrons))
for atom, atomic_number in zip(molecule.atoms, molecule.protons):
print('Contains {} atom, which has {} protons.'.format(
atom, atomic_number))
Explanation: Basics of MolecularData class
Data from electronic structure calculations can be saved in a FermiLib data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (e.g. one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF integrals) can be looked up on-the-fly if the user has computed them in the past. FermiLib supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container.
The MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class's .save() method. This automatically saves the instance in a data folder specified during FermiLib installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file.
When electronic structure calculations are run, the data files for the molecule can be automatically updated. If one wishes to later use that data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method.
Basis functions are specified at initialization using a string such as "6-31g". Geometries can be specified using a simple txt input file (see the geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in angstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations.
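To round-trip an instance as described above, a sketch (left commented so this cell stays side-effect free) would be:
# Sketch: persist the instance so a later session can recover it by its attributes.
# molecule.save()
# same_molecule = MolecularData(geometry, basis, multiplicity, charge, description)
# same_molecule.load()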
End of explanation
# Set molecule parameters.
basis = 'sto-3g'
multiplicity = 1
bond_length_interval = 0.1
n_points = 25
# Generate molecule at different bond lengths.
hf_energies = []
fci_energies = []
bond_lengths = []
for point in range(3, n_points + 1):
bond_length = bond_length_interval * point
bond_lengths += [bond_length]
description = str(round(bond_length,2))
print(description)
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., bond_length))]
molecule = MolecularData(
geometry, basis, multiplicity, description=description)
# Load data.
molecule.load()
# Print out some results of calculation.
print('\nAt bond length of {} angstrom, molecular hydrogen has:'.format(
bond_length))
print('Hartree-Fock energy of {} Hartree.'.format(molecule.hf_energy))
print('MP2 energy of {} Hartree.'.format(molecule.mp2_energy))
print('FCI energy of {} Hartree.'.format(molecule.fci_energy))
print('Nuclear repulsion energy between protons is {} Hartree.'.format(
molecule.nuclear_repulsion))
for orbital in range(molecule.n_orbitals):
print('Spatial orbital {} has energy of {} Hartree.'.format(
orbital, molecule.orbital_energies[orbital]))
hf_energies += [molecule.hf_energy]
fci_energies += [molecule.fci_energy]
# Plot.
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(0)
plt.plot(bond_lengths, fci_energies, 'x-')
plt.plot(bond_lengths, hf_energies, 'o-')
plt.ylabel('Energy in Hartree')
plt.xlabel('Bond length in angstrom')
plt.show()
Explanation: If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in Fermilib's MolecularData objects. There may be plugins available in future. For the purposes of this example, we will load data that ships with FermiLib to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in fermilib.utils.
End of explanation
from fermilib.transforms import get_fermion_operator, get_sparse_operator, jordan_wigner
from fermilib.utils import get_ground_state, MolecularData
import numpy
import scipy
import scipy.linalg
# Load saved file for LiH.
diatomic_bond_length = 1.45
geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
# Set Hamiltonian parameters.
active_space_start = 1
active_space_stop = 3
# Generate and populate instance of MolecularData.
molecule = MolecularData(geometry, basis, multiplicity, description="1.45")
molecule.load()
# Get the Hamiltonian in an active space.
molecular_hamiltonian = molecule.get_molecular_hamiltonian(
occupied_indices=range(active_space_start),
active_indices=range(active_space_start, active_space_stop))
# Map operator to fermions and qubits.
fermion_hamiltonian = get_fermion_operator(molecular_hamiltonian)
qubit_hamiltonian = jordan_wigner(fermion_hamiltonian)
qubit_hamiltonian.compress()
print('The Jordan-Wigner Hamiltonian in canonical basis follows:\n{}'.format(qubit_hamiltonian))
# Get sparse operator and ground state energy.
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy, state = get_ground_state(sparse_hamiltonian)
print('Ground state energy before rotation is {} Hartree.\n'.format(energy))
# Randomly rotate.
n_orbitals = molecular_hamiltonian.n_qubits // 2
n_variables = int(n_orbitals * (n_orbitals - 1) / 2)
random_angles = numpy.pi * (1. - 2. * numpy.random.rand(n_variables))
kappa = numpy.zeros((n_orbitals, n_orbitals))
index = 0
for p in range(n_orbitals):
for q in range(p + 1, n_orbitals):
kappa[p, q] = random_angles[index]
kappa[q, p] = -numpy.conjugate(random_angles[index])
index += 1
# Build the unitary rotation matrix.
difference_matrix = kappa + kappa.transpose()
rotation_matrix = scipy.linalg.expm(kappa)
# Apply the unitary.
molecular_hamiltonian.rotate_basis(rotation_matrix)
# Get qubit Hamiltonian in rotated basis.
qubit_hamiltonian = jordan_wigner(molecular_hamiltonian)
qubit_hamiltonian.compress()
print('The Jordan-Wigner Hamiltonian in rotated basis follows:\n{}'.format(qubit_hamiltonian))
# Get sparse Hamiltonian and energy in rotated basis.
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy, state = get_ground_state(sparse_hamiltonian)
print('Ground state energy after rotation is {} Hartree.'.format(energy))
Explanation: InteractionOperator and InteractionRDM for efficient numerical representations
Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $\rho_{pq} = \left \langle p \mid a^\dagger_p a_q \mid q \right \rangle$ and $\rho_{pqrs} = \left \langle pq \mid a^\dagger_p a^\dagger_q a_r a_s \mid rs \right \rangle$, respectively.
Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $\rho_{pq}$) and $h_{pqrs}$ (or $\rho_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() (found in fermilib.utils) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.
These classes inherit from the same base class, InteractionTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator: $\textrm{.constant}$, $\textrm{.one_body_coefficients}$ and $\textrm{.two_body_coefficients}$ . For instance, InteractionOperator[p,q,r,s] would return $h_{pqrs}$ and InteractionRDM would return $\rho_{pqrs}$. Importantly, the class supports fast basis transformations using the method InteractionTensor.rotate_basis(rotation_matrix).
But perhaps most importantly, one can map the InteractionOperator to any of the other data structures we've described here.
Below, we load MolecularData from a saved calculation of LiH. We then obtain an InteractionOperator representation of this system in an active space. We then map that operator to qubits. We then demonstrate that one can rotate the orbital basis of the InteractionOperator using random angles to obtain a totally different operator that is still iso-spectral.
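As a quick illustration of the indexing convention described above (a sketch that simply mirrors the stated behaviour; left commented because it is not part of the worked example):
# Sketch: peek inside the InteractionOperator built above using the [] convention.
# print(molecular_hamiltonian.constant)     # the constant shift h_0
# print(molecular_hamiltonian[0, 1, 2, 3])  # a two-body coefficient h_{pqrs}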
End of explanation
from numpy import array, concatenate, zeros
from numpy.random import randn
from scipy.optimize import minimize
from fermilib.config import *
from fermilib.circuits._unitary_cc import *
from fermilib.transforms import jordan_wigner
from projectq.ops import X, All, Measure
from projectq.backends import CommandPrinter, CircuitDrawer
Explanation: Simulating a variational quantum eigensolver using ProjectQ
We now demonstrate how one can use both FermiLib and ProjectQ to run a simple VQE example using a Unitary Coupled Cluster ansatz. It demonstrates a simple way to evaluate the energy, optimize the energy with respect to the ansatz and build the corresponding compiled quantum circuit. It utilizes ProjectQ to build and simulate the circuit.
End of explanation
# Load the molecule.
import os
filename = os.path.join(DATA_DIRECTORY, 'H2_sto-3g_singlet_0.7414')
molecule = MolecularData(filename=filename)
# Use a Jordan-Wigner encoding, and compress to remove 0 imaginary components
qubit_hamiltonian = jordan_wigner(molecule.get_molecular_hamiltonian())
qubit_hamiltonian.compress()
compiler_engine = uccsd_trotter_engine()
Explanation: Here we load $\textrm{H}_2$ from a precomputed molecule file found in the test data directory, and initialize the ProjectQ circuit compiler to a standard setting that uses a first-order Trotter decomposition to break up the exponentials of non-commuting operators.
End of explanation
def energy_objective(packed_amplitudes):
"""Evaluate the energy of a UCCSD singlet wavefunction with packed_amplitudes
Args:
packed_amplitudes(ndarray): Compact array that stores the unique
amplitudes for a UCCSD singlet wavefunction.
Returns:
energy(float): Energy corresponding to the given amplitudes
"""
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
# Set Jordan-Wigner initial state with correct number of electrons
wavefunction = compiler_engine.allocate_qureg(molecule.n_qubits)
for i in range(molecule.n_electrons):
X | wavefunction[i]
# Build the circuit and act it on the wavefunction
evolution_operator = uccsd_singlet_evolution(packed_amplitudes,
molecule.n_qubits,
molecule.n_electrons)
evolution_operator | wavefunction
compiler_engine.flush()
# Evaluate the energy and reset wavefunction
energy = compiler_engine.backend.get_expectation_value(qubit_hamiltonian, wavefunction)
All(Measure) | wavefunction
compiler_engine.flush()
return energy
Explanation: The Variational Quantum Eigensolver (or VQE) works by parameterizing a wavefunction $| \Psi(\theta) \rangle$ through some quantum circuit, and minimizing the energy with respect to that angle, which is defined by
\begin{align}
E(\theta) = \langle \Psi(\theta)| H | \Psi(\theta) \rangle
\end{align}
To perform the VQE loop with a simple molecule, it helps to wrap the evaluation of the energy into a simple objective function that takes the parameters of the circuit and returns the energy. Here we define that function using ProjectQ to handle the qubits and the simulation.
End of explanation
n_amplitudes = uccsd_singlet_paramsize(molecule.n_qubits, molecule.n_electrons)
initial_amplitudes = [0, 0.05677]
initial_energy = energy_objective(initial_amplitudes)
# Run VQE Optimization to find new CCSD parameters
opt_result = minimize(energy_objective, initial_amplitudes,
method="CG", options={'disp':True})
opt_energy, opt_amplitudes = opt_result.fun, opt_result.x
print("\nOptimal UCCSD Singlet Energy: {}".format(opt_energy))
print("Optimal UCCSD Singlet Amplitudes: {}".format(opt_amplitudes))
print("Classical CCSD Energy: {} Hartrees".format(molecule.ccsd_energy))
print("Exact FCI Energy: {} Hartrees".format(molecule.fci_energy))
print("Initial Energy of UCCSD with CCSD amplitudes: {} Hartrees".format(initial_energy))
Explanation: While we could plug this objective function into any optimizer, SciPy offers a convenient framework within the Python ecosystem. We'll choose as starting amplitudes the classical CCSD values that can be loaded from the molecule if desired. The optimal energy is found and compared to the exact values to verify that our simulation was successful.
End of explanation
compiler_engine = uccsd_trotter_engine(CommandPrinter())
wavefunction = compiler_engine.allocate_qureg(molecule.n_qubits)
for i in range(molecule.n_electrons):
X | wavefunction[i]
# Build the circuit and act it on the wavefunction
evolution_operator = uccsd_singlet_evolution(opt_amplitudes,
molecule.n_qubits,
molecule.n_electrons)
evolution_operator | wavefunction
compiler_engine.flush()
Explanation: As we can see, the optimization terminates extremely quickly because the classical coupled cluster amplitudes were (for this molecule) already optimal. We can now use ProjectQ to compile this simulation circuit to a set of two-body quantum gates.
End of explanation |
10,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Hyper-parameter tuning </h1>
Learning Objectives
1. Understand various approaches to hyperparameter tuning
2. Automate hyperparameter tuning using AI Platform HyperTune
Introduction
In the previous notebook we achieved an RMSE of 4.13. Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways of finding the ideal hyperparameters
Step1: Move code into python package
Let's package our updated code with feature engineering so it's AI Platform compatible.
Step2: Create model.py
Note that any hyperparameters we want to tune need to be exposed as command line arguments. In particular note that the number of hidden units is now a parameter.
Step3: Create task.py
Exercise 1
The code cell below has two TODOs for you to complete.
Firstly, in model.py above we set the number of hidden units in our model to be a hyperparameter. This means hidden_units must be exposed as a command line argument when we submit our training job to Cloud ML Engine. Modify the code below to add a flag for hidden_units. Be sure to include a description for the help field and specify the data type that the model should expect to receive. You can also include a default value. Look to the other parser arguments to make sure you have the formatting correct.
Second, when doing hyperparameter tuning we need to make sure the output directory is different for each run, otherwise successive runs will overwrite previous runs. In task.py below, add some code to append the trial_id to the output directory of the training job.
Hint
Step4: Create hypertuning configuration
We specify
Step5: Run the training job
Same as before with the addition of --config=hyperparam.yaml to reference the file we just created.
This will take about 20 minutes. Go to the cloud console and click on the job id. Once the job is completed, the chosen hyperparameters and resulting objective value (RMSE in this case) will be shown. Trials will be sorted from best to worst.
Exercise 3
Submit a hyperparameter tuning job to the cloud. Fill in the missing arguments below. This is similar to the exercise you completed in the 02_tensorflow/g_distributed notebook. Note that one difference here is that we now specify a config parameter giving the location of our .yaml file.
Step6: Results
The best result is RMSE 4.02 with hidden units = 128,64,32.
This improvement is modest, but now that we have our hidden units tuned let's run on our larger dataset to see if it helps.
Note the passing of hyperparameter values via command line | Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for AI Platform
TFVERSION = "1.14" # TF version for AI Platform
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
Explanation: <h1> Hyper-parameter tuning </h1>
Learning Objectives
1. Understand various approaches to hyperparameter tuning
2. Automate hyperparameter tuning using AI Platform HyperTune
Introduction
In the previous notebook we achieved an RMSE of 4.13. Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways of finding the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
1. Manual
Traditionally, hyperparameter tuning is a manual trial and error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point, then they observe the result and use that information to try a new set of hyperparameters to try to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
2. Grid Search
At the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter, then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
3. Random Search
Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
4. Bayesian Optimization
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works here.
Pros
- Picks values intelligently based on results from past trials
- Less expensive because it requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
AI Platform HyperTune
AI Platform HyperTune, powered by Google Vizier, uses Bayesian Optimization by default, but also supports Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
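The search itself is described in a small YAML configuration (the hyperparam.yaml referenced in the submission command later). The real file is written in a later cell; a hedged sketch of what such a configuration typically contains, with illustrative ranges and the rmse metric that model.py exposes, is:
trainingInput:
  scaleTier: STANDARD_1
  hyperparameters:
    goal: MINIMIZE
    maxTrials: 10
    maxParallelTrials: 2
    hyperparameterMetricTag: rmse
    params:
    - parameterName: hidden_units
      type: CATEGORICAL
      categoricalValues: ["64,32", "128,64,32", "256,128,16"]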
End of explanation
%%bash
mkdir taxifaremodel
touch taxifaremodel/__init__.py
Explanation: Move code into python package
Let's package our updated code with feature engineering so it's AI Platform compatible.
End of explanation
%%writefile taxifaremodel/model.py
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
#1. Train and Evaluate Input Functions
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]
def read_dataset(csv_path):
def _parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = _parse_row)
return dataset
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
#2. Feature Engineering
# One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# Cross features to get combination of day and hour
fc_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
# Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(features["latdiff"]**2 + features["londiff"]**2)
return features
feature_cols = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_day_hr),
fc_bucketized_plat,
fc_bucketized_plon,
fc_bucketized_dlat,
fc_bucketized_dlon,
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"),
tf.feature_column.numeric_column(key = "londiff"),
tf.feature_column.numeric_column(key = "euclidean_dist")
]
#3. Serving Input Receiver Function
def serving_input_receiver_fn():
receiver_tensors = {
'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
#4. Train and Evaluate
def train_and_evaluate(params):
OUTDIR = params["output_dir"]
model = tf.estimator.DNNRegressor(
        hidden_units = params["hidden_units"].split(","), # NEW: parameterize architecture
feature_columns = feature_cols,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = max(100, params["train_steps"] // 10) # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(model, my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn(params["train_data_path"]),
max_steps = params["train_steps"])
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn(params["eval_data_path"]),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.estimator.train_and_evaluate(model, train_spec, eval_spec)
Explanation: Create model.py
Note that any hyperparameters we want to tune need to be exposed as command line arguments. In particular note that the number of hidden units is now a parameter.
End of explanation
%%writefile taxifaremodel/task.py
import argparse
import json
import os
from . import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
# TODO: Your code goes here
)
parser.add_argument(
"--train_data_path",
help = "GCS or local path to training data",
required = True
)
parser.add_argument(
"--train_steps",
help = "Steps to run the training job for (default: 1000)",
type = int,
default = 1000
)
parser.add_argument(
"--eval_data_path",
help = "GCS or local path to evaluation data",
required = True
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--job-dir",
help="This is not used by our model, but it is required by gcloud",
)
args = parser.parse_args().__dict__
    # Append trial_id to path so trials don't overwrite each other
# This code can be removed if you are not using hyperparameter tuning
args["output_dir"] = os.path.join(
# TODO: Your code goes here
)
# Run the training job
model.train_and_evaluate(args)
Explanation: Create task.py
Exercise 1
The code cell below has two TODOs for you to complete.
First, in model.py above we set the number of hidden units in our model to be a hyperparameter. This means hidden_units must be exposed as a command line argument when we submit our training job to Cloud ML Engine. Modify the code below to add a flag for hidden_units. Be sure to include a description for the help field and specify the data type that the model should expect to receive. You can also include a default value. Look to the other parser arguments to make sure you have the formatting correct.
Second, when doing hyperparameter tuning we need to make sure the output directory is different for each run, otherwise successive runs will overwrite previous runs. In task.py below, add some code to append the trial_id to the output directory of the training job.
Hint: You can use json.loads(os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '') to extract the trial id of the training job. You will want to append this quantity to the output directory args['output_dir'] to make sure the output directory is different for each run.
End of explanation
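One possible way to complete the two TODOs, shown here only as a sketch (your flag description, default value, and formatting may differ), follows directly from the hint above:
# Hypothetical completion of TODO 1: expose hidden_units as a command line argument
parser.add_argument(
    "--hidden_units",
    help = "Comma-separated hidden layer sizes for the DNN, e.g. 128,64,32",
    type = str,
    default = "128,64,32"
)
# Hypothetical completion of TODO 2: append the trial id so each tuning trial
# writes to its own subdirectory (the trial id is an empty string when not tuning)
args["output_dir"] = os.path.join(
    args["output_dir"],
    json.loads(os.environ.get("TF_CONFIG", "{}")).get("task", {}).get("trial", "")
)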
%%writefile hyperparam.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10
maxParallelTrials: 10
hyperparameterMetricTag: rmse
enableTrialEarlyStopping: True
algorithm: GRID_SEARCH
params:
- parameterName: hidden_units
type: CATEGORICAL
categoricalValues:
- 10,10
- 64,32
- 128,64,32
- # TODO: Your code goes here
Explanation: Create hypertuning configuration
We specify:
1. How many trials to run (maxTrials) and how many of those trials can be run in parallel (maxParallelTrials)
2. Which algorithm to use (in this case GRID_SEARCH)
3. Which metric to optimize (hyperparameterMetricTag)
4. The search region in which to constrain the hyperparameter search
Full specification options here.
Here we are just tuning one parameter, the number of hidden units, and we'll run all trials in parallel. However, more commonly you would tune multiple hyperparameters.
Exercise 2
Add some additional hidden units to the hyperparam.yaml file below to potentially explore during the hyperparameter job.
End of explanation
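For example, the completed categoricalValues list might simply gain a couple more comma-separated architectures (the extra values below are arbitrary choices, not required ones):
      categoricalValues:
      - 10,10
      - 64,32
      - 128,64,32
      - 256,128,64
      - 64,64,64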
OUTDIR="gs://{}/taxifare/trained_hp_tune".format(BUCKET)
!gsutil -m rm -rf # TODO: Your code goes here
!gcloud ai-platform # TODO: Your code goes here
--package-path= # TODO: Your code goes here
--module-name= # TODO: Your code goes here
--config= # TODO: Your code goes here
--job-dir= # TODO: Your code goes here
--python-version= # TODO: Your code goes here
--runtime-version= # TODO: Your code goes here
--region= # TODO: Your code goes here
-- \
--train_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-train.csv \
--eval_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-valid.csv \
--train_steps=5000 \
--output_dir={OUTDIR}
Explanation: Run the training job
Same as before, with the addition of --config=hyperparam.yaml to reference the file we just created.
This will take about 20 minutes. Go to the Cloud Console and click on the job id. Once the job is completed, the chosen hyperparameters and resulting objective value (RMSE in this case) will be shown. Trials will be sorted from best to worst.
Exercise 3
Submit a hyperparameter tuning job to the cloud. Fill in the missing arguments below. This is similar to the exercise you completed in the 02_tensorlfow/g_distributed notebook. Note that one difference here is that we now specify a config parameter giving the location of our .yaml file.
End of explanation
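For reference, a completed submission could look roughly like the sketch below; it mirrors the later full-dataset job in this notebook with --config added, and the job name and flag values are assumptions to adapt to your own project:
OUTDIR="gs://{}/taxifare/trained_hp_tune".format(BUCKET)
!gsutil -m rm -rf {OUTDIR}
!gcloud ai-platform jobs submit training taxifare_hptune_$(date -u +%y%m%d_%H%M%S) \
    --package-path=taxifaremodel \
    --module-name=taxifaremodel.task \
    --config=hyperparam.yaml \
    --job-dir=gs://{BUCKET}/taxifare \
    --python-version=3.5 \
    --runtime-version={TFVERSION} \
    --region={REGION} \
    -- \
    --train_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-train.csv \
    --eval_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-valid.csv \
    --train_steps=5000 \
    --output_dir={OUTDIR}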
OUTDIR="gs://{}/taxifare/trained_large_tuned".format(BUCKET)
!gsutil -m rm -rf {OUTDIR} # start fresh each time
!gcloud ai-platform jobs submit training taxifare_large_$(date -u +%y%m%d_%H%M%S) \
--package-path=taxifaremodel \
--module-name=taxifaremodel.task \
--job-dir=gs://{BUCKET}/taxifare \
--python-version=3.5 \
--runtime-version={TFVERSION} \
--region={REGION} \
--scale-tier=STANDARD_1 \
-- \
--train_data_path=gs://cloud-training-demos/taxifare/large/taxi-train*.csv \
--eval_data_path=gs://cloud-training-demos/taxifare/small/taxi-valid.csv \
--train_steps=200000 \
--output_dir={OUTDIR} \
--hidden_units="128,64,32"
Explanation: Results
The best result is RMSE 4.02 with hidden units = 128,64,32.
This improvement is modest, but now that we have our hidden units tuned let's run on our larger dataset to see if it helps.
Note the passing of hyperparameter values via command line
End of explanation |
10,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
## Your code here
from collections import Counter
import random
import re
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if p_drop[word] < random.random()]
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None])
labels = tf.placeholder(tf.int32, [None, 1])
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 300 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))# create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs)# use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:
You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
10,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with Python
Step1: Create some variables in Python
Step2: Advanced python types
Step3: Advanced printing
Step4: Conditional statements in python
Step5: Conditional loops
Step6: Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
Step7: Creating functions in Python
Again, we don't use {}, but just indent the lines that are part of the function.
Step8: We can also define simple functions with lambdas | Python Code:
print ('Hello World!')
Explanation: Getting started with Python
End of explanation
i = 4 # int
type(i)
f = 4.1 # float
type(f)
b = True # boolean variable
s = "This is a string!"
print(s)
Explanation: Create some variables in Python
End of explanation
l = [3,1,2] # list
print(l)
d = {'foo':1, 'bar':2.3, 's':'my first dictionary'} # dictionary
print(d)
print(d['foo']) # element of a dictionary
n = None # Python's null type
type(n)
Explanation: Advanced python types
End of explanation
print "Our float value is %s. Our int value is %s." % (f,i) # Python is pretty good with strings
Explanation: Advanced printing
End of explanation
if i == 1 and f > 4:
    print("The value of i is 1 and f is greater than 4.")
elif i > 4 or f > 4:
    print("i or f are both greater than 4.")
else:
    print("both i and f are less than or equal to 4")
Explanation: Conditional statements in python
End of explanation
print(l)
for e in l:
    print(e)
Explanation: Conditional loops
End of explanation
counter = 6
while counter < 10:
    print(counter)
    counter += 1
Explanation: Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
End of explanation
def add2(x):
y = x + 2
return y
i = 5
add2(i)
Explanation: Creating functions in Python
Again, we don't use {}, but just indent the lines that are part of the function.
End of explanation
square = lambda x: x*x
square(add2(i))
Explanation: We can also define simple functions with lambdas:
End of explanation |
10,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
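A minimal sketch of that call, assuming MODEL_DIR already points at the SavedModel directory written by the training job:
import tensorflow as tf

model = tf.keras.models.load_model(MODEL_DIR)  # tf.keras can read a SavedModel straight from a gs:// path
model.summary()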
Step22: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
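A hedged sketch of that load (variable names are illustrative; it omits the per-feature scaling the training script applied, which you would reproduce before evaluating):
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
    path="boston_housing.npz", test_split=0.2, seed=113
)
print(x_test.shape, y_test.shape)  # (102, 13) feature rows and median prices in $1K units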
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
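A minimal sketch of that inspection, assuming the model was exported as a SavedModel (the layer names printed will be specific to your own export):
import tensorflow as tf

loaded = tf.saved_model.load(MODEL_DIR)                 # reload the exported model
serving_fn = loaded.signatures["serving_default"]       # default serving signature
serving_input = list(serving_fn.structured_input_signature[1].keys())[0]
serving_output = list(serving_fn.structured_outputs.keys())[0]
print("Serving function input :", serving_input)
print("Serving function output:", serving_output)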
Step25: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of
Step26: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of
Step27: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step28: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters
Step29: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step30: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step31: Understanding the explanations response
First, you will look what your model predicted and compare it to the actual value.
Step32: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
Step33: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
Step34: Sanity check
In the function below you perform a sanity check on the explanations.
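The helper itself is not reproduced here, but a hedged, SDK-agnostic sketch of the idea (the caller is assumed to have already pulled the per-feature attribution values, the baseline output and the predicted value out of the explanation response, and the 5% tolerance is an arbitrary choice) could look like this:
def sanity_check_explanations(attribution_values, baseline_score, predicted_value, tolerance=0.05):
    # attribution_values: flat list of per-feature attributions for one test instance
    # baseline_score:     model output on the baseline input
    # predicted_value:    model output on the actual instance
    reconstructed = baseline_score + sum(attribution_values)
    gap = abs(reconstructed - predicted_value)
    ok = gap <= tolerance * max(abs(predicted_value), 1e-6)
    print("baseline + attributions = %.3f, prediction = %.3f, ok = %s"
          % (reconstructed, predicted_value, ok))
    return ok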
Step35: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step36: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom training tabular regression model for online prediction with explainability
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for online prediction with explanation.
Dataset
The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. You can alternatively create custom models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Set explanation parameters.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explanation.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.environ["IS_TESTING"]:
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more here hardware accelerator support for your region
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when it is referenced in the worker pool specification, the directory slash is replaced with a dot (trainer.task) and the file suffix (.py) is dropped.
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[_], max = scale(x_train[_])
x_test[_], _ = scale(x_test[_])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads Boston Housing dataset from TF.Keras builtin datasets
Builds a simple deep neural network model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature f.write(str(params)) to the specified parameters file.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification (see the sketch below).
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
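As an aside, a minimal sketch of how a training script resolves the model directory under the indirect option (this mirrors the argparse default in task.py above; the bucket path is a placeholder):
# Sketch: prefer --model-dir if given, otherwise fall back to the service-provided location.
import os
model_dir = os.getenv("AIP_MODEL_DIR", "gs://your-bucket/model")  # placeholder default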
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[_] = scale(x_test[_])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why it is loaded as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1.
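For reference, the column-wise max-scaling described above can be written compactly as a standalone sketch (stand-in data; this is independent of the loop used in this tutorial):
import numpy as np
x = np.random.rand(4, 13).astype(np.float32) * 100   # stand-in data with 13 features
x_scaled = x / x.max(axis=0)                          # divide each column by its own maximum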
End of explanation
local_model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
Explanation: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:
parameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
Shapley - Note, not recommended for image data -- can be very long running
XRAI
Integrated Gradients
metadata: This is the specification for how the algorithm is applied to your custom model.
Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
path_count: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features; even a moderate M (for example, the 784 pixels of a 28x28 MNIST image) makes this enormous.
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.
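As a concrete check for this tutorial's model: with M = 13 Boston Housing features, an exact computation would need 13! = 6,227,020,800 paths, while path_count=10 samples only 13 * 10 = 130 paths.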
Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the step count, the compute time increases as well.
XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the step count, the compute time increases as well.
In the next code cell, set the variable XAI to which explainabilty algorithm you will use on your custom model.
End of explanation
INPUT_METADATA = {
"input_tensor_name": serving_input,
"encoding": "BAG_OF_FEATURES",
"modality": "numeric",
"index_feature_mapping": [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
],
}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"features": input_metadata}, outputs={"medv": output_metadata}
)
Explanation: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
outputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that is what we want to explain.
y = f(x)
Consider the following formulae, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.
y, z = f(x)
The dictionary format for outputs is:
{ "outputs": { "[your_display_name]":
"output_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the output to explain. A common example is "probability".<br/>
- "output_tensor_name": The key/value field to identify the output layer to explain. <br/>
- [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.
</blockquote>
inputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. We have to pick which features to explain how the contributed. Assume that this model is deployed for A/B testing, where a are the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of) for the features, and not b since it does not contribute to the prediction.
y = f(a,b)
The minimum dictionary format for inputs is:
{ "inputs": { "[your_display_name]":
"input_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the input to explain. A common example is "features".<br/>
- "input_tensor_name": The key/value field to identify the input layer for the feature attribution. <br/>
- [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.
</blockquote>
Since the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:
<blockquote>
- "encoding": "BAG_OF_FEATURES" : Indicates that the inputs are set of tabular features.<br/>
- "index_feature_mapping": [ feature-names ] : A list of human readable names for each feature. For this example, we use the feature names specified in the dataset.<br/>
- "modality": "numeric": Indicates the field values are numeric.
</blockquote>
End of explanation
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
DEPLOYED_NAME = "boston-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, among which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of a model already deployed to the endpoint. The percents must add up to 100 (see the sketch after this list).
machine_type: The type of machine to use for serving predictions.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
min_replica_count: The minimum number of compute instances to provision initially.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
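A sketch of the two-model case mentioned above (the second model id is a placeholder, not a real deployment):
# Sketch: 80% of traffic to the newly deployed model ("0"), 20% to an existing one.
TRAFFIC_SPLIT = {"0": 80, "1234567890123456789": 20}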
End of explanation
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
Explanation: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
instances_list = [test_item.tolist()]
prediction = endpoint.explain(instances_list)
print(prediction)
Explanation: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the explain() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the explain() call is an object with the following fields:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The prediction per instance.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
explanations: The feature attributions
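For orientation, these fields are accessed as attributes of the explain() response used in this step (a sketch that mirrors later cells in this tutorial):
# `prediction` is the explain() response for this step.
print(prediction.predictions[0])         # model output for the first instance
print(prediction.deployed_model_id)      # which deployed model served the request
attribution = prediction.explanations[0].attributions[0]
print(attribution.feature_attributions)  # per-feature contributions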
End of explanation
value = prediction[0][0][0]
print("Predicted Value:", value)
Explanation: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
End of explanation
from tabulate import tabulate
feature_names = [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
]
attributions = prediction.explanations[0].attributions[0].feature_attributions
rows = []
for i, val in enumerate(feature_names):
rows.append([val, test_item[i], attributions[val]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
Explanation: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
# Prepare 10 test examples to your model for prediction
instances = []
for i in range(10):
instances.append(x_test[i].tolist())
response = endpoint.explain(instances)
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
End of explanation
import numpy as np
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
Explanation: Sanity check
In the function below you perform a sanity check on the explanations.
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
10,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Images are numpy arrays
Images are represented in scikit-image using standard numpy arrays. This allows maximum inter-operability with other libraries in the scientific Python ecosystem, such as matplotlib and scipy.
Let's see how to build a grayscale image as a 2D array
Step1: The same holds for "real-world" images
Step2: A color image is a 3D array, where the last dimension has size 3 and represents the red, green, and blue channels
Step3: These are just numpy arrays. Making a red square is easy using just array slicing and manipulation
Step4: Images can also include transparent regions by adding a 4th dimension, called an alpha layer.
Data types and image values
In literature, one finds different conventions for representing image values
Step5: The library is designed in such a way that any data-type is allowed as input,
as long as the range is correct (0-1 for floating point images, 0-255 for unsigned bytes,
0-65535 for unsigned 16-bit integers).
This is achieved through the use of a few utility functions, such as img_as_float and img_as_ubyte
Step6: Your code would then typically look like this
Step7: Also, we'll want to make sure we have numpy and matplotlib imported.
Step8: If we plot a gray-scale image using the default colormap, "jet", and a gray-scale color
map, "gray", you can easily see the difference
Step9: We can get a better idea of the ill effects by zooming into the man's face.
Step10: Notice how the face looks distorted and splotchy with the "jet" colormap. Also, this colormap distorts the concepts of light and dark, and there are artificial boundaries created by the different color hues. Is that a beauty mark on the man's upper lip? No, it's just an artifact of this ridiculous colormap.
Here's another example
Step11: Woah! See all those non-existing contours?
You can add the following setting at the top of any script
to change the default colormap
Step12: Don't worry
Step13: You can also set both of these explicitly in the imshow command
Step14: Interactive demo
Step15: Image I/O
Mostly, we won't be using input images from the scikit-image example data sets. Those images are typically stored in JPEG or PNG format. Since scikit-image operates on NumPy arrays, any image reader library that provides arrays will do. Options include matplotlib, pillow, imageio, imread, etc.
scikit-image conveniently wraps many of these in the io submodule, and will use whatever option is available
Step16: We also have the ability to load multiple images, or multi-layer TIFF images
Step17: <span class="exercize">Exercise
Step18: Test your function like so
Step19: <span class="exercize">Exercise
Step20: Test your function here
Step21: <div style="height | Python Code:
import numpy as np
from matplotlib import pyplot as plt, cm
random_image = np.random.random([500, 500])
plt.imshow(random_image, cmap=cm.gray, interpolation='nearest');
Explanation: Images are numpy arrays
Images are represented in scikit-image using standard numpy arrays. This allows maximum inter-operability with other libraries in the scientific Python ecosystem, such as matplotlib and scipy.
Let's see how to build a grayscale image as a 2D array:
End of explanation
from skimage import data
coins = data.coins()
print(type(coins), coins.dtype, coins.shape)
plt.imshow(coins, cmap=cm.gray, interpolation='nearest');
Explanation: The same holds for "real-world" images:
End of explanation
cat = data.chelsea()
print("Shape:", cat.shape)
print("Values min/max:", cat.min(), cat.max())
plt.imshow(cat, interpolation='nearest');
Explanation: A color image is a 3D array, where the last dimension has size 3 and represents the red, green, and blue channels:
End of explanation
cat[10:110, 10:110, :] = [255, 0, 0] # [red, green, blue]
plt.imshow(cat);
Explanation: These are just numpy arrays. Making a red square is easy using just array slicing and manipulation:
End of explanation
linear0 = np.linspace(0, 1, 2500).reshape((50, 50))
linear1 = np.linspace(0, 255, 2500).reshape((50, 50)).astype(np.uint8)
print("Linear0:", linear0.dtype, linear0.min(), linear0.max())
print("Linear1:", linear1.dtype, linear1.min(), linear1.max())
fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(linear0, cmap='gray')
ax1.imshow(linear1, cmap='gray');
Explanation: Images can also include transparent regions by adding a 4th dimension, called an alpha layer.
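A small sketch of such an RGBA image, using the numpy/matplotlib imports above (not part of the original demo):
rgba = np.zeros((50, 50, 4), dtype=np.uint8)
rgba[..., :3] = 255   # white RGB channels
rgba[..., 3] = 128    # alpha channel: roughly 50% opaque
plt.imshow(rgba);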
Data types and image values
In literature, one finds different conventions for representing image values:
0 - 255 where 0 is black, 255 is white
0 - 1 where 0 is black, 1 is white
scikit-image supports both conventions--the choice is determined by the
data-type of the array.
E.g., here, I generate two valid images:
End of explanation
from skimage import img_as_float, img_as_ubyte
image = data.chelsea()
image_float = img_as_float(image)
image_ubyte = img_as_ubyte(image)
print("type, min, max:", image_float.dtype, image_float.min(), image_float.max())
print("type, min, max:", image_ubyte.dtype, image_ubyte.min(), image_ubyte.max())
print("231/255 =", 231/255.)
Explanation: The library is designed in such a way that any data-type is allowed as input,
as long as the range is correct (0-1 for floating point images, 0-255 for unsigned bytes,
0-65535 for unsigned 16-bit integers).
This is achieved through the use of a few utility functions, such as img_as_float and img_as_ubyte:
End of explanation
from skimage import data
image = data.camera()
Explanation: Your code would then typically look like this:
python
def my_function(any_image):
float_image = img_as_float(any_image)
# Proceed, knowing image is in [0, 1]
We recommend using the floating point representation, given that
scikit-image mostly uses that format internally.
Displaying images using matplotlib
Before we get started, a quick note about plotting images---specifically, plotting gray-scale images with Matplotlib. First, let's grab an example image from scikit-image.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
Explanation: Also, we'll want to make sure we have numpy and matplotlib imported.
End of explanation
fig, (ax_jet, ax_gray) = plt.subplots(ncols=2, figsize=(10, 5))
ax_jet.imshow(image, cmap='jet')
ax_gray.imshow(image, cmap='gray');
Explanation: If we plot a gray-scale image using the default colormap, "jet", and a gray-scale color
map, "gray", you can easily see the difference:
End of explanation
face = image[80:160, 200:280]
fig, (ax_jet, ax_gray) = plt.subplots(ncols=2)
ax_jet.imshow(face, cmap='jet')
ax_gray.imshow(face, cmap='gray');
Explanation: We can get a better idea of the ill effects by zooming into the man's face.
End of explanation
X, Y = np.ogrid[-5:5:0.1, -5:5:0.1]
R = np.sqrt(X**2 + Y**2)
fig, (ax_jet, ax_gray) = plt.subplots(1, 2)
ax_jet.imshow(R, cmap='jet')
ax_gray.imshow(R, cmap='gray');
Explanation: Notice how the face looks distorted and splotchy with the "jet" colormap. Also, this colormap distorts the concepts of light and dark, and there are artificial boundaries created by the different color hues. Is that a beauty mark on the man's upper lip? No, it's just an artifact of this ridiculous colormap.
Here's another example:
End of explanation
plt.rcParams['image.cmap'] = 'gray'
Explanation: Woah! See all those non-existing contours?
You can add the following setting at the top of any script
to change the default colormap:
End of explanation
plt.rcParams['image.interpolation'] = 'nearest'
Explanation: Don't worry: color images are unaffected by this change.
In addition, we'll set the interpolation to 'nearest neighbor' so that it's easier to distinguish individual pixels in your image (the default is 'bicubic'--see the exploration below).
End of explanation
plt.imshow(R, cmap='gray', interpolation='nearest');
Explanation: You can also set both of these explicitly in the imshow command:
End of explanation
from IPython.html.widgets import interact, fixed
from matplotlib import cm as colormaps
@interact(image=fixed(face),
cmap=sorted([c for c in colormaps.datad.keys() if not c.endswith('_r')],
key=lambda x: x.lower()),
interpolation=['nearest', 'bilinear', 'bicubic',
'spline16', 'spline36', 'hanning', 'hamming',
'hermite', 'kaiser', 'quadric', 'catrom',
'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos'])
def imshow_params(image, cmap='jet', interpolation='bicubic'):
fig, axes = plt.subplots(1, 5, figsize=(15, 4))
axes[0].imshow(image, cmap='gray', interpolation='nearest')
axes[0].set_title('Original')
axes[1].imshow(image[:5, :5], cmap='gray', interpolation='nearest')
axes[1].set_title('Top 5x5 block')
axes[1].set_xlabel('No interpolation')
axes[2].imshow(image, cmap=cmap, interpolation=interpolation)
axes[2].set_title('%s colormap' % cmap)
axes[2].set_xlabel('%s interpolation' % interpolation)
axes[3].imshow(image[:5, :5], cmap=cmap, interpolation=interpolation)
axes[3].set_title('%s colormap' % cmap)
axes[3].set_xlabel('%s interpolation' % interpolation)
axes[4].imshow(R, cmap=cmap, interpolation=interpolation)
axes[4].set_title('%s colormap' % cmap)
axes[4].set_xlabel('%s interpolation' % interpolation)
for ax in axes:
ax.set_xticks([])
ax.set_yticks([])
Explanation: Interactive demo: interpolation and color maps
End of explanation
from skimage import io
image = io.imread('../images/balloon.jpg')
print(type(image))
plt.imshow(image);
Explanation: Image I/O
Mostly, we won't be using input images from the scikit-image example data sets. Those images are typically stored in JPEG or PNG format. Since scikit-image operates on NumPy arrays, any image reader library that provides arrays will do. Options include matplotlib, pillow, imageio, imread, etc.
scikit-image conveniently wraps many of these in the io submodule, and will use whatever option is available:
End of explanation
ic = io.imread_collection('../images/*.png')
print(type(ic), '\n\n', ic)
f, axes = plt.subplots(nrows=1, ncols=len(ic), figsize=(15, 10))
for i, image in enumerate(ic):
axes[i].imshow(image, cmap='gray')
axes[i].axis('off')
Explanation: We also have the ability to load multiple images, or multi-layer TIFF images:
End of explanation
def draw_H(image, coords, color=(0.8, 0.8, 0.8), in_place=True):
out = image.copy()
# your code goes here
return out
Explanation: <span class="exercize">Exercise: draw the letter H</span>
Define a function that takes as input an RGB image and a pair of coordinates (row, column), and returns the image (optionally a copy) with green letter H overlaid at those coordinates. The coordinates should point to the top-left corner of the H.
The arms and strut of the H should have a width of 3 pixels, and the H itself should have a height of 24 pixels and width of 20 pixels.
Start with the following template:
End of explanation
cat = data.chelsea()
cat_H = draw_H(cat, (50, -50))
plt.imshow(cat_H);
Explanation: Test your function like so:
End of explanation
def plot_intensity(image, row):
# Fill in the three lines below
red_values = ...
green_values = ...
blue_values = ...
plt.figure()
plt.plot(red_values)
plt.plot(green_values)
plt.plot(blue_values)
pass
Explanation: <span class="exercize">Exercise: RGB intensity plot</span>
Plot the intensity of each channel of the image along a given row.
Start with the following template:
End of explanation
plot_intensity(cat, 50)
plot_intensity(cat, 100)
Explanation: Test your function here:
End of explanation
%reload_ext load_style
%load_style ../themes/tutorial.css
Explanation: <div style="height: 400px;"></div>
End of explanation |
10,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computation of cutting planes
Step1: $\DeclareMathOperator{\domain}{dom}
\newcommand{\transpose}{\text{T}}
\newcommand{\vec}[1]{\begin{pmatrix}#1\end{pmatrix}}$
Example
To test the computation of cutting planes we consider the unconstrained convex optimization problem
\begin{align}
&\text{minimize} \quad f_\text{obj}(x_0, x_1) = (x_0 - 5)^2 + (x_1 - 5)^2,
\end{align}
and also the same problem with convex inequality constraints. That is, the problem
\begin{align}
&\text{minimize} \quad f_\text{obj}(x_0, x_1) = (x_0 - 5)^2 + (x_1 - 5)^2 \
&\phantom{\text{minimize}} \quad f_0(x_0, x_1) =
a_0^\transpose x - b_0 = \vec{1\0}^\transpose \vec{x_0\x_1} - 20 = x_0 - 20 \leq 0\
&\phantom{\text{minimize}} \quad f_1(x_0, x_1) =
a_1^\transpose x - b_1 = \vec{-1\0}^\transpose \vec{x_0\x_1} = -x_0 \leq 0\
&\phantom{\text{minimize}} \quad f_2(x_0, x_1) =
a_2^\transpose x - b_2 = \vec{0\1}^\transpose \vec{x_0\x_1} - 20 = x_1 - 20 \leq 0 \
&\phantom{\text{minimize}} \quad f_3(x_0, x_1) =
a_3^\transpose x - b_3 = \vec{0\-1}^\transpose \vec{x_0\x_1} = -x_1 \leq 0.
\end{align}
In both cases it is clear that the solution is $x^\star = (x_0^\star, x_1^\star) = (5, 5)$.
The ACCPM requires the gradients of the objective function and constraint functions, which are
\begin{align}
&\nabla f_\text{obj}(x_0, x_1) = \vec{2(x_0 - 5)\2(x_1 - 5)}, \
&\nabla f_0(x_0, x_1) = \vec{1\0}, \quad \nabla f_1(x_0, x_1) = \vec{-1\0}, \
&\nabla f_2(x_0, x_1) = \vec{0\1}, \quad \nabla f_3(x_0, x_1) = \vec{0\-1}.
\end{align}
We implement these functions as follows
Step2: Here we analytically compute the initial few iterations for the unconstrained problem. The ACCPM requires that the initial polygon $\mathcal{P}_0$ (here I've abused terminology and by the initial polygon $\mathcal{P}_0$ I actually mean the system of linear inequalities $Ax \leq b$) contain at least some of the points we are interested in. For the purposes of this example we take
\begin{align}
A = \vec{a_0^\transpose\a_1^\transpose\a_2^\transpose\a_3^\transpose}, b = \vec{20\0\20\0}.
\end{align}
Now, we start with $k=0$.
Now, $x^{(0)}_{ac}$ is the solution of the minimization problem
\begin{equation}
\min_{\domain \phi} \phi(x) = - \sum_{i=0}^{3}{\log{(b_i - a_i^\transpose x)}}.
\end{equation}
So, we solve the problem
\begin{align}
&\phantom{iff}\nabla \phi(x) = \sum_{i=0}^{3} \frac{1}{b_i - a_i^\transpose x}a_i = 0 \
&\iff \frac{1}{20-x_0}\begin{bmatrix}1\0\end{bmatrix} + \frac{1}{x_0}\begin{bmatrix}-1\0\end{bmatrix} + \frac{1}{20-x_1}\begin{bmatrix}0\1\end{bmatrix} + \frac{1}{x_1}\begin{bmatrix}0\-1\end{bmatrix} = 0 \
&\iff \frac{1}{20-x_0} - \frac{1}{x_0} = 0, \frac{1}{20-x_1} - \frac{1}{x_1} = 0 \
&\iff x_0 = \frac{20}{2} = 10, x_1 = \frac{20}{2} = 10,
\end{align}
and conclude $x^{(0)}_{ac} = (10, 10)$. We then query the oracle at $x^{(0)}_{ac}$. (Here, $f_\text{best} = f_\text{obj}(10, 10) = 50$ since this is the $0$-th iteration.) As there are no inequality constraints we have
\begin{align}
&a_4 = \nabla f_\text{obj}(10, 10) = \vec{10\10}, \
&b_4 = \nabla f_\text{obj}(10, 10)^\transpose \vec{10\10} = \vec{10\10}^\transpose \vec{10\10} = 200,
\end{align}
which we normalize to get
\begin{align}
&a_4 = \frac{1}{\sqrt{100^2 + 100^2}} \nabla f_\text{obj}(10, 10)
= \vec{\frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} } \approx \vec{0.7071 \ 0.7071}, \
&b_4 = \frac{1}{\sqrt{100^2 + 100^2}} \nabla f_\text{obj}(10, 10)^\transpose \vec{10\10} = \vec{10\10}^\transpose \vec{10\10} = \frac{20}{\sqrt{2}} = 10\sqrt{2} \approx 14.1421,
\end{align}
and therefore update
\begin{align}
A = \vec{a_0^\transpose\a_1^\transpose\a_2^\transpose\a_3^\transpose\ \frac{1}{\sqrt{2}} \;\; \frac{1}{\sqrt{2}}}, b = \vec{20\0\20\0\10\sqrt{2}}, k = 1.
\end{align}
Now, $x^{(1)}_{ac}$ is the solution of the minimization problem
\begin{equation}
\min_{\domain \phi} \phi(x) = - \sum_{i=0}^{4}{\log{(b_i - a_i^\transpose x)}}.
\end{equation}
So, we solve the problem
\begin{align}
&\phantom{iff}\nabla \phi(x) = \sum_{i=0}^{4} \frac{1}{b_i - a_i^\transpose x}a_i = 0 \
&\iff \frac{1}{20-x_0}\vec{1\0} + \frac{1}{x_0}\vec{-1\0} + \frac{1}{20-x_1}\vec{0\1} + \frac{1}{x_1}\vec{0\-1} + \frac{\sqrt{2}}{20-x_0-x_1} \vec{\frac{1}{\sqrt{2}}\\frac{1}{\sqrt{2}}} = 0\
&\iff \frac{1}{20-x_0} - \frac{1}{x_0} + \frac{1}{20 - x_0- x_1}= 0, \frac{1}{20-x_1} - \frac{1}{x_1} + \frac{1}{20 - x_0- x_1} = 0 \
&\iff x_0 = x_1 = 2(5 \pm \sqrt{5}) \approx 14.4721 \text{ or } 5.52786,
\end{align}
and take $x^{(1)}_{ac} = (2(5-\sqrt{5}), 2(5-\sqrt{5})) \approx (5.52786, 5.52786)$. We then query the
oracle at $x^{(1)}_{ac}$. Here
$f_\text{obj}(x^{(1)}_{ac}) = f_\text{obj}(2(5-\sqrt{5}), 2(5-\sqrt{5})) = 90 - 40\sqrt{5} \approx 0.557281 \leq f_\text{best} = 50$ so we update
$f_\text{best} = 90 - 40\sqrt{5} \approx 0.557281$ and therefore put (and normalize)
\begin{align}
&a_5 = \frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \nabla f_\text{obj}(x^{(1)}_{ac}) =
\frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \nabla f_\text{obj}\vec{2(5-\sqrt{5}) \ 2(5-\sqrt{5})} =
\frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \vec{10-4\sqrt{5}\10-4\sqrt{5}}
= \vec{\frac{1}{\sqrt{2}}\\frac{1}{\sqrt{2}}}\approx \vec{0.7071 \ 0.7071}, \
&b_5 = \frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \nabla f_\text{obj}(x^{(1)}_{ac})^\transpose \vec{2(5-\sqrt{5}) \ 2(5-\sqrt{5})} =
\frac{1}{\sqrt{2}} \vec{1\1}^\transpose \vec{2(5-\sqrt{5}) \ 2(5-\sqrt{5})} = 2\sqrt{2} (5 - \sqrt{5})
\approx 7.8176,
\end{align}
updating $A$ and $b$ also. Iteration $k = 1$ is concluded by incrementing $k$.
We see that this computation will only get more complicated. Therefore, in our implementation we test the computed analytic center by simply computing the norm of the gradient of the log barrier at the analytic center and checking that it is sufficiently small.
We first test the unconstrained version of the problem.
Step3: Next we test a trivial version of the inequality constrained problem where the inequality constraints and linear inequality constraints are the same
Step4: Next we test a version of the inequality constrained problem where the initial polygon lies within the feasible region given by the inequality constraints.
Step5: We now test a version of the inequality constrained problem where the initial polyhedron contains the feasible region but also regions that are infeasible.
Step6: We test a version of the inequality constrained problem where the initial polyhedron is moderately large.
Step7: Finally we test a version of the inequality constrained problem where the initial polyhedron is very large. We observe that the algorithm does not converge.
It is possible more iterations, or pruning of the redundant inequalities, would allow for convergence, but in practice, it is unlikely this situation will arise. | Python Code:
import numpy as np
import pandas as pd
import accpm
%load_ext autoreload
%autoreload 1
%aimport accpm
Explanation: Computation of cutting planes: example 1
The set-up
End of explanation
def funcobj(x):
return (x[0]-5)**2 + (x[1]-5)**2
def func0(x):
return x[0] - 20
def func1(x):
return -x[0]
def func2(x):
return x[1] - 20
def func3(x):
return -x[1]
def grad_funcobj(x):
return np.array([2*(x[0] - 5), 2*(x[1] - 5)])
def grad_func0(x):
return np.array([1, 0])
def grad_func1(x):
return np.array([-1, 0])
def grad_func2(x):
return np.array([0, 1])
def grad_func3(x):
return np.array([0, -1])
Explanation: $\DeclareMathOperator{\domain}{dom}
\newcommand{\transpose}{\text{T}}
\newcommand{\vec}[1]{\begin{pmatrix}#1\end{pmatrix}}$
Example
To test the computation of cutting planes we consider the unconstrained convex optimization problem
\begin{align}
&\text{minimize} \quad f_\text{obj}(x_0, x_1) = (x_0 - 5)^2 + (x_1 - 5)^2,
\end{align}
and also the same problem with convex inequality constraints. That is, the problem
\begin{align}
&\text{minimize} \quad f_\text{obj}(x_0, x_1) = (x_0 - 5)^2 + (x_1 - 5)^2 \
&\phantom{\text{minimize}} \quad f_0(x_0, x_1) =
a_0^\transpose x - b_0 = \vec{1\0}^\transpose \vec{x_0\x_1} - 20 = x_0 - 20 \leq 0\
&\phantom{\text{minimize}} \quad f_1(x_0, x_1) =
a_1^\transpose x - b_1 = \vec{-1\0}^\transpose \vec{x_0\x_1} = -x_0 \leq 0\
&\phantom{\text{minimize}} \quad f_2(x_0, x_1) =
a_2^\transpose x - b_2 = \vec{0\1}^\transpose \vec{x_0\x_1} - 20 = x_1 - 20 \leq 0 \
&\phantom{\text{minimize}} \quad f_3(x_0, x_1) =
a_3^\transpose x - b_3 = \vec{0\-1}^\transpose \vec{x_0\x_1} = -x_1 \leq 0.
\end{align}
In both cases it is clear that the solution is $x^\star = (x_0^\star, x_1^\star) = (5, 5)$.
The ACCPM requires the gradients of the objective function and constraint functions, which are
\begin{align}
&\nabla f_\text{obj}(x_0, x_1) = \vec{2(x_0 - 5)\2(x_1 - 5)}, \
&\nabla f_0(x_0, x_1) = \vec{1\0}, \quad \nabla f_1(x_0, x_1) = \vec{-1\0}, \
&\nabla f_2(x_0, x_1) = \vec{0\1}, \quad \nabla f_3(x_0, x_1) = \vec{0\-1}.
\end{align}
We implement these functions as follows:
End of explanation
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([20, 0, 20, 0])
accpm.accpm(A, b, funcobj, grad_funcobj, alpha=0.01, beta=0.7,
start=1, tol=10e-3, maxiter = 200, testing=1)
Explanation: Here we analytically compute the initial few iterations for the unconstrained problem. The ACCPM requires that the initial polygon $\mathcal{P}_0$ (here I've abused terminology and by the initial polygon $\mathcal{P}_0$ I actually mean the system of linear inequalities $Ax \leq b$) contain at least some of the points we are interested in. For the purposes of this example we take
\begin{align}
A = \vec{a_0^\transpose\a_1^\transpose\a_2^\transpose\a_3^\transpose}, b = \vec{20\0\20\0}.
\end{align}
Now, we start with $k=0$.
Now, $x^{(0)}_{ac}$ is the solution of the minimization problem
\begin{equation}
\min_{\domain \phi} \phi(x) = - \sum_{i=0}^{3}{\log{(b_i - a_i^\transpose x)}}.
\end{equation}
So, we solve the problem
\begin{align}
&\phantom{iff}\nabla \phi(x) = \sum_{i=0}^{3} \frac{1}{b_i - a_i^\transpose x}a_i = 0 \
&\iff \frac{1}{20-x_0}\begin{bmatrix}1\0\end{bmatrix} + \frac{1}{x_0}\begin{bmatrix}-1\0\end{bmatrix} + \frac{1}{20-x_1}\begin{bmatrix}0\1\end{bmatrix} + \frac{1}{x_1}\begin{bmatrix}0\-1\end{bmatrix} = 0 \
&\iff \frac{1}{20-x_0} - \frac{1}{x_0} = 0, \frac{1}{20-x_1} - \frac{1}{x_1} = 0 \
&\iff x_0 = \frac{20}{2} = 10, x_1 = \frac{20}{2} = 10,
\end{align}
and conclude $x^{(0)}_{ac} = (10, 10)$. We then query the oracle at $x^{(0)}_{ac}$. (Here, $f_\text{best} = f_\text{obj}(10, 10) = 50$ since this is the $0$-th iteration.) As there are no inequality constraints we have
\begin{align}
&a_4 = \nabla f_\text{obj}(10, 10) = \vec{10\10}, \
&b_4 = \nabla f_\text{obj}(10, 10)^\transpose \vec{10\10} = \vec{10\10}^\transpose \vec{10\10} = 200,
\end{align}
which we normalize to get
\begin{align}
&a_4 = \frac{1}{\sqrt{100^2 + 100^2}} \nabla f_\text{obj}(10, 10)
= \vec{\frac{1}{\sqrt{2}} \ \frac{1}{\sqrt{2}} } \approx \vec{0.7071 \ 0.7071}, \
&b_4 = \frac{1}{\sqrt{100^2 + 100^2}} \nabla f_\text{obj}(10, 10)^\transpose \vec{10\10} = \vec{10\10}^\transpose \vec{10\10} = \frac{20}{\sqrt{2}} = 10\sqrt{2} \approx 14.1421,
\end{align}
and therefore update
\begin{align}
A = \vec{a_0^\transpose\a_1^\transpose\a_2^\transpose\a_3^\transpose\ \frac{1}{\sqrt{2}} \;\; \frac{1}{\sqrt{2}}}, b = \vec{20\0\20\0\10\sqrt{2}}, k = 1.
\end{align}
Now, $x^{(1)}_{ac}$ is the solution of the minimization problem
\begin{equation}
\min_{\domain \phi} \phi(x) = - \sum_{i=0}^{4}{\log{(b_i - a_i^\transpose x)}}.
\end{equation}
So, we solve the problem
\begin{align}
&\phantom{iff}\nabla \phi(x) = \sum_{i=0}^{4} \frac{1}{b_i - a_i^\transpose x}a_i = 0 \
&\iff \frac{1}{20-x_0}\vec{1\0} + \frac{1}{x_0}\vec{-1\0} + \frac{1}{20-x_1}\vec{0\1} + \frac{1}{x_1}\vec{0\-1} + \frac{\sqrt{2}}{20-x_0-x_1} \vec{\frac{1}{\sqrt{2}}\\frac{1}{\sqrt{2}}} = 0\
&\iff \frac{1}{20-x_0} - \frac{1}{x_0} + \frac{1}{20 - x_0- x_1}= 0, \frac{1}{20-x_1} - \frac{1}{x_1} + \frac{1}{20 - x_0- x_1} = 0 \
&\iff x_0 = x_1 = 2(5 \pm \sqrt{5}) \approx 14.4721 \text{ or } 5.52786,
\end{align}
and take $x^{(1)}_{ac} = (2(5-\sqrt{5}), 2(5-\sqrt{5})) \approx (5.52786, 5.52786)$. We then query the
oracle at $x^{(1)}_{ac}$. Here
$f_\text{obj}(x^{(1)}_{ac}) = f_\text{obj}(2(5-\sqrt{5}), 2(5-\sqrt{5})) = 90 - 40\sqrt{5} \approx 0.557281 \leq f_\text{best} = 50$ so we update
$f_\text{best} = 90 - 40\sqrt{5} \approx 0.557281$ and therefore put (and normalize)
\begin{align}
&a_5 = \frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \nabla f_\text{obj}(x^{(1)}_{ac}) =
\frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \nabla f_\text{obj}\vec{2(5-\sqrt{5}) \ 2(5-\sqrt{5})} =
\frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \vec{10-4\sqrt{5}\10-4\sqrt{5}}
= \vec{\frac{1}{\sqrt{2}}\\frac{1}{\sqrt{2}}}\approx \vec{0.7071 \ 0.7071}, \
&b_5 = \frac{1}{\sqrt{2(10-4\sqrt{5})^2}} \nabla f_\text{obj}(x^{(1)}_{ac})^\transpose \vec{2(5-\sqrt{5}) \ 2(5-\sqrt{5})} =
\frac{1}{\sqrt{2}} \vec{1\1}^\transpose \vec{2(5-\sqrt{5}) \ 2(5-\sqrt{5})} = 2\sqrt{2} (5 - \sqrt{5})
\approx 7.8176,
\end{align}
updating $A$ and $b$ also. Iteration $k = 1$ is concluded by incrementing $k$.
We see that this computation will only get more complicated. Therefore, in our implementation we test the computed analytic center by simply computing the norm of the gradient of the log barrier at the analytic center and checking that it is sufficiently small.
We first test the unconstrained version of the problem.
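That check can be illustrated directly with a standalone sketch (not part of the accpm module): the gradient of the log barrier vanishes at the analytic center computed above.
def logbarrier_grad(x, A, b):
    # gradient of phi(x) = -sum log(b_i - a_i^T x), i.e. sum a_i / (b_i - a_i^T x)
    return (A / (b - A @ x)[:, None]).sum(axis=0)
A0 = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b0 = np.array([20., 0., 20., 0.])
print(np.linalg.norm(logbarrier_grad(np.array([10., 10.]), A0, b0)))  # ~0 at x_ac = (10, 10)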
End of explanation
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([20, 0, 20, 0])
accpm.accpm(A, b, funcobj, grad_funcobj,
(func0, func1, func2, func3), (grad_func0, grad_func1, grad_func2, grad_func3),
alpha=0.01, beta=0.7, start=1, tol=10e-3, maxiter=200, testing=True)
Explanation: Next we test a trivial version of the inequality constrained problem where the inequality constraints and linear inequality constraints are the same
End of explanation
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([5, 0, 5, 0])
accpm.accpm(A, b, funcobj, grad_funcobj,
(func0, func1, func2, func3), (grad_func0, grad_func1, grad_func2, grad_func3),
alpha=0.01, beta=0.7, start=1, tol=10e-3, maxiter=200, testing=True)
Explanation: Next we test a version of the inequality constrained problem where the initial polygon lies within the feasible region given by the inequality constraints.
End of explanation
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([30, 0, 30, 0])
accpm.accpm(A, b, funcobj, grad_funcobj,
(func0, func1, func2, func3), (grad_func0, grad_func1, grad_func2, grad_func3),
alpha=0.01, beta=0.7, start=1, tol=10e-3, maxiter=200, testing=True)
Explanation: We now test a version of the inequality constrained problem where the initial polyhedron contains the feasible region but also regions that are infeasible.
End of explanation
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([100, 0, 100, 0])
accpm.accpm(A, b, funcobj, grad_funcobj,
(func0, func1, func2, func3), (grad_func0, grad_func1, grad_func2, grad_func3),
alpha=0.01, beta=0.7, start=1, tol=10e-3, maxiter=200, testing=True)
Explanation: We test a version of the inequality constrained problem where the initial polyhedron is moderately large.
End of explanation
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([1000, 0, 1000, 0])
accpm.accpm(A, b, funcobj, grad_funcobj,
(func0, func1, func2, func3), (grad_func0, grad_func1, grad_func2, grad_func3),
alpha=0.01, beta=0.7, start=1, tol=10e-3, maxiter=200, testing=True)
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
b = np.array([1000, 0, 1000, 0])
accpm.accpm(A, b, funcobj, grad_funcobj,
(func0, func1, func2, func3), (grad_func0, grad_func1, grad_func2, grad_func3),
alpha=0.01, beta=0.7, start=1, tol=10e-3, maxiter=500, testing=True)
Explanation: Finally we test a version of the inequality constrained problem where the initial polyhedron is very large. We observe that the algorithm does not converge.
It is possible more iterations, or pruning of the redundant inequalities, would allow for convergence, but in practice, it is unlikely this situation will arise.
End of explanation |
10,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the MNIST Digits Dataset
Introduction
The MNIST digits dataset is a famous dataset of handwritten digit images. You can read more about it at wikipedia or Yann LeCun's page. It's a useful dataset because it provides an example of a pretty simple, straightforward image processing task, for which we know exactly what state of the art accuracy is.
I plan to use this dataset for a couple upcoming machine learning blog posts, and since the first step of pretty much any ML task is 'explore your data,' I figured I would post this first, to have to refer back to, instead of repeating in each subsequent post.
Conveniently, scikit-learn has a built-in utility for loading this (and other) standard datsets.
Loading the Digits Dataset
Step1: Data Shape, Summary Stats
We can see below that our data (X) and target (y) have 70,000 rows, meaning we have information on 70,000 digit images. Our X, or independent variables dataset, has 784 columns, which correspond to the 784 pixel values in a 28-pixel x 28-pixel image (28x28 = 784). Our y, or target, is a single column representing the true digit labels (0-9) for each image.
Step2: Below we see min, max, mean and most-common pixel-intensity values for our rows/images. As suggested by the first row above, our most common value is 0. In fact even the median is 0, which means over half of our pixels are background/blank space. Makes sense.
Step3: We might wonder if there are only a few distinct pixel values present in the data (e.g. black, white, and a few shades of grey), but in fact we have all 256 values between our min-max of 0-255
Step4: Viewing the Digit Images
We can also take a look at the digits images themselves, with matplotlib's handy function pyplot.imshow().
imshow accepts a dataset to plot, which it will interpret as pixel values. It also accepts a color-mapping to determine the color each pixel-value should be displayed as.
In the code below, we'll plot our first row/image, using the "reverse grayscale" color-map, to plot 0 (background in this dataset) as white.
Step5: And here's a few more...
Step6: Final Wrap-up
One last thing I'd want to check here before moving forward with any classification task, would be to determine how balanced our dataset is. Do we have a pretty even distribution of each digit? Or do we have mostly 7s, for example? | Python Code:
import pandas as pd
import numpy as np      # used in later cells (np.reshape, np.unique)
import random           # used below to sample images to plot
import matplotlib.pyplot as plt
import os
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original', data_home='datasets/')
# Convert sklearn 'datasets bunch' object to Pandas DataFrames
y = pd.Series(mnist.target).astype('int').astype('category')
X = pd.DataFrame(mnist.data)
Explanation: Exploring the MNIST Digits Dataset
Introduction
The MNIST digits dataset is a famous dataset of handwritten digit images. You can read more about it at wikipedia or Yann LeCun's page. It's a useful dataset because it provides an example of a pretty simple, straightforward image processing task, for which we know exactly what state of the art accuracy is.
I plan to use this dataset for a couple upcoming machine learning blog posts, and since the first step of pretty much any ML task is 'explore your data,' I figured I would post this first, to have to refer back to, instead of repeating in each subsequent post.
Conveniently, scikit-learn has a built-in utility for loading this (and other) standard datasets.
Loading the Digits Dataset
End of explanation
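Note: newer scikit-learn releases have removed fetch_mldata, so the loading cell above may fail on a recent install. A roughly equivalent load using fetch_openml (my adaptation, not part of the original post) would be:
# alternative loader for newer scikit-learn versions, where fetch_mldata no longer exists
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
y = pd.Series(mnist.target).astype('int').astype('category')
X = pd.DataFrame(mnist.data)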
X.shape, y.shape
# Change column-names in X to reflect that they are pixel values
num_pixels = X.shape[1]
X.columns = ['pixel_'+str(x) for x in range(num_pixels)]
# print first row of X
X.head(1)
Explanation: Data Shape, Summary Stats
We can see below that our data (X) and target (y) have 70,000 rows, meaning we have information on 70,000 digit images. Our X, or independent variables dataset, has 784 columns, which correspond to the 784 pixel values in a 28-pixel x 28-pixel image (28x28 = 784). Our y, or target, is a single column representing the true digit labels (0-9) for each image.
End of explanation
X_values = pd.Series(X.values.ravel())
print(" min: {}, \n max: {}, \n mean: {}, \n median: {}, \n most common value: {}".format(X_values.min(),
X_values.max(),
X_values.mean(),
X_values.median(),
X_values.value_counts().idxmax()))
Explanation: Below we see min, max, mean and most-common pixel-intensity values for our rows/images. As suggested by the first row above, our most common value is 0. In fact even the median is 0, which means over half of our pixels are background/blank space. Makes sense.
End of explanation
len(np.unique(X.values))
Explanation: We might wonder if there are only a few distinct pixel values present in the data (e.g. black, white, and a few shades of grey), but in fact we have all 256 values between our min-max of 0-255:
End of explanation
# First row is first image
first_image = X.loc[0,:]
first_label = y[0]
# 784 columns correspond to 28x28 image
plottable_image = np.reshape(first_image.values, (28, 28))
# Plot the image
plt.imshow(plottable_image, cmap='gray_r')
plt.title('Digit Label: {}'.format(first_label))
plt.show()
Explanation: Viewing the Digit Images
We can also take a look at the digits images themselves, with matplotlib's handy function pyplot.imshow().
imshow accepts a dataset to plot, which it will interpret as pixel values. It also accepts a color-mapping to determine the color each pixel-value should be displayed as.
In the code below, we'll plot our first row/image, using the "reverse grayscale" color-map, to plot 0 (background in this dataset) as white.
End of explanation
images_to_plot = 9
random_indices = random.sample(range(70000), images_to_plot)
sample_images = X.loc[random_indices, :]
sample_labels = y.loc[random_indices]
plt.clf()
plt.style.use('seaborn-muted')
fig, axes = plt.subplots(3,3,
figsize=(5,5),
sharex=True, sharey=True,
subplot_kw=dict(adjustable='box-forced', aspect='equal')) #https://stackoverflow.com/q/44703433/1870832
for i in range(images_to_plot):
# axes (subplot) objects are stored in 2d array, accessed with axes[row,col]
subplot_row = i//3
subplot_col = i%3
ax = axes[subplot_row, subplot_col]
# plot image on subplot
plottable_image = np.reshape(sample_images.iloc[i,:].values, (28,28))
ax.imshow(plottable_image, cmap='gray_r')
ax.set_title('Digit Label: {}'.format(sample_labels.iloc[i]))
ax.set_xbound([0,28])
plt.tight_layout()
plt.show()
Explanation: And here's a few more...
End of explanation
y.value_counts(normalize=True)
Explanation: Final Wrap-up
One last thing I'd want to check here before moving forward with any classification task, would be to determine how balanced our dataset is. Do we have a pretty even distribution of each digit? Or do we have mostly 7s, for example?
End of explanation |
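As a quick visual check (my own addition, not in the original post), the same distribution can also be drawn as a bar chart from the y Series defined earlier:
# plot the per-digit frequencies computed by value_counts
y.value_counts(normalize=True).sort_index().plot(kind='bar', title='Digit frequency')
plt.ylabel('fraction of dataset')
plt.show()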
10,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basics of lists
Step1: The length of a list is acquired by the len function
Step2: Lists can be initialised if its values are known at run time
Step3: Appending and extending lists
Step4: Note that the + operator does not modify the lists inplace, rather it
constructs a new list from the values of l1 and l2.
If inplace extension is required, the .extend member function should be used
Step5: Items can be removed from the lists too, with the pop function.
When called, the pop function removes the last element of the list.
If an argument is passed into the pop function, the element contained
at that position in the list is removed, e.g.
Step6: List comprehension
A powerful feature of the python language is that iteration over
iterables can be done concisely in one line using list comprehension.
This technique allows copies, transformation, and filtering of lists, as follows
Step7: Generators
In Python (especially Python 3), generators are used in many cases. These are objects that perform lazy evaluation (i.e. code is executed when required, not when the expression is defined). We will come across these later.
Filtering and mapping with built-in functions
The operations above can also be achieved with the builtin filter and map functions.
The above operations are demonstrated below
Step8: Anonymous inline functions (lambdas) can be used if the is_positive function isn't available
Step9: Mapping functions to lists
The map function takes a function pointer and iterables as parameters.
The following replicates the list comprehension mapping methods described
earlier
Step10: map and filter commands may be nested
Step11: The map function is very powerful, and makes type conversion very easy
Step12: Iterating through multiple lists
The zip command takes as arguments a number of lists, and iterates through these
until the shortest list is traversed. This function therefore aligns lists together.
Consider the following examples
Step13: Note, if more than one iterable is passed into map.
A simplified implementation of map that takes two iterables is given here
python
def map_two(func, iterable1, iterable2)
Step14: Retrieving the position in the list
The for loops that we have considered iterate over the data elements.
It is often convenient to also have access to the position in the sequence
during the iteration. The enumerate function provides this
Step15: This can also be used in conjunction with the zip function
Step16: In many applications, it is helpful for the index variable to be offset
by a particular value. This can be achieved with the enumerate function
by passing an optional argument to the function. In this example we start
the indexing with an offset of 100. | Python Code:
from __future__ import print_function
l1 = list()
l2 = []
print(l1)
print(l2)
Explanation: Basics of lists
End of explanation
print(len(l1))
print(len(l2))
Explanation: The length of a list is acquired by the len function:
End of explanation
l3 = [1, 2, 3]
print(l3)
print(len(l3))
Explanation: Lists can be initialised if its values are known at run time:
End of explanation
l1.append(1)
print(l1)
l1.append(10)
print(l1)
l2.append(100)
print(l2)
print(l1)
print(l2)
print(l1 + l2)
Explanation: Appending and extending lists
End of explanation
l1.extend(l2)
print(l1)
print(l2)
Explanation: Note that the + operator does not modify the lists inplace, rather it
constructs a new list from the values of l1 and l2.
If inplace extension is required, the .extend member function should be used:
End of explanation
l2 = [0, 1, 2, 3, 4, 5, 6]
l2.pop(1)
print(l2)
l2.pop()
print(l2)
print(l1)
Explanation: Items can be removed from the lists too, with the pop function.
When called, the pop function removes the last element of the list.
If an argument is passed into the pop function, the element contained
at that position in the list is removed, e.g.
End of explanation
l3 = [-2, -1, 0, 1, 2]
print(l3)
print(len(l3))
print([el for el in l3])
# Return the positive elements
print([el for el in l3 if el > 0])
# Return the negative elements
print([el for el in l3 if el < 0])
# Multiply the elements by two
print([el * 2 for el in l3])
# Multiply filtered elements by two
print([el * 2 for el in l3 if el <= 1])
Explanation: List comprehension
A powerful feature of the python language is that iteration over
iterables can be done concisely in one line using list comprehension.
This technique allows copies, transformation, and filtering of lists, as follows:
End of explanation
def is_positive(el):
return el > 0
print(l3)
print(filter(is_positive, l3))
print(list(filter(is_positive, l3))) # python 3
Explanation: Generators
In Python (especially Python 3), generators are used in many cases. These are objects that perform lazy evaluation (i.e. code is executed when required, not when the expression is defined). We will come across these later.
Filtering and mapping with built-in functions
The operations above can also be achieved with the builtin filter and map functions.
The above operations are demonstrated below
End of explanation
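Since generators will come up later, here is a minimal sketch (my own addition, not part of the original tutorial) contrasting a list comprehension with a generator expression — the generator only produces values once it is iterated:
squares_list = [x * x for x in range(5)]   # evaluated immediately -> [0, 1, 4, 9, 16]
squares_gen = (x * x for x in range(5))    # a generator: nothing is computed yet
print(squares_list)
print(next(squares_gen))   # values are produced lazily, one at a time
print(list(squares_gen))   # consuming the rest exhausts the generator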
# Return the positive elements
print(list(filter(lambda el: el > 0, l3)))
# Return the non-positive elements
print(list(filter(lambda el: el <= 0, l3)))
# Return elements outside of a range
print(list(filter(lambda el: el < -1 or el > 1, l3)))
# Return the elements found within a range (note the mathematical notation)
print(list(filter(lambda el: -1 <= el <= 1, l3)))
Explanation: Anonymous inline functions (lambdas) can be used if the is_positive function isn't available
End of explanation
print([abs(el) for el in l3])
print(list(map(abs, l3)))
def add_one(item):
return item + 1
print(list(map(add_one, l3)))
Explanation: Mapping functions to lists
The map function takes a function pointer and iterables as parameters.
The following replicates the list comprehension mapping methods described
earlier
End of explanation
print(list(map(lambda el: el * 2, filter(lambda el: el <= 1, l3))))
Explanation: map and filter commands may be nested:
End of explanation
print('Integer array:', list(map(int, l3)))
print(' Float array:', list(map(float, l3)))
print('Boolean array:', list(map(bool, l3)))
Explanation: The map function is very powerful, and makes type conversion very easy:
End of explanation
l4 = [1, 2, 3]
print('l3:', l3)
print('l4:', l4)
for el3, el4 in zip(l3, l4):
print(el3, el4)
l5 = l3 + l4
print(l5)
for el3, el4, el5 in zip(l3, l4, l5):
print(el3, el4, el5)
Explanation: Iterating through multiple lists
The zip command takes as arguments a number of lists, and iterates through these
until the shortest list is traversed. This function therefore aligns lists together.
Consider the following examples:
End of explanation
def add(l, r):
try:
        return l + r
except TypeError:
# Addition of `None` type is not defined
return None
def is_None(l, r):
return l is None or r is None
l5 = [5, 4, 3, 2, 1]
print(list(map(add, l4, l5)))
print(list(map(is_None, l4, l5)))
Explanation: Note, if more than one iterable is passed into map.
A simplified implementation of map that takes two iterables is given here
python
def map_two(func, iterable1, iterable2):
out = []
for iter1, iter2 in zip(iterable1, iterable2):
out.append(func(iter1, iter2))
return out
Note that under Python 2, map copes with 'jagged' iterables by padding the
shorter iterables with None, so the returned list always has the same length
as the longest iterable. Under Python 3, map instead stops at the shortest
iterable (just like zip); itertools.zip_longest can be used when padding is required.
End of explanation
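A small sketch of that padding behaviour using itertools.zip_longest (my own addition, not part of the original tutorial):
from itertools import zip_longest
a = [1, 2, 3]
b = [10, 20]
# zip stops at the shortest iterable; zip_longest pads the shorter one with a fill value (None by default)
print(list(zip(a, b)))           # [(1, 10), (2, 20)]
print(list(zip_longest(a, b)))   # [(1, 10), (2, 20), (3, None)]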
for index, value in enumerate(l1):
print(index, value)
Explanation: Retrieving the position in the list
The for loops that we have considered iterate over the data elements.
It is often convenient to also have access to the position in the sequence
during the iteration. The enumerate function provides this:
End of explanation
for index, (el3, el4, el5) in enumerate(zip(l3, l4, l5)):
print(index, (el3, el4, el5))
Explanation: This can also be used in conjunction with the zip function:
End of explanation
for index, (el3, el4, el5) in enumerate(zip(l3, l4, l5), start=100):
print(index, (el3, el4, el5))
Explanation: In many applications, it is helpful for the index variable to be offset
by a particular value. This can be achieved with the enumerate function
by passing an optional argument to the function. In this example we start
the indexing with an offset of 100.
End of explanation |
10,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Trees
By Parijat Mazumdar (GitHub ID: mazumdarparijat)
Step1: We want to create a decision tree from the above training dataset. The first step for that is to encode the data into numeric values and bind them to Shogun's features and multiclass labels.
Step2: Next, we learn our decision tree using the features and labels created.
Step3: Our decision tree is ready now and we want to use it to make some predictions over test data. So, let us create some test data examples first.
Step4: Next, as with training data, we encode our test dataset and bind it to Shogun features. Then, we apply our decision tree to the test examples to obtain the predicted labels.
Step5: Finally let us tabulate the results obtained and compare them with our intuitive predictions.
Step6: So, do the predictions made by our decision tree match our inferences from training set? Yes! For example, from the training set we infer that the individual having low income has low usage and also all individuals going to college have medium usage. The decision tree predicts the same for both cases.
Example using a real dataset
We choose the car evaluation dataset from the UCI Machine Learning Repository as our real-world dataset. The car.names file of the dataset enumerates the class categories as well as the non-class attributes. Each car is categorized into one of 4 classes : unacc, acc, good, vgood. Each car is judged using 6 attributes : buying, maint, doors, persons, lug_boot, safety. Each of these attributes can take 3-4 values. Let us first make a dictionary to encode strings to numeric values using information from the car.names file.
Step7: Next, let us read the file and form Shogun features and labels.
Step8: From the entire dataset, let us choose some test vectors to form our test dataset.
Step9: Next step is to train our decision tree using the training features and applying it to our test dataset to get predicted output classes.
Step10: Finally, let us compare our predicted labels with test labels to find out the percentage error of our classification model.
Step11: We see that the accuracy is moderately high. Thus our decision tree can evaluate any car given its features with a high success rate. As a final exercise, let us examine the effect of training dataset size on the accuracy of decision tree.
Step12: NOTE
Step13: In the above plot the training data points are marked with different colours of crosses where each colour corresponds to a particular label. The test data points are marked by black circles. For us it is a trivial task to assign correct colours (i.e. labels) to the black points. Let us see how accurately C4.5 assigns colours to these test points.
Now let us train a decision tree using the C4.5 algorithm. We need to create a Shogun C4.5 tree object and supply training features and training labels to it. We also need to specify which attribute is categorical and which is continuous. The attribute types can be specified using set_feature_types method through which all categorical attributes are set as True and continuous attributes as False.
Step14: Now that we have trained the decision tree, we can use it to classify our test vectors.
Step15: Let us use the output labels to colour our test data points to qualitatively judge the performance of the decision tree.
Step16: We see that the decision tree trained using the C4.5 algorithm works almost perfectly in this toy dataset. Now let us try this algorithm on a real world dataset.
Example using a real dataset
In this section we will investigate how accurately we can predict the species of an Iris flower using a C4.5 trained decision tree. In this example we will use petal length, petal width, sepal length and sepal width as our attributes to decide among 3 classes of Iris
Step17: Because there is no separate test dataset, we first divide the given dataset into training and testing subsets.
Step18: Before marching forward with applying C4.5, let us plot the data to get a better understanding. The given data points are 4-D and hence cannot be conveniently plotted. We need to reduce the number of dimensions to 2. This reduction can be achieved using any dimension reduction algorithm like PCA. However for the sake of brevity, let us just choose two highly correlated dimensions, petal width and petal length (see summary statistics), right away for plotting.
Step19: First, let us create Shogun features and labels from the given data.
Step20: We know for a fact that decision trees tend to overfit. Hence pruning becomes a necessary step. In the case of a toy dataset, we skipped the pruning step because the dataset was simple and noise-free. But in the case of a real dataset like the Iris dataset, pruning cannot be skipped. So we have to partition the training dataset into the training subset and the validation subset.
Step21: Now we train the decision tree first, then prune it and finally use it to get output labels for test vectors.
Step22: Let us calculate the accuracy of the classification made by our tree as well as plot the results for qualitative evaluation.
Step23: From the evaluation of results, we infer that, with the help of a C4.5 trained decision tree, we can predict (with high accuracy) the type of Iris plant given its petal and sepal widths and lengths.
Classification and Regression Trees (CART)
The CART algorithm is a popular decision tree learning algorithm introduced by Breiman et al. Unlike ID3 and C4.5, the learnt decision tree in this case can be used for both multiclass classification and regression depending on the type of dependent variable. The tree growing process comprises of recursive binary splitting of nodes. To find the best split at each node, all possible splits of all available predictive attributes are considered. The best split is the one that maximises some splitting criterion. For classification tasks, ie. when the dependent attribute is categorical, the Gini index is used as the splitting criterion. For regression tasks, ie. when the dependent variable is continuous, the least squares deviation is used. Let us learn about Shogun's CART implementation by working on two toy problems, one on classification and the other on regression.
Classification example using toy data
Let us consider the same dataset as that in the C4.5 toy example. We re-create the dataset and plot it first.
Step24: Next, we supply necessary parameters to the CART algorithm and use it to train our decision tree.
Step25: In the above code snippet, we see four parameters being supplied to the CART tree object. feat_types supplies knowledge of attribute types of training data to the CART algorithm and problem_type specifies whether it is a multiclass classification problem (PT_MULTICLASS) or a regression problem (PT_REGRESSION). The boolean parameter use_cv_pruning switches on cross-validation pruning of the trained tree and num_folds specifies the number of folds of cross-validation to be applied while pruning. At this point, let us divert ourselves briefly towards understanding what kind of pruning strategy is employed by Shogun's CART implementation. The CART algorithm uses the cost-complexity pruning strategy. Cost-Complexity pruning yields a list of subtrees of varying depths using complexity normalized resubstitution error, $R_\alpha(T)$. Resubstitution error, R(T), measures how well a decision tree fits the training data. But, this measure favours larger trees over smaller ones. Hence the complexity normalized resubstitution error metric is used, which adds a penalty for increased complexity and in turn counters overfitting.
$R_\alpha(T)=R(T)+\alpha \times (\text{number of leaves})$
The best subtree among the list of subtrees can be chosen using cross validation or using the best-fit metric in the validation dataset. Setting use_cv_pruning in the above code snippet basically tells the CART object to use cross-validation to choose the best among the subtrees generated by cost-complexity pruning.
Let us now get back on track and use the trained tree to classify our test data.
Step26: Regression example using toy data
In this example, we form the training dataset by sampling points from a sinusoidal curve and see how well a decision tree, trained using these samples, re-creates the actual sinusoid.
Step27: Next, we train our CART-tree.
Step28: Now let us use the trained decision tree to regress over the entire range of the previously depicted sinusoid.
Step29: As we can see from the above plot, CART-induced decision tree follows the reference sinusoid quite beautifully!
Classification example using real dataset
In this section, we will apply the CART algorithm on the Iris dataset. Remember that the Iris dataset provides us with just a training dataset and no separate test dataset. In case of the C4.5 example discussed earlier, we ourselves divided the entire training dataset into training subset and test subset. In this section, we will employ a different strategy i.e. cross validation. In cross-validation, we divide the training dataset into n subsets where n is a user controlled parameter. We perform n iterations of training and testing in which, at each iteration, we choose one of the n subsets as our test dataset and the remaining n-1 subsets as our training dataset. The performance of the model is usually taken as the average of the performances in various iterations. Shogun's cross validation class makes it really easy to apply cross-validation to any model of our choice. Let us realize this by applying cross-validation to CART-tree trained over Iris dataset. We start by reading the data.
Step30: Next, we setup the model which is CART-tree in this case.
Step31: Finally we can use Shogun's cross-validation class to get performance.
Step32: We get a mean accuracy of about 0.93-0.94. This number essentially means that a CART-tree trained using this dataset is expected to classify Iris flowers, given their required attributes, with an accuracy of 93-94% in a real world scenario. The parameters required by Shogun's cross-validation class should be noted in the above code snippet. The class requires the model, training features, training labels, splitting strategy and evaluation method to be specified.
Regression using real dataset
In this section, we evaluate CART-induced decision tree over the Servo dataset. Using this dataset, we essentially want to train a model which can predict the rise time of a servomechanism given the required parameters which are the two (integer) gain settings and two (nominal) choices of mechanical linkages. Let us read the dataset first.
Step33: The servo dataset is a small training dataset (contains just 167 training vectors) with no separate test dataset, like the Iris dataset. Hence we will apply the same cross-validation strategy we applied in case of the Iris dataset. However, to make things interesting let us play around with a yet-untouched parameter of CART-induced tree i.e. the maximum allowed tree depth. As the tree depth increases, the tree becomes more complex and hence fits the training data more closely. By setting a maximum allowed tree depth, we restrict the complexity of trained tree and hence avoid over-fitting. But choosing a low value of the maximum allowed tree depth may lead to early stopping i.e. under-fitting. Let us explore how we can decide the appropriate value of the max-allowed-tree-depth parameter. Let us create a method, which takes max-allowed-tree-depth parameter as input and returns the corresponding cross-validated error as output.
Step34: Next, let us supply a range of max_depth values to the above method and plot the returned cross-validated errors.
Step35: The above plot quite clearly gives us the most appropriate value of the maximum allowed depth. We see that the first minimum occurs at a maximum allowed depth of 6-8. Hence, one of these should be the desired value. It is to be noted that the error metric that we are discussing here is the mean squared error. Thus, from the above plot, we can also claim that, given the required parameters, our CART-flavoured decision tree can predict the rise time within an average error range of $\pm0.5$ (i.e. the square root of 0.25, which is the approximate minimum cross-validated error). The relative error, i.e. average_error/range_of_labels, comes out to be ~30%.
CHi-squared Automatic Interaction Detection (CHAID)
CHAID is an algorithm for decision tree learning proposed by Kass (1980). It is similar in functionality to CART in the sense that both can be used for classification as well as regression. But unlike CART, CHAID internally handles only categorical features. The continuous features are first converted into ordinal categorical features for the CHAID algorithm to be able to use them. This conversion is done by binning of feature values. The number of bins (K) has to be supplied by the user. Given K, a predictor is split in such a way that all the bins get the same number (more or less) of distinct predictor values. The maximum feature value in each bin is used as a breakpoint.
An important parameter in the CHAID tree growing process is the p-value. The p-value is the metric that is used for deciding which categories of predictor values to merge during merging as well as for deciding the best attribute during splitting. The p-value is calculated using different hypothesis testing methods depending on the type of dependent variable (nominal, ordinal or continuous). A more detailed discussion on the CHAID algorithm can be found in the documentation of the CCHAIDTree class in Shogun. Let us move on to a more interesting topic which is learning to use CHAID using Shogun's python API.
Classification example using toy dataset
Let us re-use the toy classification dataset used in C4.5 and CART to see the API usage of CHAID as well as to qualitatively compare the results of the CHAID algorithm with the other two.
Step36: Now, we set up our CHAID-tree with appropriate parameters and train over given data.
Step37: An important point to be noted in the above code snippet is that CHAID training modifies the training data. The actual continuous feature values are replaced by the discrete ordinal values obtained during continuous to ordinal conversion. Notice the difference between the original feature matrix and the updated matrix. The updated matrix contains only 10 distinct values denoting all values of the original matrix for feature dimension at row index 1.
With a CHAID-trained decision tree at our disposal, it's time to apply it to colour our test points.
Step38: Regression example with toy dataset
In this section, we re-work the sinusoid curve fitting example (earlier used in CART toy regression).
Step39: As usual, we start by setting up our decision tree and training it.
Step40: Next, we use the trained decision tree to follow the reference sinusoid.
Step41: A distinguishing feature about the predicted curve is the presence of steps. These steps are essentially an artifact of continuous to ordinal conversion. If we decrease the number of bins for the conversion the step widths will increase.
Classification example over real dataset
In this section, we will try to estimate the quality of wine based on 13 attributes like alcohol content, malic acid, magnesium content, etc. using the wine dataset. Let us first read the dataset using Shogun's CSV file reader.
Step42: Like the case of CART, here we are also interested in finding out the approximate accuracy with which our CHAID tree trained on this dataset will perform in real world. Hence, we will apply the cross validation strategy. But first we specify the parameters of the CHAID tree.
Step43: Next we set up the cross-validation class and get back the error estimate we want i.e mean classification error.
Step44: Regression example using real dataset
In this section, we try to predict the value of houses in Boston using 13 attributes, like per capita crime rate in neighborhood, number of rooms, nitrous oxide concentration in air, proportion of non-retail business in the area etc. Out of the 13 attributes 12 are continuous and 1 (the Charles river dummy variable) is binary nominal. Let us load the dataset as our first step. For this, we can directly use Shogun's CSV file reader class.
Step45: Next, we set up the parameters for the CHAID tree as well as the cross-validation class. | Python Code:
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')
# training data
train_income=['Low','Medium','Low','High','Low','High','Medium','Medium','High','Low','Medium',
'Medium','High','Low','Medium']
train_age = ['Old','Young','Old','Young','Old','Young','Young','Old','Old','Old','Young','Old',
'Old','Old','Young']
train_education = ['University','College','University','University','University','College','College',
'High School','University','High School','College','High School','University','High School','College']
train_marital = ['Married','Single','Married','Single','Married','Single','Married','Single','Single',
'Married','Married','Single','Single','Married','Married']
train_usage = ['Low','Medium','Low','High','Low','Medium','Medium','Low','High','Low','Medium','Low',
'High','Low','Medium']
# print data
print('Training Data Table : \n')
print('Income \t\t Age \t\t Education \t\t Marital Status \t Usage')
for i in range(len(train_income)):
print(train_income[i]+' \t\t '+train_age[i]+' \t\t '+train_education[i]+' \t\t '+train_marital[i]+' \t\t '+train_usage[i])
Explanation: Decision Trees
By Parijat Mazumdar (GitHub ID: mazumdarparijat)
This notebook illustrates the use of decision trees in Shogun for classification and regression. Various decision tree learning algorithms like ID3, C4.5, CART, CHAID have been discussed in detail using both intuitive toy datasets as well as real-world datasets.
Decision Tree Basics
Decision Trees are a non-parametric supervised learning method that can be used for both classification and regression. Decision trees essentially encode a set of if-then-else rules which can be used to predict target variable given data features. These if-then-else rules are formed using the training dataset with the aim to satisfy as many training data instances as possible. The formation of these rules (aka. decision tree) from training data is called decision tree learning. Various decision tree learning algorithms have been developed and they work best in different situations. An advantage of decision trees is that they can model any type of function for classification or regression which other techniques cannot. But a decision tree is highly prone to overfitting and bias towards training data. So, decision trees are used for very large datasets which are assumed to represent the ground truth well. Additionally, certain tree pruning algorithms are also used to tackle overfitting.
ID3 (Iterative Dichotomiser 3)
ID3 is a straightforward decision tree learning algorithm developed by Ross Quinlan. ID3 is applicable only in cases where the attributes (or features) defining data examples are categorical in nature and the data examples belong to pre-defined, clearly distinguishable (ie. well defined) classes. ID3 is an iterative greedy algorithm which starts with the root node and eventually builds the entire tree. At each node, the "best" attribute to classify data is chosen. The "best" attribute is chosen using the information gain metric. Once an attribute is chosen in a node, the data examples in the node are categorized into sub-groups based on the attribute values that they have. Basically, all data examples having the same attribute value are put together in the same sub-group. These sub-groups form the children of the present node and the algorithm is repeated for each of the newly formed children nodes. This goes on until all the data members of a node belong to the same class or all the attributes are exhausted. In the latter case, the class predicted may be erroneous and generally the mode of the classes appearing in the node is chosen as the predictive class.
Pseudocode for ID3 Algorithm
Example using a Simple dataset
In this section, we create a simple example where we try to predict the usage of mobile phones by individuals based on their income, age, education and marital status. Each of the attributes have been categorized into 2 or 3 types. Let us create the training dataset and tabulate it first.
End of explanation
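Since the "best" attribute in ID3 is chosen by information gain, a small illustrative sketch of that metric may help; this is my own numpy illustration, not Shogun's internal implementation, and the helper names entropy and information_gain are hypothetical:
import numpy as np
def entropy(labels):
    # H(S) = -sum_i p_i * log2(p_i) over the classes present in labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / float(counts.sum())
    return -np.sum(p * np.log2(p))
def information_gain(attribute_values, labels):
    # gain = H(S) - sum_v (|S_v|/|S|) * H(S_v), one term per attribute value v
    gain = entropy(labels)
    for v in np.unique(attribute_values):
        mask = attribute_values == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain
# an attribute that separates the two classes perfectly has maximal gain
toy_labels = np.array([0, 0, 1, 1])
print(information_gain(np.array([1, 1, 2, 2]), toy_labels))   # 1.0
print(information_gain(np.array([1, 2, 1, 2]), toy_labels))   # 0.0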
from shogun import ID3ClassifierTree, features, MulticlassLabels
from numpy import array, concatenate
# encoding dictionary
income = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}
age = {'Young' : 1.0, 'Old' : 2.0}
education = {'High School' : 1.0, 'College' : 2.0, 'University' : 3.0}
marital_status = {'Married' : 1.0, 'Single' : 2.0}
usage = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}
# encode training data
for i in range(len(train_income)):
train_income[i] = income[train_income[i]]
train_age[i] = age[train_age[i]]
train_education[i] = education[train_education[i]]
train_marital[i] = marital_status[train_marital[i]]
train_usage[i] = usage[train_usage[i]]
# form Shogun feature matrix
train_data = array([train_income, train_age, train_education, train_marital])
train_feats = features(train_data);
# form Shogun multiclass labels
labels = MulticlassLabels(array(train_usage));
Explanation: We want to create a decision tree from the above training dataset. The first step for that is to encode the data into numeric values and bind them to Shogun's features and multiclass labels.
End of explanation
# create ID3ClassifierTree object
id3 = ID3ClassifierTree()
# set labels
id3.put('labels', labels)
# learn the tree from training features
is_successful = id3.train(train_feats)
Explanation: Next, we learn our decision tree using the features and labels created.
End of explanation
# test data
test_income = ['Medium','Medium','Low','High','High']
test_age = ['Old','Young','Old','Young','Old']
test_education = ['University','College','High School','University','College']
test_marital = ['Married','Single','Married','Single','Married']
test_usage = ['Low','Medium','Low','High','High']
# tabulate test data
print('Test Data Table : \n')
print('Income \t\t Age \t\t Education \t\t Marital Status \t Usage')
for i in range(len(test_income)):
print(test_income[i]+' \t\t '+test_age[i]+' \t\t '+test_education[i]+' \t\t '+test_marital[i]+' \t\t ?')
Explanation: Our decision tree is ready now and we want to use it to make some predictions over test data. So, let us create some test data examples first.
End of explanation
# encode test data
for i in range(len(test_income)):
test_income[i] = income[test_income[i]]
test_age[i] = age[test_age[i]]
test_education[i] = education[test_education[i]]
test_marital[i] = marital_status[test_marital[i]]
# bind to shogun features
test_data = array([test_income, test_age, test_education, test_marital])
test_feats = features(test_data)
# apply decision tree classification
test_labels = id3.apply_multiclass(test_feats)
Explanation: Next, as with training data, we encode our test dataset and bind it to Shogun features. Then, we apply our decision tree to the test examples to obtain the predicted labels.
End of explanation
output = test_labels.get_labels();
output_labels=[0]*len(output)
# decode back test data for printing
for i in range(len(test_income)):
    test_income[i]=list(income.keys())[list(income.values()).index(test_income[i])]
    test_age[i]=list(age.keys())[list(age.values()).index(test_age[i])]
    test_education[i]=list(education.keys())[list(education.values()).index(test_education[i])]
    test_marital[i]=list(marital_status.keys())[list(marital_status.values()).index(test_marital[i])]
    output_labels[i]=list(usage.keys())[list(usage.values()).index(output[i])]
# print output data
print('Final Test Data Table : \n')
print('Income \t Age \t Education \t Marital Status \t Usage(predicted)')
for i in range(len(test_income)):
print(test_income[i]+' \t '+test_age[i]+' \t '+test_education[i]+' \t '+test_marital[i]+' \t\t '+output_labels[i])
Explanation: Finally let us tabulate the results obtained and compare them with our intuitive predictions.
End of explanation
# class attribute
evaluation = {'unacc' : 1.0, 'acc' : 2.0, 'good' : 3.0, 'vgood' : 4.0}
# non-class attributes
buying = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}
maint = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}
doors = {'2' : 1.0, '3' : 2.0, '4' : 3.0, '5more' : 4.0}
persons = {'2' : 1.0, '4' : 2.0, 'more' : 3.0}
lug_boot = {'small' : 1.0, 'med' : 2.0, 'big' : 3.0}
safety = {'low' : 1.0, 'med' : 2.0, 'high' : 3.0}
Explanation: So, do the predictions made by our decision tree match our inferences from training set? Yes! For example, from the training set we infer that the individual having low income has low usage and also all individuals going to college have medium usage. The decision tree predicts the same for both cases.
Example using a real dataset
We choose the car evaluation dataset from the UCI Machine Learning Repository as our real-world dataset. The car.names file of the dataset enumerates the class categories as well as the non-class attributes. Each car is categorized into one of 4 classes : unacc, acc, good, vgood. Each car is judged using 6 attributes : buying, maint, doors, persons, lug_boot, safety. Each of these attributes can take 3-4 values. Let us first make a dictionary to encode strings to numeric values using information from the car.names file.
End of explanation
f = open( os.path.join(SHOGUN_DATA_DIR, 'uci/car/car.data'), 'r')
feats = []
labels = []
# read data from file and encode
for line in f:
words = line.rstrip().split(',')
words[0] = buying[words[0]]
words[1] = maint[words[1]]
words[2] = doors[words[2]]
words[3] = persons[words[3]]
words[4] = lug_boot[words[4]]
words[5] = safety[words[5]]
words[6] = evaluation[words[6]]
feats.append(words[0:6])
labels.append(words[6])
f.close()
Explanation: Next, let us read the file and form Shogun features and labels.
End of explanation
from numpy import random, delete
feats = array(feats)
labels = array(labels)
# number of test vectors
num_test_vectors = 200;
test_indices = random.randint(feats.shape[0], size = num_test_vectors)
test_features = feats[test_indices]
test_labels = labels[test_indices]
# remove test vectors from training set
feats = delete(feats,test_indices,0)
labels = delete(labels,test_indices,0)
Explanation: From the entire dataset, let us choose some test vectors to form our test dataset.
End of explanation
# shogun test features and labels
test_feats = features(test_features.T)
test_labels = MulticlassLabels(test_labels)
# method for ID3 training followed by classification of the test dataset
def ID3_routine(feats, labels):
# Shogun train features and labels
train_feats = features(feats.T)
train_lab = MulticlassLabels(labels)
# create ID3ClassifierTree object
id3 = ID3ClassifierTree()
# set labels
id3.put('labels', train_lab)
# learn the tree from training features
id3.train(train_feats)
# apply to test dataset
output = id3.apply_multiclass(test_feats)
return output
output = ID3_routine(feats, labels)
Explanation: Next step is to train our decision tree using the training features and applying it to our test dataset to get predicted output classes.
End of explanation
from shogun import MulticlassAccuracy
# Shogun object for calculating multiclass accuracy
accuracy = MulticlassAccuracy()
print('Accuracy : ' + str(accuracy.evaluate(output, test_labels)))
Explanation: Finally, let us compare our predicted labels with test labels to find out the percentage error of our classification model.
End of explanation
# list of error rates for all training dataset sizes
error_rate = []
# number of error rate readings taken for each value of dataset size
num_repetitions = 3
# loop over training dataset size
for i in range(500,1600,200):
indices = random.randint(feats.shape[0], size = i)
train_features = feats[indices]
train_labels = labels[indices]
average_error = 0
for i in range(num_repetitions):
output = ID3_routine(train_features, train_labels)
average_error = average_error + (1-accuracy.evaluate(output, test_labels))
error_rate.append(average_error/num_repetitions)
# plot the error rates
import matplotlib.pyplot as pyplot
% matplotlib inline
from scipy.interpolate import interp1d
from numpy import linspace, arange
fig,axis = pyplot.subplots(1,1)
x = arange(500,1600,200)
f = interp1d(x, error_rate)
xnew = linspace(500,1500,100)
pyplot.plot(x,error_rate,'o',xnew,f(xnew),'-')
pyplot.xlim([400,1600])
pyplot.xlabel('training dataset size')
pyplot.ylabel('Classification Error')
pyplot.title('Decision Tree Performance')
pyplot.show()
Explanation: We see that the accuracy is moderately high. Thus our decision tree can evaluate any car given its features with a high success rate. As a final exercise, let us examine the effect of training dataset size on the accuracy of decision tree.
End of explanation
import matplotlib.pyplot as plt
from numpy import ones, zeros, random, concatenate
from shogun import features, MulticlassLabels
% matplotlib inline
def create_toy_classification_dataset(ncat,do_plot):
# create attribute values and labels for class 1
x = ones((1,ncat))
y = 1+random.rand(1,ncat)*4
lab = zeros(ncat)
# add attribute values and labels for class 2
x = concatenate((x,ones((1,ncat))),1)
y = concatenate((y,5+random.rand(1,ncat)*4),1)
lab = concatenate((lab,ones(ncat)))
# add attribute values and labels for class 3
x = concatenate((x,2*ones((1,ncat))),1)
y = concatenate((y,1+random.rand(1,ncat)*8),1)
lab = concatenate((lab,2*ones(ncat)))
# create test data
ntest = 20
    x_t = concatenate((ones((1,3*ntest//4)),2*ones((1,ntest//4))),1)
y_t = 1+random.rand(1,ntest)*8
if do_plot:
# plot training data
c = ['r','g','b']
for i in range(3):
plt.scatter(x[0,lab==i],y[0,lab==i],color=c[i],marker='x',s=50)
# plot test data
plt.scatter(x_t[0,:],y_t[0,:],color='k',s=10,alpha=0.8)
plt.xlabel('attribute X')
plt.ylabel('attribute Y')
plt.show()
# form training feature matrix
train_feats = features(concatenate((x,y),0))
# from training labels
train_labels = MulticlassLabels(lab)
# from test feature matrix
test_feats = features(concatenate((x_t,y_t),0))
return (train_feats,train_labels,test_feats);
train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)
Explanation: NOTE : The above code snippet takes about half a minute to execute. Please wait patiently.
From the above plot, we see that error rate decreases steadily as we increase the training dataset size. Although in this case, the decrease in error rate is not very significant, in many datasets this decrease in error rate can be substantial.
C4.5
The C4.5 algorithm is essentially an extension of the ID3 algorithm for decision tree learning. It has the additional capability of handling continuous attributes and attributes with missing values. The tree growing process in case of C4.5 is same as that of ID3 i.e. finding the best split at each node using the information gain metric. But in case of continuous attribute, the C4.5 algorithm has to perform the additional step of converting it to a two-value categorical attribute by splitting about a suitable threshold. This threshold is chosen in a way such that the resultant split produces maximum information gain. Let us start exploring Shogun's C4.5 algorithm API with a toy example.
Example using toy dataset
Let us consider a 3-class classification using 2 attributes. One of the attributes (say attribute X) is a 2-class categorical attribute depicted by values 1 and 2. The other attribute (say attribute Y) is a continuous attribute having values between 1 and 9. The simple rules of classification are as follows : if X=1 and Y $\in$ [1,5), data point belongs to class 1, if X=1 and Y $\in$ [5,9), data point belongs to class 2 and if X=2, data point belongs to class 3. Let us realize the toy dataset by plotting it.
End of explanation
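To make the threshold search for continuous attributes concrete, here is a small standalone sketch (my own illustration with hypothetical helper names, not Shogun code) that tries the midpoints between consecutive sorted values and keeps the split with the highest information gain:
import numpy as np
def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / float(counts.sum())
    return -np.sum(p * np.log2(p))
def best_threshold(x_cont, labels):
    # candidate thresholds are midpoints between consecutive sorted attribute values
    xs = np.sort(x_cont)
    best_gain, best_t = -1.0, None
    for t in (xs[:-1] + xs[1:]) / 2.0:
        left, right = labels[x_cont <= t], labels[x_cont > t]
        if len(left) == 0 or len(right) == 0:
            continue
        gain = entropy(labels) - (len(left)*entropy(left) + len(right)*entropy(right)) / len(labels)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain
x_cont = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y_lab = np.array([0, 0, 0, 1, 1, 1])
print(best_threshold(x_cont, y_lab))   # a threshold of 4.5 separates the classes, gain = 1.0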
from numpy import array
from shogun import C45ClassifierTree
# steps in C4.5 Tree training bundled together in a python method
def train_tree(feats,types,labels):
# C4.5 Tree object
tree = C45ClassifierTree()
# set labels
tree.put('labels', labels)
# supply attribute types
tree.set_feature_types(types)
# supply training matrix and train
tree.train(feats)
return tree
# specify attribute types X is categorical hence True, Y is continuous hence False
feat_types = array([True,False])
# get back trained tree
C45Tree = train_tree(train_feats,feat_types,train_labels)
Explanation: In the above plot the training data points are marked with different colours of crosses where each colour corresponds to a particular label. The test data points are marked by black circles. For us it is a trivial task to assign correct colours (i.e. labels) to the black points. Let us see how accurately C4.5 assigns colours to these test points.
Now let us train a decision tree using the C4.5 algorithm. We need to create a Shogun C4.5 tree object and supply training features and training labels to it. We also need to specify which attribute is categorical and which is continuous. The attribute types can be specified using set_feature_types method through which all categorical attributes are set as True and continuous attributes as False.
End of explanation
def classify_data(tree,data):
# get classification labels
output = tree.apply_multiclass(data)
# get classification certainty
output_certainty=tree.get_real_vector('m_certainty')
return output,output_certainty
out_labels,out_certainty = classify_data(C45Tree,test_feats)
Explanation: Now that we have trained the decision tree, we can use it to classify our test vectors.
End of explanation
from numpy import int32
# plot results
def plot_toy_classification_results(train_feats,train_labels,test_feats,test_labels):
train = train_feats.get_real_matrix('feature_matrix')
lab = train_labels.get_labels()
test = test_feats.get_real_matrix('feature_matrix')
out_labels = test_labels.get_labels()
c = ['r','g','b']
for i in range(out_labels.size):
plt.scatter(test[0,i],test[1,i],color=c[int32(out_labels[i])],s=50)
# plot training dataset for visual comparison
for i in range(3):
plt.scatter(train[0,lab==i],train[1,lab==i],color=c[i],marker='x',s=30,alpha=0.7)
plt.show()
plot_toy_classification_results(train_feats,train_labels,test_feats,out_labels)
Explanation: Let us use the output labels to colour our test data points to qualitatively judge the performance of the decision tree.
End of explanation
import csv
from numpy import array
# dictionary to encode class names to class labels
to_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([float(i) for i in row[0:4]])
lab.append(to_label[row[4]])
lab = array(lab)
feat = array(feat).T
Explanation: We see that the decision tree trained using the C4.5 algorithm works almost perfectly in this toy dataset. Now let us try this algorithm on a real world dataset.
Example using a real dataset
In this section we will investigate how accurately we can predict the species of an Iris flower using a C4.5 trained decision tree. In this example we will use petal length, petal width, sepal length and sepal width as our attributes to decide among 3 classes of Iris : Iris Setosa, Iris Versicolor and Iris Virginica. Let us start by suitably reading the dataset.
End of explanation
from numpy import int32, random
# no.of vectors in test dataset
ntest = 25
# no. of vectors in train dataset
ntrain = 150-ntest
# randomize the order of vectors
subset = int32(random.permutation(150))
# choose 1st ntrain from randomized set as training vectors
feats_train = feat[:,subset[0:ntrain]]
# form training labels correspondingly
train_labels = lab[subset[0:ntrain]]
# form test features and labels (for accuracy evaluations)
feats_test = feat[:,subset[ntrain:ntrain+ntest]]
test_labels = lab[subset[ntrain:ntrain+ntest]]
Explanation: Because there is no separate test dataset, we first divide the given dataset into training and testing subsets.
End of explanation
import matplotlib.pyplot as plt
% matplotlib inline
# plot training features
c = ['r', 'g', 'b']
for i in range(3):
plt.scatter(feats_train[2,train_labels==i],feats_train[3,train_labels==i],color=c[i],marker='x')
# plot test data points in black
plt.scatter(feats_test[2,:],feats_test[3,:],color='k',marker='o')
plt.show()
Explanation: Before marching forward with applying C4.5, let us plot the data to get a better understanding. The given data points are 4-D and hence cannot be conveniently plotted. We need to reduce the number of dimensions to 2. This reduction can be achieved using any dimension reduction algorithm like PCA. However for the sake of brevity, let us just choose two highly correlated dimensions, petal width and petal length (see summary statistics), right away for plotting.
End of explanation
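As an optional aside (my own addition; the notebook deliberately keeps things brief by hand-picking two correlated dimensions), a PCA-style 2-D projection of the 4-D Iris features can be obtained with numpy's SVD:
import numpy as np
import matplotlib.pyplot as plt
centered = feat.T - feat.T.mean(axis=0)      # feat is 4 x 150, so feat.T has one row per flower
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered.dot(vt[:2].T)           # coordinates along the first two principal axes
for i, col in enumerate(['r', 'g', 'b']):
    plt.scatter(projected[lab == i, 0], projected[lab == i, 1], color=col, marker='x')
plt.show()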
from shogun import features, MulticlassLabels
# training data
feats_train = features(feats_train)
train_labels = MulticlassLabels(train_labels)
# test data
feats_test = features(feats_test)
test_labels = MulticlassLabels(test_labels)
Explanation: First, let us create Shogun features and labels from the given data.
End of explanation
# randomize the order of vectors
subset = int32(random.permutation(ntrain))
nvalidation = 45
# form training subset and validation subset
train_subset = subset[0:ntrain-nvalidation]
validation_subset = subset[ntrain-nvalidation:ntrain]
Explanation: We know for a fact that decision trees tend to overfit. Hence pruning becomes a necessary step. In the case of a toy dataset, we skipped the pruning step because the dataset was simple and noise-free. But in the case of a real dataset like the Iris dataset, pruning cannot be skipped. So we have to partition the training dataset into the training subset and the validation subset.
End of explanation
# set attribute types - all continuous
feature_types = array([False, False, False, False])
# remove validation subset before training the tree
feats_train.add_subset(train_subset)
train_labels.add_subset(train_subset)
# train tree
C45Tree = train_tree(feats_train,feature_types,train_labels)
# bring back validation subset
feats_train.remove_subset()
train_labels.remove_subset()
# remove data belonging to training subset
feats_train.add_subset(validation_subset)
train_labels.add_subset(validation_subset)
# prune the tree
C45Tree.prune_tree(feats_train,train_labels)
# bring back training subset
feats_train.remove_subset()
train_labels.remove_subset()
# get results
output, output_certainty = classify_data(C45Tree,feats_test)
Explanation: Now we train the decision tree first, then prune it and finally use it to get output labels for test vectors.
End of explanation
from shogun import MulticlassAccuracy
# Shogun object for calculating multiclass accuracy
accuracy = MulticlassAccuracy()
print('Accuracy : ' + str(accuracy.evaluate(output, test_labels)))
# convert MulticlassLabels object to labels vector
output = output.get_labels()
test_labels = test_labels.get_labels()
train_labels = train_labels.get_labels()
# convert features object to matrix
feats_test = feats_test.get_real_matrix('feature_matrix')
feats_train = feats_train.get_real_matrix('feature_matrix')
# plot ground truth
for i in range(3):
plt.scatter(feats_test[2,test_labels==i],feats_test[3,test_labels==i],color=c[i],marker='x',s=100)
# plot predicted labels
for i in range(output.size):
plt.scatter(feats_test[2,i],feats_test[3,i],color=c[int32(output[i])],marker='o',s=30*output_certainty[i])
plt.show()
Explanation: Let us calculate the accuracy of the classification made by our tree as well as plot the results for qualitative evaluation.
End of explanation
train_feats,train_labels,test_feats=create_toy_classification_dataset(20,True)
Explanation: From the evaluation of results, we infer that, with the help of a C4.5 trained decision tree, we can predict (with high accuracy) the type of Iris plant given its petal and sepal widths and lengths.
Classification and Regression Trees (CART)
The CART algorithm is a popular decision tree learning algorithm introduced by Breiman et al. Unlike ID3 and C4.5, the learnt decision tree in this case can be used for both multiclass classification and regression depending on the type of dependent variable. The tree growing process comprises of recursive binary splitting of nodes. To find the best split at each node, all possible splits of all available predictive attributes are considered. The best split is the one that maximises some splitting criterion. For classification tasks, ie. when the dependent attribute is categorical, the Gini index is used as the splitting criterion. For regression tasks, ie. when the dependent variable is continuous, the least squares deviation is used. Let us learn about Shogun's CART implementation by working on two toy problems, one on classification and the other on regression.
Classification example using toy data
Let us consider the same dataset as that in the C4.5 toy example. We re-create the dataset and plot it first.
End of explanation
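Before setting up the tree, it may help to see the two splitting criteria in isolation. A small sketch (my own addition with hypothetical helper names, not Shogun's implementation) of the Gini index used for classification splits and the least squares deviation used for regression splits:
import numpy as np
def gini(labels):
    # Gini index: 1 - sum_i p_i^2, where p_i is the fraction of class i at the node
    _, counts = np.unique(labels, return_counts=True)
    p = counts / float(counts.sum())
    return 1.0 - np.sum(p ** 2)
def least_squares_deviation(y):
    # regression criterion: mean squared distance of the node's targets from their mean
    return np.mean((y - np.mean(y)) ** 2)
print(gini(np.array([0, 0, 0, 0])))                       # 0.0 -> pure node
print(gini(np.array([0, 0, 1, 1])))                       # 0.5 -> maximally mixed two-class node
print(least_squares_deviation(np.array([1.0, 1.0, 5.0, 5.0])))   # 4.0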
from shogun import PT_MULTICLASS, CARTree
from numpy import array
def train_carttree(feat_types,problem_type,num_folds,use_cv_pruning,labels,feats):
# create CART tree object
c = CARTree(feat_types,problem_type,num_folds,use_cv_pruning)
# set training labels
c.set_labels(labels)
# train using training features
c.train(feats)
return c
# form feature types True for nominal (attribute X), False for ordinal/continuous (attribute Y)
ft = array([True, False])
# get back trained tree
cart = train_carttree(ft, PT_MULTICLASS, 5, True, train_labels, train_feats)
Explanation: Next, we supply necessary parameters to the CART algorithm and use it to train our decision tree.
End of explanation
from numpy import int32
# get output labels
output_labels = cart.apply_multiclass(test_feats)
plot_toy_classification_results(train_feats,train_labels,test_feats,output_labels)
Explanation: In the above code snippet, we see four parameters being supplied to the CART tree object. feat_types supplies knowledge of attribute types of training data to the CART algorithm and problem_type specifies whether it is a multiclass classification problem (PT_MULTICLASS) or a regression problem (PT_REGRESSION). The boolean parameter use_cv_pruning switches on cross-validation pruning of the trained tree and num_folds specifies the number of folds of cross-validation to be applied while pruning. At this point, let us divert ourselves briefly towards understanding what kind of pruning strategy is employed by Shogun's CART implementation. The CART algorithm uses the cost-complexity pruning strategy. Cost-Complexity pruning yields a list of subtrees of varying depths using complexity normalized resubstitution error, $R_\alpha(T)$. Resubstitution error, R(T), measures how well a decision tree fits the training data. But, this measure favours larger trees over smaller ones. Hence the complexity normalized resubstitution error metric is used, which adds a penalty for increased complexity and in turn counters overfitting.
$R_\alpha(T)=R(T)+\alpha \times (\text{number of leaves})$
The best subtree among the list of subtrees can be chosen using cross validation or using the best-fit metric in the validation dataset. Setting use_cv_pruning in the above code snippet basically tells the CART object to use cross-validation to choose the best among the subtrees generated by cost-complexity pruning.
Let us now get back on track and use the trained tree to classify our test data.
End of explanation
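A tiny numerical illustration of the cost-complexity criterion (the numbers are made up, purely to show how $\alpha$ trades goodness-of-fit against tree size):
# R_alpha(T) = R(T) + alpha * num_leaves, evaluated for two hypothetical subtrees
candidates = {'large tree': (0.05, 12), 'small tree': (0.12, 3)}   # (R(T), number of leaves)
for alpha in (0.0, 0.01, 0.05):
    scores = {name: err + alpha * leaves for name, (err, leaves) in candidates.items()}
    print('alpha =', alpha, '-> best:', min(scores, key=scores.get), scores)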
from shogun import RegressionLabels, features
from numpy import random, sin, linspace
import matplotlib.pyplot as plt
% matplotlib inline
def create_toy_regression_dataset(nsamples,noise_var):
# randomly choose positions in X axis between 0 to 16
samples_x = random.rand(1,nsamples)*16
# find out y (=sin(x)) values for the sampled x positions and add noise to it
samples_y = sin(samples_x)+(random.rand(1,nsamples)-0.5)*noise_var
# plot the samples
plt.scatter(samples_x,samples_y,color='b',marker='x')
# create training features
train_feats = features(samples_x)
# training labels
train_labels = RegressionLabels(samples_y[0,:])
return (train_feats,train_labels)
# plot the reference sinusoid
def plot_ref_sinusoid():
plot_x = linspace(-2,18,100)
plt.plot(plot_x,sin(plot_x),color='y',linewidth=1.5)
plt.xlabel('Feature values')
plt.ylabel('Labels')
plt.xlim([-3,19])
plt.ylim([-1.5,1.5])
# number of samples is 300, noise variance is 0.5
train_feats,train_labels = create_toy_regression_dataset(300,0.5)
plot_ref_sinusoid()
plt.show()
Explanation: Regression example using toy data
In this example, we form the training dataset by sampling points from a sinusoidal curve and see how well a decision tree, trained using these samples, re-creates the actual sinusoid.
End of explanation
from shogun import PT_REGRESSION
from numpy import array
# feature type - continuous
feat_type = array([False])
# get back trained tree
cart = train_carttree(feat_type, PT_REGRESSION, 5, True, train_labels, train_feats)
Explanation: Next, we train our CART-tree.
End of explanation
def plot_predicted_sinusoid(cart):
# regression range - 0 to 16
x_test = array([linspace(0,16,100)])
# form Shogun features
test_feats = features(x_test)
# apply regression using our previously trained CART-tree
regression_output = cart.apply_regression(test_feats).get_labels()
# plot the result
plt.plot(x_test[0,:],regression_output,linewidth=2.0)
# plot reference sinusoid
plot_ref_sinusoid()
plt.show()
plot_predicted_sinusoid(cart)
Explanation: Now let us use the trained decision tree to regress over the entire range of the previously depicted sinusoid.
End of explanation
import csv
from numpy import array
import matplotlib.pylab as plt
% matplotlib inline
# dictionary to encode class names to class labels
to_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([float(i) for i in row[0:4]])
lab.append(to_label[row[4]])
lab = array(lab)
feat = array(feat).T
# plot the dataset using two highly correlated attributes
c = ['r', 'g', 'b']
for i in range(3):
plt.scatter(feat[2,lab==i],feat[3,lab==i],color=c[i],marker='x')
plt.show()
Explanation: As we can see from the above plot, CART-induced decision tree follows the reference sinusoid quite beautifully!
Classification example using real dataset
In this section, we will apply the CART algorithm to the Iris dataset. Remember that the Iris dataset provides us with just a training dataset and no separate test dataset. In the case of the C4.5 example discussed earlier, we divided the entire training dataset into a training subset and a test subset. In this section, we will employ a different strategy, i.e. cross-validation. In cross-validation, we divide the training dataset into n subsets, where n is a user-controlled parameter. We perform n iterations of training and testing in which, at each iteration, we choose one of the n subsets as our test dataset and the remaining n-1 subsets as our training dataset. The performance of the model is usually taken as the average of the performances over the various iterations. Shogun's cross-validation class makes it really easy to apply cross-validation to any model of our choice. Let us realize this by applying cross-validation to a CART tree trained on the Iris dataset. We start by reading the data.
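A minimal sketch of the splitting idea in plain NumPy (Shogun's CrossValidationSplitting class, used below, takes care of this bookkeeping for us):
```python
import numpy as np

n_folds = 10
indices = np.random.permutation(len(lab))        # shuffle the sample indices
folds = np.array_split(indices, n_folds)          # n roughly equal subsets

for i in range(n_folds):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
    # train on train_idx, evaluate on test_idx, then average the n test scores
```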
End of explanation
from shogun import CARTree, PT_MULTICLASS
# set attribute types - all continuous
feature_types = array([False, False, False, False])
# setup CART-tree with cross validation pruning switched off
cart = CARTree(feature_types,PT_MULTICLASS,5,False)
Explanation: Next, we setup the model which is CART-tree in this case.
End of explanation
from shogun import features, MulticlassLabels
from shogun import CrossValidation, MulticlassAccuracy, CrossValidationSplitting, CrossValidationResult
# training features
feats_train = features(feat)
# training labels
labels_train = MulticlassLabels(lab)
# set evaluation criteria - multiclass accuracy
accuracy = MulticlassAccuracy()
# set splitting criteria - 10 fold cross-validation
split = CrossValidationSplitting(labels_train,10)
# set cross-validation parameters
cross_val = CrossValidation(cart,feats_train,labels_train,split,accuracy,False)
# run cross-validation multiple times - to get better estimate of accuracy
cross_val.put('num_runs', 10)
# get cross validation result
# CARTree is not x-validatable
# result = cross_val.evaluate()
# print result
# print('Mean Accuracy : ' + str(CrossValidationResult.obtain_from_generic(result).get_mean()))
Explanation: Finally we can use Shogun's cross-validation class to get performance.
End of explanation
from numpy import array
# dictionary to convert string features to integer values
to_int = {'A' : 1, 'B' : 2, 'C' : 3, 'D' : 4, 'E' : 5}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/servo/servo.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([to_int[row[0]], to_int[row[1]], float(row[2]), float(row[3])])
lab.append(float(row[4]))
lab = array(lab)
feat = array(feat).T
Explanation: We get a mean accuracy of about 0.93-0.94. This number essentially means that a CART-tree trained using this dataset is expected to classify Iris flowers, given their required attributes, with an accuracy of 93-94% in a real world scenario. The parameters required by Shogun's cross-validation class should be noted in the above code snippet. The class requires the model, training features, training labels, splitting strategy and evaluation method to be specified.
Regression using real dataset
In this section, we evaluate CART-induced decision tree over the Servo dataset. Using this dataset, we essentially want to train a model which can predict the rise time of a servomechanism given the required parameters which are the two (integer) gain settings and two (nominal) choices of mechanical linkages. Let us read the dataset first.
End of explanation
from shogun import CARTree, RegressionLabels, PT_REGRESSION, MeanSquaredError
from shogun import CrossValidation, CrossValidationSplitting, CrossValidationResult
# form training features
feats_train = features(feat)
# form training labels
labels_train = RegressionLabels(lab)
def get_cv_error(max_depth):
# set attribute types - 2 nominal and 2 ordinal
feature_types = array([True, True, False, False])
# setup CART-tree with cross validation pruning switched off
cart = CARTree(feature_types,PT_REGRESSION,5,False)
# set max allowed depth
cart.set_max_depth(max_depth)
# set evaluation criteria - mean squared error
accuracy = MeanSquaredError()
# set splitting criteria - 10 fold cross-validation
split = CrossValidationSplitting(labels_train,10)
# set cross-validation parameters
cross_val = CrossValidation(cart,feats_train,labels_train,split,accuracy,False)
# run cross-validation multiple times
cross_val.put('num_runs', 10)
# return cross validation result
return CrossValidationResult.obtain_from_generic(cross_val.evaluate()).get_mean()
Explanation: The servo dataset is a small training dataset (it contains just 167 training vectors) with no separate test dataset, like the Iris dataset. Hence we will apply the same cross-validation strategy we applied in the case of the Iris dataset. However, to make things interesting, let us play around with a yet-untouched parameter of the CART-induced tree, i.e. the maximum allowed tree depth. As the tree depth increases, the tree becomes more complex and hence fits the training data more closely. By setting a maximum allowed tree depth, we restrict the complexity of the trained tree and hence avoid over-fitting. But choosing too low a value for the maximum allowed tree depth may lead to early stopping, i.e. under-fitting. Let us explore how we can decide the appropriate value of the max-allowed-tree-depth parameter. Let us create a method that takes the max-allowed-tree-depth parameter as input and returns the corresponding cross-validated error as output.
End of explanation
import matplotlib.pyplot as plt
# CARTree is not x-validatable
# cv_errors = [get_cv_error(i) for i in range(1,15)]
# plt.plot(range(1,15),cv_errors,'bo',range(1,15),cv_errors,'k')
# plt.xlabel('max_allowed_depth')
# plt.ylabel('cross-validated error')
# plt.ylim(0,1.2)
# plt.show()
Explanation: Next, let us supply a range of max_depth values to the above method and plot the returned cross-validated errors.
End of explanation
train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)
Explanation: The above plot quite clearly gives us the most appropriate value of the maximum allowed depth. We see that the first minimum occurs at a maximum allowed depth of 6-8. Hence, one of these should be the desired value. Note that the error metric we are discussing here is the mean squared error. Thus, from the above plot, we can also claim that, given the required parameters, our CART-flavoured decision tree can predict the rise time within an average error range of $\pm0.5$ (i.e. the square root of 0.25, which is the approximate minimum cross-validated error). The relative error, i.e. average_error/range_of_labels, comes out to be ~30%.
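A small sketch of that last computation (here min_cv_mse is a hypothetical placeholder for the minimum cross-validated MSE read off the plot; labels_train is the servo RegressionLabels object defined above):
```python
from numpy import sqrt, ptp

min_cv_mse = 0.25                                        # approximate minimum from the plot
avg_error = sqrt(min_cv_mse)                             # ~0.5 in rise-time units
rel_error = avg_error / ptp(labels_train.get_labels())   # average_error / range_of_labels
```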
CHi-squared Automatic Interaction Detection (CHAID)
CHAID is an algorithm for decision tree learning proposed by Kass (1980). It is similar in functionality to CART in the sense that both can be used for classification as well as regression. But unlike CART, CHAID internally handles only categorical features. Continuous features are first converted into ordinal categorical features for the CHAID algorithm to be able to use them. This conversion is done by binning the feature values. The number of bins (K) has to be supplied by the user. Given K, a predictor is split in such a way that all the bins get the same number (more or less) of distinct predictor values. The maximum feature value in each bin is used as a breakpoint.
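A rough NumPy sketch of this equal-frequency binning idea (illustration only; Shogun's CHAIDTree performs this conversion internally):
```python
import numpy as np

values = np.random.rand(200) * 10                  # a continuous predictor
K = 5                                               # number of bins supplied by the user
distinct = np.unique(values)                        # sorted distinct predictor values
bins = np.array_split(distinct, K)                  # ~equal number of distinct values per bin
breakpoints = np.array([b.max() for b in bins])     # max value of each bin is a breakpoint
ordinal = np.digitize(values, breakpoints[:-1], right=True)   # ordinal codes 0 .. K-1
```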
An important parameter in the CHAID tree-growing process is the p-value. The p-value is the metric used for deciding which categories of predictor values to merge, as well as for deciding the best attribute during splitting. The p-value is calculated using different hypothesis testing methods depending on the type of the dependent variable (nominal, ordinal or continuous). A more detailed discussion of the CHAID algorithm can be found in the documentation of the CCHAIDTree class in Shogun. Let us move on to a more interesting topic, which is learning to use CHAID through Shogun's Python API.
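For intuition only, the kind of p-value used when deciding whether two predictor categories should be merged can be obtained from a chi-squared test of independence on their category-by-class contingency table (SciPy is used here purely for illustration, with made-up counts; Shogun computes these tests internally):
```python
from scipy.stats import chi2_contingency

# rows: two candidate categories of a predictor; columns: class counts (made-up numbers)
table = [[30, 10, 5],
         [28, 12, 6]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)   # a large p-value suggests the two categories behave alike -> merge them
```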
Classification example using toy dataset
Let us re-use the toy classification dataset used in C4.5 and CART to see the API usage of CHAID as well as to qualitatively compare the results of the CHAID algorithm with the other two.
End of explanation
from shogun import PT_MULTICLASS, CHAIDTree
from numpy import array, dtype, int32
def train_chaidtree(dependent_var_type,feature_types,num_bins,feats,labels):
# create CHAID tree object
c = CHAIDTree(dependent_var_type,feature_types,num_bins)
# set training labels
c.put('labels', labels)
# train using training features
c.train(feats)
return c
# form feature types 0 for nominal (attribute X), 2 for continuous (attribute Y)
ft = array([0, 2],dtype=int32)
# cache training matrix
train_feats_cache=features(train_feats.get_feature_matrix())
# get back trained tree - dependent variable type is nominal (hence 0), number of bins for binning is 10
chaid = train_chaidtree(0,ft,10,train_feats,train_labels)
print('updated_matrix')
print(train_feats.get_real_matrix('feature_matrix'))
print('')
print('original_matrix')
print(train_feats_cache.get_real_matrix('feature_matrix'))
Explanation: Now, we set up our CHAID-tree with appropriate parameters and train over given data.
End of explanation
# get output labels
output_labels = chaid.apply_multiclass(test_feats)
plot_toy_classification_results(train_feats_cache,train_labels,test_feats,output_labels)
Explanation: An important point to be noted in the above code snippet is that CHAID training modifies the training data. The actual continuous feature values are replaced by the discrete ordinal values obtained during continuous to ordinal conversion. Notice the difference between the original feature matrix and the updated matrix. The updated matrix contains only 10 distinct values denoting all values of the original matrix for feature dimension at row index 1.
With a CHAID-trained decision tree at our disposal, it's time to apply it to colour our test points.
End of explanation
train_feats,train_labels = create_toy_regression_dataset(300,0.5)
plot_ref_sinusoid()
plt.show()
Explanation: Regression example with toy dataset
In this section, we re-work the sinusoid curve fitting example (earlier used in CART toy regression).
End of explanation
from numpy import dtype, int32, array
# feature type - continuous
feat_type = array([2],dtype=int32)
# get back trained tree
chaid = train_chaidtree(2,feat_type, 50, train_feats, train_labels)
Explanation: As usual, we start by setting up our decision tree and training it.
End of explanation
plot_predicted_sinusoid(chaid)
Explanation: Next, we use the trained decision tree to follow the reference sinusoid.
End of explanation
from shogun import CSVFile, features, MulticlassLabels
train_feats=features(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))
train_labels=MulticlassLabels(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))
Explanation: A distinguishing feature about the predicted curve is the presence of steps. These steps are essentially an artifact of continuous to ordinal conversion. If we decrease the number of bins for the conversion the step widths will increase.
Classification example over real dataset
In this section, we will try to estimate the quality of wine based on 13 attributes like alcohol content, malic acid, magnesium content, etc. using the wine dataset. Let us first read the dataset using Shogun's CSV file reader.
End of explanation
from shogun import CHAIDTree, MulticlassLabels
# set attribute types - all attributes are continuous(2)
feature_types = array([2 for i in range(13)],dtype=int32)
# setup CHAID tree - dependent variable is nominal(0), feature types set, number of bins(20)
chaid = CHAIDTree(0,feature_types,20)
Explanation: As in the case of CART, here too we are interested in finding out the approximate accuracy with which a CHAID tree trained on this dataset will perform in the real world. Hence, we will apply the cross-validation strategy. But first we specify the parameters of the CHAID tree.
End of explanation
# set up cross validation class
from shogun import CrossValidation, CrossValidationSplitting, CrossValidationResult, MulticlassAccuracy
# set evaluation criteria - multiclass accuracy
accuracy = MulticlassAccuracy()
# set splitting criteria - 10 fold cross-validation
split = CrossValidationSplitting(train_labels,10)
# set cross-validation parameters
cross_val = CrossValidation(chaid,train_feats,train_labels,split,accuracy,False)
# run cross-validation multiple times
cross_val.put('num_runs', 10)
# CHAIDTree is not x-validatable
# print('Mean classification accuracy : '+str(CrossValidationResult.obtain_from_generic(cross_val.evaluate()).get_mean()*100)+' %')
Explanation: Next we set up the cross-validation class and get back the error estimate we want, i.e. the mean classification error.
End of explanation
from shogun import CSVFile, features, RegressionLabels
from numpy import ptp
train_feats=features(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
train_labels=RegressionLabels(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
# print range of regression labels - this is useful for calculating relative deviation later
print('labels range : '+str(ptp(train_labels.get_labels())))
Explanation: Regression example using real dataset
In this section, we try to predict the value of houses in Boston using 13 attributes, like per capita crime rate in neighborhood, number of rooms, nitrous oxide concentration in air, proportion of non-retail business in the area etc. Out of the 13 attributes 12 are continuous and 1 (the Charles river dummy variable) is binary nominal. Let us load the dataset as our first step. For this, we can directly use Shogun's CSV file reader class.
End of explanation
from shogun import CHAIDTree, MeanSquaredError
from shogun import CrossValidation, CrossValidationSplitting, CrossValidationResult
from numpy import array, dtype, int32
def get_cv_error(max_depth):
# set feature types - all continuous(2) except column 4 (nominal, 0) and columns 9-10 (ordinal, 1)
feature_types = array([2]*13,dtype=int32)
feature_types[3]=0
feature_types[8]=1
feature_types[9]=1
# setup CHAID-tree
chaid = CHAIDTree(2,feature_types,10)
# set max allowed depth
chaid.set_max_tree_depth(max_depth)
# set evaluation criteria - mean squared error
accuracy = MeanSquaredError()
# set splitting criteria - 5 fold cross-validation
split = CrossValidationSplitting(train_labels,5)
# set cross-validation parameters
cross_val = CrossValidation(chaid,train_feats,train_labels,split,accuracy,False)
# run cross-validation multiple times
cross_val.set_num_runs(3)
# return cross validation result
return CrossValidationResult.obtain_from_generic(cross_val.evaluate()).get_mean()
import matplotlib.pyplot as plt
% matplotlib inline
# CHAIDTree is not x-validatable
# cv_errors = [get_cv_error(i) for i in range(1,10)]
# plt.plot(range(1,10),cv_errors,'bo',range(1,10),cv_errors,'k')
# plt.xlabel('max_allowed_depth')
# plt.ylabel('cross-validated error')
# plt.show()
Explanation: Next, we set up the parameters for the CHAID tree as well as the cross-validation class.
End of explanation |
10,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Joint Intent Classification and Slot Filling with Transformers
The goal of this notebook is to fine-tune a pretrained transformer-based neural network model to convert a user query expressed in English into
a representation that is structured enough to be processed by an automated service.
Here is an example of interpretation computed by such a Natural Language Understanding system
Step1: The Data
We will use a speech command dataset collected, annotated and published by French startup SNIPS.ai (bought in 2019 by Audio device manufacturer Sonos).
The original dataset comes in YAML format with inline markdown annotations.
Instead we will use a preprocessed variant with token level B-I-O annotations closer to the representation our model will predict. This variant of the SNIPS
dataset was prepared by Su Zhu.
Step2: Let's have a look at the first lines from the training set
Step3: Some remarks
Step4: This utterance is a voice command of type "AddToPlaylist" with two annotations
Step5: "POI" stands for "Point of Interest".
Let's parse all the lines and store the results in pandas DataFrames
Step6: A First Model
Step7: Notice that BERT uses subword tokens so the length of the tokenized sentence is likely to be larger than the number of words in the sentence.
Question
Step8: Remarks
Step9: To perform transfer learning, we will need to work with padded sequences so they all have the same size. The above histogram shows that after tokenization, 43 tokens are enough to represent all the voice commands in the training set.
The mapping can be introspected in the tokenizer.vocab attribute
Step10: Couple of remarks
Step11: Encoding the Sequence Classification Targets
To do so we build a simple mapping from the auxiliary files
Step12: Loading and Feeding a Pretrained BERT model
Let's load a pretrained BERT model using the huggingface transformers package
Step13: The first output of the BERT model is a tensor with shape
Step14: The second output of the BERT model is a tensor with shape (batch_size, output_dim) which is the vector representation of the special token [CLS]. This vector is typically used as a pooled representation for the sequence as a whole. This will be used as the features of our Intent classifier
Step15: Exercise
Use the following code template to build and train a sequence classification model to predict the intent class.
Use the self.bert pre-trained model in the call method and only consider the pooled features (ignore the token-wise features for now).
Step16: Solution
Step17: Our classification model outputs logits instead of probabilities. The final softmax normalization layer is implicit: it is included in the loss function rather than in the model itself.
We need to configure the loss function SparseCategoricalCrossentropy(from_logits=True) accordingly
Step18: Join Intent Classification and Slot Filling
Let's now refine our Natural Language Understanding system by trying to retrieve the important structured elements of each voice command.
To do so we will perform word level (or token level) classification of the BIO labels.
Since we have word level tags but BERT uses a wordpiece tokenizer, we need to align the BIO labels with the BERT tokens.
Let's load the list of possible word token labels and augment it with an additional padding label to be able to ignore special tokens
Step19: The following function generates token-aligned integer labels from the BIO word-level annotations. In particular, if a specific word is split into several wordpiece tokens, we expand its label to all the tokens of that word, taking care to use the "B-" label only for the first token and the matching "I-" slot label for the subsequent tokens of the same word
Step20: Note that the special tokens such as "[PAD]" and "[SEP]" and all padded positions receive a 0 label.
Exercise
Use the following code template to build a joint sequence and token classification model suitable for training on our encoded dataset with slot labels
Step21: Solution
Step22: The following function uses our trained model to make a prediction on a single text sequence and display both the sequence-wise and the token-wise class labels
Step23: Decoding Predictions into Structured Knowledge
For completeness, here is a minimal function to naively decode the predicted BIO slot ids and convert them into a structured representation of the detected slots as Python dictionaries
import tensorflow as tf
tf.__version__
!nvidia-smi
# TODO: update this notebook to work with the latest version of transformers
%pip install -q transformers==2.11.0
Explanation: Joint Intent Classification and Slot Filling with Transformers
The goal of this notebook is to fine-tune a pretrained transformer-based neural network model to convert a user query expressed in English into
a representation that is structured enough to be processed by an automated service.
Here is an example of interpretation computed by such a Natural Language Understanding system:
```python
nlu("Book a table for two at Le Ritz for Friday night",
tokenizer, joint_model, intent_names, slot_names)
{
'intent': 'BookRestaurant',
'slots': {
'party_size_number': 'two',
'restaurant_name': 'Le Ritz',
'timeRange': 'Friday night'
}
}
```
Intent classification is a simple sequence classification problem. The trick is to treat the structured knowledge extraction part ("Slot Filling") as a token-level classification problem using BIO-annotations:
```python
show_predictions("Book a table for two at Le Ritz for Friday night!",
... tokenizer, joint_model, intent_names, slot_names)
Intent: BookRestaurant
Slots:
Book : O
a : O
table : O
for : O
two : B-party_size_number
at : O
Le : B-restaurant_name
R : I-restaurant_name
##itz : I-restaurant_name
for : O
Friday : B-timeRange
night : I-timeRange
! : O
```
We will show how to train such a joint "sequence classification" and "token classification" model on a voice command dataset published by snips.ai.
This notebook is a partial reproduction of some of the results presented in this paper:
BERT for Joint Intent Classification and Slot Filling
Qian Chen, Zhu Zhuo, Wen Wang
https://arxiv.org/abs/1902.10909
End of explanation
from urllib.request import urlretrieve
from pathlib import Path
SNIPS_DATA_BASE_URL = (
"https://github.com/ogrisel/slot_filling_and_intent_detection_of_SLU/blob/"
"master/data/snips/"
)
for filename in ["train", "valid", "test", "vocab.intent", "vocab.slot"]:
path = Path(filename)
if not path.exists():
print(f"Downloading {filename}...")
urlretrieve(SNIPS_DATA_BASE_URL + filename + "?raw=true", path)
Explanation: The Data
We will use a speech command dataset collected, annotated and published by French startup SNIPS.ai (bought in 2019 by Audio device manufacturer Sonos).
The original dataset comes in YAML format with inline markdown annotations.
Instead we will use a preprocessed variant with token level B-I-O annotations closer to the representation our model will predict. This variant of the SNIPS
dataset was prepared by Su Zhu.
End of explanation
lines_train = Path("train").read_text("utf-8").strip().splitlines()
lines_train[:5]
Explanation: Let's have a look at the first lines from the training set:
End of explanation
def parse_line(line):
utterance_data, intent_label = line.split(" <=> ")
items = utterance_data.split()
words = [item.rsplit(":", 1)[0]for item in items]
word_labels = [item.rsplit(":", 1)[1]for item in items]
return {
"intent_label": intent_label,
"words": " ".join(words),
"word_labels": " ".join(word_labels),
"length": len(words),
}
parse_line(lines_train[0])
Explanation: Some remarks:
The class label for the voice command appears at the end of each line (after the "<=>" marker).
Each word-level token is annotated with B-I-O labels using the ":" separator.
B/I/O stand for "Beginning" / "Inside" / "Outside"
"Add:O" means that the token "Add" is "Outside" of any annotation span
"Don:B-entity_name" means that "Don" is the "Beginning" of an annotation of type "entity-name".
"and:I-entity_name" means that "and" is "Inside" the previously started annotation of type "entity-name".
Let's write a parsing function and test it on the first line:
End of explanation
print(Path("vocab.intent").read_text("utf-8"))
print(Path("vocab.slot").read_text("utf-8"))
Explanation: This utterance is a voice command of type "AddToPlaylist" with two annotations:
an entity-name: "Don and Sherri",
a playlist: "Medidate to Sounds of Nature".
The goal of this project is to build a baseline Natural Language Understanding model to analyse such voice commands and predict:
the intent of the speaker: the sentence level class label ("AddToPlaylist");
extract the interesting "slots" (typed named entities) from the sentence by performing word level classification using the B-I-O tags as target classes. This second task is often referred to as "NER" (Named Entity Recognition) in the Natural Language Processing literature. Alternatively this is also known as "slot filling" when we expect a fixed set of named entity per sentence of a given class.
The list of possible classes for the sentence level and the word level classification problems are given as:
End of explanation
import pandas as pd
parsed = [parse_line(line) for line in lines_train]
df_train = pd.DataFrame([p for p in parsed if p is not None])
df_train
df_train.groupby("intent_label").count()
df_train.hist("length", bins=30);
lines_valid = Path("valid").read_text("utf-8").strip().splitlines()
lines_test = Path("test").read_text("utf-8").strip().splitlines()
df_valid = pd.DataFrame([parse_line(line) for line in lines_valid])
df_test = pd.DataFrame([parse_line(line) for line in lines_test])
Explanation: "POI" stands for "Point of Interest".
Let's parse all the lines and store the results in pandas DataFrames:
End of explanation
from transformers import BertTokenizer
model_name = "bert-base-cased"
tokenizer = BertTokenizer.from_pretrained(model_name)
first_sentence = df_train.iloc[0]["words"]
first_sentence
tokenizer.tokenize(first_sentence)
Explanation: A First Model: Intent Classification (Sentence Level)
Let's ignore the slot filling task for now and let's try to build a sentence level classifier by fine-tuning a pre-trained Transformer-based model using the huggingface/transformers package that provides both TF2/Keras and Pytorch APIs.
The BERT Tokenizer
First let's load a pre-trained tokenizer and test it on a test sentence from the training set:
End of explanation
tokenizer.encode(first_sentence)
tokenizer.decode(tokenizer.encode(first_sentence))
Explanation: Notice that BERT uses subword tokens so the length of the tokenized sentence is likely to be larger than the number of words in the sentence.
Question:
why is it particularly interesting to use subword tokenization for general purpose language models such as BERT?
Each token string is mapped to a unique integer id that makes it fast to lookup the right column in the input layer token embedding:
End of explanation
import matplotlib.pyplot as plt
train_sequence_lengths = [len(tokenizer.encode(text))
for text in df_train["words"]]
plt.hist(train_sequence_lengths, bins=30)
plt.title(f"max sequence length: {max(train_sequence_lengths)}");
Explanation: Remarks:
The first token [CLS] is used by the pre-training task for sequence classification.
The last token [SEP] is a separator for the pre-training task that classifies if a pair of sentences are consecutive in a corpus or not (next sentence prediction).
Here we want to use BERT to compute a representation of a single voice command at a time
We could reuse the representation of the [CLS] token for sequence classification.
Alternatively we can pool the representations of all the tokens of the voice command (e.g. global average) and use that as the input of the final sequence classification layer (a minimal sketch of this option is given below).
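A minimal sketch of that masked average-pooling alternative (illustration only; the rest of this notebook keeps using the pooled [CLS] output):
```python
import tensorflow as tf

def masked_average_pool(sequence_output, attention_mask):
    # sequence_output: (batch, seq_len, dim) token-wise BERT features
    # attention_mask:  (batch, seq_len) with 1 for real tokens and 0 for padding
    mask = tf.cast(attention_mask, tf.float32)[:, :, None]
    summed = tf.reduce_sum(sequence_output * mask, axis=1)
    counts = tf.reduce_sum(mask, axis=1)
    return summed / counts            # (batch, dim) pooled representation
```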
End of explanation
tokenizer.vocab_size
bert_vocab_items = list(tokenizer.vocab.items())
bert_vocab_items[:10]
bert_vocab_items[100:110]
bert_vocab_items[900:910]
bert_vocab_items[1100:1110]
bert_vocab_items[20000:20010]
bert_vocab_items[-10:]
Explanation: To perform transfer learning, we will need to work with padded sequences so they all have the same size. The above histogram shows that after tokenization, 43 tokens are enough to represent all the voice commands in the training set.
The mapping can be introspected in the tokenizer.vocab attribute:
End of explanation
import numpy as np
def encode_dataset(tokenizer, text_sequences, max_length):
token_ids = np.zeros(shape=(len(text_sequences), max_length),
dtype=np.int32)
for i, text_sequence in enumerate(text_sequences):
encoded = tokenizer.encode(text_sequence)
token_ids[i, 0:len(encoded)] = encoded
attention_masks = (token_ids != 0).astype(np.int32)
return {"input_ids": token_ids, "attention_masks": attention_masks}
encoded_train = encode_dataset(tokenizer, df_train["words"], 45)
encoded_train["input_ids"]
encoded_train["attention_masks"]
encoded_valid = encode_dataset(tokenizer, df_valid["words"], 45)
encoded_test = encode_dataset(tokenizer, df_test["words"], 45)
Explanation: Couple of remarks:
30K is a reasonable vocabulary size and is small enough to be used in a softmax output layer;
it can represent multi-lingual sentences, including non-Western alphabets;
subword tokenization makes it possible to deal with typos and morphological variations with a small vocabulary size and without any language-specific preprocessing;
subword tokenization makes it unlikely to use the [UNK] special token as rare words can often be represented as a sequence of frequent enough short subwords in a meaningful way (a quick check is sketched below).
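A quick way to check that last claim with the tokenizer loaded above (the exact subword split depends on the vocabulary, so no particular output is assumed here):
```python
# Rare or misspelled words are still segmented into known subword pieces
# rather than collapsing to the [UNK] token.
print(tokenizer.tokenize("unbelievabley"))
print(tokenizer.tokenize("snowboarding in Saclay"))
```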
Encoding the Dataset with the Tokenizer
Let's now encode the full train / valid and test sets with our tokenizer to get padded integer NumPy arrays:
End of explanation
intent_names = Path("vocab.intent").read_text("utf-8").split()
intent_map = dict((label, idx) for idx, label in enumerate(intent_names))
intent_map
intent_train = df_train["intent_label"].map(intent_map).values
intent_train
intent_valid = df_valid["intent_label"].map(intent_map).values
intent_test = df_test["intent_label"].map(intent_map).values
Explanation: Encoding the Sequence Classification Targets
To do so we build a simple mapping from the auxiliary files:
End of explanation
from transformers import TFAutoModel
base_bert_model = TFAutoModel.from_pretrained("bert-base-cased")
base_bert_model.summary()
encoded_valid
outputs = base_bert_model(encoded_valid)
len(outputs)
Explanation: Loading and Feeding a Pretrained BERT model
Let's load a pretrained BERT model using the huggingface transformers package:
End of explanation
outputs[0].shape
Explanation: The first output of the BERT model is a tensor with shape: (batch_size, seq_len, output_dim) which computes features for each token in the input sequence:
End of explanation
outputs[1].shape
Explanation: The second output of the BERT model is a tensor with shape (batch_size, output_dim) which is the vector representation of the special token [CLS]. This vector is typically used as a pooled representation for the sequence as a whole. This will be used as the features of our Intent classifier:
End of explanation
import tensorflow as tf
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
class IntentClassificationModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, model_name="bert-base-cased",
dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
# Let's preload the pretrained model BERT in the constructor of our
# classifier model
self.bert = TFAutoModel.from_pretrained(model_name)
# TODO: define a (Dense) classification layer to compute the
# for each sequence in a batch the batch of samples. The number of
# output classes is given by the intent_num_labels parameter.
# Use the default linear activation (no softmax) to compute logits.
# The softmax normalization will be computed in the loss function
# instead of the model itself.
def call(self, inputs, training=False):
# Use the pretrained model to extract features from our encoded inputs:
sequence_output, pooled_output = self.bert(inputs, training=training)
# The second output of the main BERT layer has shape:
# (batch_size, output_dim)
# and gives a "pooled" representation for the full sequence from the
# hidden state that corresponds to the "[CLS]" token.
# TODO: use the classifier layer to compute the logits from the pooled
# features.
intent_logits = None
return intent_logits
intent_model = IntentClassificationModel(intent_num_labels=len(intent_map))
intent_model.compile(optimizer=Adam(learning_rate=3e-5, epsilon=1e-08),
loss=SparseCategoricalCrossentropy(from_logits=True),
metrics=[SparseCategoricalAccuracy('accuracy')])
# TODO: uncomment to train the model:
# history = intent_model.fit(encoded_train, intent_train, epochs=2, batch_size=32,
# validation_data=(encoded_valid, intent_valid))
Explanation: Exercise
Use the following code template to build and train a sequence classification model to predict the intent class.
Use the self.bert pre-trained model in the call method and only consider the pooled features (ignore the token-wise features for now).
End of explanation
import tensorflow as tf
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
class IntentClassificationModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, model_name="bert-base-cased",
dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
self.bert = TFAutoModel.from_pretrained(model_name)
self.dropout = Dropout(dropout_prob)
# Use the default linear activation (no softmax) to compute logits.
# The softmax normalization will be computed in the loss function
# instead of the model itself.
self.intent_classifier = Dense(intent_num_labels)
def call(self, inputs, training=False):
sequence_output, pooled_output = self.bert(inputs, training=training)
pooled_output = self.dropout(pooled_output, training=training)
intent_logits = self.intent_classifier(pooled_output)
return intent_logits
intent_model = IntentClassificationModel(intent_num_labels=len(intent_map))
Explanation: Solution
End of explanation
intent_model.compile(optimizer=Adam(learning_rate=3e-5, epsilon=1e-08),
loss=SparseCategoricalCrossentropy(from_logits=True),
metrics=[SparseCategoricalAccuracy('accuracy')])
history = intent_model.fit(encoded_train, intent_train, epochs=2, batch_size=32,
validation_data=(encoded_valid, intent_valid))
def classify(text, tokenizer, model, intent_names):
inputs = tf.constant(tokenizer.encode(text))[None, :] # batch_size = 1
class_id = model(inputs).numpy().argmax(axis=1)[0]
return intent_names[class_id]
classify("Book a table for two at La Tour d'Argent for Friday night.",
tokenizer, intent_model, intent_names)
classify("I would like to listen to Anima by Thom Yorke.",
tokenizer, intent_model, intent_names)
classify("Will it snow tomorrow in Saclay?",
tokenizer, intent_model, intent_names)
classify("Where can I see to the last Star Wars near Odéon tonight?",
tokenizer, intent_model, intent_names)
Explanation: Our classification model outputs logits instead of probabilities. The final softmax normalization layer is implicit: it is included in the loss function rather than in the model itself.
We need to configure the loss function SparseCategoricalCrossentropy(from_logits=True) accordingly:
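If actual probabilities are needed at prediction time, the softmax can be applied explicitly to the returned logits. A small sketch, mirroring the classify() helper above (the example sentence is arbitrary):
```python
import tensorflow as tf

token_ids = tf.constant(tokenizer.encode("Book a table for two at Le Ritz"))[None, :]
logits = intent_model(token_ids)              # (1, num_intents) raw scores
probs = tf.nn.softmax(logits, axis=-1)        # rows now sum to 1
```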
End of explanation
slot_names = ["[PAD]"]
slot_names += Path("vocab.slot").read_text("utf-8").strip().splitlines()
slot_map = {}
for label in slot_names:
slot_map[label] = len(slot_map)
slot_map
Explanation: Join Intent Classification and Slot Filling
Let's now refine our Natural Language Understanding system by trying to retrieve the important structured elements of each voice command.
To do so we will perform word level (or token level) classification of the BIO labels.
Since we have word level tags but BERT uses a wordpiece tokenizer, we need to align the BIO labels with the BERT tokens.
Let's load the list of possible word token labels and augment it with an additional padding label to be able to ignore special tokens:
End of explanation
def encode_token_labels(text_sequences, slot_names, tokenizer, slot_map,
max_length):
encoded = np.zeros(shape=(len(text_sequences), max_length), dtype=np.int32)
for i, (text_sequence, word_labels) in enumerate(
zip(text_sequences, slot_names)):
encoded_labels = []
for word, word_label in zip(text_sequence.split(), word_labels.split()):
tokens = tokenizer.tokenize(word)
encoded_labels.append(slot_map[word_label])
expand_label = word_label.replace("B-", "I-")
if not expand_label in slot_map:
expand_label = word_label
encoded_labels.extend([slot_map[expand_label]] * (len(tokens) - 1))
encoded[i, 1:len(encoded_labels) + 1] = encoded_labels
return encoded
slot_train = encode_token_labels(
df_train["words"], df_train["word_labels"], tokenizer, slot_map, 45)
slot_valid = encode_token_labels(
df_valid["words"], df_valid["word_labels"], tokenizer, slot_map, 45)
slot_test = encode_token_labels(
df_test["words"], df_test["word_labels"], tokenizer, slot_map, 45)
slot_train[0]
slot_valid[0]
Explanation: The following function generates token-aligned integer labels from the BIO word-level annotations. In particular, if a specific word is split into several wordpiece tokens, we expand its label to all the tokens of that word, taking care to use the "B-" label only for the first token and the matching "I-" slot label for the subsequent tokens of the same word:
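A small illustration of the alignment rule, reusing the tokenizer loaded earlier (the word/label pair is hypothetical):
```python
word, word_label = "Ritz", "B-restaurant_name"     # hypothetical word-level annotation
tokens = tokenizer.tokenize(word)                   # possibly several wordpiece tokens
aligned = [word_label] + [word_label.replace("B-", "I-")] * (len(tokens) - 1)
print(tokens, aligned)    # only the first token keeps the "B-" prefix
```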
End of explanation
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
class JointIntentAndSlotFillingModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, slot_num_labels=None,
model_name="bert-base-cased", dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
self.bert = TFAutoModel.from_pretrained(model_name)
# TODO: define all the needed layers here.
def call(self, inputs, training=False):
# TODO: extract the features from the inputs using the pre-trained
# BERT model here.
# TODO: use the new layers to predict slot class (logits) for each
# token position in the input sequence:
slot_logits = None # (batch_size, seq_len, slot_num_labels)
# TODO: define a second classification head for the sequence-wise
# predictions:
intent_logits = None # (batch_size, intent_num_labels)
return slot_logits, intent_logits
joint_model = JointIntentAndSlotFillingModel(
intent_num_labels=len(intent_map), slot_num_labels=len(slot_map))
# Define one classification loss for each output:
losses = [SparseCategoricalCrossentropy(from_logits=True),
SparseCategoricalCrossentropy(from_logits=True)]
joint_model.compile(optimizer=Adam(learning_rate=3e-5, epsilon=1e-08),
loss=losses)
# TODO: uncomment to train the model:
# history = joint_model.fit(
# encoded_train, (slot_train, intent_train),
# validation_data=(encoded_valid, (slot_valid, intent_valid)),
# epochs=2, batch_size=32)
Explanation: Note that the special tokens such as "[PAD]" and "[SEP]" and all padded positions receive a 0 label.
Exercise
Use the following code template to build a joint sequence and token classification model suitable for training on our encoded dataset with slot labels:
End of explanation
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
class JointIntentAndSlotFillingModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, slot_num_labels=None,
model_name="bert-base-cased", dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
self.bert = TFAutoModel.from_pretrained(model_name)
self.dropout = Dropout(dropout_prob)
self.intent_classifier = Dense(intent_num_labels,
name="intent_classifier")
self.slot_classifier = Dense(slot_num_labels,
name="slot_classifier")
def call(self, inputs, training=False):
sequence_output, pooled_output = self.bert(inputs, training=training)
# The first output of the main BERT layer has shape:
# (batch_size, max_length, output_dim)
sequence_output = self.dropout(sequence_output, training=training)
slot_logits = self.slot_classifier(sequence_output)
# The second output of the main BERT layer has shape:
# (batch_size, output_dim)
# and gives a "pooled" representation for the full sequence from the
# hidden state that corresponds to the "[CLS]" token.
pooled_output = self.dropout(pooled_output, training=training)
intent_logits = self.intent_classifier(pooled_output)
return slot_logits, intent_logits
joint_model = JointIntentAndSlotFillingModel(
intent_num_labels=len(intent_map), slot_num_labels=len(slot_map))
opt = Adam(learning_rate=3e-5, epsilon=1e-08)
losses = [SparseCategoricalCrossentropy(from_logits=True),
SparseCategoricalCrossentropy(from_logits=True)]
metrics = [SparseCategoricalAccuracy('accuracy')]
joint_model.compile(optimizer=opt, loss=losses, metrics=metrics)
history = joint_model.fit(
encoded_train, (slot_train, intent_train),
validation_data=(encoded_valid, (slot_valid, intent_valid)),
epochs=2, batch_size=32)
Explanation: Solution:
End of explanation
def show_predictions(text, tokenizer, model, intent_names, slot_names):
inputs = tf.constant(tokenizer.encode(text))[None, :] # batch_size = 1
outputs = model(inputs)
slot_logits, intent_logits = outputs
slot_ids = slot_logits.numpy().argmax(axis=-1)[0, 1:-1]
intent_id = intent_logits.numpy().argmax(axis=-1)[0]
print("## Intent:", intent_names[intent_id])
print("## Slots:")
for token, slot_id in zip(tokenizer.tokenize(text), slot_ids):
print(f"{token:>10} : {slot_names[slot_id]}")
show_predictions("Book a table for two at Le Ritz for Friday night!",
tokenizer, joint_model, intent_names, slot_names)
show_predictions("Will it snow tomorrow in Saclay?",
tokenizer, joint_model, intent_names, slot_names)
show_predictions("I would like to listen to Anima by Thom Yorke.",
tokenizer, joint_model, intent_names, slot_names)
Explanation: The following function uses our trained model to make a prediction on a single text sequence and display both the sequence-wise and the token-wise class labels:
End of explanation
def decode_predictions(text, tokenizer, intent_names, slot_names,
intent_id, slot_ids):
info = {"intent": intent_names[intent_id]}
collected_slots = {}
active_slot_words = []
active_slot_name = None
for word in text.split():
tokens = tokenizer.tokenize(word)
current_word_slot_ids = slot_ids[:len(tokens)]
slot_ids = slot_ids[len(tokens):]
current_word_slot_name = slot_names[current_word_slot_ids[0]]
if current_word_slot_name == "O":
if active_slot_name:
collected_slots[active_slot_name] = " ".join(active_slot_words)
active_slot_words = []
active_slot_name = None
else:
# Naive BIO: handling: treat B- and I- the same...
new_slot_name = current_word_slot_name[2:]
if active_slot_name is None:
active_slot_words.append(word)
active_slot_name = new_slot_name
elif new_slot_name == active_slot_name:
active_slot_words.append(word)
else:
collected_slots[active_slot_name] = " ".join(active_slot_words)
active_slot_words = [word]
active_slot_name = new_slot_name
if active_slot_name:
collected_slots[active_slot_name] = " ".join(active_slot_words)
info["slots"] = collected_slots
return info
def nlu(text, tokenizer, model, intent_names, slot_names):
inputs = tf.constant(tokenizer.encode(text))[None, :] # batch_size = 1
outputs = model(inputs)
slot_logits, intent_logits = outputs
slot_ids = slot_logits.numpy().argmax(axis=-1)[0, 1:-1]
intent_id = intent_logits.numpy().argmax(axis=-1)[0]
return decode_predictions(text, tokenizer, intent_names, slot_names,
intent_id, slot_ids)
nlu("Book a table for two at Le Ritz for Friday night",
tokenizer, joint_model, intent_names, slot_names)
nlu("Will it snow tomorrow in Saclay",
tokenizer, joint_model, intent_names, slot_names)
nlu("I would like to listen to Anima by Thom Yorke",
tokenizer, joint_model, intent_names, slot_names)
Explanation: Decoding Predictions into Structured Knowledge
For completeness, here is a minimal function to naively decode the predicted BIO slot ids and convert them into a structured representation of the detected slots as Python dictionaries:
End of explanation |
10,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifier analysis
In this notebook, I find the precision–recall and ROC curves of classifiers, and look at some examples of where the classifiers do really well (and really poorly).
Step1: Logistic regression
Precision–recall and ROC curves
Step2: Confident, but wrong classifications
Step3: Random forests | Python Code:
import csv
import sys
import astropy.wcs
import h5py
import matplotlib.pyplot as plot
import numpy
import sklearn.metrics
sys.path.insert(1, '..')
import crowdastro.train
CROWDASTRO_H5_PATH = '../data/crowdastro.h5'
CROWDASTRO_CSV_PATH = '../crowdastro.csv'
TRAINING_H5_PATH = '../data/training.h5'
ARCMIN = 1 / 60
%matplotlib inline
Explanation: Classifier analysis
In this notebook, I find the precision–recall and ROC curves of classifiers, and look at some examples of where the classifiers do really well (and really poorly).
End of explanation
with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
with h5py.File(TRAINING_H5_PATH) as training_h5:
classifier, astro_t, image_t = crowdastro.train.train(
crowdastro_h5, training_h5, '../data/classifier.pkl', '../data/astro_transformer.pkl',
'../data/image_transformer.pkl', classifier='lr')
testing_indices = crowdastro_h5['/atlas/cdfs/testing_indices'].value
all_astro_inputs = astro_t.transform(training_h5['astro'].value)
all_cnn_inputs = image_t.transform(training_h5['cnn_outputs'].value)
all_inputs = numpy.hstack([all_astro_inputs, all_cnn_inputs])
all_labels = training_h5['labels'].value
inputs = all_inputs[testing_indices]
labels = all_labels[testing_indices]
probs = classifier.predict_proba(inputs)
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, probs[:, 1])
plot.plot(recall, precision)
plot.xlabel('Recall')
plot.ylabel('Precision')
plot.show()
fpr, tpr, _ = sklearn.metrics.roc_curve(labels, probs[:, 1])
plot.plot(fpr, tpr)
plot.xlabel('False positive rate')
plot.ylabel('True positive rate')
print('Accuracy: {:.02%}'.format(classifier.score(inputs, labels)))
Explanation: Logistic regression
Precision–recall and ROC curves
End of explanation
max_margin = float('-inf')
max_index = None
max_swire = None
with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
with h5py.File(TRAINING_H5_PATH) as training_h5:
classifier, astro_t, image_t = crowdastro.train.train(
crowdastro_h5, training_h5, '../classifier.pkl', '../astro_transformer.pkl',
'../image_transformer.pkl', classifier='lr')
testing_indices = crowdastro_h5['/atlas/cdfs/testing_indices'].value
swire_positions = crowdastro_h5['/swire/cdfs/catalogue'][:, :2]
atlas_positions = crowdastro_h5['/atlas/cdfs/positions'].value
all_astro_inputs = training_h5['astro'].value
all_cnn_inputs = training_h5['cnn_outputs'].value
all_labels = training_h5['labels'].value
swire_tree = sklearn.neighbors.KDTree(swire_positions, metric='chebyshev')
simple = True
if simple:
atlas_counts = {} # ATLAS ID to number of objects in that subject.
for consensus in crowdastro_h5['/atlas/cdfs/consensus_objects']:
atlas_id = int(consensus[0])
atlas_counts[atlas_id] = atlas_counts.get(atlas_id, 0) + 1
indices = []
for atlas_id, count in atlas_counts.items():
if count == 1 and atlas_id in testing_indices:
indices.append(atlas_id)
indices = numpy.array(sorted(indices))
atlas_positions = atlas_positions[indices]
print('Found %d simple subjects.' % len(atlas_positions))
else:
atlas_positions = atlas_positions[testing_indices]
print('Found %d subjects.' % len(atlas_positions))
# Test each ATLAS subject.
n_correct = 0
n_total = 0
for atlas_index, pos in enumerate(atlas_positions):
neighbours, distances = swire_tree.query_radius([pos], ARCMIN,
return_distance=True)
neighbours = neighbours[0]
distances = distances[0]
astro_inputs = all_astro_inputs[neighbours]
astro_inputs[:, -1] = distances
cnn_inputs = all_cnn_inputs[neighbours]
labels = all_labels[neighbours]
features = []
features.append(astro_t.transform(astro_inputs))
features.append(image_t.transform(cnn_inputs))
inputs = numpy.hstack(features)
outputs = classifier.predict_proba(inputs)[:, 1]
assert len(labels) == len(outputs)
index = outputs.argmax()
correct = labels[index] == 1
if not correct:
outputs.sort()
margin = outputs[-1] - outputs[-2]
if margin > max_margin:
max_margin = margin
max_index = atlas_index
max_swire = swire_positions[index]
with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
plot.imshow(crowdastro_h5['/atlas/cdfs/images_2x2'][max_index])
swire = crowdastro_h5['/atlas/cdfs/consensus_objects'][max_index][1]
pos = crowdastro_h5['/swire/cdfs/catalogue'][swire][:2]
with open(CROWDASTRO_CSV_PATH) as c_csv:
r = csv.DictReader(c_csv)
header = [a for a in r if int(a['index']) == max_index][0]['header']
wcs = astropy.wcs.WCS(header)
(x, y), = wcs.wcs_world2pix([pos], 1)
print(x,y)
Explanation: Confident, but wrong classifications
End of explanation
with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
with h5py.File(TRAINING_H5_PATH) as training_h5:
classifier, astro_t, image_t = crowdastro.train.train(
crowdastro_h5, training_h5, '../classifier.pkl', '../astro_transformer.pkl',
'../image_transformer.pkl', classifier='rf')
testing_indices = crowdastro_h5['/atlas/cdfs/testing_indices'].value
all_astro_inputs = astro_t.transform(training_h5['astro'].value)
all_cnn_inputs = image_t.transform(training_h5['cnn_outputs'].value)
all_inputs = numpy.hstack([all_astro_inputs, all_cnn_inputs])
all_labels = training_h5['labels'].value
inputs = all_inputs[testing_indices]
labels = all_labels[testing_indices]
probs = classifier.predict_proba(inputs)
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, probs[:, 1])
plot.plot(recall, precision)
plot.xlabel('Recall')
plot.ylabel('Precision')
plot.show()
fpr, tpr, _ = sklearn.metrics.roc_curve(labels, probs[:, 1])
plot.plot(fpr, tpr)
plot.xlabel('False positive rate')
plot.ylabel('True positive rate')
print('Accuracy: {:.02%}'.format(classifier.score(inputs, labels)))
Explanation: Random forests
End of explanation |
10,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The API is very similar to the Gen2. We can butler.get with a dict of data IDs like before
Step1: We can get all data IDs/Dimensions.
Note that ref.dataId is no longer a simple dict; it's a ExpandedDataCoordinate
Step2: In Gen3, we can also get the WCS and the file URI without dumping the images as Python objects, for example
Step3: With the DatasetRef, we may also use butler.getDirect | Python Code:
exp = butler.get("calexp", {"visit":903334, "detector":22, "instrument":"HSC"})
print(exp.getWcs())
wcs = butler.get("calexp.wcs", {"visit":903334, "detector":22, "instrument":"HSC"})
print(wcs)
vinfo = butler.get("calexp.visitInfo", {"visit":903334, "detector":22, "instrument":"HSC"})
print(vinfo)
Explanation: The API is very similar to the Gen2. We can butler.get with a dict of data IDs like before
End of explanation
for ref in butler.registry.queryDatasets("calexp", collections=['shared/ci_hsc_output']):
print(ref.dataId)
Explanation: We can get all data IDs/Dimensions.
Note that ref.dataId is no longer a simple dict; it's an ExpandedDataCoordinate
End of explanation
for ref in butler.registry.queryDatasets("calexp.wcs", collections=['shared/ci_hsc_output']):
wcs = butler.get(ref)
uri = butler.datastore.getUri(ref)
print("calexp has ", wcs, "\nand the file is at \n", uri)
Explanation: In Gen3, we can also get the WCS and the file URI without dumping the images as Python objects, for example
End of explanation
rows = butler.registry.queryDatasets("calexp", collections=['shared/ci_hsc_output'])
ref = list(rows)[0] # Just to get the first DatasetRef
exp = butler.getDirect(ref)
exp.getWcs()
import lsst.geom as geom
for ref in butler.registry.queryDatasets("calexp", collections=['shared/ci_hsc_output']):#, where="detector = 22"):
uri = butler.datastore.getUri(ref)
print("==== For the file of ", ref.dataId, "at \n", uri)
exp = butler.getDirect(ref)
wcs = exp.getWcs()
print("dimensions:", exp.getDimensions())
print("pixel scale:", wcs.getPixelScale().asArcseconds())
imageBox = geom.Box2D(exp.getBBox())
corners = [wcs.pixelToSky(pix) for pix in imageBox.getCorners()]
imageCenter = wcs.pixelToSky(imageBox.getCenter())
print("ra and dec for the center:", imageCenter.getRa().asDegrees(), imageCenter.getDec().asDegrees())
print("ra and dec for the corners:")
[print(corner) for corner in corners]
Explanation: With the DatasetRef, we may also use butler.getDirect
End of explanation |
10,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: Step 0 - hyperparams
Step2: Step 1 - collect data (and/or generate them)
Step3: Step 2 - Build model
Step4: Step 3 training the network
GRU cell
Step5: Conclusion
GRU has performed much better than basic RNN
GRU cell - 50 epochs | Python Code:
from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from IPython.display import Image
from IPython.core.display import HTML
from mylibs.tf_helper import getDefaultGPUconfig
from data_providers.binary_shifter_varlen_data_provider import \
BinaryShifterVarLenDataProvider
from data_providers.price_history_varlen_data_provider import PriceHistoryVarLenDataProvider
from models.model_05_price_history_rnn_varlen import PriceHistoryRnnVarlen
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from cost_functions.huber_loss import huber_loss
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
from common import get_or_run_nn
Explanation: https://r2rt.com/recurrent-neural-networks-in-tensorflow-iii-variable-length-sequences.html
End of explanation
num_epochs = 10
series_max_len = 60
num_features = 1 #just one here, the function we are predicting is one-dimensional
state_size = 400
target_len = 30
batch_size = 47
Explanation: Step 0 - hyperparams
End of explanation
csv_in = '../price_history_03a_fixed_width.csv'
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
# XX, YY, sequence_lens, seq_mask = PriceHistoryVarLenDataProvider.createAndSaveDataset(
# csv_in=csv_in,
# npz_out=npz_path,
# input_seq_len=60, target_seq_len=30)
# XX.shape, YY.shape, sequence_lens.shape, seq_mask.shape
dp = PriceHistoryVarLenDataProvider(filteringSeqLens = lambda xx : xx >= target_len,
npz_path=npz_path)
dp.inputs.shape, dp.targets.shape, dp.sequence_lengths.shape, dp.sequence_masks.shape
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
model = PriceHistoryRnnVarlen(rng=random_state, dtype=dtype, config=config)
graph = model.getGraph(batch_size=batch_size, state_size=state_size,
rnn_cell= PriceHistoryRnnVarlen.RNN_CELLS.GRU,
target_len=target_len, series_max_len=series_max_len)
show_graph(graph)
Explanation: Step 2 - Build model
End of explanation
rnn_cell = PriceHistoryRnnVarlen.RNN_CELLS.GRU
num_epochs, state_size, batch_size
def experiment():
dynStats, predictions_dict = model.run(epochs=num_epochs,
rnn_cell=rnn_cell,
state_size=state_size,
series_max_len=series_max_len,
target_len=target_len,
npz_path=npz_path,
batch_size=batch_size)
return dynStats, predictions_dict
from os.path import isdir
data_folder = '../../../../Dropbox/data'
assert isdir(data_folder)
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='002_rnn_gru_60to30', nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
average_huber_loss = np.mean([np.mean(huber_loss(dp.targets[ind], preds_dict[ind]))
for ind in range(len(dp.targets))])
average_huber_loss
Explanation: Step 3 training the network
GRU cell
End of explanation
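# --- Added sketch (not part of the original notebook) ---
# The runs in this notebook report an average Huber loss via the imported huber_loss
# helper (its implementation lives in the local cost_functions module and is not shown
# here). For reference, the standard element-wise Huber loss, assuming a threshold
# delta = 1:
def huber_loss_sketch(y_true, y_pred, delta=1.0):
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    quadratic = np.minimum(err, delta)
    linear = err - quadratic
    return 0.5 * quadratic ** 2 + delta * linear
# ---------------------------------------------------------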
rnn_cell = PriceHistoryRnnVarlen.RNN_CELLS.GRU
num_epochs = 50
state_size, batch_size
def experiment():
dynStats, predictions_dict = model.run(epochs=num_epochs,
rnn_cell=rnn_cell,
state_size=state_size,
series_max_len=series_max_len,
target_len=target_len,
npz_path=npz_path,
batch_size=batch_size)
return dynStats, predictions_dict
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='002_rnn_gru_60to30_50epochs',
nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
average_huber_loss = np.mean([np.mean(huber_loss(dp.targets[ind], preds_dict[ind]))
for ind in range(len(dp.targets))])
average_huber_loss
Explanation: Conclusion
GRU has performed much better than basic RNN
GRU cell - 50 epochs
End of explanation |
10,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inverse Distance Verification
Step1: Generate random x and y coordinates, and observation values proportional to x squared.
Set up two test grid locations at (30, 30) and (60, 60).
Step2: Set up a cKDTree object and query all of the observations within "radius" of each grid point.
The variable indices represents the index of each matched coordinate within the
cKDTree's data list.
Step3: For grid 0, we will use Cressman to interpolate its value.
Step4: For grid 1, we will use barnes to interpolate its value.
We need to calculate kappa--the average distance between observations over the domain.
Step5: Plot all of the affiliated information and interpolation values.
Step6: For each point, we will do a manual check of the interpolation values by doing a step by
step and visual breakdown.
Plot the grid point, observations within radius of the grid point, their locations, and
their distances from the grid point.
Step7: Step through the cressman calculations.
Step8: Now repeat for grid 1, except use barnes interpolation.
Step9: Step through barnes calculations. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist
from metpy.gridding.gridding_functions import calc_kappa
from metpy.gridding.interpolation import barnes_point, cressman_point
from metpy.gridding.triangles import dist_2
plt.rcParams['figure.figsize'] = (15, 10)
def draw_circle(x, y, r, m, label):
nx = x + r * np.cos(np.deg2rad(list(range(360))))
ny = y + r * np.sin(np.deg2rad(list(range(360))))
plt.plot(nx, ny, m, label=label)
Explanation: Inverse Distance Verification: Cressman and Barnes
Compare inverse distance interpolation methods
Two popular interpolation schemes that use inverse distance weighting of observations are the
Barnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses
the ratio between distance of an observation from a grid cell and the maximum allowable
distance to calculate the relative importance of an observation for calculating an
interpolation value. Barnes uses the inverse exponential ratio of each distance between
an observation and a grid cell and the average spacing of the observations over the domain.
Algorithmically:
1. A KDTree data structure is built using the locations of each observation.
2. All observations within a maximum allowable distance of a particular grid cell are found in O(log n) time.
3. Using the weighting rules for Cressman or Barnes analyses, the observations are given a proportional value, primarily based on their distance from the grid cell.
4. The sum of these proportional values is calculated and this value is used as the interpolated value.
5. Steps 2 through 4 are repeated for each grid cell.
End of explanation
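# --- Added sketch (not part of the original file) ---
# The weighting rules described above, written out explicitly
# (R = maximum allowable radius, d = observation-to-grid distance,
#  kappa = parameter derived from the average observation spacing):
#     Cressman: w = (R**2 - d**2) / (R**2 + d**2)
#     Barnes:   w = exp(-d**2 / kappa)
# Either set of weights is then combined with the observations as a weighted average:
def weighted_value(obs, weights):
    return np.sum(np.asarray(obs) * np.asarray(weights)) / np.sum(weights)
# ----------------------------------------------------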
np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = xp * xp / 1000
sim_gridx = [30, 60]
sim_gridy = [30, 60]
Explanation: Generate random x and y coordinates, and observation values proportional to x squared.
Set up two test grid locations at (30, 30) and (60, 60).
End of explanation
grid_points = np.array(list(zip(sim_gridx, sim_gridy)))
radius = 40
obs_tree = cKDTree(list(zip(xp, yp)))
indices = obs_tree.query_ball_point(grid_points, r=radius)
Explanation: Set up a cKDTree object and query all of the observations within "radius" of each grid point.
The variable indices represents the index of each matched coordinate within the
cKDTree's data list.
End of explanation
x1, y1 = obs_tree.data[indices[0]].T
cress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)
cress_obs = zp[indices[0]]
cress_val = cressman_point(cress_dist, cress_obs, radius)
Explanation: For grid 0, we will use Cressman to interpolate its value.
End of explanation
x2, y2 = obs_tree.data[indices[1]].T
barnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)
barnes_obs = zp[indices[1]]
ave_spacing = np.mean((cdist(list(zip(xp, yp)), list(zip(xp, yp)))))
kappa = calc_kappa(ave_spacing)
barnes_val = barnes_point(barnes_dist, barnes_obs, kappa)
Explanation: For grid 1, we will use barnes to interpolate its value.
We need to calculate kappa--the average distance between observations over the domain.
End of explanation
for i, zval in enumerate(zp):
plt.plot(pts[i, 0], pts[i, 1], '.')
plt.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))
plt.plot(sim_gridx, sim_gridy, '+', markersize=10)
plt.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')
plt.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches')
draw_circle(sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius')
draw_circle(sim_gridx[1], sim_gridy[1], m='b-', r=radius, label='grid 1 radius')
plt.annotate('grid 0: cressman {:.3f}'.format(cress_val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
plt.annotate('grid 1: barnes {:.3f}'.format(barnes_val), xy=(sim_gridx[1] + 2, sim_gridy[1]))
plt.axes().set_aspect('equal', 'datalim')
plt.legend()
Explanation: Plot all of the affiliated information and interpolation values.
End of explanation
plt.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]),
xy=(sim_gridx[0] + 2, sim_gridy[0]))
plt.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)
mx, my = obs_tree.data[indices[0]].T
mz = zp[indices[0]]
for x, y, z in zip(mx, my, mz):
d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2)
plt.plot([sim_gridx[0], x], [sim_gridy[0], y], '--')
xave = np.mean([sim_gridx[0], x])
yave = np.mean([sim_gridy[0], y])
plt.annotate('distance: {}'.format(d), xy=(xave, yave))
plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))
plt.xlim(0, 80)
plt.ylim(0, 80)
plt.axes().set_aspect('equal', 'datalim')
Explanation: For each point, we will do a manual check of the interpolation values by doing a step by
step and visual breakdown.
Plot the grid point, observations within radius of the grid point, their locations, and
their distances from the grid point.
End of explanation
dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])
values = np.array([0.064, 1.156, 3.364, 0.225])
cres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)
total_weights = np.sum(cres_weights)
proportion = cres_weights / total_weights
value = values * proportion
val = cressman_point(cress_dist, cress_obs, radius)
print('Manual cressman value for grid 1:\t', np.sum(value))
print('Metpy cressman value for grid 1:\t', val)
Explanation: Step through the cressman calculations.
End of explanation
plt.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]),
xy=(sim_gridx[1] + 2, sim_gridy[1]))
plt.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)
mx, my = obs_tree.data[indices[1]].T
mz = zp[indices[1]]
for x, y, z in zip(mx, my, mz):
d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2)
plt.plot([sim_gridx[1], x], [sim_gridy[1], y], '--')
xave = np.mean([sim_gridx[1], x])
yave = np.mean([sim_gridy[1], y])
plt.annotate('distance: {}'.format(d), xy=(xave, yave))
plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))
plt.xlim(40, 80)
plt.ylim(40, 100)
plt.axes().set_aspect('equal', 'datalim')
Explanation: Now repeat for grid 1, except use barnes interpolation.
End of explanation
dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])
values = np.array([2.809, 6.241, 4.489, 2.704])
weights = np.exp(-dists**2 / kappa)
total_weights = np.sum(weights)
value = np.sum(values * (weights / total_weights))
print('Manual barnes value:\t', value)
print('Metpy barnes value:\t', barnes_point(barnes_dist, barnes_obs, kappa))
Explanation: Step through barnes calculations.
End of explanation |
10,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
Time to make a simple SIP data simulation with the dataset that you already created
Make sure you have created the dataset before trying to run this notebook
Setting variables
"workDir" is the path to the working directory for this analysis (where the files will be download to)
NOTE
Step1: Init
Step2: Experimental design
How many gradients?
Which are labeled treatments & which are controls?
For this tutorial, we'll keep things simple and just simulate one control & one treatment
For the labeled treatment, 34% of the taxa (1 of 3) will incorporate 50% isotope
The script below ("SIPSim incorpConfigExample") is helpful for making simple experimental designs
Step3: Pre-fractionation communities
What is the relative abundance of taxa in the pre-fractionation samples?
Step4: Note
Step6: Simulating fragments
Simulating shotgun-fragments
Fragment length distribution
Step7: Simulation
Step8: Plotting fragments
Step9: Note
Step10: Note
Step11: Plotting fragment distribution w/ and w/out diffusion
Making a table of fragment values from KDEs
Step12: Plotting
plotting KDE with or without diffusion added
Step13: Adding diffusive boundary layer (DBL) effects
'smearing' effects
Step14: Adding isotope incorporation
Using the config file produced in the Experimental Design section
Step15: Note
Step16: Making an OTU table
Number of amplicon-fragment in each fraction in each gradient
Assuming a total pre-fractionation community size of 1e7
Step17: Plotting fragment count distributions
Step18: Notes
Step19: Adding effects of PCR
This will alter the fragment counts based on the PCR kinetic model of
Step20: Notes
The table is in the same format as with the original OTU table, but the counts and relative abundances should be altered.
Simulating sequencing
Sampling from the OTU table
Step21: Notes
The table is in the same format as with the original OTU table, but the counts and relative abundances should be altered.
Plotting
Step22: Misc
A 'wide' OTU table
If you want to reformat the OTU table to a more standard 'wide' format (as used in Mothur or QIIME)
Step23: SIP metadata
If you want to make a table of SIP sample metadata
Step24: Other SIPSim commands
SIPSim -l will list all available SIPSim commands | Python Code:
workDir = '../../t/SIPSim_example/'
nprocs = 3
Explanation: Description
Time to make a simple SIP data simulation with the dataset that you alreadly created
Make sure you have created the dataset before trying to run this notebook
Setting variables
"workDir" is the path to the working directory for this analysis (where the files will be download to)
NOTE: MAKE SURE to modify this path to the directory where YOU want to run the example.
"nprocs" is the number of processors to use (3 by default, since only 3 genomes). Change this if needed.
End of explanation
import os
# Note: you will need to install `rpy2.ipython` and the necessary R packages (see next cell)
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
workDir = os.path.abspath(workDir)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
genomeDir = os.path.join(workDir, 'genomes_rn')
Explanation: Init
End of explanation
%%bash
source activate SIPSim
# creating example config
SIPSim incorp_config_example \
--percTaxa 34 \
--percIncorpUnif 50 \
--n_reps 1 \
> incorp.config
!cat incorp.config
Explanation: Experimental design
How many gradients?
Which are labeled treatments & which are controls?
For this tutorial, we'll keep things simple and just simulate one control & one treatment
For the labeled treatment, 34% of the taxa (1 of 3) will incorporate 50% isotope
The script below ("SIPSim incorpConfigExample") is helpful for making simple experimental designs
End of explanation
%%bash
source activate SIPSim
SIPSim communities \
--config incorp.config \
./genomes_rn/genome_index.txt \
> comm.txt
!cat comm.txt
Explanation: Pre-fractionation communities
What is the relative abundance of taxa in the pre-fractionation samples?
End of explanation
%%bash
source activate SIPSim
SIPSim gradient_fractions \
--BD_min 1.67323 \
--BD_max 1.7744 \
comm.txt \
> fracs.txt
!head -n 6 fracs.txt
Explanation: Note: "library" = gradient
Simulating gradient fractions
BD size ranges for each fraction (& start/end of the total BD range)
End of explanation
# primers = >515F
# GTGCCAGCMGCCGCGGTAA
# >806R
# GGACTACHVGGGTWTCTAAT
#
# F = os.path.join(workDir, '515F-806R.fna')
# with open(F, 'wb') as oFH:
# oFH.write(primers)
# print 'File written: {}'.format(F)
Explanation: Simulating fragments
Simulating shotgun-fragments
Fragment length distribution: skewed-normal
Primer sequences (wait... what?)
If you were to simulate amplicons, instead of shotgun fragments, you can use something like the following:
End of explanation
%%bash -s $genomeDir
source activate SIPSim
# skewed-normal
SIPSim fragments \
$1/genome_index.txt \
--fp $1 \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 1000 \
--debug \
--tbl \
> shotFrags.txt
!head -n 5 shotFrags.txt
!tail -n 5 shotFrags.txt
Explanation: Simulation
End of explanation
%%R -w 700 -h 350
df = read.delim('shotFrags.txt')
p = ggplot(df, aes(fragGC, fragLength, color=taxon_name)) +
geom_density2d() +
scale_color_discrete('Taxon') +
labs(x='Fragment G+C', y='Fragment length (bp)') +
theme_bw() +
theme(
text = element_text(size=16)
)
plot(p)
Explanation: Plotting fragments
End of explanation
%%bash
source activate SIPSim
SIPSim fragment_KDE \
shotFrags.txt \
> shotFrags_kde.pkl
!ls -thlc shotFrags_kde.pkl
Explanation: Note: for information on what's going on in this config file, use the command: SIPSim isotope_incorp -h
Converting fragments to a 2d-KDE
Estimating the joint-probability for fragment G+C & length
End of explanation
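# --- Added illustration (not part of the original notebook) ---
# "2d-KDE" above means a joint kernel density estimate over two variables
# (fragment G+C vs. fragment length). SIPSim builds one such KDE per taxon and
# stores them in the pickle produced below; a generic sketch of the same idea with
# scipy on stand-in data (the numbers here are made up for illustration only):
import numpy as np
from scipy.stats import gaussian_kde
_gc = np.random.uniform(0.3, 0.7, 1000)          # stand-in fragment G+C fractions
_length = np.random.normal(9000, 2500, 1000)     # stand-in fragment lengths (bp)
_kde = gaussian_kde(np.vstack([_gc, _length]))   # joint G+C x length density
print(_kde(np.array([[0.5], [9000.0]])))         # density at G+C = 0.5, length = 9000
# --------------------------------------------------------------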
%%bash
source activate SIPSim
SIPSim diffusion \
shotFrags_kde.pkl \
--np 3 \
> shotFrags_kde_dif.pkl
!ls -thlc shotFrags_kde_dif.pkl
Explanation: Note: The generated list of KDEs (1 per taxon per gradient) is in a binary file format
To get a table of length/G+C values, use the command: SIPSim KDE_sample
Adding diffusion
Simulating the BD distribution of fragments as Gaussian distributions.
One Gaussian distribution per homogeneous set of DNA molecules (same G+C and length)
See the README if you get MKL errors with the next step and re-run the fragment KDE generation step
End of explanation
n = 100000
%%bash -s $n
source activate SIPSim
SIPSim KDE_sample -n $1 shotFrags_kde.pkl > shotFrags_kde.txt
SIPSim KDE_sample -n $1 shotFrags_kde_dif.pkl > shotFrags_kde_dif.txt
ls -thlc shotFrags_kde*.txt
Explanation: Plotting fragment distribution w/ and w/out diffusion
Making a table of fragment values from KDEs
End of explanation
%%R
df1 = read.delim('shotFrags_kde.txt', sep='\t')
df2 = read.delim('shotFrags_kde_dif.txt', sep='\t')
df1$data = 'no diffusion'
df2$data = 'diffusion'
df = rbind(df1, df2) %>%
gather(Taxon, BD, Clostridium_ljungdahlii_DSM_13528,
Escherichia_coli_1303, Streptomyces_pratensis_ATCC_33331) %>%
mutate(Taxon = gsub('_(ATCC|DSM)', '\n\\1', Taxon))
df %>% head(n=3)
%%R -w 800 -h 300
p = ggplot(df, aes(BD, fill=data)) +
geom_density(alpha=0.25) +
facet_wrap( ~ Taxon) +
scale_fill_discrete('') +
theme_bw() +
theme(
text=element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.text.x = element_text(angle=50, hjust=1)
)
plot(p)
Explanation: Plotting
plotting KDE with or without diffusion added
End of explanation
%%bash
source activate SIPSim
SIPSim DBL \
shotFrags_kde_dif.pkl \
--np 3 \
> shotFrags_kde_dif_DBL.pkl
# viewing DBL logs
!ls -thlc *pkl
Explanation: Adding diffusive boundary layer (DBL) effects
'smearing' effects
End of explanation
%%bash
source activate SIPSim
SIPSim isotope_incorp \
--comm comm.txt \
--np 3 \
shotFrags_kde_dif_DBL.pkl \
incorp.config \
> shotFrags_KDE_dif_DBL_inc.pkl
!ls -thlc *.pkl
Explanation: Adding isotope incorporation
Using the config file produced in the Experimental Design section
End of explanation
%%R
df = read.delim('BD-shift_stats.txt', sep='\t')
df
Explanation: Note: statistics on how much isotope was incorporated by each taxon are listed in "BD-shift_stats.txt"
End of explanation
%%bash
source activate SIPSim
SIPSim OTU_table \
--abs 1e7 \
--np 3 \
shotFrags_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
> OTU.txt
!head -n 7 OTU.txt
Explanation: Making an OTU table
Number of amplicon-fragment in each fraction in each gradient
Assuming a total pre-fractionation community size of 1e7
End of explanation
%%R -h 350 -w 750
df = read.delim('OTU.txt', sep='\t')
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
scale_x_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Shotgun fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
Explanation: Plotting fragment count distributions
End of explanation
%%R -h 350 -w 750
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Shotgun fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
Explanation: Notes:
This plot represents the theoretical number of amplicon-fragments at each BD across each gradient.
Derived from subsampling the fragment BD probability distributions generated in earlier steps.
The fragment BD distribution of one of the 3 taxa should have shifted in Gradient 2 (the treatment gradient).
The fragment BD distributions of the other 2 taxa should be approx. the same between the two gradients.
Viewing fragment counts as relative quantities
End of explanation
%%bash
source activate SIPSim
SIPSim OTU_PCR OTU.txt > OTU_PCR.txt
!head -n 5 OTU_PCR.txt
!tail -n 5 OTU_PCR.txt
Explanation: Adding effects of PCR
This will alter the fragment counts based on the PCR kinetic model of:
Suzuki MT, Giovannoni SJ. (1996). Bias caused by template annealing in the
amplification of mixtures of 16S rRNA genes by PCR. Appl Environ Microbiol
62:625-630.
End of explanation
%%bash
source activate SIPSim
SIPSim OTU_subsample OTU_PCR.txt > OTU_PCR_sub.txt
!head -n 5 OTU_PCR_sub.txt
Explanation: Notes
The table is in the same format as with the original OTU table, but the counts and relative abundances should be altered.
Simulating sequencing
Sampling from the OTU table
End of explanation
%%R -h 350 -w 750
df = read.delim('OTU_PCR_sub.txt', sep='\t')
p = ggplot(df, aes(BD_mid, rel_abund, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Taxon relative abundances') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
Explanation: Notes
The table is in the same format as with the original OTU table, but the counts and relative abundances should be altered.
Plotting
End of explanation
%%bash
source activate SIPSim
SIPSim OTU_wide_long -w \
OTU_PCR_sub.txt \
> OTU_PCR_sub_wide.txt
!head -n 4 OTU_PCR_sub_wide.txt
Explanation: Misc
A 'wide' OTU table
If you want to reformat the OTU table to a more standard 'wide' format (as used in Mothur or QIIME):
End of explanation
%%bash
source activate SIPSim
SIPSim OTU_sample_data \
OTU_PCR_sub.txt \
> OTU_PCR_sub_meta.txt
!head OTU_PCR_sub_meta.txt
Explanation: SIP metadata
If you want to make a table of SIP sample metadata
End of explanation
%%bash
source activate SIPSim
SIPSim -l
Explanation: Other SIPSim commands
SIPSim -l will list all available SIPSim commands
End of explanation |
10,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fetching data from Infodengue
We can download the data from a full state. Let's pick Goiás.
Step2: Building the dashboard
Step3: Building Animated films
Step4: Downloading data
We will start by Downloading the full alerta table for all diseases.
Step5: loading data from disk
We can load all chunks at once, into a single dataframe, since they are parquet files. | Python Code:
# Imports for this cell; get_alerta_table is assumed here to be an infodenguepredict
# helper (only get_full_alerta_table is imported explicitly further below).
import pandas as pd
import geopandas as gpd
import geobr
go = get_alerta_table(state='GO', doenca='dengue')
go
municipios = geobr.read_municipality(code_muni='GO')
municipios
municipios['code_muni'] = municipios.code_muni.astype('int')
municipios.plot(figsize=(10,10));
goias = pd.merge(go.reset_index(), municipios,how='left', left_on='municipio_geocodigo', right_on='code_muni')
goias
goias = gpd.GeoDataFrame(goias)
ax = goias[goias.SE==202144].plot(figsize=(10,10),
column='casos_est',
scheme="naturalbreaks",
legend=True,
legend_kwds={'title': "Casos estimados"},
);
ax.set_axis_off();
Explanation: Fetching data from Infodengue
We can download the data from a full state. Let's pick Goiás.
End of explanation
from functools import lru_cache
from IPython.display import display, Markdown
# Additional imports used by the interactive dashboard cells below
import matplotlib.pyplot as plt
import ipywidgets as widgets
from ipywidgets import interact
import pandas_bokeh
pandas_bokeh.output_notebook()
pd.options.plotting.backend = "pandas_bokeh"
@lru_cache(maxsize=27)
def get_dados(sigla='PR', doenca='dengue'):
df = get_alerta_table(state=sigla, doenca=doenca)
municipios = geobr.read_municipality(code_muni=sigla)
municipios['code_muni'] = municipios.code_muni.astype('int')
dados = pd.merge(df.reset_index(), municipios,how='left', left_on='municipio_geocodigo', right_on='code_muni')
dados = dados.sort_values('SE')
return gpd.GeoDataFrame(dados)
def gera_SE_seq(anoi, anof):
ses=[]
for a in range(anoi,anof+1):
for w in range(1,52):
w = str(w).zfill(2)
ses.append(int(f'{a}{w}'))
return ses
estado='TO'
doenca='chik'
doenca='dengue'
gdf = get_dados(estado, doenca)
try:
gdf.set_index('data_iniSE', inplace=True)
except KeyError:
pass
munis = list(set(gdf.name_muni))
try:
munis = sorted(munis)
except: pass
@interact
def painel(mun=widgets.Select(options=munis, description='Municipio'),
week=widgets.SelectionSlider(options=gera_SE_seq(2021,2022), description='SE'),
):
week = gdf.SE.max() if week > gdf.SE.max() else week
umid = pd.DataFrame(gdf.reset_index())[['data_iniSE','umidmin', 'umidmax']].plot_bokeh(kind='line', x='data_iniSE')
temp = pd.DataFrame(gdf.reset_index())[['data_iniSE','tempmin','tempmax']].plot_bokeh(kind='line', x='data_iniSE')
mapplot = gdf[gdf.SE==int(week)].plot_bokeh(simplify_shapes=5000,
dropdown=['casos_est','casos','p_inc100k','nivel'],
colormap='Viridis',
hovertool_string=f"""<h1>@name_muni</h1>
<h3>Casos: @casos </h3>""",
)
cases = pd.DataFrame(gdf[gdf.name_muni==mun].reset_index())[['data_iniSE','casos','casos_est']].plot_bokeh(kind='line', x='data_iniSE')
mapplot.width = 900
umid.width = 450
temp.width = 450
cases.width = 900
layout = pandas_bokeh.column(mapplot,
pandas_bokeh.row(umid, temp),
cases)
pandas_bokeh.plot_grid(layout, width=1200)
estado='TO'
doenca='chik'
doenca='dengue'
gdf = get_dados(estado, doenca)
try:
gdf.set_index('data_iniSE', inplace=True)
except KeyError:
pass
munis = list(set(gdf.name_muni))
try:
munis = sorted(munis)
except: pass
pd.options.plotting.backend = "matplotlib"
@interact
def painel(mun=widgets.Select(options=munis, description='Municipio'),
week=widgets.SelectionSlider(options=gera_SE_seq(2021,2022), value=202215, description='SE'),
variable=['casos','casos_est','p_inc100k']
):
week = gdf.SE.max() if week > gdf.SE.max() else week
display(Markdown(f"# {doenca}"))
fig, axs = plt.subplot_mosaic([['a', 'c'], ['b', 'c'], ['d', 'd']],
figsize=(20, 20),
constrained_layout=True)
for label, ax in axs.items():
if label == 'a':
gdf[(gdf.name_muni==mun)&(gdf.SE>=202101)].umidmax.plot(kind='area',ax=ax,alpha=0.3, label='máxima')
gdf[(gdf.name_muni==mun)&(gdf.SE>=202101)].umidmin.plot(kind='area',ax=ax,alpha=0.3, label='mínima')
ax.set_title('Umidade')
ax.legend()
elif label == 'b':
gdf[(gdf.name_muni==mun)&(gdf.SE>=202101)].tempmin.plot(ax=ax, label='mínima')
# gdf.tempmax.plot(ax=ax, label='máxima')
ax.set_title('Temperatura')
ax.legend()
elif label == 'c':
leg = 'Casos estimados' if variable=='casos_est' else 'Casos notificados'
gdf[gdf.SE==int(week)].plot(ax=ax,column=variable,scheme="User_defined",
legend=True,
classification_kwds=dict(bins=[20,50,100,500,2000,5000]),
legend_kwds={'title': f"{leg}",'loc':'lower right'})
ax.set_axis_off();
ax.set_title(str(week));
elif label == 'd':
gdf[(gdf.name_muni==mun)&(gdf.SE>=202101)].casos.plot(ax=ax,label='casos')
gdf[(gdf.name_muni==mun)&(gdf.SE>=202101)].casos_est.plot(ax=ax,label='casos_est')
ax.legend()
ax.vlines(x=gdf[gdf.SE==int(week)].index,ymin=0,ymax=500)
ax.set_title(mun)
plt.show();
Explanation: Building the dashboard
End of explanation
import os
from pathlib import Path
data_path = Path('./data/')
map_path = Path("./maps/")
os.makedirs(data_path, exist_ok=True)
os.makedirs(map_path, exist_ok=True)
Explanation: Building Animated films
End of explanation
from infodenguepredict.data.infodengue import get_full_alerta_table
diseases = ['dengue','chik','zika']
for dis in diseases:
os.makedirs(data_path/dis, exist_ok=True)
for dis in diseases:
get_full_alerta_table(dis, output_dir=data_path/dis, chunksize=50000, start_SE=202140)
brmunis = geobr.read_municipality(code_muni='all')
brmunis.plot();
def merge(munis, df):
munis['code_muni'] = munis.code_muni.astype('int')
dados = pd.merge(df.reset_index(), munis,how='left', left_on='municipio_geocodigo', right_on='code_muni')
dados = dados.sort_values('SE')
return gpd.GeoDataFrame(dados)
def create_frames(dados, doenca='dengue', variable='casos_est'):
vnames = {'casos_est': 'Casos Estimados', 'casos': 'Casos Notificados', 'p_inc100k': 'Incidência por 100 mil hab.'}
leg = vnames[variable]
if doenca == 'dengue':
bins = {'casos_est':[20,50,100,500,1000],'p_inc100k':[50,500,1000,2500] }
elif doenca == 'chik':
bins = {'casos_est':[5,15,50,120,300],'p_inc100k':[50,100,500,1000] }
elif doenca == 'zika':
bins = {'casos_est':[5,15,50,120,300],'p_inc100k':[50,100,500,1000] }
acumulados = 0
ews = sorted(list(set(dados.SE)))
for i,se in enumerate(ews):
fig = dados[dados.SE==se].plot(figsize=(10,10),
column=variable,
scheme="user_defined",
cmap='plasma',
classification_kwds={'bins':bins[variable]},
legend=True,
legend_kwds={'title': f"{leg}",'loc':'lower right'}
);
acumulados += dados[dados.SE==se].casos_est.sum()
fig.set_axis_off();
fig.text(-50,2, f'Casos: {int(acumulados)}', fontsize=24)
fig.set_title(f'{leg} de {doenca}\nna semana {str(se)[-2:]} de {str(se)[:-2]}', fontdict={'fontsize': 14});
opath = map_path/variable
os.makedirs(opath, exist_ok=True)
plt.savefig(opath/f'{doenca}_{i:0>3}.png', dpi=200)
plt.close()
Explanation: Downloading data
We will start by Downloading the full alerta table for all diseases.
End of explanation
dengue = pd.read_parquet(data_path/'dengue')
dengue.sort_values('SE',inplace=True)
chik = pd.read_parquet(data_path/'chik')
chik.sort_values('SE',inplace=True)
zika = pd.read_parquet(data_path/'zika')
zika.sort_values('SE',inplace=True)
dengue
dmdf = merge(brmunis,dengue)
cmdf = merge(brmunis, chik)
zmdf = merge(brmunis, zika)
dmdf[dmdf.SE==202208].plot(column='casos_est',scheme="naturalbreaks");
os.getcwd()
create_frames(dmdf)
create_frames(dmdf, variable='p_inc100k')
create_frames(cmdf, doenca='chik')
create_frames(cmdf, doenca='chik', variable='p_inc100k')
create_frames(zmdf, doenca='zika')
create_frames(zmdf, doenca='zika', variable='p_inc100k')
Explanation: loading data from disk
We can load all chunks at once, into a single dataframe, since they are parquet files.
End of explanation |
10,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This exercise will get you started with running your own code.
Set up the notebook
To begin, run the code in the next cell.
- Begin by clicking inside the code cell.
- Click on the triangle (in the shape of a "Play button") that appears to the left of the code cell.
- If your code was run successfully, you will see Setup Complete as output below the cell.
Instead of clicking on the triangle, you can also run code by pressing Shift + Enter on your keyboard. Try this now! Nothing bad will happen if you run the code more than once.
Step1: The code above sets up the notebook so that it can check your answers in this exercise. You should never modify this code. (Otherwise, the notebook won't be able to verify that you have successfully completed the exercise.)
After finishing all of the questions below, you'll see the exercise marked as complete on the course page. Once you complete all of the lessons, you'll get a course completion certificate!
Question 1
Next, you will run some code from the tutorial, so you can see how it works for yourself. Run the next code cell without changes.
Step2: You just ran code to print Hello world!, which you should see in the output above.
The second line of code (q1.check()) checks your answer. You should never modify this checking code; if you remove it, you won't get credit for completing the problem.
Question 2
Now, you will print another message of your choosing. To do this, change print("Your message here!") to use a different message. For instance, you might like to change it to something like
Step3: Question 3
As you learned in the tutorial, a comment in Python has a pound sign (#) in front of it, which tells Python to ignore the text after it.
Putting a pound sign in front of a line of code will make Python ignore that code. For instance, this line would be ignored by Python, and nothing would appear in the output
Step4: In the next question, and in most of the exercises in this course, you will have the option to uncomment to view hints and solutions. Once you feel comfortable with uncommenting, continue to the next question.
Question 4
In the tutorial, you defined several variables to calculate the total number of seconds in a year. Run the next code cell to do the calculation here.
Step5: Use the next code cell to
Step6: 🌶️ Question 5
(Questions marked with a 🌶️ will be a little bit more challenging than the others! Remember you can always get a hint or view the solution.)
The Titanic competition is Kaggle's most famous data science competition. In this competition, participants are challenged to build a machine learning model that can predict whether or not passengers survived the Titanic shipwreck, based on information like age, sex, family size, and ticket number.
Run the next code cell without changes to load and preview the titanic data.
Don't worry about the details of the code for now - the end result is just that the all of the titanic data has been loaded in a variable named titanic_data. (In order to learn how to write this code yourself, you can take the Python course and then the Pandas course.)
Step7: The data has a different row for each passenger.
The next code cell defines and prints the values of three variables
Step8: So,
- total = 891 (there were 891 passengers on board the Titanic),
- survived = 342 (342 passengers survived), and
- minors = 113 (113 passengers were under the age of 18).
In the code cell below, replace the underlines (____) with code to calculate the values for two more variables | Python Code:
# Set up the exercise
from learntools.core import binder
binder.bind(globals())
from learntools.intro_to_programming.ex1 import *
print('Setup complete.')
Explanation: This exercise will get you started with running your own code.
Set up the notebook
To begin, run the code in the next cell.
- Begin by clicking inside the code cell.
- Click on the triangle (in the shape of a "Play button") that appears to the left of the code cell.
- If your code was run successfully, you will see Setup Complete as output below the cell.
Instead of clicking on the triangle, you can also run code by pressing Shift + Enter on your keyboard. Try this now! Nothing bad will happen if you run the code more than once.
End of explanation
print("Hello, world!")
# DO NOT REMOVE: Mark this question as completed
q1.check()
Explanation: The code above sets up the notebook so that it can check your answers in this exercise. You should never modify this code. (Otherwise, the notebook won't be able to verify that you have successfully completed the exercise.)
After finishing all of the questions below, you'll see the exercise marked as complete on the course page. Once you complete all of the lessons, you'll get a course completion certificate!
Question 1
Next, you will run some code from the tutorial, so you can see how it works for yourself. Run the next code cell without changes.
End of explanation
# TODO: Change the message
print("Your message here!")
# DO NOT REMOVE: Mark this question as completed
q2.check()
Explanation: You just ran code to print Hello world!, which you should see in the output above.
The second line of code (q1.check()) checks your answer. You should never modify this checking code; if you remove it, you won't get credit for completing the problem.
Question 2
Now, you will print another message of your choosing. To do this, change print("Your message here!") to use a different message. For instance, you might like to change it to something like:
- print("Good morning!")
- print("I am learning how to code :D")
Or, you might like to see what happens if you write something like print("3+4"). Does it return 7, or does it just think of "3+4" as just another message?
Make sure that your message is enclosed in quotation marks ("), and the message itself does not use quotation marks. For instance, this will throw an error: print("She said "great job" and gave me a high-five!") because the message contains quotation marks. If you decide to take the Python course after completing this course, you will learn more about how to avoid this error in Lesson 6.
Feel free to try out multiple messages!
End of explanation
# Uncomment to get a hint
#_COMMENT_IF(PROD)_
q3.hint()
# Uncomment to view solution
#_COMMENT_IF(PROD)_
q3.solution()
# DO NOT REMOVE: Check your answer
q3.check()
Explanation: Question 3
As you learned in the tutorial, a comment in Python has a pound sign (#) in front of it, which tells Python to ignore the text after it.
Putting a pound sign in front of a line of code will make Python ignore that code. For instance, this line would be ignored by Python, and nothing would appear in the output:
```python
# print(1+2)
```
Removing the pound sign will make it so that you can run the code again. When we remove the pound sign in front of a line of code, we call this uncommenting.
In this problem, you will uncomment two lines in the code cell below and view the output:
- Remove the # in front of q3.hint(). To avoid errors, do NOT remove the # in front of # Uncomment to view hint.
- Next, remove the # in front of q3.solution().
As in the previous questions, do not change the final line of code that marks your work as completed.
End of explanation
# Create variables
num_years = 4
days_per_year = 365
hours_per_day = 24
mins_per_hour = 60
secs_per_min = 60
# Calculate number of seconds in four years
total_secs = secs_per_min * mins_per_hour * hours_per_day * days_per_year * num_years
print(total_secs)
Explanation: In the next question, and in most of the exercises in this course, you will have the option to uncomment to view hints and solutions. Once you feel comfortable with uncommenting, continue to the next question.
Question 4
In the tutorial, you defined several variables to calculate the total number of seconds in a year. Run the next code cell to do the calculation here.
End of explanation
# TODO: Set the value of the births_per_min variable
births_per_min = ____
# TODO: Set the value of the births_per_day variable
births_per_day = ____
# DO NOT REMOVE: Check your answer
q4.check()
#%%RM_IF(PROD)%%
# Set the value of the births_per_min variable
births_per_min = 250
# Set the value of the births_per_day variable
births_per_day = births_per_min * mins_per_hour * hours_per_day
q4.assert_check_passed()
# Uncomment to get a hint
#_COMMENT_IF(PROD)_
q4.hint()
# Uncomment to view solution
#_COMMENT_IF(PROD)_
q4.solution()
Explanation: Use the next code cell to:
- Define a variable births_per_min and set it to 250. (There are on average 250 babies born each minute.)
- Define a variable births_per_day that contains the average number of babies born each day. (To set the value of this variable, you should use births_per_min and some of the variables from the previous code cell.)
Remember you can always get a hint if you need it!
End of explanation
# Load the data from the titanic competition
import pandas as pd
titanic_data = pd.read_csv("../input/titanic/train.csv")
# Show the first five rows of the data
titanic_data.head()
Explanation: 🌶️ Question 5
(Questions marked with a 🌶️ will be a little bit more challenging than the others! Remember you can always get a hint or view the solution.)
The Titanic competition is Kaggle's most famous data science competition. In this competition, participants are challenged to build a machine learning model that can predict whether or not passengers survived the Titanic shipwreck, based on information like age, sex, family size, and ticket number.
Run the next code cell without changes to load and preview the titanic data.
Don't worry about the details of the code for now - the end result is just that the all of the titanic data has been loaded in a variable named titanic_data. (In order to learn how to write this code yourself, you can take the Python course and then the Pandas course.)
End of explanation
# Number of total passengers
total = len(titanic_data)
print(total)
# Number of passengers who survived
survived = (titanic_data.Survived == 1).sum()
print(survived)
# Number of passengers under 18
minors = (titanic_data.Age < 18).sum()
print(minors)
Explanation: The data has a different row for each passenger.
The next code cell defines and prints the values of three variables:
- total = total number of passengers who boarded the ship
- survived = number of passengers who survived the shipwreck
- minors = number of passengers under 18 years of age
Run the code cell without changes. (Don't worry about the details of how these variables are calculated for now. You can learn more about how to calculate these values in the Pandas course.)
End of explanation
# TODO: Fill in the value of the survived_fraction variable
survived_fraction = ____
# Print the value of the variable
print(survived_fraction)
# TODO: Fill in the value of the minors_fraction variable
minors_fraction = ____
# Print the value of the variable
print(minors_fraction)
# DO NOT REMOVE: Check your answer
q5.check()
#%%RM_IF(PROD)%%
# Fill in the value of the survived_fraction variable
survived_fraction = survived/total
# Print the value of the survived_fraction variable
print(survived_fraction)
# Fill in the value of the minors_fraction variable
minors_fraction = minors/total
# Print the value of the minors_fraction variable
print(minors_fraction)
q5.assert_check_passed()
# Uncomment to receive a hint
#_COMMENT_IF(PROD)_
q5.hint()
# Uncomment to view the solution
#_COMMENT_IF(PROD)_
q5.solution()
Explanation: So,
- total = 891 (there were 891 passengers on board the Titanic),
- survived = 342 (342 passengers survived), and
- minors = 113 (113 passengers were under the age of 18).
In the code cell below, replace the underlines (____) with code to calculate the values for two more variables:
- survived_fraction should be set to the fraction of passengers who survived the Titanic disaster.
- minors_fraction should be the fraction of passengers who were minors (under the age of 18).
For each variable, your answer should be a number between 0 and 1.
If you need a hint or want to view the solution, you can skip to the next code cell and uncomment the appropriate lines of code (q5.hint() and q5.solution()).
End of explanation |
10,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Unsupervised Learning
Project
Step1: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories
Step2: Implementation
Step3: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint
Step4: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint
Step5: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint
Step6: Answer
Step7: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
Step8: Implementation
Step9: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Step10: Answer
Step11: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint
Step12: Answer
Step13: Implementation
Step14: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
Step15: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
Step16: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
ANSWER
Step17: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer
Step18: From the graph above it is clear that the case with only 2 clusters has the best silhouette score, even though this score isn't particularly high
Step19: Implementation
Step20: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint
Step21: Answer
Step22: Answer
Step23: Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
import matplotlib.pyplot as plt
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Display a description of the dataset
display(data.describe())
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [23, 77, 103]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
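# --- Added sketch (not part of the original project template) ---
# Optional helper for Question 1: how far each chosen sample sits from the overall
# feature means (positive values = above-average spending in that category).
display(samples - data.mean().round())
# ----------------------------------------------------------------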
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeRegressor
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Milk'], axis = 1)
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, data["Milk"], test_size=0.25, random_state=42)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=42)
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print score
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
Answer:
The first customer bought a lot (i.e. significantly above the mean) of several types of items, especially Milk and Delicatessen. This suggests that this user is running some sort of business, and judging by the Milk and Delicatessen, probably a cafe where those things are consumed in greater quantities.
The second customer bought a very large quantity, much greater than the mean plus standard deviation, of groceries and detergents / paper. This would suggest, possibly, a hotel or a bed and breakfast, where there is a lot of cleaning and toiletries involved but also a daily element of cooking "from scratch".
The third customer bought a normal amount of all items, all within the mean plus standard deviation, except for Fresh and Frozen, which are very high. While it's not clear how "Fresh" is different from "Groceries", since fresh food (at least in Italy where I live) is the vast majority of anyone's groceries, it's plausible that some sort of restaurant / fast-food business may have a large need for both of those types of items.
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
End of explanation
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer:
I predicted the feature "Milk", which was predicted with a score of 0.156. This score is extremely poor; in other words, the other variables are unable to predict it well, so this feature is (as far as this regressor is concerned) largely independent of the others and appears necessary for identifying customers' spending habits.
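The same check can be repeated for every feature with a short loop (an optional sketch that simply reuses the names already used in the code above; the exact scores depend on the fixed random split):
for col in data.columns:
    other = data.drop([col], axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(other, data[col], test_size=0.25, random_state=42)
    reg = DecisionTreeRegressor(random_state=42).fit(X_tr, y_tr)
    print col, round(reg.score(X_te, y_te), 3)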
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
import seaborn as sns
sns.heatmap(data.corr(), annot=True);
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
End of explanation
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Answer:
There appears to be a correlation between "Grocery" and "Milk", as well as between "Detergents_Paper" and "Milk". This might seem surprising given how poorly the regression performed; it may be that a decision-tree regressor is simply a poor fit here and a different regressor would be better suited. It may also be that for the vast majority of the data there is no clear correlation at all, i.e. most of the points sit in a shapeless ball close to the origin with a "linear-looking branch" shooting off from it, giving the impression of a clear linear dependence that in fact only holds for very large x and y values.
The data for each feature is mostly on the "low" end of the spectrum, with a long tail---it looks much more like a binomial or Poisson distribution than a Gaussian. This supports the idea of a "ball" near the origin, with linear dependencies only becoming clear once the long tail of each distribution is included.
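As a quick numerical check of these observations (assuming the data DataFrame from the earlier cells is still in scope), the pairwise correlations and per-feature skewness can be printed directly:
print data[['Milk', 'Grocery', 'Detergents_Paper']].corr()
print data.skew()  # large positive values confirm the long right tails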
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in ensuring that the results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
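For reference, a Box-Cox transform of a single feature could be sketched as follows (an optional aside; it assumes scipy is available and that the chosen feature is strictly positive):
from scipy import stats
milk_bc, best_lambda = stats.boxcox(data['Milk'])  # 'Milk' is used purely as an example feature
print best_lambda  # the estimated power that best reduces skewness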
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
End of explanation
# Display the log-transformed sample data
display(log_samples)
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
allindices = []
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3 - Q1)*1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
theoutliers = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
display(theoutliers)
allindices.append(theoutliers.index.values)
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65, 66, 75, 128, 154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take these data points into consideration. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: an outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
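As a toy illustration of the rule on made-up numbers (independent of the project data):
toy = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 50.0])
q1, q3 = np.percentile(toy, 25), np.percentile(toy, 75)
step = 1.5 * (q3 - q1)
print toy[(toy < q1 - step) | (toy > q3 + step)]  # only 50.0 is flagged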
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
End of explanation
def howManyTimesOutlier(index):
return np.sum([(index in outlierindices) for outlierindices in allindices])
more_than_1 = np.array([ind for ind in log_data.index.values if howManyTimesOutlier(ind)>1])
print "%d outliers in more than one feature:" % len(more_than_1)
print more_than_1
more_than_2 = np.array([ind for ind in log_data.index.values if howManyTimesOutlier(ind)>2])
print "\n%d Outliers in more than two features:" % len(more_than_2)
print more_than_2
more_than_3 = np.array([ind for ind in log_data.index.values if howManyTimesOutlier(ind)>3])
print "\n%d Outliers in more than three features:" % len(more_than_3)
print more_than_3
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
End of explanation
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=len(list(good_data)))
pca.fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
Explanation: Answer:
Yes, as can be seen from the printout I created above this cell, there are 5 data points that are outliers in more than one feature. I could also have removed data points that are extreme outliers in any single feature, but chose not to. I could, for example, chop off the top 3-4% of the distribution in each feature. However, this is not very useful: these customers are often the most prized ones (since they purchase so much), so it's not right to pretend they don't exist. They may, after all, be a "cluster" in their own right...
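For comparison, a percentile-based trim of the extreme high end would look roughly like this (not applied here, for the reasons given above; the 97th percentile is an arbitrary illustrative cut-off):
upper = log_data.quantile(0.97)
extreme_high = log_data[(log_data > upper).any(axis=1)]
print len(extreme_high)  # how many customers such a trim would discard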
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
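For a quick numerical view of the same information, the fitted pca object from the code above also exposes the per-dimension and cumulative explained variance:
print pca.explained_variance_ratio_
print pca.explained_variance_ratio_.cumsum()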
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
pca_results.cumsum()
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the indivdual feature weights.
End of explanation
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: Answer:
The first two principal components together account for 0.707 of the total variance. The first four account for 0.931 of the total variance.
The first PC mainly describes spending on Milk, Grocery and/or Detergents_Paper; in other words, consumers who spend on this sort of "retail goods" will have a large value on "Dimension 1". Our second sample customer is likely to have a large value on this first dimension.
The second PC mostly describes spending on food items, in particular Fresh, Frozen, and Delicatessen: customers who purchase a lot of these food-based items will appear higher on Dimension 2. Our third sample customer is an example of this, as it bought a lot of Fresh and Frozen (but not Delicatessen).
The third PC breaks apart those customers who bought a lot of Fresh and a lot of Frozen/Delicatessen, from those who bought mostly Fresh or mostly Frozen/Delicatessen. In other words, here it is no longer equivalent which of the three the customer bought, and it breaks up the data depending on the balance between these categories. This must be because there is quite a lot of variety in those who bought Fresh-Frozen-Delicatessen, and this component tells them apart.
The fourth PC further refines the Fresh-Frozen-Delicatessen (i.e. food-item) customers by additionally splitting up those who bought a lot of Frozen from those who bought a lot of Delicatessen. This fully breaks up the various types of food buyers.
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2)
pca.fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
End of explanation
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Explanation: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
End of explanation
from sklearn.mixture import GMM
from sklearn.metrics import silhouette_score
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = GMM(n_components=2).fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
print score
Explanation: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
ANSWER: The first PC is mostly correlated with Milk, Grocery and Detergents_Paper, while the second is mostly composed of Fresh and Frozen, but also Delicatessen. This was precisely my observation described earlier.
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer:
K-means is fast and is guaranteed to converge to an answer. It is also very simple: it makes no assumptions about the underlying distribution of the data. The advantages of using Gaussian Mixture Models are that we obtain probabilities for each data point to have a specific label, assuming the distribution of the various features is Gaussian. Since after the log transform our distributions look very similar to Gaussians (even though we do have a couple of multimodal distributions), Gaussian mixture models are likely to generate useful results which are more powerful than K-means. Disadvantages of this model could be its speed, but in this case we only have approximately 400 datapoints, so speed is not an issue.
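If desired, the two algorithms can also be compared head-to-head on the reduced data (a small sketch assuming reduced_data and silhouette_score are available as in the cells above; KMeans comes from sklearn.cluster):
from sklearn.cluster import KMeans
km_preds = KMeans(n_clusters=2, random_state=42).fit_predict(reduced_data)
print silhouette_score(reduced_data, km_preds)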
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
End of explanation
allvariances = []
allscores = []
for compnum in range(1,10):
clust = GMM(n_components=compnum).fit(reduced_data)
allvariances.append(np.mean([np.mean(sig) for sig in clust.covars_]))
thepreds = clust.predict(reduced_data)
if compnum==1:
thescore = 0
else:
thescore = silhouette_score(reduced_data, thepreds)
allscores.append(thescore)
plt.plot(range(1,10), allscores)
plt.xlabel("num_components")
plt.ylabel("silhouette score");
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer:
I computed silhouette scores for num_components ranging from 1 to 9 (the single-cluster case is given a placeholder score of 0, since the silhouette score is undefined for one cluster). These are the results:
End of explanation
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: From the graph above it is clear that the case with only 2 components has the best silhouette score, even though this score isn't particularly high: 0.415. Studying the biplot by hand I would also have guessed two clusters to be a good choice.
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
End of explanation
np.exp(good_data).describe().loc[["mean"]]
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
End of explanation
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
samples
Explanation: Answer:
The first buys more Fresh and Frozen (similar quantities to the dataset average), but little Milk, Grocery and Detergents_Paper (half to less-than-half of the dataset average). That suggests some sort of restaurant / fast-food place which prepares some food from scratch but uses a lot of frozen products as well.
The second is the opposite: they buy a lot of Milk (near the average of the whole dataset), Grocery (much higher than average) and Detergents_Paper (twice as high as average) but very little Fresh or Frozen compared to the dataset average. This seems much closer to a hotel-type business, where there is a lot of cleaning and washing involved, and a lot of breakfast preparation (which uses a lot of Milk and "Grocery", which I assume covers non-fresh foods like cereal).
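To make the comparison with the dataset averages explicit (assuming true_centers and data are still defined), each segment centre can be expressed as a ratio of the overall mean spend per category:
print true_centers / data.mean()  # values well above 1 mark the categories that define each segment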
Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
End of explanation
clusterer.predict_proba(pca_samples)[2]
Explanation: Answer:
Sample points 1 and 2 fit into the descriptions of each cluster: point 1 buys a large amount of Milk, Grocery and Detergents_Paper while point 2 buys a lot of Fresh and Frozen. I would, judging by eye, think point 2 is misclassified. In fact, looking at the cell below, we see that the probabilities assigned to this point are not so clear-cut: the algorithm estimates there to be a 30% chance it got it wrong.
Sample point 0, on the other hand, seems more unclear: it buys a lot of Milk and Detergents_Paper, suggesting cluster 0, but also a lot of Fresh and Frozen, suggesting cluster 1. Looking at the cluster visualization, above, we see that this point should still clearly be labeled cluster 0, which just goes to show how useful a visualization like that can be.
End of explanation
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
Explanation: Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
The group of customers who buy a lot of detergents, milk and groceries can probably withstand a 3-days-a-week delivery service, since the products they purchase will probably not go bad within that time; this cluster is the one least likely to be affected. However, it will force them to be more careful about how much milk they buy: it cannot run out between deliveries, but it also shouldn't expire before the next one.
The other cluster, which buys a lot of fresh and frozen food, will dislike getting fresh food that is several days older than it could have been (obviously the frozen food doesn't matter).
Probably it would make sense to only A/B test the change on (a small subset of) the cluster buying a lot of detergents.
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
There are two ways this could be done: we can either use our trained Gaussian Mixture Model to predict which label each of the 10 new customers should have, or we can train a supervised classifier on the existing, now-labeled, data to predict the labels of the new customers. The second approach will not necessarily do better than the first, of course.
If we were to opt for the second approach, we should probably one-hot encode the cluster labels (in this case we only have two clusters so there is no need, but in general we would one-hot encode them). We can then train on the labeled data and predict on the 10 new customers. It would make sense to compare the prediction with a simple clusterer.predict(new_data) as a sanity check that we aren't making big mistakes on our new customers, especially since not all customers are equal---making a labeling mistake on a customer who is willing to spend a lot of money is much worse than mislabeling a customer who spends very little.
We could also take the labels for the ten new customers and predict how much they will spend on the various food categories, based on how other customers with the same label purchased.
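A minimal sketch of the supervised variant (the classifier choice and the new_customer_data name are illustrative, not part of the project code):
from sklearn.ensemble import RandomForestClassifier
label_clf = RandomForestClassifier(n_estimators=100, random_state=42)
label_clf.fit(reduced_data, preds)  # existing customers in PCA space, labelled by their cluster
# the ten new customers would be log-scaled, projected with the same fitted pca, then classified:
# new_labels = label_clf.predict(pca.transform(np.log(new_customer_data)))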
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation |
10,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with Grid Search (scikit-learn)
<a href="https
Step1: This example builds on our basic census income classification example by incorporating S3 data versioning.
Step2: Imports
Step3: Log Workflow
This section demonstrates logging model metadata and training artifacts to ModelDB.
Instantiate Client
Step4: <h2 style="color
Step5: Prepare Hyperparameters
Step6: Train Models
Step7: Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
Retrieve Best Run
Step8: Train on Full Dataset
Step9: Calculate Accuracy on Full Training Set
Step10: Deployment and Live Predictions
This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
Step11: Prepare "Live" Data
Step12: Deploy Model
Step13: Query Deployed Model | Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
Explanation: Logistic Regression with Grid Search (scikit-learn)
<a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/census-end-to-end-s3-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 Data"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
Explanation: This example builds on our basic census income classification example by incorporating S3 data versioning.
End of explanation
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
Explanation: Imports
End of explanation
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
Explanation: Log Workflow
This section demonstrates logging model metadata and training artifacts to ModelDB.
Instantiate Client
End of explanation
from verta.dataset import S3
dataset = client.set_dataset(name="Census Income S3")
version = dataset.create_version(S3("s3://verta-starter"))
DATASET_PATH = "./"
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
Explanation: <h2 style="color:blue">Prepare Data</h2>
End of explanation
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
Explanation: Prepare Hyperparameters
End of explanation
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log dataset snapshot as version
run.log_dataset_version("train", version)
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
Explanation: Train Models
End of explanation
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
Explanation: Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
Retrieve Best Run
End of explanation
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
Explanation: Train on Full Dataset
End of explanation
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
Explanation: Calculate Accuracy on Full Training Set
End of explanation
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
Explanation: Deployment and Live Predictions
This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
End of explanation
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
Explanation: Prepare "Live" Data
End of explanation
run.deploy(wait=True)
run
Explanation: Deploy Model
End of explanation
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
Explanation: Query Deployed Model
End of explanation |
10,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using cURL with Elasticsearch
The introductory documents and tutorials all use cURL (here after referred to by its command line name curl) to interact with Elasticsearch and demonstrate what is possible and what is returned. Below is a short collection of these exercises with some explainations.
Hello World!
This first example for elasticsearch is almost always a simple get with no parameters. It is a simple way to check to see if the environment and server are set and functioning properly. Hence, the reason for the title.
The examples are using an AWS instance, the user will need to change the server to either "localhost" for their personal machine or the URL for the elasticsearch server they are using.
Step1: Count
Counting is faster than searching and should be used when the actual results are not needed. From "ElasticSearch Cookbook - Second Edition"
Step2: The second type of simple count is to count by index. If the index is gdelt1979 then
Step3: or if the index is the Global Summary of the Day data, i.e. gsod then
Step4: If the user prefers a nicer looking output then a request to make it pretty is in order.
Step5: Count Summary
Keep in mind counts can be as complicated as searches. Just changing _count to _search and vice versa changes how elasticsearch handles the request.
With that said it is now time to show and develop some search examples.
Search
Search is the main use for elasticsearch, hence the name and where the bulk of the examples will be. This notebook will attempt to take the user through examples that show only one new feature at a time. This will hopefully allow the user to see the order of commands which is unfortuantely important to elasticsearch.
As with count above it will start with a simple example.
Step6: By default elasticsearch returns 10 documents for every search. As is evident the pretty option used for count above is needed here.
Step7: Much better but it can be easily seen that if this notebook continues with the elasticsearch default for number of documents it will become very unweldy very quickly. So, let's use the size option. | Python Code:
%%bash
curl -XGET "http://search-01.ec2.internal:9200/"
Explanation: Using cURL with Elasticsearch
The introductory documents and tutorials all use cURL (here after referred to by its command line name curl) to interact with Elasticsearch and demonstrate what is possible and what is returned. Below is a short collection of these exercises with some explainations.
Hello World!
This first example for elasticsearch is almost always a simple get with no parameters. It is a simple way to check to see if the environment and server are set and functioning properly. Hence, the reason for the title.
The examples are using an AWS instance, the user will need to change the server to either "localhost" for their personal machine or the URL for the elasticsearch server they are using.
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/_count'
Explanation: Count
Counting is faster than searching and should be used when the actual results are not needed. From "ElasticSearch Cookbook - Second Edition":
It is often required to return only the count of the matched results and not the results themselves. The advantages of using a count request is the performance it offers and reduced resource usage, as a standard search call also returns hits count.
The simplest count is a count of all the documents in elasticsearch.
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gdelt1979/_count'
Explanation: The second type of simple count is to count by index. If the index is gdelt1979 then:
Example 1
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_count'
Explanation: or if the index is the Global Summary of the Day data, i.e. gsod then:
Example 2
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_count?pretty'
Explanation: If the user prefers a nicer looking output then a request to make it pretty is in order.
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search'
Explanation: Count Summary
Keep in mind counts can be as complicated as searches. Just changing _count to _search and vice versa changes how elasticsearch handles the request.
With that said it is now time to show and develop some search examples.
Search
Search is the main use for elasticsearch, hence the name, and it is where the bulk of the examples will be. This notebook will attempt to take the user through examples that show only one new feature at a time. This will hopefully allow the user to see the order of commands, which is unfortunately important to elasticsearch.
As with count above it will start with a simple example.
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search?pretty'
Explanation: By default elasticsearch returns 10 documents for every search. As is evident the pretty option used for count above is needed here.
End of explanation
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search?pretty' -d '
{
"size": "1"
}'
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search?pretty' -d '
{
"_source": ["Max Temp"],
"size": "2"
}'
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search?pretty' -d '
{
"query": {
"filtered": {
"filter": {
"range": {
"Date": {
"gte": "2007-01-01",
"lte": "2007-01-01"
}
}
}
}
},
"_source": ["Max Temp"],
"size": "1"
}'
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search?pretty' -d '
{
"query": {
"filtered": {
"query": { "match_all": {} },
"filter": {
"range": {
"Date": {
"gte": "2007-01-01",
"lte": "2007-12-31"
}
}
}
}
},
"size": "1"
}'
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_count' -d '
{
"query": {
"filtered": {
"filter": {
"range": {
"Date": {
"gte": "2007-01-01",
"lte": "2007-01-31"
}
}
}
}
}
}'
%%bash
curl -XGET 'http://search-01.ec2.internal:9200/gsod/_search?pretty' -d '
{
"query": {
"filtered": {
"query": { "match_all": {} },
"filter": {
"range": {
"Date": {
"gte": "2007-01-01",
"lte": "2007-01-31"
}
}
}
}
},
"_source": ["Mean Temp", "Min Temp", "Max Temp"],
"size": "563280"
}' > temps_200701.txt
import json
with open("temps_2007.txt", "r") as f:
mean_temps = []
max_temps = []
min_temps = []
for line in f:
if "_source" in line:
line = json.loads(line[16:-1])
            min_tmp = float(line['Min Temp'])
            if -300 < min_tmp < 300:
                min_temps.append(min_tmp)
            mean_tmp = float(line['Mean Temp'])
            if -300 < mean_tmp < 300:  # filter on the mean value, not the min
                mean_temps.append(mean_tmp)
max_tmp = float(line['Max Temp'])
if -300 < max_tmp < 300:
max_temps.append(max_tmp)
print("From {} observations the temperatures for 2007 are:"\
.format(len(mean_temps)))
print("Min Temp: {:.1f}".format(min(min_temps)))
print("Mean Temp: {:.1f}".format(sum(mean_temps)/len(mean_temps)))
print("Max Temp: {:.1f}".format(max(max_temps)))
Explanation: Much better but it can be easily seen that if this notebook continues with the elasticsearch default for number of documents it will become very unweldy very quickly. So, let's use the size option.
End of explanation |
10,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpreting numeric split points in H2O POJO tree based models
This notebook explains how to correctly interpret split points that you might see in POJOs of H2O tree based models.
Motivation
Step1: If we try to compare the numbers we will see they are not actually the same number
Step2: When two numbers are compared their precion is first adjusted to be the same. This typically means the lower precison number is converted to the higher precision representation. In this case f32 will be converted to float64 representation. We can do the same thing explicitly
Step3: The comparison failed because the converted number is actually different
Step4: Notice the 7th decimal digit after the conversion.
Step5: Examining GBM POJO
Understanding how computers compare numbers of different precision is critical for correctly interpretting split points in tree-based POJOs. Lets now train a simple GBM model.
Step6: Please take a close look at the POJO code, you should see statements like this one
Double.isNaN(data[5]) || data[5 /* VOL */] < 25.695312f ? -0.09571693f
Step7: The java comparison rewritten to Python would look like this
Step8: This means that observation represented by array data should got the left subtree of the current node. If we ignored the fact that the split point is using 32-bit precision and considered it as 64-bit precision, we would miclassify the observation to left sub-tree.
Step9: Expert options
Forcing split point in POJO to be written in 64-bit precision
H2O allows users to modify the POJO output by setting a property sys.ai.h2o.java.output.doubles. Setting this property to true will cause the POJO generator to output split point in 64-bit precision (doubles) instead of the default 32-bit precision.
We can set this property even on a running H2O instance by invoking a rapids expression.
Step10: In the modified POJO output you can now see the original split is coded as
Double.isNaN(data[5]) || data[5 /* VOL */] < 25.6953125 ? -0.0957169309258461 | Python Code:
import numpy as np
f32 = np.float32("25.695312")
f32
f64 = np.float64("25.695312")
f64
Explanation: Interpreting numeric split points in H2O POJO tree based models
This notebook explains how to correctly interpret split points that you might see in POJOs of H2O tree based models.
Motivation: we had seen there are users who are parsing H2O POJO and translating the Java code into another representation (SQL statements, ...). While we do not encourage users to use POJO in this particular use case we want to clarify how to interpret the numerical values correctly.
Concept of floating point numbers in computers
Computers and software like H2O use floating-point representation of real numbers. In this representation sequences of bits (0/1) are used to store the number with a limited precision. In H2O we use mainly 32-bit and 64-bit floating point number representation.
Let's take a look at one example of a floating point number - 25.695312 - and use the 32-bit and 64-bit representations to compare their behavior.
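To see the underlying bit patterns concretely, Python's struct module can pack the same value into 4 and 8 bytes (an optional illustration, not part of the H2O workflow):
import struct, binascii
print(binascii.hexlify(struct.pack('>f', 25.695312)))  # 32-bit (single precision) bits
print(binascii.hexlify(struct.pack('>d', 25.695312)))  # 64-bit (double precision) bits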
End of explanation
f32 == f64
Explanation: If we try to compare the numbers we will see they are not actually the same number
End of explanation
np.float64(f32) == f64
Explanation: When two numbers are compared, their precision is first adjusted to be the same. This typically means the lower-precision number is converted to the higher-precision representation. In this case f32 will be converted to the float64 representation. We can do the same thing explicitly:
End of explanation
np.float64(f32)
Explanation: The comparison failed because the converted number is actually different
End of explanation
np.float64(f32) - f64
np.float64(f32) > f64
Explanation: Notice the 7th decimal digit after the conversion.
End of explanation
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
# Connect to a pre-existing cluster
h2o.init()
from h2o.utils.shared_utils import _locate # private function. used to find files within h2o git project directory.
df = h2o.upload_file(path=_locate("smalldata/logreg/prostate.csv"))
# Remove ID from training frame
train = df.drop("ID")
# For VOL & GLEASON, a zero really means "missing"
vol = train['VOL']
vol[vol == 0] = None
gle = train['GLEASON']
gle[gle == 0] = None
# Convert CAPSULE to a logical factor
train['CAPSULE'] = train['CAPSULE'].asfactor()
# Run GBM
my_gbm = H2OGradientBoostingEstimator(ntrees=1, seed=1234)
my_gbm.train(y="CAPSULE", training_frame=train)
# Get the POJO
my_gbm.download_pojo()
Explanation: Examining GBM POJO
Understanding how computers compare numbers of different precision is critical for correctly interpreting split points in tree-based POJOs. Let's now train a simple GBM model.
End of explanation
data = np.array([0, 0, 0, 0, 0, np.float64(25.695312)])
data[5]
Explanation: Please take a close look at the POJO code, you should see statements like this one
Double.isNaN(data[5]) || data[5 /* VOL */] < 25.695312f ? -0.09571693f : -0.16740088f
This code represents one split decision in a GBM tree. data represents a single input row. The split decision is looking a column VOL to decide whether the observation should go to the left sub-tree or go right based on the value of element 5 in the data array.
It is important to notice that data is defined as a double array:
double[] data
This means data is represented by 64-bit floating point numbers.
The split point itself is however outputted in 32-bit precision. In java code we capture this fact by using f suffix in the number representation, eg.: 25.695312f.
This means we have the same scenario as outlined in the beginning of this notebook - we are comparing numbers with two different precisions and we need to pay attention to how the numbers are interpreted.
End of explanation
data[5] < np.float32(25.695312)
Explanation: The java comparison rewritten to Python would look like this:
End of explanation
data[5] < np.float64(25.695312)
Explanation: This means that the observation represented by the array data should go to the left sub-tree of the current node. If we ignored the fact that the split point is written in 32-bit precision and read it as a 64-bit value instead, the comparison would evaluate to false and we would incorrectly send the observation to the right sub-tree.
End of explanation
h2o.rapids("(setproperty \"{}\" \"{}\")".format("sys.ai.h2o.java.output.doubles", "true"))["string"]
my_gbm.download_pojo()
Explanation: Expert options
Forcing split point in POJO to be written in 64-bit precision
H2O allows users to modify the POJO output by setting a property sys.ai.h2o.java.output.doubles. Setting this property to true will cause the POJO generator to output split point in 64-bit precision (doubles) instead of the default 32-bit precision.
We can set this property even on a running H2O instance by invoking a rapids expression.
End of explanation
mojo_path = my_gbm.download_mojo()
mojo_path
# Find h2o.jar (this is using internal functions)
from h2o.backend import H2OLocalServer
h2o_jar = H2OLocalServer()._find_jar()
# Invoke MojoConvertTool without arguments to print out usage instructions
import subprocess
subprocess.call(["java", "-cp", h2o_jar, "water.tools.MojoConvertTool"], stderr=subprocess.STDOUT, shell=False)
# Add path to MOJO file and write output to "pojo.java"
subprocess.call(["java", "-cp", h2o_jar, "water.tools.MojoConvertTool", mojo_path, "pojo.java"], stderr=subprocess.STDOUT, shell=False)
# Display the content of the POJO
with open('pojo.java', 'r') as f:
print(f.read())
# Now specify system property sys.ai.h2o.java.output.doubles to output numbers in 64-bit precision
subprocess.call(["java", "-Dsys.ai.h2o.java.output.doubles=true", "-cp", h2o_jar, "water.tools.MojoConvertTool", mojo_path, "pojo64.java"], stderr=subprocess.STDOUT, shell=False)
# Display the content of the POJO with 64-bit number representation
with open('pojo64.java', 'r') as f:
print(f.read())
Explanation: In the modified POJO output you can now see the original split is coded as
Double.isNaN(data[5]) || data[5 /* VOL */] < 25.6953125 ? -0.0957169309258461 : -0.16740088164806366
Notice the last decimal place and observer there is now no suffix f at the end of the number. Compare it to the original version
Double.isNaN(data[5]) || data[5 /* VOL */] < 25.695312f ? -0.09571693f : -0.16740088f
The 64-bit precision output might be more natural to users for understanding what the POJO is doing when deciding how should a given observation traverse the tree.
Convert existing MOJO into POJO with 64-bit precision number representation
Suppose we already have a MOJO model that was created by an older H2O version and we want to see how would the POJO look like with numbers represented in 64-bits.
For this use case H2O provides a conversion tool MojoConvertTool as a part of the h2o.jar.
End of explanation |
10,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Λ-Type Three-Level
Step2: Solve the Problem
Step3: Plot Output | Python Code:
mb_solve_json =
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"detuning": 0.0,
"label": "probe",
"rabi_freq": 1.0e-3,
"rabi_freq_t_args":
{
"ampl": 1.0,
"centre": 0.0,
"fwhm": 1.0
},
"rabi_freq_t_func": "gaussian"
},
{
"coupled_levels": [[1, 2]],
"detuning": 0.0,
"detuning_positive": false,
"label": "coupling",
"rabi_freq": 5.0,
"rabi_freq_t_args":
{
"ampl": 1.0,
"fwhm": 0.2,
"off": 4.0,
"on": 6.0
},
"rabi_freq_t_func": "ramp_offon"
}
],
"num_states": 3
},
"t_min": -2.0,
"t_max": 12.0,
"t_steps": 140,
"z_min": -0.2,
"z_max": 1.2,
"z_steps": 140,
"z_steps_inner": 50,
"num_density_z_func": "gaussian",
"num_density_z_args": {
"ampl": 1.0,
"fwhm": 0.5,
"centre": 0.5
},
"interaction_strengths": [1.0e3, 1.0e3],
"savefile": "mbs-lambda-weak-pulse-cloud-atoms-some-coupling-store"
}
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
Explanation: Λ-Type Three-Level: Weak Pulse with Time-Dependent Coupling in a Cloud — Storage and Retrieval
Time taken to solve this problem on a 2013 MacBook Pro:
2h 32min 15s
Define the Problem
End of explanation
%time Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
Explanation: Solve the Problem
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
fig = plt.figure(1, figsize=(16, 12))
# Probe
ax = fig.add_subplot(211)
cmap_range = np.linspace(0.0, 1.0e-3, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Probe',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes, color='grey', fontsize=16)
plt.colorbar(cf)
# Coupling
ax = fig.add_subplot(212)
cmap_range = np.linspace(0.0, 8.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[1]/(2*np.pi)),
cmap_range, cmap=plt.cm.Greens)
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Coupling',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes, color='grey', fontsize=16)
plt.colorbar(cf)
# Both
for ax in fig.axes:
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.tight_layout();
Explanation: Plot Output
End of explanation |
10,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the Research Environment
The research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.
Code Cells vs. Text Cells
As you can see, each cell can be either code or text. To select between them, choose from the 'Cell Type' dropdown menu on the top left.
Executing a Command
A code cell will be evaluated when you press play, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.
Step1: Sometimes there is no result to be printed, as is the case with assignment.
Step2: Remember that only the result from the last line is printed.
Step3: However, you can print whichever lines you want using the print statement.
Step4: Knowing When a Cell is Running
While a cell is running, a [*] will display on the left. When a cell has yet to be executed, [ ] will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook, e.g. [5]. Try it on this cell and note it happening.
Step5: Importing Libraries
The vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. I recommend copying this import statement to every new notebook.
Notice that you can rename libraries to whatever you want after importing. The as statement allows this. Here we use np and pd as aliases for numpy and pandas. This is a very common aliasing and will be found in most code snippets around the web. The point behind this is to allow you to type fewer characters when you are frequently accessing these libraries.
Step6: Tab Autocomplete
Pressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab very frequently, it will seldom fill in anything you don't want, as if there is ambiguity a list will be shown. This is a great way to see what functions are available in a library.
Try placing your cursor after the . and pressing tab.
Step7: Getting Documentation Help
Placing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.
Step8: Sampling
We'll sample some random data using a function from numpy.
Step9: Plotting
We can use the plotting library we imported as follows.
Step10: Squelching Line Output
You might have noticed the annoying line of the form [<matplotlib.lines.Line2D at 0x7f72fdbc1710>] before the plots. This is because the .plot function actually produces output. Sometimes we wish not to display output, we can accomplish this with the semi-colon as follows.
Step11: Adding Axis Labels
No self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that.
Step12: Generating Statistics
Let's use numpy to take some simple statistics.
Step13: Getting Real Pricing Data
Randomly sampled data can be great for testing ideas, but let's get some real data. We can use get_pricing to do that. You can use the ? syntax as discussed above to get more information on get_pricing's arguments.
Step14: Our data is now a dataframe. You can see the datetime index and the columns with different pricing data.
Step15: This is a pandas dataframe, so we can index in to just get price like this. For more info on pandas, please click here.
Step16: Because there is now also date information in our data, we provide two series to .plot. X.index gives us the datetime index, and X.values gives us the pricing values. These are used as the X and Y coordinates to make a graph.
Step17: We can get statistics again on real data.
Step18: Getting Returns from Prices
We can use the pct_change function to get returns. Notice how we drop the first element after doing this, as it will be NaN (nothing -> something results in a NaN percent change).
Step19: We can plot the returns distribution as a histogram.
Step20: Get statistics again.
Step21: Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different.
Step22: Generating a Moving Average
pandas has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic. | Python Code:
2 + 2
Explanation: Introduction to the Research Environment
The research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.
Code Cells vs. Text Cells
As you can see, each cell can be either code or text. To select between them, choose from the 'Cell Type' dropdown menu on the top left.
Executing a Command
A code cell will be evaluated when you press play, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.
End of explanation
X = 2
Explanation: Sometimes there is no result to be printed, as is the case with assignment.
End of explanation
2 + 2
3 + 3
Explanation: Remember that only the result from the last line is printed.
End of explanation
print 2 + 2
3 + 3
Explanation: However, you can print whichever lines you want using the print statement.
End of explanation
#Take some time to run something
c = 0
for i in range(10000000):
c = c + i
c
Explanation: Knowing When a Cell is Running
While a cell is running, a [*] will display on the left. When a cell has yet to be executed, [ ] will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook [5]. Try it on this cell and watch it happen.
End of explanation
import numpy as np
import pandas as pd
# This is a plotting library for pretty pictures.
import matplotlib.pyplot as plt
Explanation: Importing Libraries
The vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. I recommend copying this import statement to every new notebook.
Notice that you can rename libraries to whatever you want after importing. The as statement allows this. Here we use np and pd as aliases for numpy and pandas. This is a very common aliasing and will be found in most code snippets around the web. The point behind this is to allow you to type fewer characters when you are frequently accessing these libraries.
End of explanation
np.random.
Explanation: Tab Autocomplete
Pressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab very frequently, it will seldom fill in anything you don't want, as if there is ambiguity a list will be shown. This is a great way to see what functions are available in a library.
Try placing your cursor after the . and pressing tab.
End of explanation
np.random.normal?
Explanation: Getting Documentation Help
Placing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.
End of explanation
# Sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution.
X = np.random.normal(0, 1, 100)
Explanation: Sampling
We'll sample some random data using a function from numpy.
End of explanation
plt.plot(X)
Explanation: Plotting
We can use the plotting library we imported as follows.
End of explanation
plt.plot(X);
Explanation: Squelching Line Output
You might have noticed the annoying line of the form [<matplotlib.lines.Line2D at 0x7f72fdbc1710>] before the plots. This is because the .plot function actually produces output. Sometimes we wish not to display output, we can accomplish this with the semi-colon as follows.
End of explanation
X = np.random.normal(0, 1, 100)
X2 = np.random.normal(0, 1, 100)
plt.plot(X);
plt.plot(X2);
plt.xlabel('Time') # The data we generated is unitless, but don't forget units in general.
plt.ylabel('Returns')
plt.legend(['X', 'X2']);
Explanation: Adding Axis Labels
No self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that.
End of explanation
np.mean(X)
np.std(X)
Explanation: Generating Statistics
Let's use numpy to take some simple statistics.
End of explanation
data = get_pricing('MSFT', start_date='2012-1-1', end_date='2015-6-1')
Explanation: Getting Real Pricing Data
Randomly sampled data can be great for testing ideas, but let's get some real data. We can use get_pricing to do that. You can use the ? syntax as discussed above to get more information on get_pricing's arguments.
End of explanation
data
Explanation: Our data is now a dataframe. You can see the datetime index and the columns with different pricing data.
End of explanation
X = data['price']
Explanation: This is a pandas dataframe, so we can index in to just get price like this. For more info on pandas, please click here.
End of explanation
plt.plot(X.index, X.values)
plt.ylabel('Price')
plt.legend(['MSFT']);
Explanation: Because there is now also date information in our data, we provide two series to .plot. X.index gives us the datetime index, and X.values gives us the pricing values. These are used as the X and Y coordinates to make a graph.
End of explanation
np.mean(X)
np.std(X)
Explanation: We can get statistics again on real data.
End of explanation
R = X.pct_change()[1:]
Explanation: Getting Returns from Prices
We can use the pct_change function to get returns. Notice how we drop the first element after doing this, as it will be NaN (nothing -> something results in a NaN percent change).
End of explanation
plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['MSFT Returns']);
Explanation: We can plot the returns distribution as a histogram.
End of explanation
np.mean(R)
np.std(R)
Explanation: Get statistics again.
End of explanation
plt.hist(np.random.normal(np.mean(R), np.std(R), 10000), bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['Normally Distributed Returns']);
Explanation: Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different.
End of explanation
# Take the average of the last 60 days at each timepoint.
MAVG = pd.rolling_mean(X, window=60)
plt.plot(X.index, X.values)
plt.plot(MAVG.index, MAVG.values)
plt.ylabel('Price')
plt.legend(['MSFT', '60-day MAVG']);
Explanation: Generating a Moving Average
pandas has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic.
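A quick compatibility note: pd.rolling_mean was removed in later pandas releases. On a modern pandas installation the equivalent call uses the rolling method of the Series itself:
MAVG = X.rolling(window=60).mean()   # same 60-day moving average as above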
End of explanation |
10,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step9: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
Step10: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step11: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer.
Step12: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step14: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set
Step15: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step16: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization
Step17: Spatial batch normalization
Step18: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
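For reference, one possible naive implementation is sketched below. This is only an illustration of the nested-loop idea, not necessarily the reference solution, and the function body belongs in cs231n/layers.py:
def conv_forward_naive(x, w, b, conv_param):
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    # zero-pad only the two spatial dimensions
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in xrange(N):                  # every image
        for f in xrange(F):              # every filter
            for i in xrange(H_out):      # every output row
                for j in xrange(W_out):  # every output column
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    cache = (x, w, b, conv_param)
    return out, cache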
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
Tiny helper to show images as uint8 and remove axis labels
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
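For reference again, a matching naive backward pass could look roughly like the sketch below; it assumes the (x, w, b, conv_param) cache layout used in the forward sketch above:
def conv_backward_naive(dout, cache):   # goes in cs231n/layers.py
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = np.zeros_like(b)
    for n in xrange(N):
        for f in xrange(F):
            db[f] += dout[n, f].sum()
            for i in xrange(H_out):
                for j in xrange(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad+H, pad:pad+W]   # strip the zero padding again
    return dx, dw, db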
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
End of explanation
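One straightforward way to write this function (a sketch only, meant for cs231n/layers.py) is:
def max_pool_forward_naive(x, pool_param):
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in xrange(H_out):
        for j in xrange(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over each pooling window
    cache = (x, pool_param)
    return out, cache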
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
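A matching naive backward pass (again only a sketch; it assumes the (x, pool_param) cache from the forward sketch and routes each upstream gradient to the maximum of its pooling window):
def max_pool_backward_naive(dout, cache):
    x, pool_param = cache
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in xrange(N):
        for c in xrange(C):
            for i in xrange(H_out):
                for j in xrange(W_out):
                    window = x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw]
                    mask = (window == np.max(window))
                    dx[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw] += mask * dout[n, c, i, j]
    return dx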
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
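As a quick reference for this check: with C = 10 CIFAR-10 classes the expected unregularized loss is roughly
np.log(10)   # ~2.3026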
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer.
End of explanation
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
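A common way to implement this (a sketch that assumes the vanilla batchnorm_forward from the earlier part of the assignment is available in cs231n/layers.py) is to fold the N, H and W axes together so each channel is normalized over all images and spatial positions:
def spatial_batchnorm_forward(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # (N, C, H, W) -> (N, H, W, C) -> (N*H*W, C)
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache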
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
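The backward pass can reuse the same reshape trick (a sketch; it assumes the cache produced by the forward sketch above and a vanilla batchnorm_backward):
def spatial_batchnorm_backward(dout, cache):
    N, C, H, W = dout.shape
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta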
# Train a really good model on CIFAR-10
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]XN - [affine]XM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all (a rough random-search sketch follows this list).
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
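For instance, a coarse random search over learning rate and regularization strength might look roughly like the sketch below (the ranges, epoch count and batch size are arbitrary illustrative choices):
best_val_acc, best_model = -1, None
for _ in xrange(10):
    lr = 10 ** np.random.uniform(-4, -2)
    reg = 10 ** np.random.uniform(-4, -1)
    model = ThreeLayerConvNet(weight_scale=1e-2, reg=reg)
    solver = Solver(model, data, num_epochs=1, batch_size=50,
                    update_rule='adam',
                    optim_config={'learning_rate': lr},
                    verbose=False)
    solver.train()
    val_acc = max(solver.val_acc_history)
    print 'lr %e reg %e val accuracy: %f' % (lr, reg, val_acc)
    if val_acc > best_val_acc:
        best_val_acc, best_model = val_acc, model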
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation |
10,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 2a
Our first look at the data will be focused on the time variations of the accidents
Step1: Read the dataframe
We have loaded the years 2010 to 2014 into the SQL database. We can directly extract the counts for each month of each year from the database, and we will do so throughout this whole section.
We use the Characteristics table since each accident corresponds to one entry in this table. Furthermore, this table contains a column for the date and time of the accident stored in the convenient datetime format.
Step2: Evolution over the years
Step3: It is pleasing to see that the number of accidents has been overall decreasing since 2010, although 2014 is slightly higher than 2013.
Number of accidents for each month
Let's see if certain months have more accidents than others. We need to normalize per day this time since the difference between the longest and shortest months corresponds to ~10% of the length of a month.
Step4: The 5 years we have follow the same trends overall. We can safely average over them. | Python Code:
from CSVtoSQLconverter import load_sql_engine
sqlEngine = load_sql_engine()
import pandas as pd
import numpy as np
# Provides better color palettes
import seaborn as sns
from pandas import DataFrame,Series
import matplotlib as mpl
import matplotlib.pyplot as plt
# Command to display the plots in the iPython Notebook
%matplotlib inline
import matplotlib.patches as mpatches
mpl.style.use('seaborn-whitegrid')
plt.style.use('seaborn-talk')
Explanation: Section 2a
Our first look at the data will be focused on the time variations of the accidents:
- What is the evolution of the number of accidents over the years ?
- Are there more accidents certain months of the year ? Certain days of the week ?
End of explanation
PerMonth = pd.read_sql_query('''SELECT YEAR(datetime), MONTH(datetime), DAY(LAST_DAY(datetime)),
COUNT(`accident id`) FROM characteristics
GROUP BY YEAR(datetime), MONTH(datetime);''',
sqlEngine)
PerMonth.head()
PerMonth.rename(columns={'YEAR(datetime)':'year','MONTH(datetime)':'month',
'DAY(LAST_DAY(datetime))':'number days',
'COUNT(`accident id`)':'accident count'},inplace=True)
PerMonth.head()
Explanation: Read the dataframe
We have loaded the years 2010 to 2014 into the SQL database. We can directly extract the counts for each month of each year from the database, and we will do so throughout this whole section.
We use the Characteristics table since each accident corresponds to one entry in this table. Furthermore, this table contains a column for the date and time of the accident stored in the convenient datetime format.
End of explanation
PerYear = PerMonth.groupby(['year'],as_index=False).sum()
PerYear
g = sns.factorplot(x="year",y='accident count',
data=PerYear, kind='bar', size=5, aspect=2.0)
Explanation: Evolution over the years
End of explanation
PerMonthNorm = PerMonth.copy()
PerMonthNorm['per day'] = PerMonthNorm['accident count'] / PerMonthNorm['number days']
PerMonthNorm.head()
sns.factorplot(x='month', y='per day', hue="year", data=PerMonthNorm, size=5,aspect=2.0)
Explanation: It is pleasing to see that the number of accidents has been overall decreasing since 2010, although 2014 is slightly higher than 2013.
Number of accidents for each month
Let's see if certain months have more accidents than others. We need to normalize per day this time since the difference between the longest and shortest months corresponds to ~10% of the length of a month.
End of explanation
PerMonthMean = PerMonth.groupby('month', as_index=False).mean()
PerMonthMean['per day'] = PerMonthMean['accident count'] / PerMonthMean['number days']
PerMonthMean
sns.factorplot(x='month', y='per day', data=PerMonthMean, size=5,aspect=2.0)
Explanation: The 5 years we have follow the same trends overall. We can safely average over them.
End of explanation |
10,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive Demo for Metrics
command line executables
Step1: Load trajectories
Step2: Load KITTI files with entries of the first three rows of $\mathrm{SE}(3)$ matrices per line (no timestamps)
Step3: ...or load a ROS bagfile with geometry_msgs/PoseStamped, geometry_msgs/TransformStamped, geometry_msgs/PoseWithCovarianceStamped or nav_msgs/Odometry topics
Step4: ... or load TUM files with 3D position and orientation quaternion per line ($x$ $y$ $z$ $q_x$ $q_y$ $q_z$ $q_w$)
Step5: APE
Algorithm and API explanation
Step6: RPE
Algorithm and API explanation
Step7: Do stuff with the result objects | Python Code:
from evo.tools import log
log.configure_logging()
from evo.tools import plot
from evo.tools.plot import PlotMode
from evo.core.metrics import PoseRelation, Unit
from evo.tools.settings import SETTINGS
# temporarily override some package settings
SETTINGS.plot_figsize = [6, 6]
SETTINGS.plot_split = True
SETTINGS.plot_usetex = False
# magic plot configuration
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib notebook
# interactive widgets configuration
import ipywidgets
check_opts_ape = {"align": False, "correct_scale": False, "show_plot": True}
check_boxes_ape=[ipywidgets.Checkbox(description=desc, value=val) for desc, val in check_opts_ape.items()]
check_opts_rpe = {"align": False, "correct_scale": False, "all_pairs": False, "show_plot": True}
check_boxes_rpe=[ipywidgets.Checkbox(description=desc, value=val) for desc, val in check_opts_rpe.items()]
delta_input = ipywidgets.FloatText(value=1.0, description='delta', disabled=False, color='black')
delta_unit_selector=ipywidgets.Dropdown(
options={u.value: u for u in Unit if u is not Unit.seconds},
value=Unit.frames, description='delta_unit'
)
plotmode_selector=ipywidgets.Dropdown(
options={p.value: p for p in PlotMode},
value=PlotMode.xy, description='plot_mode'
)
pose_relation_selector=ipywidgets.Dropdown(
options={p.value: p for p in PoseRelation},
value=PoseRelation.translation_part, description='pose_relation'
)
Explanation: Interactive Demo for Metrics
command line executables: see README.md
algorithm documentation: metrics.py API & Algorithm Documentation
...some modules and settings for this demo:
End of explanation
from evo.tools import file_interface
from evo.core import sync
Explanation: Load trajectories
End of explanation
traj_ref = file_interface.read_kitti_poses_file("../test/data/KITTI_00_gt.txt")
traj_est = file_interface.read_kitti_poses_file("../test/data/KITTI_00_ORB.txt")
Explanation: Load KITTI files with entries of the first three rows of $\mathrm{SE}(3)$ matrices per line (no timestamps):
End of explanation
from rosbags.rosbag1 import Reader as Rosbag1Reader
with Rosbag1Reader("../test/data/ROS_example.bag") as reader:
traj_ref = file_interface.read_bag_trajectory(reader, "groundtruth")
traj_est = file_interface.read_bag_trajectory(reader, "ORB-SLAM")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
Explanation: ...or load a ROS bagfile with geometry_msgs/PoseStamped, geometry_msgs/TransformStamped, geometry_msgs/PoseWithCovarianceStamped or nav_msgs/Odometry topics:
End of explanation
traj_ref = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_groundtruth.txt")
traj_est = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_ORB_kf_mono.txt")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
print(traj_ref)
print(traj_est)
Explanation: ... or load TUM files with 3D position and orientation quaternion per line ($x$ $y$ $z$ $q_x$ $q_y$ $q_z$ $q_w$):
End of explanation
import evo.main_ape as main_ape
import evo.common_ape_rpe as common
count = 0
results = []
def callback_ape(pose_relation, align, correct_scale, plot_mode, show_plot):
global results, count
est_name="APE Test #{}".format(count)
result = main_ape.ape(traj_ref, traj_est, est_name=est_name,
pose_relation=pose_relation, align=align, correct_scale=correct_scale)
count += 1
results.append(result)
if show_plot:
fig = plt.figure()
ax = plot.prepare_axis(fig, plot_mode)
plot.traj(ax, plot_mode, traj_ref, style="--", alpha=0.5)
plot.traj_colormap(
ax, result.trajectories[est_name], result.np_arrays["error_array"], plot_mode,
min_map=result.stats["min"], max_map=result.stats["max"])
_ = ipywidgets.interact_manual(callback_ape, pose_relation=pose_relation_selector, plot_mode=plotmode_selector,
**{c.description: c.value for c in check_boxes_ape})
Explanation: APE
Algorithm and API explanation: see here
Interactive APE Demo
Run the code below, configure the parameters in the GUI and press the update button.
(uses the trajectories loaded above)
End of explanation
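The same metric can also be computed without the widget machinery, for example (a small sketch that only uses calls already shown above):
result = main_ape.ape(traj_ref, traj_est, est_name="APE (direct call)",
                      pose_relation=PoseRelation.translation_part,
                      align=True, correct_scale=False)
result.stats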
import evo.main_rpe as main_rpe
count = 0
results = []
def callback_rpe(pose_relation, delta, delta_unit, all_pairs, align, correct_scale, plot_mode, show_plot):
global results, count
est_name="RPE Test #{}".format(count)
result = main_rpe.rpe(traj_ref, traj_est, est_name=est_name,
pose_relation=pose_relation, delta=delta, delta_unit=delta_unit,
all_pairs=all_pairs, align=align, correct_scale=correct_scale,
support_loop=True)
count += 1
results.append(result)
if show_plot:
fig = plt.figure()
ax = plot.prepare_axis(fig, plot_mode)
plot.traj(ax, plot_mode, traj_ref, style="--", alpha=0.5)
plot.traj_colormap(
ax, result.trajectories[est_name], result.np_arrays["error_array"], plot_mode,
min_map=result.stats["min"], max_map=result.stats["max"])
_ = ipywidgets.interact_manual(callback_rpe, pose_relation=pose_relation_selector, plot_mode=plotmode_selector,
delta=delta_input, delta_unit=delta_unit_selector,
**{c.description: c.value for c in check_boxes_rpe})
Explanation: RPE
Algorithm and API explanation: see here
Interactive RPE Demo
Run the code below, configure the parameters in the GUI and press the update button.
(uses the trajectories loaded above, alignment only useful for visualization here)
End of explanation
import pandas as pd
from evo.tools import pandas_bridge
df = pd.DataFrame()
for result in results:
df = pd.concat((df, pandas_bridge.result_to_df(result)), axis="columns")
df
df.loc["stats"]
Explanation: Do stuff with the result objects:
End of explanation |
10,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data | Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
df = pd.read_csv("hanford.csv")
df
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
df.describe()
df['Exposure'].max() - df['Exposure'].min()
df['Mortality'].max() - df['Mortality'].min()
df['Exposure'].quantile(q=0.25)
df['Exposure'].quantile(q=0.5)
df['Exposure'].quantile(q=0.75)
iqr_ex = df['Exposure'].quantile(q=0.75) - df['Exposure'].quantile(q=0.25)
iqr_ex
df['Mortality'].quantile(q=0.25)
df['Mortality'].quantile(q=0.5)
df['Mortality'].quantile(q=0.75)
iqr_mort = df['Mortality'].quantile(q=0.75) - df['Mortality'].quantile(q=0.25)
iqr_mort
df.std()
Explanation: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
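The logistic regression itself is not shown in this excerpt. One possible continuation (a sketch; labelling counties above the median mortality as "high" is just an illustrative choice) would be:
# Label counties with above-median mortality as high (1) and fit on Exposure
df['high_mortality'] = (df['Mortality'] > df['Mortality'].median()).astype(int)
lm = LogisticRegression()
lm.fit(df[['Exposure']], df['high_mortality'])
lm.score(df[['Exposure']], df['high_mortality'])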
End of explanation |
10,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
resampling
the series does not have a frequency set and we want one
the series does not have the frequency we want
Step1: convert hourly to 45 min frequency and fill data
ffill --> forward fill --> fill each new slot with the previous available value
bfill
Step2: resampling better option to not lose all the data | Python Code:
rng = pd.date_range('1/1/2011', periods=72, freq='H')
rng[1:4]
ts = pd.Series(list(range(len(rng))), index=rng)
ts.head()
Explanation: resampling
the series does not have a frequency set and we want one
the series does not have the frequency we want
End of explanation
converted = ts.asfreq('45Min', method='ffill')
converted.head(10)
ts.shape
converted.shape
converted2 = ts.asfreq('3H')
converted2.head()
Explanation: convert hourly to 45 min frequency and fill data
ffill --> forward fill --> fill each new slot with the previous available value
bfill
End of explanation
#mean of 0 and 1, 2 and 3 etc
ts.resample('2H').mean()[0:10]
#resampling events in irregular time series
irreq_ts = ts[ list( np.random.choice( a = list( range( len(ts))), size=10, replace=False ))]
irreq_ts
irreq_ts = irreq_ts.sort_index()
irreq_ts
irreq_ts.resample('H').fillna( method='ffill', limit=5)
irreq_ts.resample('H').count()
Explanation: resampling is often the better option when we don't want to lose the data in between: asfreq just selects values at the new frequency, while resample aggregates them
End of explanation |
10,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-linear recharge models
R.A. Collenteur, University of Graz
This notebook explains the use of the RechargeModel stress model to simulate the combined effect of precipitation and potential evaporation on the groundwater levels. For the computation of the groundwater recharge, three recharge models are currently available
Step1: Read Input data
Input data handling is similar to other stressmodels. The only thing that is necessary to check is that the precipitation and evaporation are provided in mm/day. This is necessary because the parameters for the non-linear recharge models are defined in mm for the length unit and days for the time unit. It is possible to use other units, but this would require manually setting the initial values and parameter boundaries for the recharge models.
Step2: Make a basic model
The normal workflow may be used to create and calibrate the model.
1. Create a Pastas Model instance
2. Choose a recharge model. All recharge models can be accessed through the recharge subpackage (ps.rch).
3. Create a RechargeModel object and add it to the model
4. Solve and visualize the model
Step3: Analyze the estimated recharge flux
After the parameter estimation we can take a look at the recharge flux computed by the model. The flux is easy to obtain using the get_stress method of the model object, which automatically provides the optimal parameter values that were just estimated. After this, we can for example look at the yearly recharge flux estimated by the Pastas model. | Python Code:
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions(numba=True)
ps.set_log_level("INFO")
Explanation: Non-linear recharge models
R.A. Collenteur, University of Graz
This notebook explains the use of the RechargeModel stress model to simulate the combined effect of precipitation and potential evaporation on the groundwater levels. For the computation of the groundwater recharge, three recharge models are currently available:
Linear (Berendrecht et al., 2003; von Asmuth et al., 2008)
Berendrecht (Berendrecht et al., 2006)
FlexModel (Collenteur et al., in 2021)
The first model is a simple linear function of precipitation and potential evaporation, while the latter two simulate a non-linear response of recharge to precipitation using soil-water balance concepts. Detailed descriptions of these models can be found in the articles listed in the References at the end of this notebook.
<div class="alert alert-info">
<b>Tip</b>
To run this notebook and the related non-linear recharge models, it is strongly recommended to install Numba (http://numba.pydata.org). This Just-In-Time (JIT) compiler compiles the computationally intensive part of the recharge calculation, making the non-linear model as fast as the Linear recharge model.
</div>
End of explanation
head = pd.read_csv("../data/B32C0639001.csv", parse_dates=['date'],
index_col='date', squeeze=True)
# Make this millimeters per day
evap = ps.read_knmi("../data/etmgeg_260.txt", variables="EV24").series * 1e3
rain = ps.read_knmi("../data/etmgeg_260.txt", variables="RH").series * 1e3
ps.plots.series(head, [evap, rain], figsize=(10,6),
labels=["Head [m]", "Evap [mm/d]", "Rain [mm/d]"]);
Explanation: Read Input data
Input data handling is similar to other stressmodels. The only thing that is necessary to check is that the precipitation and evaporation are provided in mm/day. This is necessary because the parameters for the non-linear recharge models are defined in mm for the length unit and days for the time unit. It is possible to use other units, but this would require manually setting the initial values and parameter boundaries for the recharge models.
End of explanation
ml = ps.Model(head)
# Select a recharge model
rch = ps.rch.FlexModel()
#rch = ps.rch.Berendrecht()
#rch = ps.rch.Linear()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Gamma, name="rch")
ml.add_stressmodel(rm)
ml.solve(noise=True, tmin="1990", report="basic")
ml.plots.results(figsize=(10,6));
Explanation: Make a basic model
The normal workflow may be used to create and calibrate the model.
1. Create a Pastas Model instance
2. Choose a recharge model. All recharge models can be accessed through the recharge subpackage (ps.rch).
3. Create a RechargeModel object and add it to the model
4. Solve and visualize the model
End of explanation
recharge = ml.get_stress("rch").resample("A").sum()
ax = recharge.plot.bar(figsize=(10,3))
ax.set_xticklabels(recharge.index.year)
plt.ylabel("Recharge [mm/year]");
Explanation: Analyze the estimated recharge flux
After the parameter estimation we can take a look at the recharge flux computed by the model. The flux is easy to obtain using the get_stress method of the model object, which automatically provides the optimal parameter values that were just estimated. After this, we can for example look at the yearly recharge flux estimated by the Pastas model.
End of explanation |
10,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import important modules and declare important directories
Step1: This is a function that we'll use later to plot the results of a linear SVM classifier
Step2: Load in the sample JSON file and view its contents
Step3: Now, let's create two lists for all the reviews in Ohio
Step4: Let's take a look at the following regression (information is correlated with review length)
Step5: Let's try using dictionary sentiment categories as dependent variables
Step6: NOTE
Step7: Let's plot the overall distribution of ratings aggregated across all of the states
Step8: Let's plot the rating distribution of reviews within each of the states.
Step9: Now let's try to build a simple linear support vector machine
Note that all support vector machine algorithms rely on drawing a separating hyperplane amongst the different classes. Such a hyperplane is not necessarily guaranteed to exist. For a complete set of conditions that must be satisfied for this to be an appropriate algorithm to use, please see below
Step10: In order to use the machine learning algorithms in Sci-Kit learn, we first have to initialize a CountVectorizer object. We can use this object creates a matrix representation of each of our words. There are many options that we can specify when we initialize our CountVectorizer object (see documentation for full list) but they essentially all relate to how the words are represented in the final matrix.
Step11: Create dataframe to hold our results from the classification algorithms
Step12: Lets call a linear SVM instance from SK Learn have it train on our subset of reviews. We'll output the results to an output dataframe and then calculate a total accuracy percentage.
Step13: SKLearn uses what's known as a pipeline. Instead of having to declare each of these objects on their own and passing them into each other, we can just create one object with all the necessary options specified and then use that to run the algorithm. For each pipeline below, we specify the vector to be the CountVectorizer object we have defined above, set it to use tfidf, and then specify the classifier that we want to use.
Below, we create a separate pipeline for Random Forest, a Bagged Decision Tree, and Multinomial Logistic Regression. We then append the results to the dataframe that we've already created.
Step14: Test results using all of the states
0.5383 from Naive TF-IDF Linear SVM
0.4567 from Naive TF-IDF Linear SVM using Harvard-IV dictionary
0.5241 from Naive TF-IDF Bagged DT using 100 estimators
0.496 from Naive TF-IDF Bagged DT using 100 estimators and Harvard-IV dictionary
0.5156 from Naive TF-IDF RandomForest and Harvard-IV dictionary
0.53 from Naive TF-IDF RF
0.458 from Naive TF-IDF SVM
As you can see, none of the above classifiers performs significantly better than a fair coin toss. This is most likely due to the heavily skewed distribution of review ratings. There are many reviews that receive 4 or 5 stars, therefore it is likely that the language associated with each review is being confused with each other. We can confirm this by looking at the "confusion matrix" of our predictions.
Step15: Each row and column corresponds to a rating number. For example, element (1,1) is the number of 1 star reviews that were correctly classified. Element (1,2) is the number of 1 star reviews that were incorrectly classified as 2 stars. Therefore, the sum of the diagonal represents the total number of correctly classified reviews. As you can see, the bagged decision tree classifier is classifying many four starred reviews as five starred reviews and vice versa.
This indicates that we can improve our results by using more aggregated categories. For example, we can call all four and five star reviews as "good" and all other review ratings as "bad".
Step16: We draw a heat map for each state below. Latitude is on the Y axis and longitude is on the X axis. The color coding is as follows
Step17: We run the following linear regression model for each of the states | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import json
import pandas as pd
import csv
import os
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.pipeline import Pipeline
import numpy as np
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from pymongo import MongoClient
from datetime import datetime
Explanation: Import important modules and declare important directories
End of explanation
def plot_coefficients(classifier, feature_names, top_features=20):
coef = classifier.coef_.ravel()[0:200]
top_positive_coefficients = np.argsort(coef)[-top_features:]
top_negative_coefficients = np.argsort(coef)[:top_features]
top_coefficients = np.hstack([top_negative_coefficients, top_positive_coefficients])
#create plot
plt.figure(figsize=(15, 5))
colors = ['red' if c < 0 else 'blue' for c in coef[top_coefficients]]
plt.bar(np.arange(2 * top_features), coef[top_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 1 + 2 * top_features), feature_names[top_coefficients], rotation=60, ha='right')
plt.show()
#def bayesian_average()
#This is the main folder where all the modules and JSON files are stored on my computer.
#You need to change this to the folder path specific to your computer
file_directory = "/Users/ed/yelp-classification/"
reviews_file = "cleaned_reviews_states_2010.json"
biz_file = "cleaned_business_data.json"
Explanation: This is a function that we'll use later to plot the results of a linear SVM classifier
End of explanation
#This is a smaller subset of our overall Yelp data
#I randomly chose 5000 reviews from each state and filed them into the JSON file
#Note that for the overall dataset, we have about 2 million reviews.
#That's why we need to use a data management system like MongoDB in order to hold all our data
#and to more efficiently manipulate it
reviews_json = json.load(open(file_directory+reviews_file))
biz_json = json.load(open(file_directory+biz_file))
for key in reviews_json.keys():
reviews_json[key] = reviews_json[key][0:5000]
#Let's see how reviews_json is set up
#changed this for python 3
print(reviews_json.keys())
reviews_json['OH'][0]
#We can see that on the highest level, the dictionary keys are the different states
#Let's look at the first entry under Ohio
print(reviews_json['OH'][0]['useful'])
#So for each review filed under Ohio, we have many different attributes to choose from
#Let's look at what the review and rating was for the first review filed under Ohio
print(reviews_json['OH'][0]['text'])
print(reviews_json['OH'][0]['stars'])
Explanation: Load in the sample JSON file and view its contents
End of explanation
#We want to split up reviews between text and labels for each state
reviews = []
stars = []
cool = []
useful = []
funny = []
compliment = []
cunumber = []
for key in reviews_json.keys():
for review in reviews_json[key]:
reviews.append(review['text'])
stars.append(review['stars'])
cool.append(review['cool'])
useful.append(review['useful'])
funny.append(review['funny'])
compliment.append(review['funny']+review['useful']+review['cool'])
cunumber.append(review['useful']+review['cool'])
#Just for demonstration, let's pick out the same review example as above but from our respective lists
print(reviews[0])
print(stars[0])
print(cool[0])
print(useful[0])
print(funny[0])
reviews_json['OH'][1]['cool']+1
Explanation: Now, let's create lists covering the reviews from all of the states:
One that holds all the review texts
One that holds all the ratings
We also collect the cool, useful, and funny vote counts for each review.
End of explanation
#added 'low_memory=False' after I got a warning about mixed data types
harvard_dict = pd.read_csv('HIV-4.csv',low_memory=False)
negative_words = list(harvard_dict.loc[harvard_dict['Negativ'] == 'Negativ']['Entry'])
positive_words = list(harvard_dict.loc[harvard_dict['Positiv'] == 'Positiv']['Entry'])
#Use word dictionary from Hu and Liu (2004)
#had to use encoding = "ISO-8859-1" to avoid error
negative_words = open('negative-words.txt', 'r',encoding = "ISO-8859-1").read()
negative_words = negative_words.split('\n')
positive_words = open('positive-words.txt', 'r',encoding = "ISO-8859-1").read()
positive_words = positive_words.split('\n')
total_words = negative_words + positive_words
total_words = list(set(total_words))
review_length = []
negative_percent = []
positive_percent = []
for review in reviews:
length_words = len(review.split())
    neg_words = [x.lower() for x in review.split() if x.lower() in negative_words]
    pos_words = [x.lower() for x in review.split() if x.lower() in positive_words]
negative_percent.append(float(len(neg_words))/float(length_words))
positive_percent.append(float(len(pos_words))/float(length_words))
review_length.append(length_words)
regression_df = pd.DataFrame({'stars':stars, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
use_df = pd.DataFrame({'useful':cunumber, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
use_df2 = pd.DataFrame({'useful':cunumber, 'review_length':review_length})
#Standardize dependent variables
std_vars = ['neg_percent', 'positive_percent', 'review_length']
for var in std_vars:
len_std = regression_df[var].std()
len_mu = regression_df[var].mean()
regression_df[var] = [(x - len_mu)/len_std for x in regression_df[var]]
Explanation: Let's take a look at the following regression (information is correlated with review length):
$Rating = \beta_{neg}neg + \beta_{pos}pos + \beta_{num}\text{Std_NumWords} + \epsilon$
Where:
$neg = \frac{\text{Number of Negative Words}}{\text{Total Number of Words}}$
$pos = \frac{\text{Number of Positive Words}}{\text{Total Number of Words}}$
End of explanation
#The R-Squared from using the Harvard Dictionary is 0.1 but with the Hu & Liu word dictionary
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = regression_df.stars
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
#The R-Squared from using the Harvard Dictionary is 0.1 but with the Hu & Liu word dictionary
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = use_df2.useful
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
Explanation: Let's try using the dictionary sentiment categories as explanatory variables, both for the star rating and for the number of useful and cool votes a review receives
End of explanation
# Note: this block relies on vectorizer, train_reviews, train_ratings, test_reviews and output,
# which are defined further down in this notebook, so run those cells first.
multi_logit = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
multi_logit.set_params(clf__alpha=1, clf__fit_prior = True, clf__class_prior = None).fit(train_reviews, train_ratings)
output['multi_logit'] = multi_logit.predict(test_reviews)
x = np.array(regression_df.stars)
#beta = [3.3648, -0.3227 , 0.5033]
y = [int(round(i)) for i in list(est2.fittedvalues)]
y = np.array(y)
errors = np.subtract(x,y)
np.sum(errors)
# fig, ax = plt.subplots(figsize=(5,5))
# ax.plot(x, x, 'b', label="data")
# ax.plot(x, y, 'o', label="ols")
# #ax.plot(x, est2.fittedvalues, 'r--.', label="OLS")
# #ax.plot(x, iv_u, 'r--')
# #ax.plot(x, iv_l, 'r--')
# ax.legend(loc='best');
#Do a QQ plot of the data
fig = sm.qqplot(errors)
plt.show()
Explanation: NOTE: BLUE Estimator does not require normality of errors
The Gauss-Markov Theorem states that the ordinary least squares estimate is the best linear unbiased estimator (BLUE) of the regression coefficients ('best' meaning it has the smallest variance among all linear unbiased estimators) as long as the errors:
(1) have mean zero
(2) are uncorrelated
(3) have constant variance
Now let's try it using a multinomial Naive Bayes classifier (sklearn's MultinomialNB, stored here in the multi_logit pipeline)
End of explanation
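The Gauss-Markov conditions above can be probed empirically with standard statsmodels diagnostics; a minimal sketch, assuming the fitted result est2 and the design matrix X from the OLS above are still in scope:
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

resid = est2.resid
# Breusch-Pagan: a small p-value suggests non-constant error variance (condition 3)
bp_lm, bp_pvalue, bp_f, bp_f_pvalue = het_breuschpagan(resid, X)
# Durbin-Watson: values near 2 are consistent with uncorrelated errors (condition 2)
dw = durbin_watson(resid)
print(bp_pvalue, dw)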
star_hist = pd.DataFrame({'Ratings':stars})
star_hist.plot.hist()
cooluse_hist = pd.DataFrame({'Ratings':cunumber})
cooluse_hist.plot.hist(range=[0, 6])
Explanation: Let's plot the overall distribution of ratings aggregated across all of the states
End of explanation
df_list = []
states = list(reviews_json.keys())
for state in states:
stars_state = []
for review in reviews_json[state]:
stars_state.append(review['stars'])
star_hist = pd.DataFrame({'Ratings':stars_state})
df_list.append(star_hist)
for i in range(0, len(df_list)):
print(states[i] + " Rating Distribution")
df_list[i].plot.hist()
plt.show()
Explanation: Let's plot the rating distribution of reviews within each of the states.
End of explanation
#First let's separate out our dataset into a training sample and a test sample
#We specify a training sample percentage of 80% of our total dataset. This is just a rule of thumb
training_percent = 0.8
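# Note: the reviews list was built state by state, so this unshuffled 80/20 split places the
# last state(s) almost entirely in the test set; shuffling before splitting would give a more
# representative evaluation.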
train_reviews = reviews[0:int(len(reviews)*training_percent)]
test_reviews = reviews[int(len(reviews)*training_percent):len(reviews)]
train_ratings = stars[0:int(len(stars)*training_percent)]
test_ratings = stars[int(len(stars)*training_percent):len(stars)]
Explanation: Now let's try to build a simple linear support vector machine
Note that the support vector machine algorithm relies on drawing a separating hyperplane between the different classes, and such a hyperplane is not necessarily guaranteed to exist. For a complete set of conditions that must be satisfied for this to be an appropriate algorithm to use, please see below:
http://www.unc.edu/~normanp/890part4.pdf
The following is also a good, and more general, introduction to Support Vector Machines:
http://web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf
End of explanation
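As a toy illustration on synthetic data (not the Yelp reviews), LinearSVC fits exactly such a separating hyperplane w·x + b = 0 between two classes:
from sklearn.datasets import make_blobs

X_toy, y_toy = make_blobs(n_samples=100, centers=2, random_state=0)
toy_svm = LinearSVC(C=1.0).fit(X_toy, y_toy)
print(toy_svm.coef_, toy_svm.intercept_)  # the hyperplane's normal vector and offset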
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
vocabulary = total_words, \
                             max_features = 200)  # note: max_features is ignored when a fixed vocabulary is supplied
train_data_features = vectorizer.fit_transform(train_reviews)
test_data_features = vectorizer.transform(test_reviews)  # transform only; the vocabulary was fixed when the vectorizer was created
Explanation: In order to use the machine learning algorithms in Sci-Kit Learn, we first have to initialize a CountVectorizer object. This object creates a matrix representation of each of our words. There are many options we can specify when initializing the CountVectorizer object (see the documentation for the full list), but they essentially all relate to how the words are represented in the final matrix.
End of explanation
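To see concretely what the vectorizer produces, here is a tiny, hypothetical two-document example (the sentences are made up):
demo_vectorizer = CountVectorizer(analyzer="word")
demo_matrix = demo_vectorizer.fit_transform(["great food great service", "terrible food"])
print(demo_vectorizer.get_feature_names())  # the learned vocabulary
print(demo_matrix.toarray())                # one row of word counts per document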
output = pd.DataFrame( data={"Reviews": test_reviews, "Rating": test_ratings} )
Explanation: Create dataframe to hold our results from the classification algorithms
End of explanation
#Let's do the same exercise as above but use TF-IDF, you can learn more about TF-IDF here:
#https://nlp.stanford.edu/IR-book/html/htmledition/tf-idf-weighting-1.html
tf_transformer = TfidfTransformer(use_idf=True)
train_data_features = tf_transformer.fit_transform(train_data_features)
test_data_features = tf_transformer.transform(test_data_features)  # reuse the IDF weights learned from the training set
lin_svm = LinearSVC()  # instantiate the linear SVM classifier before fitting
lin_svm = lin_svm.fit(train_data_features, train_ratings)
lin_svm_result = lin_svm.predict(test_data_features)
output['lin_svm'] = lin_svm_result
output['Accurate'] = np.where(output['Rating'] == output['lin_svm'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
#Here we plot the features with the highest absolute value coefficient weight
plot_coefficients(lin_svm, vectorizer.get_feature_names())
Explanation: Let's create a linear SVM instance from SK Learn and have it train on our subset of reviews. We'll output the results to our output dataframe and then calculate a total accuracy percentage.
End of explanation
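The same accuracy can also be computed with sklearn's metrics module, which additionally gives a per-class breakdown; a short optional check reusing the predictions above:
print(metrics.accuracy_score(test_ratings, lin_svm_result))
print(metrics.classification_report(test_ratings, lin_svm_result))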
# random_forest = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', RandomForestClassifier())])
# random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
# output['random_forest'] = random_forest.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print accurate_percentage
# bagged_dt = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', BaggingClassifier())])
# bagged_dt.set_params(clf__n_estimators=100, clf__n_jobs=1).fit(train_reviews, train_ratings)
# output['bagged_dt'] = bagged_dt.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['bagged_dt'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print accurate_percentage
multi_logit = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
multi_logit.set_params(clf__alpha=1, clf__fit_prior = True, clf__class_prior = None).fit(train_reviews, train_ratings)
output['multi_logit'] = multi_logit.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['multi_logit'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
random_forest = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier())])
random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
output['random_forest'] = random_forest.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
Explanation: SKLearn uses what's known as a pipeline. Instead of declaring each of these objects separately and passing them into one another, we can create a single object with all the necessary options specified and then use it to run the algorithm. For each pipeline below, we specify the vectorizer to be the CountVectorizer object defined above, enable tf-idf weighting, and then specify the classifier we want to use.
Below, we create a separate pipeline for Random Forest, a Bagged Decision Tree, and Multinomial Naive Bayes (the MultinomialNB classifier, held in the multi_logit pipeline). We then append the results to the dataframe we've already created.
End of explanation
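Because a pipeline behaves like a single estimator, it can also be passed straight to cross-validation, which gives a less optimistic estimate than a single train/test split; a minimal sketch using the training reviews defined above:
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(multi_logit, train_reviews, train_ratings, cv=5)
print(cv_scores.mean(), cv_scores.std())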
# Note: bagged_dt is the Bagged Decision Tree pipeline that is commented out above; it must be
# defined and fit before this line will run.
print(metrics.confusion_matrix(test_ratings, bagged_dt.predict(test_reviews), labels = [1, 2, 3, 4, 5]))
Explanation: Test results using all of the states
0.5383 from Naive TF-IDF Linear SVM
0.4567 from Naive TF-IDF Linear SVM using Harvard-IV dictionary
0.5241 from Naive TF-IDF Bagged DT using 100 estimators
0.496 from Naive TF-IDF Bagged DT using 100 estimators and Harvard-IV dictionary
0.5156 from Naive TF-IDF RandomForest and Harvard-IV dictionary
0.53 from Naive TF-IDF RF
0.458 from Naive TF-IDF SVM
As you can see, none of the above classifiers performs significantly better than a fair coin toss. This is most likely due to the heavily skewed distribution of review ratings. There are many reviews that receive 4 or 5 stars, therefore it is likely that the language associated with each review is being confused with each other. We can confirm this by looking at the "confusion matrix" of our predictions.
End of explanation
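Before reading too much into these numbers, it helps to compare them against the majority-class baseline implied by the skewed rating distribution; a quick sketch:
from collections import Counter

majority_label, majority_count = Counter(test_ratings).most_common(1)[0]
# accuracy obtained by always predicting the most common rating
print(majority_label, float(majority_count) / float(len(test_ratings)))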
for review in reviews_json[list(reviews_json.keys())[0]]:  # list() needed because dict.keys() is not subscriptable in Python 3
print(type(review['date']))
break
reviews_json.keys()
latitude_list = []
longitude_list = []
stars_list = []
count_list = []
state_list = []
for biz in biz_json:
stars_list.append(biz['stars'])
latitude_list.append(biz['latitude'])
longitude_list.append(biz['longitude'])
count_list.append(biz['review_count'])
state_list.append(biz['state'])
biz_df = pd.DataFrame({'ratings':stars_list, 'latitude':latitude_list, 'longitude': longitude_list, 'review_count': count_list, 'state':state_list})
Explanation: Each row and column corresponds to a rating number. For example, element (1,1) is the number of 1 star reviews that were correctly classified. Element (1,2) is the number of 1 star reviews that were incorrectly classified as 2 stars. Therefore, the sum of the diagonal represents the total number of correctly classified reviews. As you can see, the bagged decision tree classifier is classifying many four starred reviews as five starred reviews and vice versa.
This indicates that we can improve our results by using more aggregated categories. For example, we can call all four and five star reviews as "good" and all other review ratings as "bad".
End of explanation
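A minimal sketch of that coarser labelling, assuming 4-5 stars count as "good" and everything else as "bad", and reusing the Naive Bayes pipeline on the new labels:
from sklearn.base import clone

train_binary = ['good' if r >= 4 else 'bad' for r in train_ratings]
test_binary = ['good' if r >= 4 else 'bad' for r in test_ratings]
binary_clf = clone(multi_logit).fit(train_reviews, train_binary)
print(np.mean(binary_clf.predict(test_reviews) == np.array(test_binary)))  # binary accuracy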
states = [u'OH', u'NC', u'WI', u'IL', u'AZ', u'NV']
cmap, norm = mpl.colors.from_levels_and_colors([1, 2, 3, 4, 5], ['red', 'orange', 'yellow', 'green', 'blue'], extend = 'max')
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
plt.ylim(min(state_df_filt.latitude), max(state_df_filt.latitude))
plt.xlim(min(state_df_filt.longitude), max(state_df_filt.longitude))
plt.scatter(state_df_filt.longitude, state_df_filt.latitude, c=state_df_filt.ratings, cmap=cmap, norm=norm)
plt.show()
    print(state)
Explanation: We draw a heat map for each state below. Latitude is on the Y axis and longitude is on the X axis. The color coding is as follows:
Red = Rating of 1
Orange = Rating of 2
Yellow = Rating of 3
Green = Rating of 4
Blue = Rating of 5
End of explanation
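If an explicit legend is preferred over the prose color key above, a small optional addition inside the plotting loop would be (hypothetical, not in the original code):
import matplotlib.patches as mpatches

legend_handles = [mpatches.Patch(color=c, label='%d stars' % s)
                  for s, c in zip([1, 2, 3, 4, 5], ['red', 'orange', 'yellow', 'green', 'blue'])]
plt.legend(handles=legend_handles, loc='best')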
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
                             & (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())].copy()  # .copy() avoids SettingWithCopyWarning when adding the standardized columns below
state_df_filt['longitude'] = (state_df_filt.longitude - state_df.longitude.mean())/state_df.longitude.std()
state_df_filt['latitude'] = (state_df_filt.latitude - state_df.latitude.mean())/state_df.latitude.std()
state_df_filt['review_count'] = (state_df_filt.review_count - state_df.review_count.mean())/state_df.review_count.std()
X = np.column_stack((state_df_filt.longitude, state_df_filt.latitude, state_df_filt.review_count))
y = state_df_filt.ratings
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
    print(state)
Explanation: We run the following linear regression model for each of the states:
$Rating = \beta_{1} Longitude + \beta_{2} Latitude + \beta_{3} Num of Reviews + \epsilon$
End of explanation |
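Note that, unlike the earlier rating regressions, this specification has no intercept; if one is wanted (average ratings sit well above zero), the same sm.add_constant pattern used earlier applies. A hedged variant for a single state's filtered data:
X_c = sm.add_constant(np.column_stack((state_df_filt.longitude, state_df_filt.latitude, state_df_filt.review_count)))
print(sm.OLS(state_df_filt.ratings, X_c).fit().summary())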