path (string, 7–265 chars) | concatenated_notebook (string, 46–17M chars)
---|---
Advertising.ipynb | ###Markdown
R Project: Predict whether a user will click an Ad 1.1 Introduction 1.1.1 Defining the Question* Create a prediction model that more accurately predicts whether a user will click an Ad. 1.1.2 The Context* A Kenyan entrepreneur has created an online cryptography course and wants to advertise it on her blog. * She currently targets audiences originating from various countries. * In the past, she ran ads to advertise a related course on the same blog and collected data in the process. * She would now like to employ your services as a Data Science Consultant to create a solution that would allow her to determine whether audiences with certain characteristics (i.e. city, gender, country, ad topic, etc.) are likely to click on her ads. 1.1.3 Metrics for Success* Accuracy Score of 85% or above. 1.1.4 Experimental Design* Installing packages and loading libraries needed* Loading the data* Exploratory Data Analysis* Data Cleaning* Visualizations* Modelling: Random Forest* Predictions and Evaluation of the Model* Conclusion 1.1.5 Appropriateness of the Data* Dataset link: [link text](http://bit.ly/IPAdvertisingData)* The columns in the dataset include: * Daily Time Spent on Site * Age * Area Income * Daily Internet Usage * Ad Topic Line * City * Male * Country * Timestamp * Clicked on Ad 1.2 Installing & Loading Necessary Packages
###Code
# Installing packages we need for the project analysis.
install.packages("iterators")
install.packages("caret")
install.packages("caretEnsemble")
install.packages("ggplot2")
install.packages("e1071")
install.packages("randomForest")
install.packages("ggcorrplot")
install.packages('ranger')
install.packages('caTools')
install.packages('rpart.plot')
# Importing Libraries we need for this Project analysis.
library(tidyverse)
library(data.table)
library(ggplot2)
library(lattice)
library(caret)
library(rpart)
library(RColorBrewer)
library("rpart.plot")
###Output
package 'iterators' successfully unpacked and MD5 sums checked
The downloaded binary packages are in
C:\Users\Josephine\AppData\Local\Temp\RtmpyIgHUR\downloaded_packages
package 'caret' successfully unpacked and MD5 sums checked
The downloaded binary packages are in
C:\Users\Josephine\AppData\Local\Temp\RtmpyIgHUR\downloaded_packages
###Markdown
1.3 Loading the Data
###Code
# Reading a csv file
adv <-read_csv("http://bit.ly/IPAdvertisingData")
###Output
Parsed with column specification:
cols(
`Daily Time Spent on Site` = col_double(),
Age = col_double(),
`Area Income` = col_double(),
`Daily Internet Usage` = col_double(),
`Ad Topic Line` = col_character(),
City = col_character(),
Male = col_double(),
Country = col_character(),
Timestamp = col_datetime(format = ""),
`Clicked on Ad` = col_double()
)
###Markdown
1.4 Exploratory Data Analysis
###Code
# Viewing the top observations
head(adv)
# Viewing the bottom observations
tail(adv)
# Checking the number of rows and columns
dim(adv)
###Output
_____no_output_____
###Markdown
There are 1000 rows and 10 columns.
###Code
# checking the types of attributes (columns)
sapply(adv, class)
# checking the summary statistics of the dataset such as the mean
summary(adv)
# Summary information of the dataset
glimpse(adv)
###Output
Observations: 1,000
Variables: 10
$ `Daily Time Spent on Site` <dbl> 68.95, 80.23, 69.47, 74.15, 68.37, 59.99...
$ Age <dbl> 35, 31, 26, 29, 35, 23, 33, 48, 30, 20, ...
$ `Area Income` <dbl> 61833.90, 68441.85, 59785.94, 54806.18, ...
$ `Daily Internet Usage` <dbl> 256.09, 193.77, 236.50, 245.89, 225.58, ...
$ `Ad Topic Line` <chr> "Cloned 5thgeneration orchestration", "M...
$ City <chr> "Wrightburgh", "West Jodi", "Davidton", ...
$ Male <dbl> 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0...
$ Country <chr> "Tunisia", "Nauru", "San Marino", "Italy...
$ Timestamp <dttm> 2016-03-27 00:53:11, 2016-04-04 01:39:0...
$ `Clicked on Ad` <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0...
###Markdown
The glimpse output shows the datatypes of each column and a few observations. 1.5 Data Cleaning Missing values
###Code
# Completeness:
# Checking for missing values by columns
colSums(is.na(adv))
###Output
_____no_output_____
###Markdown
There are no missing values in the dataset from the output. Duplicates
###Code
# checking for duplicates
# The function distinct() [dplyr package] can be used to keep only unique/distinct rows from a data frame.
# If there are duplicate rows, only the first row is preserved.
# It’s an efficient version of the R base function unique().
# unique(adv)
#distinct(adv)
###Output
_____no_output_____
###Markdown
Fixing spaces in the column names
###Code
# Checking the column names
names(adv)
# Replacing spaces in the columns names with an underscore
names(adv) <- gsub(" ", "_", names(adv))
# Confirming the columns names have changed
names(adv)
###Output
_____no_output_____
###Markdown
Outliers
###Code
# Using a boxplot to check for observations far away from other data points.
# Using all three double type columns: specifying each
# labeling the title
# labeling the x axis
# specifying color options
Daily_Time_Spent_on_Site <- adv$Daily_Time_Spent_on_Site
Age <- adv$Age
Daily_Internet_Usage <- adv$Daily_Internet_Usage
boxplot(Daily_Time_Spent_on_Site,Age, Daily_Internet_Usage,
main = "Multiple boxplots for comparision",
at = c(1,2,3),
names = c("Daily_Time_Spent_on_Site", "Age","Daily_Internet_Usage"),
las = 2,
col = c("orange","red","blue"),
border = "brown",
horizontal = TRUE,
notch = TRUE
)
###Output
_____no_output_____
###Markdown
There are no outliers in the three features plotted.
###Code
# Boxplot for the Area Income
# labeling the title
# labeling the x axis
# specifying color options
boxplot(adv$Area_Income,
main = "Area Income Boxplot",
xlab = "Area Income",
col = "orange",
border = "brown",
horizontal = TRUE,
notch = TRUE
)
###Output
_____no_output_____
###Markdown
There are a few outliers on the first quartile of the Area income boxplot. 1.6 Visualizations
###Code
# Stacked bar chart
# Giving a title to the chart
# Labeling the x and y axis
# Setting the color options
# Creating a legend for easier reference
counts <- table(adv$Clicked_on_Ad, adv$Age)
barplot(counts,
main="A stacked bar chart showing Clicked on Ad by Age",
xlab="Age",
ylab = "Frequency",
col=c("darkblue","red"),
legend = rownames(counts))
###Output
_____no_output_____
###Markdown
* 1 shows that the participant clicked on an Ad.* The stacked bar chart shows the distribution of the number of people who clicked on an Ad by age.* The highest age of the participants was 61 and the lowest was 19.* The people who clicked most on Ads were between ages 28 and 36.
###Code
# Stacked bar chart
# Giving a title to the chart
# Labeling the x and y axis
# Setting the color options
# Creating a legend for easier reference
counts <- table(adv$Clicked_on_Ad, adv$Male)
barplot(counts,
main="A stacked bar chart showing Clicked on Ad by Gender",
xlab="Gender",
ylab = "Frequency",
col=c("cyan","green"),
legend = rownames(counts))
###Output
_____no_output_____
###Markdown
* There are slightly more females than males in the dataset.* More females clicked on Ad compared to males.
###Code
# Bar chart of the target variable
counts <- table(adv$Clicked_on_Ad)
barplot(counts,
main="A bar chart showing Clicked on Ad distribution",
xlab="Clicked on Ad or Not",
ylab = "Frequency",
col=c("magenta","gold"),
legend = rownames(counts))
###Output
_____no_output_____
###Markdown
* The data is balanced since the number of people who clicked on Ad and those who did not are equal.
###Code
# A violin plot
# Specifying the x and y variables to be plot
# Setting the color
# Plotting a boxplot inside the violin plot
# Giving a title to the chart
ggplot(adv,
aes(x = Clicked_on_Ad,
y = Daily_Internet_Usage)) +
geom_violin(fill = "cornflowerblue") +
geom_boxplot(width = .2,
fill = "orange",
outlier.color = "orange",
outlier.size = 2) +
labs(title = "Daily internet usage for people who clicked the ad")
###Output
Warning message:
"Continuous x aesthetic -- did you forget aes(group=...)?"
###Markdown
* People who clicked on the Ad have a daily internet usage between 135 and 220.* There are no outliers.
###Code
# Heat map
# Checking the relationship between the variables
# Using Numeric variables only
# ggcorrplot provides the correlation plot used below, so load it here
library(ggcorrplot)
numeric_tbl <- adv %>%
select_if(is.numeric) %>%
select(Daily_Time_Spent_on_Site, Age, Area_Income,Daily_Internet_Usage)
# Calculate the correlations
corr <- cor(numeric_tbl, use = "complete.obs")
ggcorrplot(round(corr, 2),
type = "full", lab = T)
###Output
_____no_output_____
###Markdown
* There is a moderate relationship between daily time spent on the site and daily internet usage.* Other variables have weak relationships. 1.7 Modelling
###Code
# Converting the target as a factor
adv$Clicked_on_Ad = factor(adv$Clicked_on_Ad, levels = c(0,1))
# checking the variable datatypes
sapply(adv, class)
###Output
_____no_output_____
###Markdown
Decision Tree Classifier
###Code
# Using decision tree
# Fitting the model
# Specifying the target and predictor variables
m <- rpart(Clicked_on_Ad ~ . ,
data = adv,
method = "class")
# Plotting the decision tree model
rpart.plot(m)
# Making predictions
# Printing the confusion matrix
p <- predict(m, adv, type ="class")
table(p, adv$Clicked_on_Ad)
# Printing the Accuracy
mean(adv$Clicked_on_Ad == p)
###Output
_____no_output_____
###Markdown
* The model accuracy is 95.7%.* This is a good model for making predictions.* We will now challenge this model using another model. 1.8 Challenging the Solution Random Forest Classifier
###Code
# Training the model
# Setting seed for randomness
set.seed(12)
model <- train(Clicked_on_Ad ~. ,
data = adv,
method = "ranger")
# Printing the model
model
###Output
_____no_output_____
###Markdown
Multi-Linear_Regression On Advertising Data-Set
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
import pylab as pl
data = pd.read_csv('data/Advertising.csv')
data.head()
data = data.drop(['Unnamed: 0'], axis = 1)
data.head()
data.tail()
data.info()
data.describe()
data.index
data.columns
x = data[['TV', 'radio', 'newspaper']]
x.head(2)
y = data['sales']
y.head()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2)
x_train.shape
x_test.shape
y_train.shape
y_test.shape
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(x_train, y_train)
regr.coef_
regr.intercept_
ypred = regr.predict(x_test)
ypred
regr.score(x_test, y_test)  # R^2 on the test set; scoring against ypred itself would always give 1.0
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
mean_absolute_error(y_test, ypred)
mean_squared_error(y_test, ypred)
r2_score(y_test, ypred)
from sklearn.model_selection import cross_val_score
cv_results = cross_val_score(regr, x_train, y_train, cv = 20)
cv_results
np.min(cv_results)
np.max(cv_results)
np.mean(cv_results)
###Output
_____no_output_____
###Markdown
Simple Linear Regression For the Above Problem:
###Code
x = data[['TV']]
x.head()
y = data['sales']
y.head()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size = 0.7)
x_train.shape
x_test.shape
y_train.shape
y_test.shape
from sklearn.linear_model import LinearRegression
regr = LinearRegression()
regr.fit(x_train, y_train)
regr.coef_
regr.intercept_
regr.rank_
yhat = regr.predict(x_test)
yhat
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
mean_absolute_error(y_test, yhat)
mean_squared_error(y_test, yhat)
r2_score(y_test, yhat)
from sklearn.model_selection import cross_val_score
cv_results = cross_val_score(regr, x_train, y_train, cv = 20)
cv_results
np.min(cv_results)
np.max(cv_results)
np.mean(cv_results)
###Output
_____no_output_____
###Markdown
A one-unit increase in TV spend is associated with a 0.0475-unit increase in sales, so a spend of 1000 on TV corresponds to a 47.5-unit increase in sales (this is what the coefficient describes). Calculate the sales due to a spend of $50,000 using the fitted line: y = mx + c, with x = 50, c = intercept and m = coefficient.
###Code
regr.coef_ * 50 + regr.intercept_   # predict sales for x = 50 (i.e. a $50,000 spend); the fitted model above is named regr
###Output
_____no_output_____
###Markdown
Spending $50,000 on TV advertising brings the predicted sales figure to roughly 9,409 units
###Code
X_new = pd.DataFrame({ 'TV': [data.TV.min(), data.TV.max()] })
predictions = regr.predict(X_new)
predictions
data.plot(kind = 'scatter', x='TV', y='sales')
plt.plot(X_new, predictions, c='red')
import statsmodels.formula.api as smf
model = smf.ols(formula = 'sales ~ TV', data = data).fit()
model.pvalues
model.pvalues.loc['TV'] < 0.05
model.rsquared
###Output
_____no_output_____
###Markdown
Performing Multiple Linear Regression
###Code
features = data[['TV', 'radio', 'newspaper']]
target = data[['sales']]
features.shape, target.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, target, train_size=0.8, random_state=4)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
###Output
(160, 3) (40, 3) (160, 1) (40, 1)
###Markdown
As you can see, each split has 3 feature variables (TV, radio, newspaper) and 1 target variable (sales)
###Code
mlr_model = LinearRegression()
mlr_model.fit(X_train, y_train)
mlr_model.coef_
###Output
_____no_output_____
###Markdown
3 coefficients corresponding to 3 variables (TV, radio, newspaper)
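To make the pairing explicit, here is a small optional sketch (added for illustration, assuming mlr_model and X_train from the cells above) that maps each coefficient to its feature name:
###Code
# Optional sketch: pair each fitted coefficient with its feature name.
# Assumes mlr_model and X_train exist from the cells above.
coef_by_feature = dict(zip(X_train.columns, np.ravel(mlr_model.coef_)))
print(coef_by_feature)
###Output
_____no_output_____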
###Code
mlr_model.intercept_
###Output
_____no_output_____
###Markdown
1 intercept corresponding to target (sales)
###Code
lm = smf.ols(formula = 'sales ~ TV + radio + newspaper', data = data).fit()
lm.summary()
###Output
_____no_output_____
###Markdown
Calculating accuracy of the model
###Code
from sklearn.metrics import mean_squared_error
y_preds = mlr_model.predict(X_test)
###Output
_____no_output_____
###Markdown
MSE
###Code
mean_squared_error(y_preds, y_test)
###Output
_____no_output_____
###Markdown
RMSE
###Code
import numpy as np
np.sqrt(mean_squared_error(y_preds, y_test))
###Output
_____no_output_____ |
BOOK_5_GETTING_STARTED_WITH_PYTHON/Chapter03/050310_XML_Data.ipynb | ###Markdown
Working with XML data
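The cell below expects a local file named XMLData.xml. If you are following along without the book's data files, the sketch below (an illustrative assumption, not the original file) writes a minimal XML file whose values match the parsed output shown further down; the tag names MyDataset and Record are made up, since the parsing code only relies on the child order Number, String, Boolean.
###Code
# Hypothetical helper: create a minimal XMLData.xml consistent with the output below.
# Only needed if the original data file is not available.
sample_xml = """<MyDataset>
    <Record><Number>1</Number><String>First</String><Boolean>True</Boolean></Record>
    <Record><Number>2</Number><String>Second</String><Boolean>False</Boolean></Record>
    <Record><Number>3</Number><String>Third</String><Boolean>True</Boolean></Record>
    <Record><Number>4</Number><String>Fourth</String><Boolean>False</Boolean></Record>
</MyDataset>"""
with open('XMLData.xml', 'w') as f:
    f.write(sample_xml)
###Output
_____no_output_____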
###Code
from lxml import objectify
import pandas as pd
xml = objectify.parse(open('XMLData.xml'))
root = xml.getroot()
df = pd.DataFrame(columns=('Number', 'String', 'Boolean'))
for i in range(0,4):
obj = root.getchildren()[i].getchildren()
row = dict(zip(['Number', 'String', 'Boolean'],[obj[0].text, obj[1].text, obj[2].text]))
row_s = pd.Series(row)
row_s.name = i
df = df.append(row_s)
print(df)
###Output
Number String Boolean
0 1 First True
1 2 Second False
2 3 Third True
3 4 Fourth False
|
lab_1-2/assignment1.ipynb | ###Markdown
Lab 1: 1) Data classification with the k-nearest neighbours method (kNN) 2) Data classification with the support vector machine method (SVM) 3) Building a softmax classifier. Variant 1: tasks 1 and 2 on the CIFAR-10 dataset. Variant 2: tasks 1 and 2 on the MNIST dataset. Variant 3: tasks 1 and 3 on the CIFAR-10 dataset. Variant 4: tasks 1 and 3 on the MNIST dataset. The labs can be completed using the Google Colaboratory service (https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d) or on a local computer. 1. Data classification with the k-nearest neighbours method (kNN)
###Code
import random
import numpy as np
import matplotlib.pyplot as plt
from scripts.data_utils import load_CIFAR10
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
1.1 Download the data according to your variant of the assignment. CIFAR-10 is available at https://www.cs.toronto.edu/~kriz/cifar.html, or use the command !bash get_datasets.sh (Google Colab, local Ubuntu). For MNIST use: from sklearn.datasets import load_digits; digits = load_digits()
###Code
cifar10_dir = 'scripts/datasets/cifar-10-batches-py'
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
###Output
Clear previously loaded data.
Training data shape: (50000, 32, 32, 3)
Training labels shape: (50000,)
Test data shape: (10000, 32, 32, 3)
Test labels shape: (10000,)
###Markdown
1.2 Display several example images from the training set for each class. 1.3 Split the data into training and test sets (X_train, y_train, X_test, y_test). Reshape each image into a one-dimensional array. 1.4 Implement the classifier in the script /classifiers/k_nearest_neighbor.py and train it on the prepared data.
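As a hedged starting point for tasks 1.2 and 1.3 (a sketch, not the required solution), the class names below follow the standard CIFAR-10 label order, and the flattened copies are stored in new variables so the cells below are unaffected:
###Code
# Sketch for 1.2/1.3: show a few training images per class, then flatten the images.
# Assumes X_train, y_train, X_test, y_test were loaded by load_CIFAR10 above.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
samples_per_class = 5
for label, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == label)[:samples_per_class]
    for i, idx in enumerate(idxs):
        plt.subplot(samples_per_class, len(classes), i * len(classes) + label + 1)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls, fontsize=8)
plt.show()

# Flatten each image into a one-dimensional array (rows = images, columns = pixels).
X_train_flat = np.reshape(X_train, (X_train.shape[0], -1))
X_test_flat = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train_flat.shape, X_test_flat.shape)
###Output
_____no_output_____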
###Code
from scripts.classifiers import KNearestNeighbor
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
###Output
_____no_output_____
###Markdown
1.5 Run classification on the test set. 1.6 Visualize the matrix of distances from each test image to the training images. 1.7 Compute the fraction of correctly classified test images. 1.8 Plot the classification accuracy as a function of the number of neighbours used. 1.9 Choose the best value of the parameter k using cross-validation. 1.10 Retrain and test the classifier with the chosen value of k. 1.11 Draw conclusions from the results of part 1 of the assignment. 2. Data classification with the support vector machine method (SVM) 2.1 Split the data into training, test and validation sets. Reshape each image into a one-dimensional array. Print the sizes of the sets. 2.2 Preprocess the data by subtracting the mean image computed over the training set. 2.3 To avoid handling the bias term b separately, add an extra dimension to the data array and fill it with ones.
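For task 1.9, one possible cross-validation sketch is shown below. It assumes the flattened arrays from the sketch above and that your KNearestNeighbor implementation exposes train(X, y) and predict(X, k=...) as in the standard template; treat both as assumptions to adapt to your own code.
###Code
# Sketch for 1.9: k-fold cross-validation to choose k (illustrative only).
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20]

X_train_folds = np.array_split(X_train_flat, num_folds)
y_train_folds = np.array_split(y_train, num_folds)

k_to_accuracies = {}
for k in k_choices:
    fold_accuracies = []
    for fold in range(num_folds):
        X_val_fold = X_train_folds[fold]
        y_val_fold = y_train_folds[fold]
        X_tr = np.concatenate(X_train_folds[:fold] + X_train_folds[fold + 1:])
        y_tr = np.concatenate(y_train_folds[:fold] + y_train_folds[fold + 1:])
        knn = KNearestNeighbor()
        knn.train(X_tr, y_tr)
        y_pred = knn.predict(X_val_fold, k=k)   # predict signature assumed
        fold_accuracies.append(np.mean(y_pred == y_val_fold))
    k_to_accuracies[k] = fold_accuracies
    print('k = %d, mean accuracy = %.4f' % (k, np.mean(fold_accuracies)))
###Output
_____no_output_____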
###Code
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10])
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8'))
plt.show()
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
print(X_train.shape, X_val.shape, X_test.shape)
###Output
_____no_output_____
###Markdown
2.4 Implement the loss functions in scripts/classifiers/linear_svm.py
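For reference, here is a hedged sketch of the multiclass hinge loss (with L2 regularization) that svm_loss_naive is expected to compute; it is only an illustration, and the graded implementation still belongs in scripts/classifiers/linear_svm.py.
###Code
# Illustrative sketch of the multiclass SVM (hinge) loss and its gradient.
# W: (D, C) weights, X: (N, D) data, y: (N,) integer labels, reg: regularization strength.
def svm_loss_sketch(W, X, y, reg):
    num_train = X.shape[0]
    scores = X.dot(W)                                   # (N, C) class scores
    correct = scores[np.arange(num_train), y][:, None]  # (N, 1) correct-class scores
    margins = np.maximum(0, scores - correct + 1.0)     # hinge with delta = 1
    margins[np.arange(num_train), y] = 0                # do not count the correct class
    loss = margins.sum() / num_train + reg * np.sum(W * W)

    # Gradient: +X[i] for each violated margin, -count * X[i] for the correct class.
    binary = (margins > 0).astype(float)
    binary[np.arange(num_train), y] = -binary.sum(axis=1)
    dW = X.T.dot(binary) / num_train + 2 * reg * W
    return loss, dW
###Output
_____no_output_____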
###Code
from scripts.classifiers.linear_svm import svm_loss_naive
import time
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.000005)
print('loss: %f' % (loss, ))
###Output
_____no_output_____
###Markdown
2.5 Verify that you have implemented the gradient computation correctly by comparing it with a numerical estimate (code below).
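The idea behind grad_check_sparse is a centered finite-difference estimate at a handful of randomly chosen weight entries. A minimal self-contained sketch of that check (assuming f maps a weight matrix to a scalar loss) looks like this:
###Code
# Minimal sketch of a centered-difference gradient check on a few random entries.
def numeric_grad_check(f, W, analytic_grad, num_checks=5, h=1e-5):
    for _ in range(num_checks):
        ix = tuple(np.random.randint(dim) for dim in W.shape)
        old_value = W[ix]
        W[ix] = old_value + h
        fxph = f(W)                      # f(W + h) at this entry
        W[ix] = old_value - h
        fxmh = f(W)                      # f(W - h) at this entry
        W[ix] = old_value                # restore the original value
        grad_numeric = (fxph - fxmh) / (2 * h)
        grad_analytic = analytic_grad[ix]
        rel_error = abs(grad_numeric - grad_analytic) / (abs(grad_numeric) + abs(grad_analytic) + 1e-12)
        print('numerical: %f analytic: %f, relative error: %e' % (grad_numeric, grad_analytic, rel_error))
###Output
_____no_output_____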
###Code
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
from scripts.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
loss, grad = svm_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad)
###Output
_____no_output_____
###Markdown
2.6 Compare the svm_loss_naive and svm_loss_vectorized implementations
###Code
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss and gradient: computed in %fs' % (toc - tic))
from scripts.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss and gradient: computed in %fs' % (toc - tic))
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('difference: %f' % difference)
###Output
_____no_output_____
###Markdown
2.7 Implement stochastic gradient descent in /classifiers/linear_classifier.py. Implement the train() and predict() methods and run the following code
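A hedged sketch of what the train() method of the linear classifier typically does (mini-batch stochastic gradient descent over a given loss function) is shown below; the graded version still goes in /classifiers/linear_classifier.py.
###Code
# Illustrative mini-batch SGD loop (a sketch of what LinearClassifier.train() usually does).
def sgd_train_sketch(loss_fn, X, y, num_classes, learning_rate=1e-7, reg=2.5e4,
                     num_iters=1500, batch_size=200):
    num_train, dim = X.shape
    W = 0.001 * np.random.randn(dim, num_classes)
    loss_history = []
    for it in range(num_iters):
        batch_idx = np.random.choice(num_train, batch_size, replace=True)
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        loss, grad = loss_fn(W, X_batch, y_batch, reg)   # e.g. svm_loss_naive
        loss_history.append(loss)
        W -= learning_rate * grad                        # gradient descent step
    return W, loss_history
###Output
_____no_output_____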
###Code
from scripts.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
num_iters=1500, verbose=True)
toc = time.time()
print('That took %fs' % (toc - tic))
y_train_pred = svm.predict(X_train)
print('training accuracy: %f' % (np.mean(y_train == y_train_pred), ))
y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred), ))
###Output
_____no_output_____
###Markdown
2.8 Use cross-validation to choose the learning rate and the regularization strength. Use the training and validation sets for the cross-validation. Evaluate the accuracy on the test set (a possible search loop is sketched after the next cell).
###Code
learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]
###Output
_____no_output_____
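###Markdown
One possible way to organize the search over the learning rates and regularization strengths defined above is sketched below; it assumes X_train, y_train, X_val and y_val were prepared in task 2.1 and uses the LinearSVM train()/predict() interface shown earlier.
###Code
# Sketch of a grid search over the hyper-parameters defined in the previous cell.
results = {}
best_val = -1
best_svm = None
for lr in learning_rates:
    for reg in regularization_strengths:
        svm_model = LinearSVM()
        svm_model.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(svm_model.predict(X_train) == y_train)
        val_acc = np.mean(svm_model.predict(X_val) == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val, best_svm = val_acc, svm_model
        print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_acc, val_acc))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
###Output
_____no_output_____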
###Markdown
2.9 Draw conclusions from part 2 of the assignment. 3. Building a softmax classifier 3.1 Split the data into training, test and validation sets. Reshape each image into a one-dimensional array. Print the sizes of the sets. 3.2 Preprocess the data by subtracting the mean image computed over the training set. 3.3 To avoid handling the bias term b separately, add an extra dimension to the data array and fill it with ones.
###Code
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10])
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8'))
plt.show()
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
print(X_train.shape, X_val.shape, X_test.shape)
###Output
_____no_output_____
###Markdown
3.4 Implement the functions in classifiers/softmax.py
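As a reference for what softmax_loss_naive should compute, here is a hedged vectorized sketch of the cross-entropy loss with a numerically stable softmax; your graded version still goes in classifiers/softmax.py.
###Code
# Illustrative sketch of the softmax (cross-entropy) loss and its gradient.
def softmax_loss_sketch(W, X, y, reg):
    num_train = X.shape[0]
    scores = X.dot(W)
    scores -= scores.max(axis=1, keepdims=True)          # shift for numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(num_train), y]).mean() + reg * np.sum(W * W)

    dscores = probs.copy()
    dscores[np.arange(num_train), y] -= 1                 # dL/dscores = p - one_hot(y)
    dW = X.T.dot(dscores) / num_train + 2 * reg * W
    return loss, dW
###Output
_____no_output_____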
###Code
from scripts.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
_____no_output_____
###Markdown
3.5 Verify that you have implemented the gradient computation correctly by comparing it with a numerical estimate (code below).
###Code
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
from scripts.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
###Output
_____no_output_____
###Markdown
3.6 Compare the softmax_loss_naive and softmax_loss_vectorized implementations
###Code
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from scripts.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
###Output
_____no_output_____
###Markdown
3.7 Implement stochastic gradient descent in /classifiers/linear_classifier.py. Implement the train() and predict() methods and run the following code. 3.8 Train the Softmax classifier and evaluate the accuracy on the test set. 3.9 Use cross-validation to choose the learning rate and the regularization strength. Use the training and validation sets for the cross-validation. Evaluate the accuracy on the test set.
###Code
learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]
###Output
_____no_output_____ |
notebooks/Scala-Spark.ipynb | ###Markdown
Initialization
###Code
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
val conf = new SparkConf().setAppName("scalaSparkApp")
val sc = new SparkContext(conf)
###Output
_____no_output_____
###Markdown
Use Case I: Word Count. We will check what the Hadoop file system tracks.
###Code
// tokenizing our text and deleting empty lines
val wordsAll = sc.textFile("hdfs://node-master:9000/user/root/alice_in_wonderland.txt").flatMap(line => line.split(" "))
val words = wordsAll.filter{_.size > 0}
// counting words by applying a map and reduce operations
val wordsCount = words.map(word => (word, 1)).reduceByKey(_+_)
wordsCount.collect.foreach(println)
val mostCommonWords = wordsCount.map(item => item.swap).sortByKey(false)
mostCommonWords.take(10)
###Output
_____no_output_____
###Markdown
Use Case II: Classification on the Iris Dataset
###Code
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.ml.classification.DecisionTreeClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.sql.types.FloatType
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val dfBase = sqlContext.read.format("csv").option("header", "true").load("hdfs://node-master:9000/user/root/iris.csv")
dfBase.show(5)
dfBase.printSchema()
val df = dfBase.withColumn("slength", col("sepal_length").cast(FloatType))
.drop("sepal_length")
.withColumnRenamed("slength", "sepal_length")
.withColumn("plength", col("petal_length").cast(FloatType))
.drop("petal_length")
.withColumnRenamed("plength", "petal_length")
.withColumn("swidth", col("sepal_width").cast(FloatType))
.drop("sepal_width")
.withColumnRenamed("swidth", "sepal_width")
.withColumn("pwidth", col("petal_width").cast(FloatType))
.drop("petal_width")
.withColumnRenamed("pwidth", "petal_width")
val indexer = new StringIndexer().setInputCol("variety").setOutputCol("variety_label").fit(df)
val assembler = new VectorAssembler()
.setInputCols(Array("sepal_length", "petal_length", "sepal_width", "petal_width"))
.setOutputCol("features")
val dt = new DecisionTreeClassifier().setLabelCol("variety_label").setFeaturesCol("features")
val Array(trainingData, testData) = df.randomSplit(Array(0.7, 0.3))
val pipeline = new Pipeline().setStages(Array(indexer, assembler, dt))
val model = pipeline.fit(trainingData)
val predictions = model.transform(testData)
predictions.select("prediction", "variety_label").show(5)
val evaluator = new MulticlassClassificationEvaluator()
.setLabelCol("variety_label")
.setPredictionCol("prediction")
.setMetricName("accuracy")
val accuracy = evaluator.evaluate(predictions)
println(f"Test Accuracy = ${accuracy*100}%.2f%%")
println(f"Test Error = ${(1 - accuracy)*100}%.2f%%")
###Output
Test Accuracy = 93.62%
Test Error = 6.38%
|
Getting Started.ipynb | ###Markdown
Fitting a distribution to waiting times How long should we wait for the bus before giving up on it and starting to walk? First, we'll need to observe some data on the historic arrival times of the bus and fit a distribution to them. Note however that some of our data will be incomplete since when we give up on the bus after x minutes, we only know it took more than that time for it to arrive, but not exactly how much. These are called censored observations. Let's generate sample data - both complete observations (ti) and some censored observations (xi) - and fit a distribution!
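Fitting with censored data usually means maximizing a likelihood in which each fully observed waiting time contributes its density and each censored time contributes its survival probability, roughly $L(\theta) = \prod_i f(t_i \mid \theta)\,\prod_j S(x_j \mid \theta)$ with $S(x) = 1 - F(x)$; this is a general note added for context, and the LogLogistic fit below is assumed to work along these lines.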
###Code
import matplotlib.pyplot as plt
from distributions.lomax import *
from distributions.loglogistic import *
# Define parameters for Lomax
k = 10.0 # shape
lmb = 0.5 # scale
sample_size = 5000
censor_level = 0.5 # after half an hour, we stop waiting.
prob = 1.0
# Let's assume the arrival times of the bus follow a Lomax distribution.
l = Lomax(k=k, lmb=lmb)
###Output
_____no_output_____
###Markdown
What is the Lomax distribution? It is basically a Pareto distribution that has been shifted so that its support begins at zero - a heavy-tailed distribution for a non-negative random variable. Two parameters define the distribution: scale parameter λ and shape parameter κ (sometimes denoted as α). The shorthand X ∼ Lomax(λ,κ) indicates the random variable X has a Lomax distribution with those two parameters.
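For reference (a standard textbook form added here for context, which may not match this package's exact parameterization), the density and survival function for $x \ge 0$ are $f(x) = \frac{\kappa}{\lambda}\left(1 + \frac{x}{\lambda}\right)^{-(\kappa + 1)}$ and $S(x) = \left(1 + \frac{x}{\lambda}\right)^{-\kappa}$; some libraries treat $\lambda$ as a rate instead of a scale, giving $S(x) = (1 + \lambda x)^{-\kappa}$.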
###Code
# Generate waiting times from Lomax distribution.
samples = l.samples(size=sample_size)
samples
# Since we never wait for the bus more than x minutes,
# the observed samples are the ones that take less than x minutes.
ti = samples[(samples<=censor_level)]
ti
len(ti) # About 10% of people stopped waiting after 30 minutes.
samples > censor_level
# xi array contains the censored data.
xi = np.ones(sum(samples>censor_level)) * censor_level
xi
len(xi)
# Fit a log logistic model to the data we just generated.
# You can safely ignore the warnings.
ll1 = LogLogistic(ti=ti, xi=xi)
# See how well the distribution fits the histogram.
histo = plt.hist(samples, density=True)  # density=True replaces the deprecated 'normed' argument
xs = (histo[1][:len(histo[1])-1]+histo[1][1:])/2
xs # We are going to call pdf for each xs values.
plt.plot(xs, [ll1.pdf(i) for i in xs])
plt.show()
###Output
_____no_output_____
###Markdown
Optimizing the waiting threshold using the distribution. Let's model the process as a state machine. There are three possible states that we care about - "1. waiting for a bus", "2. walking to work" and "3. working at the office". The figure below represents the states and the arrows show the possible transitions between the states. Also, we assume that which state we go to next and how much time it takes to jump to that state depends only on *which state we are currently in*. This property is called the **Markov property**. To describe these transitions, we need two matrices: 1. Transition probabilities - the probability of transitioning from state 'i' to state 'j'; 2. Transition times - the time each transition takes. The first state (i = 0) is "waiting", the second state (i = 1) is "walking" and the last and most desirable state (i = 2) is "working", where we want to spend the highest proportion of time. Continuing from above, we can run the following code:
###Code
# The time it takes to walk to work
intervention_cost=200
# The amount of time we wait for the bus before walking (the threshold tau).
tau=275
# The transition probabilities (p) and transition times (t) depend on
# the amount of time we're willing to wait for the bus (tau)
# and the amount of time it takes to walk to work (intervention_cost).
(p,t) = ll1.construct_matrices(tau, intervention_cost)
# The transition probabilities
p
###Output
_____no_output_____
###Markdown
The 'p' matrix you see above is the matrix of transition probabilities. The (i,j) row is the probability of transitioning to state 'j' given you started in state 'i'. Note that the rows of this matrix sum to 1 since we have to make a transition to one of the other available states. Also, the diagonals of the matrix are 0 since the (i,i) entry would imply transitioning from state i to i, which doesn't make sense in the context of a transition.
###Code
#transition times
t
###Output
_____no_output_____
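###Markdown
As an optional illustration of why both matrices matter (an added sketch, not part of the original notebook), we can estimate the expected time to reach the "working" state by solving the hitting-time equations $E_i = \sum_j p_{ij}(t_{ij} + E_j)$ with $E_{\mathrm{working}} = 0$, assuming p and t are plain NumPy arrays and the states are ordered waiting = 0, walking = 1, working = 2 as described above.
###Code
# Optional sketch: expected time to reach the "working" state (index 2)
# from each transient state, using the p and t matrices built above.
import numpy as np

transient = [0, 1]                               # waiting, walking
Q = p[np.ix_(transient, transient)]              # transitions among transient states
# Expected duration of one step out of each transient state (nansum guards
# against any undefined times on impossible transitions).
step_time = np.nansum(p * t, axis=1)[transient]
# Solve (I - Q) E = step_time for the expected hitting times.
E = np.linalg.solve(np.eye(len(transient)) - Q, step_time)
print("Expected time to reach 'working' from 'waiting':", E[0])
print("Expected time to reach 'working' from 'walking':", E[1])
###Output
_____no_output_____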
###Markdown
How to get started: First, import the package:
###Code
import minder_utils as dri
###Output
_____no_output_____
###Markdown
And change the settings of the package to reflect our token and the directory of the mapping data: Note, this only needs to be done once! Although, it could be useful to do it every time. Because these settings are saved internally, you cannot run this package with different settings at the same time. If you want to run this in parallel, please use the same settings for all sessions.
###Code
import minder_utils.settings as settings
settings.token_save('YOUR TOKEN')
settings.set_data_dir('./mapping_data/')
# if you want to use tihm data
settings.set_tihm_dir(path_to_tihm)
###Output
_____no_output_____
###Markdown
For example, let us run the data loader:
###Code
from minder_utils.weekly_run import load_data_default
###Output
/Users/ac4919/PhD Local/to_run_dri_data_util/src/minder-utils/minder_utils/models/feature_extractors/autoencoders.py:9: SyntaxWarning: "is" with a literal. Did you mean "=="?
if model_type is 'nn':
/Users/ac4919/PhD Local/to_run_dri_data_util/src/minder-utils/minder_utils/models/feature_extractors/autoencoders.py:19: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif model_type is 'cnn':
###Markdown
The first time you run this function, it will save all of the data in the correct place. Further runs will overwrite the data with the new versions.
###Code
load_data_default()
###Output
activity weekly raw data does not exist, start to download
Deleting Existing export request
Creating new export request
Exporting the {'raw_activity_pir': {}, 'raw_door_sensor': {}, 'raw_appliance_use': {}, 'device_types': {}}
Waiting for the sever to complete the job /
Job is completed, start to download the data
Start to export job
Exporting 1/4 device_types Success
Exporting 2/4 raw_door_sensor Success
Exporting 3/4 raw_activity_pir Success
Exporting 4/4 raw_appliance_use Success
formatting the data: activity weekly
Processing: raw_door_sensor Finished in 0.08 seconds
Processing: raw_appliance_use Finished in 0.01 seconds
Processing: device_types Finished in 0.00 seconds
Processing: raw_activity_pir Finished in 0.25 seconds
activity previous raw data does not exist, start to download
Deleting Existing export request
Creating new export request
Exporting the {'raw_activity_pir': {}, 'raw_door_sensor': {}, 'raw_appliance_use': {}, 'device_types': {}}
Waiting for the sever to complete the job /
Job is completed, start to download the data
Start to export job
Exporting 1/12 device_types Success
Exporting 2/12 raw_door_sensor Success
Exporting 3/12 raw_door_sensor Success
Exporting 4/12 raw_activity_pir Success
Exporting 5/12 raw_activity_pir Success
Exporting 6/12 raw_activity_pir Success
Exporting 7/12 raw_activity_pir Success
Exporting 8/12 raw_activity_pir Success
Exporting 9/12 raw_activity_pir Success
Exporting 10/12 raw_activity_pir Success
Exporting 11/12 raw_activity_pir Success
Exporting 12/12 raw_appliance_use Success
formatting the data: activity previous
Processing: raw_door_sensor Finished in 0.77 seconds
Processing: raw_appliance_use Finished in 0.12 seconds
Processing: device_types Finished in 0.00 seconds
Processing: raw_activity_pir Finished in 4.62 seconds
Target directory does not exist, creating a new folder
Deleting Existing export request
Creating new export request
Exporting the {'procedure': {}, 'device_types': {}}
Waiting for the sever to complete the job /
Job is completed, start to download the data
Start to export job
Exporting 1/2 procedure Success
Exporting 2/2 device_types Success
###Markdown
What is this? There are many topics in the [Open Digital Archaeology Text](http://o-date.github.io/live) for which there are supporting 'Jupyter Notebooks'. What you are looking at right now is one such notebook. These notebooks mix explanatory text with chunks of code. Generally, each notebook is structured so that each code block depends on the one before having been run. You progress through these from top to bottom. By having explanations mixed in with the code, you learn what the code does, what to expect, or why it's framed the way it is. This is called '[literate programming](https://en.wikipedia.org/wiki/Literate_programming)'. Each of our notebooks is contained in a [Github repository](https://o-date.github.io/support/notebooks-toc/). We use the [Binder service](http://mybinder.org) to launch these notebooks online in a computing environment that contains all the necessary software that you'll need to work through the notebooks. Look for the [launch binder badge](http://mybinder.org/v2/gh/o-date/notebooks/master) ... and click on it!**Warning: Currently, Binder will time-out after ten minutes of inactivity, and you will lose any changes. Work does not persist between sessions** Run these notebooks on your own machine: You can download these repositories to your own computer, if you like (there is a green 'download' button for each repository on Github). You will then need to install [Jupyter](http://jupyter.org/install) for your computer. If you have [Git installed on your computer](https://git-scm.com/downloads) you can download via the command line or terminal prompt: ```$ git clone https://github.com/o-date/notebooks.git change this line as appropriate, right? $ cd notebooks $ jupyter notebook``` How do I work through a notebook? In the toolbar across the top, the most important button for the time being is the 'run' button. Click on the cellblock below so that a box appears around it, and hit 'Run'.
###Code
2 + 2
###Output
_____no_output_____
###Markdown
Notice what happened - the notebook calculated the result of the equation, and created a new block with the answer! While the block was running, an asterisk also appeared at the left hand side of the block, like so: `In [*]`. Once the calculation finished, the notebook will put a number within those square brackets, eg `In [1]` so that you know the order in which you've run the blocks. Now, `2 + 2` is a trivial calculation, and happened so fast that you probably didn't even spot the `[*]`. Sometimes, it can take quite a lot of time for the block to run. Watch for that asterisk! You will get errors if you run subsequent code blocks that depend on the results of an earlier calculation! When things go wrong: If things go wrong, the notebook will output a message in red text. Read these carefully - sometimes they are merely warnings that some underlying software is not the most up-to-date version; more often you may have neglected to finish (or perform) a necessary earlier step. If worst comes to worst, copy and paste the text of the error into a search engine; you may be able to work out what has gone wrong and solve it yourself! What language is the notebook using? That is a very good question. If you look at the top right of the screen, you'll see that it says `Python 3`. This is known as the 'kernel'. Generally, for any notebook that we provide to you, we have already loaded up the necessary language interpreters - the kernels - that you will need. If you're running a notebook on your own machine, you can install other kernels as necessary. You can even mix languages _inside_ a notebook! We don't do that in ODATE but you can find how to do so easily enough with a search. [Stackoverflow](https://stackoverflow.com/questions/28831854/how-do-i-add-python3-kernel-to-jupyter-ipython) is very useful in this regard. Running a command for the terminal inside a notebook: Sometimes, you might like to run a command for the terminal as part of your work. If you click on the 'jupyter' icon at the top left, you can start a terminal prompt: This is a screenshot from Shawn's computer - he has several kernels installed - but you can see where the terminal prompt can be found. Starting a new terminal will open a new tab in your browser and you will be presented with a terminal prompt. You can enter command line commands there. You can also do this inside a notebook, however. Try running the codeblocks below:
###Code
!pwd
!ls
###Output
_____no_output_____
###Markdown
The `!` character tells the notebook to pass the command to the terminal, and print the response. This can be handy, as sometimes perhaps you might want to install a 'package' (a bundle of pre-made functions that extend what you can do) for Python using the `pip` command:
###Code
!pip install requests
###Output
_____no_output_____
###Markdown
What is RecoGym? RecoGym is a Python [OpenAI Gym](https://gym.openai.com/) environment for testing recommendation algorithms. It allows for the testing of both offline and reinforcement-learning based agents. It provides a way to test algorithms in a toy environment quickly. In this notebook, we will code a simple recommendation agent that suggests an item in proportion to how many times it has been viewed. We hope to inspire you to create your own agents and test them against our baseline models. In order to make the most out of RecoGym, we suggest you have some experience coding in Python, some background knowledge in recommender systems, and familiarity with the reinforcement learning setup. Also, be sure to check out the python-based requirements in the README if something below errors. Reinforcement Learning Setup: RecoGym follows the usual reinforcement learning setup. This means there are interactions between the environment (the user's behaviour) and the agent (our recommendation algorithm). The agent receives a reward if the user clicks on the recommendation. Organic and Bandit: Even though our focus is biased towards online advertising, we tried to make RecoGym universal to all types of recommendation. Hence, we introduce the domain-agnostic terms Organic and Bandit sessions. An Organic session is an observation of items the user interacts with. For example, it could be views of products on an e-commerce website, listens to songs while streaming music, or readings of articles on an online newspaper. A Bandit session is one where we have an opportunity to recommend the user an item and observe their behaviour. We receive a reward if they click. Offline and Online Learning: This project was born out of a desire to improve Criteo's recommendation system by exploring reinforcement learning algorithms. We quickly realised that we couldn't just blindly apply RL algorithms in a production system out of the box. The learning period would be too costly. Instead, we need to leverage the vast amounts of offline training examples we already have to make the algorithm perform as well as the current system before releasing it into the online production environment. Thus, RecoGym follows a similar flow. An agent is first given access to many offline training examples produced from a fixed policy. Then, they have access to the online system where they choose the actions. Let's see some code - Interacting with the environment The code snippet below shows how to initialise the environment and step through in an 'offline' manner (Here offline means that the environment is generating some recommendations for us). We print out the results from the environment at each step.
###Code
import gym, reco_gym
# env_0_args is a dictionary of default parameters (i.e. number of products)
from reco_gym import env_1_args, Configuration
# You can overwrite environment arguments here:
env_1_args['random_seed'] = 42
# Initialize the gym for the first time by calling .make() and .init_gym()
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# .reset() env before each episode (one episode per user).
env.reset()
done = False
# Counting how many steps.
i = 0
observation, reward, done = None, 0, False
while not done:
action, observation, reward, done, info = env.step_offline(observation, reward, done)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 0}] - Reward: None
Step: 1 - Action: {'t': 1, 'u': 0, 'a': 3, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
Step: 2 - Action: {'t': 2, 'u': 0, 'a': 4, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
Step: 3 - Action: {'t': 3, 'u': 0, 'a': 5, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
###Markdown
Okay, there's quite a bit going on here: - `action` is a number between `0` and `num_products - 1` that references the index of the product recommended. - `observation` will either be `None` or a session of Organic data, showing the index of products the user views. - `reward` is 0 if the user does not click on the recommended product and 1 if they do. Notice that when a user clicks on a product (wherever the reward is 1), they start a new Organic session. - `done` is a True/False flag indicating if the episode (aka user's timeline) is over. - `info` is currently not used, so it is always an empty dictionary. Also, notice that the first `action` is `None`. In our implementation, the agent observes Organic behaviour before recommending anything. Now, we will show calling the environment in an online manner, where the agent needs to supply an action. For demonstration purposes, we will create a list of hard-coded actions.
###Code
# Create list of hard coded actions.
actions = [None] + [1, 2, 3, 4, 5]
# Reset env and set done to False.
env.reset()
done = False
# Counting how many steps.
i = 0
while not done and i < len(actions):
action = actions[i]
observation, reward, done, info = env.step(action)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 1}] - Reward: None
Step: 1 - Action: 1 - Observation: [] - Reward: 0
Step: 2 - Action: 2 - Observation: [] - Reward: 0
Step: 3 - Action: 3 - Observation: [] - Reward: 0
Step: 4 - Action: 4 - Observation: [] - Reward: 0
Step: 5 - Action: 5 - Observation: [{'t': 6, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 7, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 8, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 9, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 10, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 11, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 12, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 13, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 14, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 15, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 16, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 17, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 18, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 19, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 20, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 21, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 22, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 23, 'u': 0, 'z': 'pageview', 'v': 6}] - Reward: 0
###Markdown
You'll notice that the offline and online APIs are nearly identical. The only difference is that one calls either env.step_offline() or env.step(action). Creating our first agentNow that we see have seen how the offline and online versions of the environment work, it is time to code our first recommendation agent! Technically, an agent can be anything that produces actions for the environment to use. However, we will show you the object-oriented way we like to create agents.Below is the code for a very simple agent - the popularity based agent. The popularity based agent records merely how many times a user sees each product organically, then when required to make a recommendation, the agent chooses a product randomly in proportion with a number of times the user has viewed it.
###Code
import numpy as np
from numpy.random import choice
from agents import Agent
# Define an Agent class.
class PopularityAgent(Agent):
def __init__(self, config):
# Set number of products as an attribute of the Agent.
super(PopularityAgent, self).__init__(config)
# Track number of times each item viewed in Organic session.
self.organic_views = np.zeros(self.config.num_products)
def train(self, observation, action, reward, done):
"""Train method learns from a tuple of data.
this method can be called for offline or online learning"""
# Adding organic session to organic view counts.
if observation:
for session in observation.sessions():
self.organic_views[session['v']] += 1
def act(self, observation, reward, done):
"""Act method returns an action based on current observation and past
history"""
# Choosing action randomly in proportion with number of views.
prob = self.organic_views / sum(self.organic_views)
action = choice(self.config.num_products, p = prob)
return {
**super().act(observation, reward, done),
**{
'a': action,
'ps': prob[action]
}
}
###Output
_____no_output_____
###Markdown
The `PopularityAgent` class above demonstrates our preferred way to create agents for RecoGym. Notice how we have both a `train` and `act` method present. The `train` method is designed to take in training data from the environment's `step_offline` method and thus has nothing to return, while the `act` method must return an action to pass back into the environment. The code below highlights how one would use this agent, first for offline training and then for using the learned knowledge to make recommendations online.
###Code
# Instantiate instance of PopularityAgent class.
num_products = 10
agent = PopularityAgent(Configuration({
**env_1_args,
'num_products': num_products,
}))
# Resets random seed back to 42, or whatever we set it to in env_0_args.
env.reset_random_seed()
# Train on 1000 users offline.
num_offline_users = 1000
for _ in range(num_offline_users):
# Reset env and set done to False.
env.reset()
done = False
observation, reward, done = None, 0, False
while not done:
old_observation = observation
action, observation, reward, done, info = env.step_offline(observation, reward, done)
agent.train(old_observation, action, reward, done)
# Train on 100 users online and track click through rate.
num_online_users = 100
num_clicks, num_events = 0, 0
for _ in range(num_online_users):
# Reset env and set done to False.
env.reset()
observation, _, done, _ = env.step(None)
reward = None
done = None
while not done:
action = agent.act(observation, reward, done)
observation, reward, done, info = env.step(action['a'])
# Used for calculating click through rate.
num_clicks += 1 if reward == 1 and reward is not None else 0
num_events += 1
ctr = num_clicks / num_events
print(f"Click Through Rate: {ctr:.4f}")
###Output
Click Through Rate: 0.0147
###Markdown
Testing our first agent: Now that we have created our popularity based agent, we should test it against an even simpler baseline - one that performs no learning and recommends products uniformly at random. To do this, we will first load a more complex version of the toy data environment called `reco-gym-v1`. Next, we will load another agent for ours to compete against. Here you can see we make use of the `RandomAgent` and create an instance of it in addition to our `PopularityAgent`.
###Code
import gym, reco_gym
from reco_gym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# Import the random agent.
from agents import RandomAgent, random_args
# Create the two agents.
num_products = env_1_args['num_products']
popularity_agent = PopularityAgent(Configuration(env_1_args))
agent_rand = RandomAgent(Configuration({
**env_1_args,
**random_args,
}))
###Output
_____no_output_____
###Markdown
Now we have instances of our two agents. We can use the `test_agent` method from RecoGym to compare their performance. To use `test_agent`, one must provide a copy of the current env, a copy of the agent class, the number of training users and the number of testing users.
###Code
# Credible interval of the CTR median and 0.025 0.975 quantile.
reco_gym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
reco_gym.test_agent(deepcopy(env), deepcopy(popularity_agent), 1000, 1000)
###Output
Start: Agent Training #0
Start: Agent Testing #0
End: Agent Testing #0 (12.809738874435425s)
###Markdown
What is RecoGym? RecoGym is a Python [OpenAI Gym](https://gym.openai.com/) environment for testing recommendation algorithms. It allows for the testing of both offline and reinforcement-learning based agents. It provides a way to test algorithms in a toy environment quickly. In this notebook, we will code a simple recommendation agent that suggests an item in proportion to how many times it has been viewed. We hope to inspire you to create your own agents and test them against our baseline models. In order to make the most out of RecoGym, we suggest you have some experience coding in Python, some background knowledge in recommender systems, and familiarity with the reinforcement learning setup. Also, be sure to check out the python-based requirements in the README if something below errors. Reinforcement Learning Setup: RecoGym follows the usual reinforcement learning setup. This means there are interactions between the environment (the user's behaviour) and the agent (our recommendation algorithm). The agent receives a reward if the user clicks on the recommendation. Organic and Bandit: Even though our focus is biased towards online advertising, we tried to make RecoGym universal to all types of recommendation. Hence, we introduce the domain-agnostic terms Organic and Bandit sessions. An Organic session is an observation of items the user interacts with. For example, it could be views of products on an e-commerce website, listens to songs while streaming music, or readings of articles on an online newspaper. A Bandit session is one where we have an opportunity to recommend the user an item and observe their behaviour. We receive a reward if they click. Offline and Online Learning: This project was born out of a desire to improve Criteo's recommendation system by exploring reinforcement learning algorithms. We quickly realised that we couldn't just blindly apply RL algorithms in a production system out of the box. The learning period would be too costly. Instead, we need to leverage the vast amounts of offline training examples we already have to make the algorithm perform as well as the current system before releasing it into the online production environment. Thus, RecoGym follows a similar flow. An agent is first given access to many offline training examples produced from a fixed policy. Then, they have access to the online system where they choose the actions. Let's see some code - Interacting with the environment The code snippet below shows how to initialise the environment and step through in an 'offline' manner (Here offline means that the environment is generating some recommendations for us). We print out the results from the environment at each step.
###Code
import gym, recogym
# env_0_args is a dictionary of default parameters (i.e. number of products)
from recogym import env_1_args, Configuration
# You can overwrite environment arguments here:
env_1_args['random_seed'] = 42
# Initialize the gym for the first time by calling .make() and .init_gym()
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# .reset() env before each episode (one episode per user).
env.reset()
done = False
# Counting how many steps.
i = 0
observation, reward, done = None, 0, False
while not done:
action, observation, reward, done, info = env.step_offline(observation, reward, done)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 0}] - Reward: None
Step: 1 - Action: {'t': 1, 'u': 0, 'a': 3, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
Step: 2 - Action: {'t': 2, 'u': 0, 'a': 4, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
Step: 3 - Action: {'t': 3, 'u': 0, 'a': 5, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
###Markdown
Okay, there's quite a bit going on here: - `action` is a number between `0` and `num_products - 1` that references the index of the product recommended. - `observation` will either be `None` or a session of Organic data, showing the index of products the user views. - `reward` is 0 if the user does not click on the recommended product and 1 if they do. Notice that when a user clicks on a product (wherever the reward is 1), they start a new Organic session. - `done` is a True/False flag indicating if the episode (aka user's timeline) is over. - `info` is currently not used, so it is always an empty dictionary. Also, notice that the first `action` is `None`. In our implementation, the agent observes Organic behaviour before recommending anything. Now, we will show calling the environment in an online manner, where the agent needs to supply an action. For demonstration purposes, we will create a list of hard-coded actions.
###Code
# Create list of hard coded actions.
actions = [None] + [1, 2, 3, 4, 5]
# Reset env and set done to False.
env.reset()
done = False
# Counting how many steps.
i = 0
while not done and i < len(actions):
action = actions[i]
observation, reward, done, info = env.step(action)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 1}] - Reward: None
Step: 1 - Action: 1 - Observation: [] - Reward: 0
Step: 2 - Action: 2 - Observation: [] - Reward: 0
Step: 3 - Action: 3 - Observation: [] - Reward: 0
Step: 4 - Action: 4 - Observation: [] - Reward: 0
Step: 5 - Action: 5 - Observation: [{'t': 6, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 7, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 8, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 9, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 10, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 11, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 12, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 13, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 14, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 15, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 16, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 17, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 18, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 19, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 20, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 21, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 22, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 23, 'u': 0, 'z': 'pageview', 'v': 6}] - Reward: 0
###Markdown
You'll notice that the offline and online APIs are nearly identical. The only difference is that one calls either `env.step_offline()` or `env.step(action)`.

Creating our first agent

Now that we have seen how the offline and online versions of the environment work, it is time to code our first recommendation agent! Technically, an agent can be anything that produces actions for the environment to use. However, we will show you the object-oriented way we like to create agents.

Below is the code for a very simple agent - the popularity-based agent. The popularity-based agent merely records how many times a user sees each product organically; then, when required to make a recommendation, it chooses a product randomly in proportion to the number of times the user has viewed it.
###Code
import numpy as np
from numpy.random import choice
from agents import Agent
# Define an Agent class.
class PopularityAgent(Agent):
def __init__(self, config):
# Set number of products as an attribute of the Agent.
super(PopularityAgent, self).__init__(config)
# Track number of times each item viewed in Organic session.
self.organic_views = np.zeros(self.config.num_products)
def train(self, observation, action, reward, done):
"""Train method learns from a tuple of data.
this method can be called for offline or online learning"""
# Adding organic session to organic view counts.
if observation:
for session in observation.sessions():
self.organic_views[session['v']] += 1
def act(self, observation, reward, done):
"""Act method returns an action based on current observation and past
history"""
# Choosing action randomly in proportion with number of views.
prob = self.organic_views / sum(self.organic_views)
action = choice(self.config.num_products, p = prob)
return {
**super().act(observation, reward, done),
**{
'a': action,
'ps': prob[action]
}
}
###Output
_____no_output_____
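###Markdown
Before walking through the class, here is a tiny standalone sketch of the proportional-sampling rule used in `act`. The view counts are hypothetical and the snippet is plain NumPy, not part of the agent:
###Code
# Illustration of sampling in proportion to (hypothetical) organic view counts.
import numpy as np
from numpy.random import choice
views = np.array([8., 1., 1.])                    # pretend view counts for 3 products
prob = views / views.sum()                        # normalise into a probability distribution
samples = choice(len(views), size=1000, p=prob)
print(np.bincount(samples, minlength=len(views)) / len(samples))  # roughly [0.8, 0.1, 0.1]
###Output
_____no_output_____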
###Markdown
The `PopularityAgent` class above demonstrates our preferred way to create agents for RecoGym. Notice how we have both a `train` and an `act` method present. The `train` method is designed to take in training data from the environment's `step_offline` method and thus has nothing to return, while the `act` method must return an action to pass back into the environment. The code below highlights how one would use this agent: first training it offline, and then using the learned knowledge to make recommendations online.
###Code
# Instantiate instance of PopularityAgent class.
num_products = 10
agent = PopularityAgent(Configuration({
**env_1_args,
'num_products': num_products,
}))
# Resets random seed back to 42, or whatever we set it to in env_1_args.
env.reset_random_seed()
# Train on 1000 users offline.
num_offline_users = 1000
for _ in range(num_offline_users):
# Reset env and set done to False.
env.reset()
done = False
observation, reward, done = None, 0, False
while not done:
old_observation = observation
action, observation, reward, done, info = env.step_offline(observation, reward, done)
agent.train(old_observation, action, reward, done)
# Train on 100 users online and track click through rate.
num_online_users = 100
num_clicks, num_events = 0, 0
for _ in range(num_online_users):
# Reset env and set done to False.
env.reset()
observation, _, done, _ = env.step(None)
reward = None
done = None
while not done:
action = agent.act(observation, reward, done)
observation, reward, done, info = env.step(action['a'])
# Used for calculating click through rate.
num_clicks += 1 if reward == 1 and reward is not None else 0
num_events += 1
ctr = num_clicks / num_events
print(f"Click Through Rate: {ctr:.4f}")
###Output
Click Through Rate: 0.0147
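###Markdown
The single number above is measured over only `num_events` recommendation opportunities, so it carries noticeable uncertainty. As a rough, hypothetical illustration (a plain normal approximation, not a RecoGym feature), an approximate 95% interval can be attached by hand before turning to `test_agent`'s credible intervals below:
###Code
# Rough normal-approximation interval for the click-through rate measured above.
import math
se = math.sqrt(ctr * (1 - ctr) / num_events)
print(f"CTR: {ctr:.4f} +/- {1.96 * se:.4f} (approximate 95% interval)")
###Output
_____no_output_____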
###Markdown
Testing our first agent

Now that we have created our popularity-based agent, we should test it against an even simpler baseline - one that performs no learning and recommends products uniformly at random. To do this, we will first load a more complex version of the toy data environment called `reco-gym-v1`.

Next, we will load another agent for ours to compete against. Here you can see we make use of the `RandomAgent` and create an instance of it in addition to our `PopularityAgent`.
###Code
import gym, recogym
from recogym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# Import the random agent.
from agents import RandomAgent, random_args
# Create the two agents.
num_products = env_1_args['num_products']
popularity_agent = PopularityAgent(Configuration(env_1_args))
agent_rand = RandomAgent(Configuration({
**env_1_args,
**random_args,
}))
###Output
_____no_output_____
###Markdown
Now we have instances of our two agents. We can use the `test_agent` method from RecoGym and compare their performance.

To use `test_agent`, one must provide a copy of the current env, a copy of the agent, the number of training users and the number of testing users.
###Code
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(popularity_agent), 1000, 1000)
###Output
Start: Agent Training #0
Start: Agent Testing #0
End: Agent Testing #0 (12.809738874435425s)
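###Markdown
The two `test_agent` calls above print their progress as they run; to compare the agents side by side, a small hypothetical harness can simply collect whatever each call returns (the credible-interval statistics referred to in the comments) in one place:
###Code
# Hypothetical comparison harness: collect the statistics returned by test_agent.
results = {
    'random': recogym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000),
    'popularity': recogym.test_agent(deepcopy(env), deepcopy(popularity_agent), 1000, 1000),
}
for name, stats in results.items():
    print(name, stats)
###Output
_____no_output_____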
###Markdown
Number of deliveries per microregion by place of occurrence
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Constants
###Code
N_CT_PARTO_NORMAL_1 = "n_ct_parto_normal_1"
N_CT_PARTO_NORMAL_2 = "n_ct_parto_normal_2"
N_CT_PARTO_NORMAL_3 = "n_ct_parto_normal_3"
N_CT_PARTO_NORMAL_4 = "n_ct_parto_normal_4"
N_CT_PARTO_NORMAL_5 = "n_ct_parto_normal_5"
QTDE_PARTO_NORMAL = [N_CT_PARTO_NORMAL_1, N_CT_PARTO_NORMAL_2, N_CT_PARTO_NORMAL_3, N_CT_PARTO_NORMAL_4, N_CT_PARTO_NORMAL_5]
N_CT_PARTO_CESARIA_1 = "n_ct_parto_cesaria_1"
N_CT_PARTO_CESARIA_2 = "n_ct_parto_cesaria_2"
N_CT_PARTO_CESARIA_3 = "n_ct_parto_cesaria_3"
N_CT_PARTO_CESARIA_4 = "n_ct_parto_cesaria_4"
N_CT_PARTO_CESARIA_5 = "n_ct_parto_cesaria_5"
QTDE_PARTO_CESARIA = [N_CT_PARTO_CESARIA_1, N_CT_PARTO_CESARIA_2, N_CT_PARTO_CESARIA_3, N_CT_PARTO_CESARIA_4, N_CT_PARTO_CESARIA_5]
N_TP_OCORRENCIA_HOSPITAL = "n_tp_ocorrencia_1"
N_TP_OCORRENCIA_ESTABELECIMENTO_SAUDE = "n_tp_ocorrencia_2"
N_TP_OCORRENCIA_RESIDENCIA = "n_tp_ocorrencia_3"
N_TP_OCORRENCIA_OUTRO = "n_tp_ocorrencia_4"
LOCAIS_NASCIMENTO = [N_TP_OCORRENCIA_HOSPITAL, N_TP_OCORRENCIA_ESTABELECIMENTO_SAUDE, N_TP_OCORRENCIA_RESIDENCIA, N_TP_OCORRENCIA_OUTRO]
COD_MICROREGIAO = "CO_MICIBGE"
NOME_MICROREGIAO = "nome_microregiao"
###Output
_____no_output_____
###Markdown
States

AC Data
###Code
dados_ac_df = pd.read_csv(r"C:\\Users\\AlissonSilva\\Downloads\\TPE\\data\\estados\\estados\\Dados-AC.csv")
dados_ac_df.head()
dados_ac_df.shape
###Output
_____no_output_____
###Markdown
ES Data
###Code
dados_es_df = pd.read_csv(r"/home/leticia/ifsp/tpe-trabalho-final/data/estados/Dados-ES.csv")
dados_es_df.head()
dados_es_df.shape
###Output
_____no_output_____
###Markdown
RN Data
###Code
dados_rn_df = pd.read_csv(r"/home/leticia/ifsp/tpe-trabalho-final/data/estados/Dados-RN.csv")
dados_rn_df.head()
dados_rn_df.shape
###Output
_____no_output_____
###Markdown
Microregions

Data from 2006 to 2016
###Code
microregioes_2006_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2006.csv")
microregioes_2007_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2007.csv")
microregioes_2008_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2008.csv")
microregioes_2009_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2009.csv")
microregioes_2010_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2010.csv")
microregioes_2011_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2011.csv")
microregioes_2012_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2012.csv")
microregioes_2013_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2013.csv")
microregioes_2014_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2014.csv")
microregioes_2015_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2015.csv")
microregioes_2016_df = pd.read_csv(r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes/micro_merged_2016.csv")
microregioes_2006_df.head()
microregioes_2006_df = microregioes_2006_df.sort_values(by = "CO_MICIBGE")
microregioes_2007_df = microregioes_2007_df.sort_values(by = "CO_MICIBGE")
microregioes_2008_df = microregioes_2008_df.sort_values(by = "CO_MICIBGE")
microregioes_2009_df = microregioes_2009_df.sort_values(by = "CO_MICIBGE")
microregioes_2010_df = microregioes_2010_df.sort_values(by = "CO_MICIBGE")
microregioes_2011_df = microregioes_2011_df.sort_values(by = "CO_MICIBGE")
microregioes_2012_df = microregioes_2012_df.sort_values(by = "CO_MICIBGE")
microregioes_2013_df = microregioes_2013_df.sort_values(by = "CO_MICIBGE")
microregioes_2014_df = microregioes_2014_df.sort_values(by = "CO_MICIBGE")
microregioes_2015_df = microregioes_2015_df.sort_values(by = "CO_MICIBGE")
microregioes_2016_df = microregioes_2016_df.sort_values(by = "CO_MICIBGE")
microregioes_2006_df.head()
###Output
_____no_output_____
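###Markdown
The eleven yearly files above are read and sorted one at a time. A more compact, hypothetical alternative (assuming the same `micro_merged_<year>.csv` naming under the same base path) keeps them in a dictionary keyed by year:
###Code
# Hypothetical alternative: load and sort every year in a single loop.
base_path = r"C:/Users/AlissonSilva/Downloads/TPE/data/microregioes"
microregioes_por_ano = {
    ano: pd.read_csv(f"{base_path}/micro_merged_{ano}.csv").sort_values(by=COD_MICROREGIAO)
    for ano in range(2006, 2017)
}
microregioes_por_ano[2006].head()
###Output
_____no_output_____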
###Markdown
Microregion Codes
###Code
microregioes_ibge_df = pd.read_csv(r"/home/leticia/ifsp/tpe-trabalho-final/data/microregioes/microregioes_ibge.csv", encoding = "ISO-8859-1")
microregioes_ibge_df = microregioes_ibge_df.sort_values(by = "Micro")
microregioes_ibge_df.head()
###Output
_____no_output_____
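###Markdown
The next section attaches the microregion names positionally, which relies on both frames being sorted by the IBGE code. A merge on that code (a hypothetical alternative, sketched here for a single year) does not depend on row order:
###Code
# Hypothetical alternative: attach names by joining on the IBGE code instead of by position.
nomes_df = microregioes_ibge_df[['Micro', 'Nome']].rename(
    columns={'Micro': COD_MICROREGIAO, 'Nome': NOME_MICROREGIAO})
microregioes_2006_nomeado_df = microregioes_2006_df.merge(nomes_df, on=COD_MICROREGIAO, how='left')
microregioes_2006_nomeado_df.head()
###Output
_____no_output_____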
###Markdown
Adding the microregion name to the data dataframes
###Code
microregioes_2006_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2007_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2008_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2009_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2010_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2011_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2012_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2013_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2014_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2015_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2016_df['nome_microregiao'] = microregioes_ibge_df['Nome'].values
microregioes_2006_df.head()
qtde_partos_microregioes_2006_df = microregioes_2006_df.loc[:, microregioes_2006_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2007_df = microregioes_2007_df.loc[:, microregioes_2007_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2008_df = microregioes_2008_df.loc[:, microregioes_2008_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2009_df = microregioes_2009_df.loc[:, microregioes_2009_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2010_df = microregioes_2010_df.loc[:, microregioes_2010_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2011_df = microregioes_2011_df.loc[:, microregioes_2011_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2012_df = microregioes_2012_df.loc[:, microregioes_2012_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2013_df = microregioes_2013_df.loc[:, microregioes_2013_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2014_df = microregioes_2014_df.loc[:, microregioes_2014_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2015_df = microregioes_2015_df.loc[:, microregioes_2015_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2016_df = microregioes_2016_df.loc[:, microregioes_2016_df.columns.isin(QTDE_PARTO_CESARIA + QTDE_PARTO_NORMAL + LOCAIS_NASCIMENTO + [COD_MICROREGIAO, NOME_MICROREGIAO])]
qtde_partos_microregioes_2006_df.head()
###Output
_____no_output_____
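###Markdown
With the per-year frames reduced to the delivery-count and place-of-birth columns, a natural follow-up (a sketch, not part of the original analysis, assuming the count columns are numeric) is to total normal and cesarean deliveries per microregion for one year:
###Code
# Hypothetical aggregation: total normal and cesarean deliveries per microregion in 2006.
resumo_2006_df = qtde_partos_microregioes_2006_df[[COD_MICROREGIAO, NOME_MICROREGIAO]].copy()
resumo_2006_df['total_parto_normal'] = qtde_partos_microregioes_2006_df[QTDE_PARTO_NORMAL].sum(axis=1)
resumo_2006_df['total_parto_cesaria'] = qtde_partos_microregioes_2006_df[QTDE_PARTO_CESARIA].sum(axis=1)
resumo_2006_df.head()
###Output
_____no_output_____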
###Markdown
WFIRST Simulation Tools

The WFIRST effort at STScI has produced software tools for use by the scientific community. This repository contains example code and instructions for WebbPSF, the PSF simulator for JWST and WFIRST, as well as Pandeia, the exposure time and signal-to-noise calculator for both missions.

Test your installation

If the following cell executes without errors, you're good to go! (Give it a few seconds.)
###Code
import warnings
warnings.filterwarnings('ignore', message='No .+ tables found')
import pysynphot
import webbpsf.wfirst
wfi = webbpsf.wfirst.WFI()
from pandeia.engine.wfirst import WFIRSTImager
_ = WFIRSTImager(mode="imaging")
print("Looks like everything installed correctly!")
###Output
_____no_output_____
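###Markdown
If the cell above runs cleanly, a quick follow-up smoke test (a sketch that assumes the standard WebbPSF `calc_psf` interface; the wavelength and field of view are arbitrary) is to compute a small monochromatic PSF with the `wfi` object:
###Code
# Hypothetical smoke test: compute a small monochromatic PSF with the WFI object above.
psf = wfi.calc_psf(monochromatic=1.2e-6, fov_pixels=32)  # wavelength in metres
psf.info()  # summary of the returned FITS HDUList
###Output
_____no_output_____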
###Markdown
What is RecoGym?

RecoGym is a Python [OpenAI Gym](https://gym.openai.com/) environment for testing recommendation algorithms. It allows for the testing of both offline and reinforcement-learning based agents, and it provides a way to test algorithms quickly in a toy environment.

In this notebook, we will code a simple recommendation agent that suggests an item in proportion to how many times it has been viewed. We hope to inspire you to create your own agents and test them against our baseline models.

In order to make the most out of RecoGym, we suggest you have some experience coding in Python, some background knowledge in recommender systems, and familiarity with the reinforcement learning setup. Also, be sure to check out the Python-based requirements in the README if something below errors.

Reinforcement Learning Setup

RecoGym follows the usual reinforcement learning setup. This means there are interactions between the environment (the user's behaviour) and the agent (our recommendation algorithm). The agent receives a reward if the user clicks on the recommendation.

Organic and Bandit

Even though our focus is biased towards online advertising, we tried to make RecoGym universal to all types of recommendation. Hence, we introduce the domain-agnostic terms Organic and Bandit sessions. An Organic session is an observation of items the user interacts with. For example, it could be views of products on an e-commerce website, listens to songs while streaming music, or readings of articles on an online newspaper. A Bandit session is one where we have an opportunity to recommend the user an item and observe their behaviour. We receive a reward if they click.

Offline and Online Learning

This project was born out of a desire to improve Criteo's recommendation system by exploring reinforcement learning algorithms. We quickly realised that we couldn't just blindly apply RL algorithms to a production system out of the box: the learning period would be too costly. Instead, we need to leverage the vast amounts of offline training examples we already have to make the algorithm perform as well as the current system before releasing it into the online production environment.

Thus, RecoGym follows a similar flow. An agent is first given access to many offline training examples produced from a fixed policy. Then, it has access to the online system where it chooses the actions.

Let's see some code - Interacting with the environment

The code snippet below shows how to initialise the environment and step through it in an 'offline' manner (here, offline means that the environment is generating some recommendations for us). We print out the results from the environment at each step.
###Code
import gym, recogym
# env_1_args is a dictionary of default parameters (i.e. number of products)
from recogym import env_1_args, Configuration
# You can overwrite environment arguments here:
env_1_args['random_seed'] = 42
# Initialize the gym for the first time by calling .make() and .init_gym()
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# .reset() env before each episode (one episode per user).
env.reset()
done = False
# Counting how many steps.
i = 0
observation, reward, done = None, 0, False
while not done:
action, observation, reward, done, info = env.step_offline(observation, reward, done)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 0}] - Reward: None
Step: 1 - Action: {'t': 1, 'u': 0, 'a': 3, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
Step: 2 - Action: {'t': 2, 'u': 0, 'a': 4, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
Step: 3 - Action: {'t': 3, 'u': 0, 'a': 5, 'ps': 0.1, 'ps-a': array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])} - Observation: [] - Reward: 0
###Markdown
Okay, there's quite a bit going on here:

- `action` is a number between `0` and `num_products - 1` that references the index of the product recommended.
- `observation` will either be `None` or a session of Organic data, showing the indices of the products the user views.
- `reward` is 0 if the user does not click on the recommended product and 1 if they do. Notice that when a user clicks on a product (wherever the reward is 1), they start a new Organic session.
- `done` is a True/False flag indicating whether the episode (i.e. the user's timeline) is over.
- `info` is currently not used, so it is always an empty dictionary.

Also, notice that the first `action` is `None`. In our implementation, the agent observes Organic behaviour before recommending anything.

Now, we will show calling the environment in an online manner, where the agent needs to supply an action. For demonstration purposes, we will create a list of hard-coded actions.
###Code
# Create list of hard coded actions.
actions = [None] + [1, 2, 3, 4, 5]
# Reset env and set done to False.
env.reset()
done = False
# Counting how many steps.
i = 0
while not done and i < len(actions):
action = actions[i]
observation, reward, done, info = env.step(action)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 1}] - Reward: None
Step: 1 - Action: 1 - Observation: [] - Reward: 0
Step: 2 - Action: 2 - Observation: [] - Reward: 0
Step: 3 - Action: 3 - Observation: [] - Reward: 0
Step: 4 - Action: 4 - Observation: [] - Reward: 0
Step: 5 - Action: 5 - Observation: [{'t': 6, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 7, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 8, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 9, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 10, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 11, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 12, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 13, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 14, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 15, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 16, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 17, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 18, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 19, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 20, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 21, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 22, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 23, 'u': 0, 'z': 'pageview', 'v': 6}] - Reward: 0
###Markdown
You'll notice that the offline and online APIs are nearly identical. The only difference is that one calls either `env.step_offline()` or `env.step(action)`.

Creating our first agent

Now that we have seen how the offline and online versions of the environment work, it is time to code our first recommendation agent! Technically, an agent can be anything that produces actions for the environment to use. However, we will show you the object-oriented way we like to create agents.

Below is the code for a very simple agent - the popularity-based agent. The popularity-based agent merely records how many times a user sees each product organically; then, when required to make a recommendation, it chooses a product randomly in proportion to the number of times the user has viewed it.
###Code
import numpy as np
from numpy.random import choice
from recogym.agents import Agent
# Define an Agent class.
class PopularityAgent(Agent):
def __init__(self, config):
# Set number of products as an attribute of the Agent.
super(PopularityAgent, self).__init__(config)
# Track number of times each item viewed in Organic session.
self.organic_views = np.zeros(self.config.num_products)
def train(self, observation, action, reward, done):
"""Train method learns from a tuple of data.
this method can be called for offline or online learning"""
# Adding organic session to organic view counts.
if observation:
for session in observation.sessions():
self.organic_views[session['v']] += 1
def act(self, observation, reward, done):
"""Act method returns an action based on current observation and past
history"""
# Choosing action randomly in proportion with number of views.
prob = self.organic_views / sum(self.organic_views)
action = choice(self.config.num_products, p = prob)
return {
**super().act(observation, reward, done),
**{
'a': action,
'ps': prob[action]
}
}
###Output
_____no_output_____
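###Markdown
One detail worth noting before using the class: `act` divides by `sum(self.organic_views)`, which is only well defined once at least one organic view has been recorded. A common, hypothetical tweak is add-one (Laplace) smoothing, sketched here on a plain array rather than inside the agent:
###Code
# Add-one (Laplace) smoothing keeps the distribution well defined even with no views yet.
raw_views = np.zeros(10)                          # no organic views recorded so far
smoothed = (raw_views + 1) / (raw_views + 1).sum()
print(smoothed)                                   # uniform over the 10 products
###Output
_____no_output_____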
###Markdown
The `PopularityAgent` class above demonstrates our preferred way to create agents for RecoGym. Notice how we have both a `train` and an `act` method present. The `train` method is designed to take in training data from the environment's `step_offline` method and thus has nothing to return, while the `act` method must return an action to pass back into the environment. The code below highlights how one would use this agent: first training it offline, and then using the learned knowledge to make recommendations online.
###Code
# Instantiate instance of PopularityAgent class.
num_products = 10
agent = PopularityAgent(Configuration({
**env_1_args,
'num_products': num_products,
}))
# Resets random seed back to 42, or whatever we set it to in env_1_args.
env.reset_random_seed()
# Train on 1000 users offline.
num_offline_users = 1000
for _ in range(num_offline_users):
# Reset env and set done to False.
env.reset()
done = False
observation, reward, done = None, 0, False
while not done:
old_observation = observation
action, observation, reward, done, info = env.step_offline(observation, reward, done)
agent.train(old_observation, action, reward, done)
# Train on 100 users online and track click through rate.
num_online_users = 100
num_clicks, num_events = 0, 0
for _ in range(num_online_users):
# Reset env and set done to False.
env.reset()
observation, _, done, _ = env.step(None)
reward = None
done = None
while not done:
action = agent.act(observation, reward, done)
observation, reward, done, info = env.step(action['a'])
# Used for calculating click through rate.
num_clicks += 1 if reward == 1 and reward is not None else 0
num_events += 1
ctr = num_clicks / num_events
print(f"Click Through Rate: {ctr:.4f}")
###Output
Click Through Rate: 0.0147
###Markdown
Testing our first agent

Now that we have created our popularity-based agent, we should test it against an even simpler baseline - one that performs no learning and recommends products uniformly at random. To do this, we will first load a more complex version of the toy data environment called `reco-gym-v1`.

Next, we will load another agent for ours to compete against. Here you can see we make use of the `RandomAgent` and create an instance of it in addition to our `PopularityAgent`.
###Code
import gym, recogym
from recogym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# Import the random agent.
from recogym.agents import RandomAgent, random_args
# Create the two agents.
num_products = env_1_args['num_products']
popularity_agent = PopularityAgent(Configuration(env_1_args))
agent_rand = RandomAgent(Configuration({
**env_1_args,
**random_args,
}))
###Output
_____no_output_____
###Markdown
Now we have instances of our two agents. We can use the `test_agent` method from RecoGym and compare their performance.

To use `test_agent`, one must provide a copy of the current env, a copy of the agent, the number of training users and the number of testing users.
###Code
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(popularity_agent), 1000, 1000)
###Output
Start: Agent Training #0
Start: Agent Testing #0
End: Agent Testing #0 (12.809738874435425s)
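###Markdown
As a follow-up experiment (a sketch that reuses `test_agent` exactly as above; the training sizes are arbitrary), one could vary the amount of offline training data to see how quickly the popularity agent pulls away from the random baseline:
###Code
# Hypothetical follow-up: sweep the number of offline training users.
for n_train in (100, 1000, 5000):
    stats = recogym.test_agent(deepcopy(env), deepcopy(popularity_agent), n_train, 1000)
    print(n_train, stats)
###Output
_____no_output_____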
###Markdown
What is RecoGym?

RecoGym is a Python [OpenAI Gym](https://gym.openai.com/) environment for testing recommendation algorithms. It allows for the testing of both offline and reinforcement-learning based agents, and it provides a way to test algorithms quickly in a toy environment.

In this notebook, we will code a simple recommendation agent that suggests an item in proportion to how many times it has been viewed. We hope to inspire you to create your own agents and test them against our baseline models.

In order to make the most out of RecoGym, we suggest you have some experience coding in Python, some background knowledge in recommender systems, and familiarity with the reinforcement learning setup. Also, be sure to check out the Python-based requirements in the README if something below errors.

Reinforcement Learning Setup

RecoGym follows the usual reinforcement learning setup. This means there are interactions between the environment (the user's behaviour) and the agent (our recommendation algorithm). The agent receives a reward if the user clicks on the recommendation.

Organic and Bandit

Even though our focus is biased towards online advertising, we tried to make RecoGym universal to all types of recommendation. Hence, we introduce the domain-agnostic terms Organic and Bandit sessions. An Organic session is an observation of items the user interacts with. For example, it could be views of products on an e-commerce website, listens to songs while streaming music, or readings of articles on an online newspaper. A Bandit session is one where we have an opportunity to recommend the user an item and observe their behaviour. We receive a reward if they click.

Offline and Online Learning

This project was born out of a desire to improve Criteo's recommendation system by exploring reinforcement learning algorithms. We quickly realised that we couldn't just blindly apply RL algorithms to a production system out of the box: the learning period would be too costly. Instead, we need to leverage the vast amounts of offline training examples we already have to make the algorithm perform as well as the current system before releasing it into the online production environment.

Thus, RecoGym follows a similar flow. An agent is first given access to many offline training examples produced from a fixed policy. Then, it has access to the online system where it chooses the actions.

Let's see some code - Interacting with the environment

The code snippet below shows how to initialise the environment and step through it in an 'offline' manner (here, offline means that the environment is generating some recommendations for us). We print out the results from the environment at each step.
###Code
#!pip install gym recogym torch==1.1.0
import gym, recogym
#!cd recogym ; pwd
import sys, os
module_path = os.path.abspath(os.path.join('recogym'))
if module_path not in sys.path:
sys.path.insert(0, module_path)
print(sys.path)
# env_2_args is a dictionary of default parameters (i.e. number of products)
from envs.reco_env_v2 import env_2_args
def flatten(something):
if isinstance(something, (list, tuple, set, range)):
for sub in something:
yield from flatten(sub)
else:
yield something
import csv
# Read rows 1-4999 of the CSV (skipping the first row); each row holds a single
# space-separated sequence of event codes for one user timeline.
data = list(csv.reader(open("jameson.csv")))[1:5000]
data = list(map(lambda x: x[0].split(' '), data))
# Codes that sort before '1000' as strings (the dotted codes in this data) are treated
# as life events; collect the distinct set across all timelines.
life_events = list(map(lambda y: list(filter(lambda x: x < '1000', y)), data))
life_events = list(set(flatten(life_events)))
print(life_events)
print(len(data), len(data[0]))
# You can overwrite environment arguments here:
env_2_args['random_seed'] = 42
env_2_args['training_data'] = data[1:1000]
env_2_args['life_events'] = life_events
env_2_args['debug'] = False
print(data[1:10])
#[
# [1, 2, 3, 4, 5, 6],
# [3, 4, 5, 6],
# [1, 3, 4, 5, 6],
# [1, 2, 3, 4]
#]
env_2_args['num_products'] = len(life_events) # int(max(map(max, env_2_args['training_data'])).strip())
print('Number of products: ', env_2_args['num_products'])
print(len(env_2_args['training_data']))
# Initialize the gym for the first time by calling .make() and .init_gym()
env = gym.make('reco-gym-v2')
env.init_gym(env_2_args)
# .reset() env before each episode (one episode per user).
env.reset()
done = False
# Counting how many steps.
i = 0
observation, reward, done = None, 0, False
while not done:
action, observation, reward, done, info = env.step_offline(observation, reward, done)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
['/Users/markus.braasch/git/reco-gym/recogym', '', '/Users/markus.braasch/git/reco-gym', '/anaconda3/lib/python37.zip', '/anaconda3/lib/python3.7', '/anaconda3/lib/python3.7/lib-dynload', '/anaconda3/lib/python3.7/site-packages', '/anaconda3/lib/python3.7/site-packages/aeosa', '/anaconda3/lib/python3.7/site-packages/IPython/extensions', '/Users/markus.braasch/.ipython']
['10.58', '10.53', '10.52', '1', '1.3', '1.114', '1.2', '1.9', '1.4', '1.8', '1.5', '10.55', '10.69', '10.56', '1.113', '10', '10.62', '10.60', '1.1', '10.63']
4999 8
[['3130', '2929', '2658', '1446', '1819', '2828', '1069', '2373', '1761', '1324', '1258', '1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783'], ['1722', '1894', '1722', '1696'], ['2109', '3130', '2929', '1069', '2897', '2828', '3189', '1761', '1324', '2649', '1534', '2751', '1258', '1200', '2058', '1607', '2432', '1786', '3024', '2531', '3041', '1786', '3024', '2531', '3041', '2531', '3041', '1951', '1894', '1696', '2751', 
'1757', '1200', '1446', '2125', '1837', '2649', '2762', '1841', '3130', '2929', '2300', '1200', '1446', '2658', '2828', '1819', '2897', '2373', '1069', '3189', '1761', '1324', '1534', '2751', '2649', '1258', '1200', '2058', '3085', '1400', '2862', '3063', '3130', '2929', '1446', '2300', '1200', '2658', '1819', '1069', '2897', '2828', '2373', '3189', '1761', '1324', '1534', '2649', '2751', '1258', '1200', '2058', '2288', '3051', '2333', '1481', '1559', '2783', '1722', '2650', '3130', '1446', '2658', '5.94', '2373', '2828', '2649', '1534', '1200', '2649', '1841', '1208', '1761', '2234', '1841', '2649', '2649', '1841', '1761', '2649', '1841', '3130', '2373', '1446', '2658', '1069', '2828', '2649', '1761', '1534', '1200', '1120', '1841', '2649', '3189', '2369', '1598', '2522', '2929', '1983', '2649', '2751', '2300', '1841', '2522', '1598', '1595', '2649', '1841', '2369', '1228', '3173', '2929', '2522', '3161', '1598', '1595', '2649', '1841', '3130', '2929', '2911', '2300', '1446', '2658', '2897', '1819', '2828', '1069', '2373', '1761', '1324', '1258', '2751', '2649', '1534', '1200', '5.94', '5.33', '5', '2649', '2415', '2769', '2649', '2649', '3060', '2751', '2897', '1598', '2300', '3189', '2649', '2783'], ['1841', '2109', '3130', '2929', '2897', '2828', '1069', '3189', '1761', '1324', '1534', '2751', '1258', '2649', '1200', '2058', '1607', '2432', '1786', '3024', '2531', '3041', '1951', '2625', '2875', '2109', '3130', '2929', '1446', '1069', '2828', '2658', '1819', '2373', '1761', '1258', '1534', '2649', '1324', '1200', '2897', '2751', '2300', '1598', '2649', '3189', '3130', '1200', '1534', '2649', '1915', '2109', '3130', '1761', '1534', '1324', '1200', '1258', '1598', '2751', '2300', '3189', '2897', '2649', '2109', '3130', '2649', '1761', '1534', '1324', '1258', '1200', '1598', '3189', '2300', '2751', '2649', '2897', '2908', '2908', '1915', '2109', '3130', '2929', '2828', '2897', '1069', '3189', '1761', '1324', '1534', '2751', '2649', '1258', '1200', '2058', '1607', '2432', '1786', '3024', '2531', '3041', '1951', '1894', '1696', '1446', '2751', '1200', '1757', '1722', '2125', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2911', '1446', '2828', '1069', '2658', '1819', '2373', '1534', '1258', '2649', '1761', '1324', '1200', '2751', '2897', '2300', '1598', '3189', '2649'], ['1722', '1696', '2751', '1200', '1446', '1757', '1722', '2125', '1837', '2649', '1841', '2762', '1841', '2649', '3130', '2929', '1446', '1200', '2300', '2828', '2658', '1819', '2897', '1069', '2373', '3189', '1761', '1324', '1534', '2751', '2649', '1200', '1258', '2058', '1722', '3055', '1722', '2864', '1722', '3030', '1722', '1537', '1722', '1371', '1722', '1994', '1722', '1568', '1722', '2657', '1722', '1933', '1722', '2310', '1722', '1559', '1722', '2650', '1200', '1200', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '2897', '1819', '1069', '2373', '3189', '1324', '1761', '2751', '2649', '1258', '1534', '1200', '2058', '1722', '2865', '1722', '2145', '1722', '2865', '3130', '2929', '1446', '2300', '2828', '1819', '1069', '2658', '2373', '2897', '3189', '1761', '1324', '1534', '2649', '1258', '1200', '2751', '2058', '1722', '2865'], ['3130', '1534', '1200', '2649', '1598', '3130', '1534', '2649', '1200', '1915', '1915', '3130', '1534', '1200', '1598', '2649', '3130', '2109', '1923', '1761', '1324', '1534', '1200', '2649', '1258', '1598', '2751', '2300', '2897', '3189', '2649'], ['1894', '1894', '1696', '1757', '2751', '1446', '1200', '2125', '1837', '2649', '1841', '2649', '2762', '1841', 
'1841', '2649', '2649', '1841', '2234', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '2649', '1837', '2649', '1841', '2649', '1841', '2762', '2649', '1841', '1069', '2649', '2994', '1069', '1841', '2649', '1841', '2344', '2649', '1120', '1841', '2369', '1598', '2929', '1983', '2522', '2649', '2751', '2300', '1841', '3161', '2522', '1598', '2649', '1841', '1595', '2762', '2649', '1841', '3130', '2929', '2300', '1200', '1446', '2828', '1069', '2658', '2897', '1819', '2373', '3189', '1761', '1324', '1534', '2649', '2751', '1258', '1200', '2058', '1818', '1754', '1478', '1754', '1496', '1780', '2302', '1918', '1495', '1559', '2650', '3130', '2929', '2300', '1446', '2828', '1069', '2658', '2897', '1819', '2373', '3189', '1761', '1324', '1534', '2649', '2751', '1258', '1200', '2058', '3085', '1400', '2862', '3063', '3130', '2929', '2300', '1446', '2828', '1069', '2897', '2658', '1819', '2373', '3189', '1761', '1324', '1534', '2649', '2751', '1258', '1200', '2058', '3085', '1400', '2862', '1722', '3063', '3130', '1446', '2658', '5.94', '2828', '2373', '2649', '1200', '1534', '2649', '1841', '1761', '2649', '1841', '1761', '2649', '1841', '2234', '2649', '1761', '1841', '3130', '2373', '1446', '2828', '2658', '1761', '2649', '1200', '1534', '2649', '1841', '1069', '2994', '1069', '2649', '1841', '2649', '2344', '1841', '2649', '3189', '1120', '1841', '2369', '1598', '2522', '1983', '2929', '2751', '2649', '2300', '1841', '2522', '1598', '1595', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '1598', '2649', '1841', '1595', '3130', '2929', '2911', '2300', '1446', '2828', '1069', '1819', '2658', '2897', '2373', '1534', '1761', '1324', '2751', '2649', '1258', '1200', '15.103', '15.97', '15.6', '11.71', '5.94', '5.33', '15', '11', '5', '2649', '2415', '2649', '2769', '2649', '2783', '3060', '2751', '1598', '2300', '3189', '2897', '2649'], ['1696', '2908', '2908', '1200', '1446', '2751', '1757', '2125', '1837', '2649', '1841', '1841', '2762', '2649', '3130', '2658', '1446', '2373', '2828', '1200', '2649', '1534', '1069', '1841', '2649', '1069', '2649', '2994', '1841', '1841', '2649', '2344', '1841', '2649', '1120', '3038', '2369', '1841', '2649', '3161', '3130', '2929', '2300', '1200', '1446', '2828', '1819', '2897', '2373', '2658', '1069', '3189', '1761', '1324', '1534', '2649', '1258', '1200', '2751', '2058', '1483', '3130', '2658', '1446', '2373', '2828', '2649', '1200', '1534', '2649', '1841', '1069', '1287', '1841', '2649', '2649', '1841', '2649', '1841', '1069', '1635', '2649', '1841', '2649', '3142', '1841', '2649', '1069', '1841', '1841', '2649', '1440', '2649', '3193', '1841', '2649', '1841', '1069', '1287', '1841', '2649', '1612', '1841', '2649', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '1819', '2897', '2373', '1069', '3189', '1324', '1761', '2751', '1534', '2649', '1258', '1200', '2058', '1723', '1757', '1446', '1200', '2751', '2125', '1837', '1258', '2649', '1841', '2897', '2762', '1324', '2751', '3189', '2300', '1841', '2649', '2889', '2950', '2490', '2563', '1837', '1841', '2649', '1841', '2649', '2762', '3130', '2929', '2300', '1446', '2897', '2373', '2828', '2658', '1069', '1819', '3189', '1324', '2649', '1761', '1534', '1200', '2751', '1258', '2058', '1004', '1841', '2762', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '2908', '1915', '1446', '1200', '1757', '3130', '2929', '2828', '1446', '2300', '1200', '2658', '2897', '1069', '1819', '3189', '2373', '1534', '1761', '1324', '2751', '2649', '1200', '1258', 
'2058', '2762', '2649', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2897', '1069', '2373', '3189', '1761', '1324', '1258', '1534', '2649', '2751', '1200', '2058', '1483', '3088', '1485', '2085', '2388', '1026', '1899', '1483', '1899', '1483', '3130', '2658', '1446', '2828', '2373', '1534', '2649', '1200', '2908', '1915', '1757', '1446', '1200', '3130', '2929', '2300', '1200', '2828', '1446', '1819', '3189', '2658', '2897', '1069', '2373', '1324', '1761', '2649', '1258', '1534', '1200', '2751', '2058', '3085', '1400', '2862', '3063', '2421', '3130', '2658', '1446', '2828', '2373', '2649', '1534', '1200', '1069', '2649', '1841', '2649', '1069', '1841', '2994', '2344', '1841', '2649', '3038', '1120', '1841', '2649', '2369', '2649', '1841', '2369', '2522', '1598', '1841', '2300', '2929', '2751', '1983', '2649', '1598', '2522', '1595', '1841', '2649', '3161', '1446', '1757', '1200', '3130', '2300', '2929', '1446', '2828', '2658', '1819', '2897', '3189', '1069', '2373', '1761', '1258', '1324', '1534', '2649', '2751', '1200', '2058', '2950', '2490', '2563', '3019', '2323', '1733', '1344', '1559', '2650', '3130', '2658', '1446', '2828', '2373', '1534', '2649', '1200', '1199', '1841', '2649', '2234', '1757', '1446', '1200', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2897', '1069', '2373', '3189', '1761', '1324', '1258', '1534', '2751', '2649', '1200', '2058', '1245', '1200', '1757', '1446', '3130', '1446', '2658', '2373', '2828', '2649', '1534', '1200', '1841', '2649', '1446', '1757', '1200', '3130', '2658', '1446', '2828', '2373', '2649', '1534', '1200', '1612', '2649', '1841', '1200', '1446', '1757', '3130', '2658', '1446', '2373', '2828', '1534', '2649', '1200', '1193', '1841', '2649', '1723', '1200', '1446', '1757', '1446', '1200', '1757', '3130', '2658', '1446', '2828', '2373', '2649', '1200', '1534', '2908', '1915', '1726', '1446', '1757', '1200', '3130', '2373', '2658', '1446', '2828', '2649', '1200', '1534', '1204', '2234', '1841', '2649', '1652', '3130', '2929', '2300', '1200', '1446', '1819', '2828', '2658', '3189', '2897', '1069', '2373', '1761', '2751', '2649', '1324', '1534', '1258', '1200', '2058', '1245', '3130', '2929', '2300', '2658', '1446', '2828', '1819', '1069', '3189', '2373', '2897', '1761', '1324', '1258', '1534', '2751', '2649', '1200', '2058', '1582', '2890', '3130', '2929', '1446', '2300', '2658', '2828', '1819', '1069', '2373', '2897', '3189', '1761', '1324', '1534', '2649', '2751', '1200', '1258', '2058', '2648', '1903', '7.91', '2658', '2373', '3130', '2828', '1446', '2649', '1200', '1534', '2234', '2649', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2897', '1069', '2373', '3189', '1761', '1324', '1534', '2649', '2751', '1200', '1258', '2058', '2890', '1582', '2192', '2890', '5.33', '3130', '2929', '2300', '1446', '2828', '3189', '1819', '2658', '1069', '2897', '2373', '1324', '1761', '2751', '1534', '2649', '1200', '1258', '2058', '1483', '1899', '1483', '1320', '3088', '1485', '2085', '1339', '2388', '1026', '1899', '1483', '3.14', '3130', '2929', '2300', '1446', '2658', '2373', '2828', '1819', '2897', '1069', '3189', '1761', '1324', '1534', '1258', '2649', '2751', '1200', '2058', '3085', '1400', '2862', '1722', '3063', '2658', '3130', '5.94', '1446', '2373', '2828', '1200', '2649', '1534', '2649', '1841', '1193', '2649', '1841', '1761', '1841', '2649', '2421', '3130', '1446', '2658', '2828', '2373', '1761', '2649', '1534', '1200', '1069', '1841', '2649', '1069', '1841', '2994', '2649', '2344', '2649', '1841', 
'1120', '3038', '2649', '1841', '2369', '2649', '1841', '3189', '2369', '1364', '1598', '2522', '1983', '2929', '2300', '2649', '2751', '1841', '3161', '2522', '1841', '1598', '2649', '1595', '3130', '2929', '2911', '1819', '2658', '1446', '2373', '2828', '1069', '1324', '1258', '1534', '1761', '1200', '1598', '2897', '2300', '2751', '3189', '2649', '2783'], ['3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '1761', '1324', '1534', '2649', '2751', '1258', '1200', '2058', '3130', '1446', '2658', '2828', '2373', '2649', '1534', '1200', '2522', '1841', '1595', '1598', '2649', '1598', '2649', '1841', '2522', '2649', '1841', '2649', '1841', '3130', '1446', '2658', '2828', '2373', '2649', '1534', '1200', '2522', '1595', '1841', '1598', '2649', '3161', '3130', '1446', '2658', '1069', '2828', '2373', '2649', '1534', '1200', '2649', '1120', '1841', '3038', '2369', '1841', '2649', '2369', '1598', '2522', '2929', '1983', '2649', '2751', '2300', '1841', '2522', '2649', '1598', '1595', '1841', '3161', '3130', '2929', '2911', '2658', '1819', '1446', '2828', '1069', '2373', '1761', '1324', '1534', '2649', '1258', '1200', '1598', '2897', '2751', '3189', '2300', '2649']]
Number of products: 20
999
Gladly returning product with ID: 3130 for user ID: 0
Gladly returning product with ID: 2929 for user ID: 0
Gladly returning product with ID: 2658 for user ID: 0
Gladly returning product with ID: 1446 for user ID: 0
Gladly returning product with ID: 1819 for user ID: 0
Gladly returning product with ID: 2828 for user ID: 0
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 3130}, {'t': 1, 'u': 0, 'z': 'pageview', 'v': 2929}, {'t': 2, 'u': 0, 'z': 'pageview', 'v': 2658}, {'t': 3, 'u': 0, 'z': 'pageview', 'v': 1446}, {'t': 4, 'u': 0, 'z': 'pageview', 'v': 1819}, {'t': 5, 'u': 0, 'z': 'pageview', 'v': 2828}] - Reward: None
Checking if click for user 0 with product 3 is contained in future page views: ['1069', '2373', '1761', '1324', '1258', '1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 1069 for user ID: 0
Step: 1 - Action: {'t': 6, 'u': 0, 'a': 3, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 7, 'u': 0, 'z': 'pageview', 'v': 1069}] - Reward: 1
Checking if click for user 0 with product 5 is contained in future page views: ['2373', '1761', '1324', '1258', '1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 2373 for user ID: 0
Gladly returning product with ID: 1761 for user ID: 0
Gladly returning product with ID: 1324 for user ID: 0
Gladly returning product with ID: 1258 for user ID: 0
Step: 2 - Action: {'t': 8, 'u': 0, 'a': 5, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 9, 'u': 0, 'z': 'pageview', 'v': 2373}, {'t': 10, 'u': 0, 'z': 'pageview', 'v': 1761}, {'t': 11, 'u': 0, 'z': 'pageview', 'v': 1324}, {'t': 12, 'u': 0, 'z': 'pageview', 'v': 1258}] - Reward: 1
Checking if click for user 0 with product 4 is contained in future page views: ['1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 3 - Action: {'t': 13, 'u': 0, 'a': 4, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 8 is contained in future page views: ['1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 4 - Action: {'t': 14, 'u': 0, 'a': 8, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 8 is contained in future page views: ['1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 5 - Action: {'t': 15, 'u': 0, 'a': 8, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 1 is contained in future page views: ['1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 6 - Action: {'t': 16, 'u': 0, 'a': 1, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 4 is contained in future page views: ['1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 7 - Action: {'t': 17, 'u': 0, 'a': 4, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 7 is contained in future page views: ['1534', '1200', '2649', '1598', '2751', '3189', '2897', '2300', '1915', '1258', '2774', '2762', '2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 1534 for user ID: 0
Gladly returning product with ID: 1200 for user ID: 0
Gladly returning product with ID: 2649 for user ID: 0
Gladly returning product with ID: 1598 for user ID: 0
Gladly returning product with ID: 2751 for user ID: 0
Gladly returning product with ID: 3189 for user ID: 0
Gladly returning product with ID: 2897 for user ID: 0
Gladly returning product with ID: 2300 for user ID: 0
Gladly returning product with ID: 1915 for user ID: 0
Gladly returning product with ID: 1258 for user ID: 0
Gladly returning product with ID: 2774 for user ID: 0
Gladly returning product with ID: 2762 for user ID: 0
Step: 8 - Action: {'t': 18, 'u': 0, 'a': 7, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 19, 'u': 0, 'z': 'pageview', 'v': 1534}, {'t': 20, 'u': 0, 'z': 'pageview', 'v': 1200}, {'t': 21, 'u': 0, 'z': 'pageview', 'v': 2649}, {'t': 22, 'u': 0, 'z': 'pageview', 'v': 1598}, {'t': 23, 'u': 0, 'z': 'pageview', 'v': 2751}, {'t': 24, 'u': 0, 'z': 'pageview', 'v': 3189}, {'t': 25, 'u': 0, 'z': 'pageview', 'v': 2897}, {'t': 26, 'u': 0, 'z': 'pageview', 'v': 2300}, {'t': 27, 'u': 0, 'z': 'pageview', 'v': 1915}, {'t': 28, 'u': 0, 'z': 'pageview', 'v': 1258}, {'t': 29, 'u': 0, 'z': 'pageview', 'v': 2774}, {'t': 30, 'u': 0, 'z': 'pageview', 'v': 2762}] - Reward: 1
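
One plausible reading of the pattern so far: when the probe decides a click on the recommended product is supported by the user's future logged page views, the simulator "gladly returns" the next run of logged views (they then reappear verbatim in the following observation, e.g. the twelve views replayed at Step 8) and pays Reward: 1; otherwise nothing is consumed and the reward is 0. The sketch below is only that reading made concrete. How an action index such as 'a': 7 maps onto the 4-digit page-view IDs is not visible in the log, so click_supported and session_len are hypothetical placeholders.

def offline_step(user_id, product, future_views, click_supported, session_len):
    # Illustrative single step of the replay logic suggested by the trace.
    # `click_supported` and `session_len` stand in for the simulator's hidden
    # click check and session segmentation; they are assumptions, not its API.
    print(f"Checking if click for user {user_id} with product {product} "
          f"is contained in future page views: {future_views}")
    if not click_supported(product, future_views):
        return [], 0, future_views                     # Reward: 0, future views untouched
    session, remainder = future_views[:session_len], future_views[session_len:]
    for view in session:
        print(f"Gladly returning product with ID: {view} for user ID: {user_id}")
    return session, 1, remainder                       # Reward: 1, replayed views consumed
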
Checking if click for user 0 with product 10 is contained in future page views: ['2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 9 - Action: {'t': 31, 'u': 0, 'a': 10, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 15 is contained in future page views: ['2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 10 - Action: {'t': 32, 'u': 0, 'a': 15, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 5 is contained in future page views: ['2649', '1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 2649 for user ID: 0
Step: 11 - Action: {'t': 33, 'u': 0, 'a': 5, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 34, 'u': 0, 'z': 'pageview', 'v': 2649}] - Reward: 1
Checking if click for user 0 with product 3 is contained in future page views: ['1841', '1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 1841 for user ID: 0
Step: 12 - Action: {'t': 35, 'u': 0, 'a': 3, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 36, 'u': 0, 'z': 'pageview', 'v': 1841}] - Reward: 1
Checking if click for user 0 with product 0 is contained in future page views: ['1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 13 - Action: {'t': 37, 'u': 0, 'a': 0, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 17 is contained in future page views: ['1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 14 - Action: {'t': 38, 'u': 0, 'a': 17, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 9 is contained in future page views: ['1837', '2649', '1841', '2649', '2762', '1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 1837 for user ID: 0
Gladly returning product with ID: 2649 for user ID: 0
Gladly returning product with ID: 1841 for user ID: 0
Gladly returning product with ID: 2649 for user ID: 0
Gladly returning product with ID: 2762 for user ID: 0
Step: 15 - Action: {'t': 39, 'u': 0, 'a': 9, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 40, 'u': 0, 'z': 'pageview', 'v': 1837}, {'t': 41, 'u': 0, 'z': 'pageview', 'v': 2649}, {'t': 42, 'u': 0, 'z': 'pageview', 'v': 1841}, {'t': 43, 'u': 0, 'z': 'pageview', 'v': 2649}, {'t': 44, 'u': 0, 'z': 'pageview', 'v': 2762}] - Reward: 1
Checking if click for user 0 with product 16 is contained in future page views: ['1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 16 - Action: {'t': 45, 'u': 0, 'a': 16, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 15 is contained in future page views: ['1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 17 - Action: {'t': 46, 'u': 0, 'a': 15, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 18 is contained in future page views: ['1841', '3130', '2929', '2300', '1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 1841 for user ID: 0
Gladly returning product with ID: 3130 for user ID: 0
Gladly returning product with ID: 2929 for user ID: 0
Gladly returning product with ID: 2300 for user ID: 0
Step: 18 - Action: {'t': 47, 'u': 0, 'a': 18, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 48, 'u': 0, 'z': 'pageview', 'v': 1841}, {'t': 49, 'u': 0, 'z': 'pageview', 'v': 3130}, {'t': 50, 'u': 0, 'z': 'pageview', 'v': 2929}, {'t': 51, 'u': 0, 'z': 'pageview', 'v': 2300}] - Reward: 1
Checking if click for user 0 with product 4 is contained in future page views: ['1446', '2828', '2658', '1819', '2373', '1069', '2897', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1285', '2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 19 - Action: {'t': 52, 'u': 0, 'a': 4, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 16 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 20 - Action: {'t': 53, 'u': 0, 'a': 16, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 2 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 21 - Action: {'t': 54, 'u': 0, 'a': 2, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 19 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 22 - Action: {'t': 55, 'u': 0, 'a': 19, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 0 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 23 - Action: {'t': 56, 'u': 0, 'a': 0, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 17 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 24 - Action: {'t': 57, 'u': 0, 'a': 17, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 2 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 25 - Action: {'t': 58, 'u': 0, 'a': 2, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 17 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 26 - Action: {'t': 59, 'u': 0, 'a': 17, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 1 is contained in future page views: [unchanged from the full page-view list printed above]
Step: 27 - Action: {'t': 60, 'u': 0, 'a': 1, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 9 is contained in future page views: [unchanged from the full page-view list printed above]
Gladly returning product with ID: 1446 for user ID: 0
Step: 28 - Action: {'t': 61, 'u': 0, 'a': 9, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 62, 'u': 0, 'z': 'pageview', 'v': 1446}] - Reward: 1
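# When the test succeeds, the trace prints "Gladly returning product with ID: ..." and the
# next observation contains exactly those page views, which then disappear from the front
# of the future page-view list (the list at the next check starts at '2828'). A sketch of
# that bookkeeping, again reverse-engineered from the log rather than taken from the
# environment source; the 't'/'u' fields of the real observations are omitted here.
from typing import List, Tuple

def pop_returned_views(future_views: List[str], n_returned: int) -> Tuple[List[dict], List[str]]:
    """Consume the first n_returned page views and package them as pageview observations."""
    returned = future_views[:n_returned]
    observations = [{'z': 'pageview', 'v': v} for v in returned]
    return observations, future_views[n_returned:]

# e.g. pop_returned_views(['1446', '2828', '2658'], 1)
#      -> ([{'z': 'pageview', 'v': '1446'}], ['2828', '2658'])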
Checking if click for user 0 with product 0 is contained in future page views: [same page-view list as above, now starting at '2828' after '1446' was returned]
Step: 29 - Action: {'t': 63, 'u': 0, 'a': 0, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 15 is contained in future page views: [same page-view list as above, now starting at '2828' after '1446' was returned]
Step: 30 - Action: {'t': 64, 'u': 0, 'a': 15, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 9 is contained in future page views: [same page-view list as above, now starting at '2828' after '1446' was returned]
Gladly returning product with ID: 2828 for user ID: 0
Step: 31 - Action: {'t': 65, 'u': 0, 'a': 9, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 66, 'u': 0, 'z': 'pageview', 'v': 2828}] - Reward: 1
Checking if click for user 0 with product 12 is contained in future page views: [same page-view list as above, now starting at '2658' after '2828' was returned]
Step: 32 - Action: {'t': 67, 'u': 0, 'a': 12, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 6 is contained in future page views: [same page-view list as above, now starting at '2658' after '2828' was returned]
Step: 33 - Action: {'t': 68, 'u': 0, 'a': 6, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 12 is contained in future page views: [same page-view list as above, now starting at '2658' after '2828' was returned]
Step: 34 - Action: {'t': 69, 'u': 0, 'a': 12, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 7 is contained in future page views: [same page-view list as above, now starting at '2658' after '2828' was returned]
Gladly returning product with ID: 2658 for user ID: 0
Step: 35 - Action: {'t': 70, 'u': 0, 'a': 7, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 71, 'u': 0, 'z': 'pageview', 'v': 2658}] - Reward: 1
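# Every action in this trace carries 'ps' = 0.05 and a 'ps-a' vector of twenty 0.05
# entries, which suggests the logging policy recommends each of the 20 product slots
# uniformly at random. Those propensities are exactly what an inverse-propensity-scoring
# (IPS) estimate needs if a different recommendation policy is later evaluated offline on
# this log. A generic sketch, not tied to this environment's API:
import numpy as np

def ips_estimate(rewards, logged_ps, target_ps):
    """Average of reward * pi_target(a|x) / pi_logging(a|x) over the logged steps."""
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(target_ps, dtype=float) / np.asarray(logged_ps, dtype=float)
    return float(np.mean(rewards * weights))

# e.g. ips_estimate(rewards=[1, 0, 0], logged_ps=[0.05, 0.05, 0.05], target_ps=[0.10, 0.02, 0.05])
#      -> 0.666..., i.e. the clicked step is up-weighted because the target policy would
#      have shown that product twice as often as the uniform logger did.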
Checking if click for user 0 with product 11 is contained in future page views: [same page-view list as above, now starting at '1819' after '2658' was returned]
Step: 36 - Action: {'t': 72, 'u': 0, 'a': 11, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
Checking if click for user 0 with product 14 is contained in future page views: [same page-view list as above, now starting at '1819' after '2658' was returned]
Gladly returning product with ID: 1819 for user ID: 0
Gladly returning product with ID: 2373 for user ID: 0
Gladly returning product with ID: 1069 for user ID: 0
Gladly returning product with ID: 2897 for user ID: 0
Gladly returning product with ID: 3189 for user ID: 0
Gladly returning product with ID: 2649 for user ID: 0
Gladly returning product with ID: 1761 for user ID: 0
Gladly returning product with ID: 1324 for user ID: 0
Gladly returning product with ID: 1534 for user ID: 0
Gladly returning product with ID: 2751 for user ID: 0
Gladly returning product with ID: 1258 for user ID: 0
Gladly returning product with ID: 1200 for user ID: 0
Gladly returning product with ID: 2058 for user ID: 0
Gladly returning product with ID: 3008 for user ID: 0
Gladly returning product with ID: 1285 for user ID: 0
Step: 37 - Action: {'t': 73, 'u': 0, 'a': 14, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 74, 'u': 0, 'z': 'pageview', 'v': 1819}, {'t': 75, 'u': 0, 'z': 'pageview', 'v': 2373}, {'t': 76, 'u': 0, 'z': 'pageview', 'v': 1069}, {'t': 77, 'u': 0, 'z': 'pageview', 'v': 2897}, {'t': 78, 'u': 0, 'z': 'pageview', 'v': 3189}, {'t': 79, 'u': 0, 'z': 'pageview', 'v': 2649}, {'t': 80, 'u': 0, 'z': 'pageview', 'v': 1761}, {'t': 81, 'u': 0, 'z': 'pageview', 'v': 1324}, {'t': 82, 'u': 0, 'z': 'pageview', 'v': 1534}, {'t': 83, 'u': 0, 'z': 'pageview', 'v': 2751}, {'t': 84, 'u': 0, 'z': 'pageview', 'v': 1258}, {'t': 85, 'u': 0, 'z': 'pageview', 'v': 1200}, {'t': 86, 'u': 0, 'z': 'pageview', 'v': 2058}, {'t': 87, 'u': 0, 'z': 'pageview', 'v': 3008}, {'t': 88, 'u': 0, 'z': 'pageview', 'v': 1285}] - Reward: 1
Checking if click for user 0 with product 3 is contained in future page views: ['2405', '1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Gladly returning product with ID: 2405 for user ID: 0
Step: 38 - Action: {'t': 89, 'u': 0, 'a': 3, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [{'t': 90, 'u': 0, 'z': 'pageview', 'v': 2405}] - Reward: 1
Checking if click for user 0 with product 2 is contained in future page views: ['1064', '1697', '1221', '1581', '3092', '3008', '3130', '2929', '2658', '1446', '2300', '1819', '2828', '2373', '1069', '2897', '3189', '2649', '1324', '1761', '1534', '2751', '1200', '1258', '2058', '2402', '2634', '1956', '2402', '3130', '2929', '2300', '2658', '2828', '1446', '1819', '3189', '2373', '2897', '1069', '1324', '1761', '2649', '1534', '2751', '1200', '1258', '2058', '2908', '2751', '1446', '1757', '1200', '2125', '1837', '2649', '1258', '1841', '2762', '2649', '1324', '2751', '1841', '2897', '2300', '3189', '2889', '1143', '2488', '1522', '1837', '2649', '1841', '2649', '2762', '1841', '1446', '1200', '1757', '3130', '2929', '2658', '2300', '1446', '2828', '1819', '1069', '2897', '3189', '2373', '2649', '1324', '1761', '1534', '1258', '1200', '2751', '2058', '1175', '2403', '1875', '2456', '3039', '2351', '2319', '1559', '2650', '3130', '2929', '1446', '2300', '1819', '2658', '2828', '1069', '2897', '2373', '3189', '2649', '1761', '1324', '1534', '2751', '1200', '1258', '2058', '1034', '1681', '2.11', '3130', '2929', '2300', '1200', '2658', '1446', '2828', '2373', '1819', '2897', '1069', '3189', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3063', '3130', '2929', '1446', '2300', '1200', '2828', '2658', '1819', '3189', '2897', '1069', '2373', '2649', '1761', '1534', '1324', '1200', '2751', '1258', '2058', '3085', '2909', '2120', '1400', '2862', '3063', '3130', '2929', '2300', '2658', '1.114', '1.1', '2828', '1819', '1446', '1200', '2897', '2373', '3189', '1069', '2649', '1761', '1324', '1534', '2751', '1258', '1200', '2058', '3008', '1284', '1285', '2405', '1064', '1697', '1297', '3042', '1221', '3042', '1221', '1581', '1273', '1722', '3008', '1.8', '1683', '2658', '5.94', '5', '3130', '1446', '2828', '2373', '1534', '2649', '1200', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '1841', '1841', '2649', '2649', '1841', '2649', '1841', '1219', '1841', '2649', '2649', '1841', '2649', '1841', '2649', '1841', '2649', '2234', '1761', '1841', '1761', '1841', '2649', '3130', '2658', '1446', '2373', '2828', '1534', '1200', '2649', '1761', '2649', '1761', '1841', '3130', '1069', '2658', '1446', '2828', '2373', '2649', '1200', '1761', '1534', '1120', '1841', '2649', '3189', '2369', '2522', '1598', '1983', '2929', '2751', '2649', '2300', '1841', '1595', '2522', '1598', '2649', '1841', '2369', '3173', '1228', '2929', '3161', '2522', '2649', '1598', '1841', '1595', '3130', '2929', '2911', '2373', '2828', '2658', '1819', '1069', '1446', '1761', '1324', '1534', '1258', '2649', '1200', '2.13', '2.11', '1.114', '1.113', '1.9', '1.8', '1.1', '2', '1', '2649', '2649', '2649', '1598', '2649', '2649', '2649', '2649', '2649', '2649', '1598', '1598', '2897', '3189', '2649', '2300', '2751', '2783', '1598', '1598', '2751', '2300', '3189', '2897', '2649', '2783']
Step: 39 - Action: {'t': 91, 'u': 0, 'a': 2, 'ps': 0.05, 'ps-a': array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])} - Observation: [] - Reward: 0
###Markdown
Okay, there's quite a bit going on here: - `action` is a number between `0` and `num_products - 1` that references the index of the product recommended. - `observation` will either be `None` or a session of Organic data, showing the index of products the user views. - `reward` is 0 if the user does not click on the recommended product and 1 if they do. Notice that when a user clicks on a product (wherever the reward is 1), they start a new Organic session.- `done` is a True/False flag indicating if the episode (aka the user's timeline) is over. - `info` is currently not used, so it is always an empty dictionary.Also, notice that the first `action` is `None`. In our implementation, the agent observes Organic behaviour before recommending anything.Now, we will show how to call the environment in an online manner, where the agent needs to supply an action. For demonstration purposes, we will create a list of hard-coded actions.
###Code
# Create list of hard coded actions.
actions = [None] + [1, 2, 3, 4, 5]
# Reset env and set done to False.
env.reset()
done = False
# Counting how many steps.
i = 0
while not done and i < len(actions):
action = actions[i]
observation, reward, done, info = env.step(action)
print(f"Step: {i} - Action: {action} - Observation: {observation.sessions()} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [{'t': 0, 'u': 0, 'z': 'pageview', 'v': 1}] - Reward: None
Step: 1 - Action: 1 - Observation: [] - Reward: 0
Step: 2 - Action: 2 - Observation: [] - Reward: 0
Step: 3 - Action: 3 - Observation: [] - Reward: 0
Step: 4 - Action: 4 - Observation: [] - Reward: 0
Step: 5 - Action: 5 - Observation: [{'t': 6, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 7, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 8, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 9, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 10, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 11, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 12, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 13, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 14, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 15, 'u': 0, 'z': 'pageview', 'v': 6}, {'t': 16, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 17, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 18, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 19, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 20, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 21, 'u': 0, 'z': 'pageview', 'v': 4}, {'t': 22, 'u': 0, 'z': 'pageview', 'v': 1}, {'t': 23, 'u': 0, 'z': 'pageview', 'v': 6}] - Reward: 0
###Markdown
You'll notice that the offline and online APIs are nearly identical. The only difference is that one calls either env.step_offline() or env.step(action). Creating our first agentNow that we have seen how the offline and online versions of the environment work, it is time to code our first recommendation agent! Technically, an agent can be anything that produces actions for the environment to use. However, we will show you the object-oriented way we like to create agents.Below is the code for a very simple agent - the popularity based agent. The popularity based agent merely records how many times a user sees each product organically, then when required to make a recommendation, the agent chooses a product randomly in proportion to the number of times the user has viewed it.
###Code
import numpy as np
from numpy.random import choice
from agents import Agent
# Define an Agent class.
class PopularityAgent(Agent):
def __init__(self, config):
# Set number of products as an attribute of the Agent.
super(PopularityAgent, self).__init__(config)
# Track number of times each item viewed in Organic session.
self.organic_views = np.zeros(self.config.num_products)
def train(self, observation, action, reward, done):
"""Train method learns from a tuple of data.
this method can be called for offline or online learning"""
# Adding organic session to organic view counts.
if observation:
for session in observation.sessions():
self.organic_views[session['v']] += 1
def act(self, observation, reward, done):
"""Act method returns an action based on current observation and past
history"""
# Choosing action randomly in proportion with number of views.
prob = self.organic_views / sum(self.organic_views)
action = choice(self.config.num_products, p = prob)
return {
**super().act(observation, reward, done),
**{
'a': action,
'ps': prob[action]
}
}
###Output
_____no_output_____
###Markdown
The `PopularityAgent` class above demonstrates our preferred way to create agents for RecoGym. Notice how we have both a `train` and `act` method present. The `train` method is designed to take in training data from the environment's `step_offline` method and thus has nothing to return, while the `act` method must return an action to pass back into the environment. The code below highlights how one would first train this agent offline and then use the learned knowledge to make recommendations online.
###Code
# Instantiate instance of PopularityAgent class.
num_products = 10
agent = PopularityAgent(Configuration({
**env_1_args,
'num_products': num_products,
}))
# Resets random seed back to 42, or whatever we set it to in env_0_args.
env.reset_random_seed()
# Train on 1000 users offline.
num_offline_users = 1000
for _ in range(num_offline_users):
# Reset env and set done to False.
env.reset()
done = False
observation, reward, done = None, 0, False
while not done:
old_observation = observation
action, observation, reward, done, info = env.step_offline(observation, reward, done)
agent.train(old_observation, action, reward, done)
# Train on 100 users online and track click through rate.
num_online_users = 100
num_clicks, num_events = 0, 0
for _ in range(num_online_users):
# Reset env and set done to False.
env.reset()
observation, _, done, _ = env.step(None)
reward = None
done = None
while not done:
action = agent.act(observation, reward, done)
observation, reward, done, info = env.step(action['a'])
# Used for calculating click through rate.
num_clicks += 1 if reward == 1 and reward is not None else 0
num_events += 1
ctr = num_clicks / num_events
print(f"Click Through Rate: {ctr:.4f}")
###Output
Click Through Rate: 0.0147
###Markdown
Testing our first agentNow that we have created our popularity-based agent, we should test it against an even simpler baseline - one that performs no learning and recommends products uniformly at random. To do this, we will first load a more complex version of the toy data environment called `reco-gym-v1`.Next, we will load another agent for ours to compete against. Here you can see we make use of the `RandomAgent` and create an instance of it in addition to our `PopularityAgent`.
###Code
import gym, recogym
from recogym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# Import the random agent.
from agents import RandomAgent, random_args
# Create the two agents.
num_products = env_1_args['num_products']
popularity_agent = PopularityAgent(Configuration(env_1_args))
agent_rand = RandomAgent(Configuration({
**env_1_args,
**random_args,
}))
###Output
/Users/markus.braasch/git/reco-gym/recogym/agents/__init__.py:32: UserWarning: Agents Bandit MF Square, Organic MF Square and NN IPS are not available since torch cannot be imported. Install it with `pip install torch` and test it with `python -c "import torch"`
warnings.warn('Agents Bandit MF Square, Organic MF Square and NN IPS are not available '
###Markdown
Now we have instances of our two agents. We can use the `test_agent` method from RecoGym and compare their performance.To use `test_agent`, one must provide a copy of the current env, a copy of the agent class, the number of training users and the number of testing users.
###Code
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(popularity_agent), 1000, 1000)
###Output
Start: Agent Training #0
Start: Agent Testing #0
End: Agent Testing #0 (12.809738874435425s)
###Markdown
Preprocessing Data
###Code
def get_mnist_data():
from keras.datasets import mnist
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, X_train.shape[1], X_train.shape[2])
y_train = y_train.reshape(y_train.shape[0], 1)
X_test = X_test.reshape(X_test.shape[0], 1, X_test.shape[1], X_test.shape[2])
y_test = y_test.reshape(y_test.shape[0], 1)
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = get_mnist_data()
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_mean = X_train.mean().astype(np.float32)
X_std = X_train.std().astype(np.float32)
def normalizer(x):
return (x - X_mean) / X_std
###Output
_____no_output_____
###Markdown
Building the model
###Code
from keras.layers import Convolution2D, Dense, Flatten, Lambda, Dropout
from keras.models import Sequential
from keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
Linear Model
###Code
model = Sequential([Lambda(normalizer, input_shape=(1, 28, 28)), Flatten(), Dense(10, activation='softmax')])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_test, y_test))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 5s - loss: 0.3868 - acc: 0.8848 - val_loss: 0.2938 - val_acc: 0.9165
Epoch 2/10
60000/60000 [==============================] - 5s - loss: 0.2987 - acc: 0.9154 - val_loss: 0.2793 - val_acc: 0.9213
Epoch 3/10
60000/60000 [==============================] - 5s - loss: 0.2884 - acc: 0.9195 - val_loss: 0.2858 - val_acc: 0.9201
Epoch 4/10
60000/60000 [==============================] - 5s - loss: 0.2825 - acc: 0.9210 - val_loss: 0.2768 - val_acc: 0.9210
Epoch 5/10
60000/60000 [==============================] - 5s - loss: 0.2778 - acc: 0.9231 - val_loss: 0.2838 - val_acc: 0.9228
Epoch 6/10
60000/60000 [==============================] - 5s - loss: 0.2738 - acc: 0.9239 - val_loss: 0.2811 - val_acc: 0.9247
Epoch 7/10
60000/60000 [==============================] - 5s - loss: 0.2712 - acc: 0.9241 - val_loss: 0.2959 - val_acc: 0.9206
Epoch 8/10
60000/60000 [==============================] - 5s - loss: 0.2700 - acc: 0.9248 - val_loss: 0.2807 - val_acc: 0.9238
Epoch 9/10
60000/60000 [==============================] - 5s - loss: 0.2686 - acc: 0.9258 - val_loss: 0.3063 - val_acc: 0.9139
Epoch 10/10
60000/60000 [==============================] - 5s - loss: 0.2660 - acc: 0.9262 - val_loss: 0.2916 - val_acc: 0.9211
###Markdown
Getting StartedThis document shows you how to install and run the software in the `as_blog` repository.Before you start you should establish a `credentials.json` file in the repository's home directory.I just followed the instructions in Step 1 of [this Google article](https://developers.google.com/docs/api/quickstart/python), requesting a desktop application. This creates a new GCP project and allows you to download the configuration in the necessary `credentials.json` file. The notebook will copy it into your repository's home directory from your `~/Downloads` directory, so you only need to do the download once.Because the notebook changes the current directory, you should use Jupyter's `File | Close and Halt` menu, and restart the notebook to create a new clone, or alternatively execute another `os.chdir` call to return to the original directory in which the notebook was run. Failing to do this before the second cell is re-run will create the second clone inside the first one. Everything should still work, but it could get confusing.You can, perhaps more easily, simply use this document as a guide and execute the indicated commands manually inside a shell window!
###Code
#
# Define the ability to produce random directories
#
import string, random
def r():
return random.choice(string.ascii_lowercase)
def rndname():
return f"tmp-{r()}{r()}{r()}{r()}{r()}{r()}"
###Output
_____no_output_____
###Markdown
We begin by cloning the repository into a random temporary subdirectory.
###Code
dirname = rndname()
!git clone [email protected]:holdenweb/as_blog.git $dirname
###Output
Cloning into 'tmp-ssanhn'...
remote: Enumerating objects: 4020, done.[K
remote: Counting objects: 100% (4020/4020), done.[K
remote: Compressing objects: 100% (1949/1949), done.[K
remote: Total 4020 (delta 2103), reused 3968 (delta 2059), pack-reused 0[K
Receiving objects: 100% (4020/4020), 5.73 MiB | 2.54 MiB/s, done.
Resolving deltas: 100% (2103/2103), done.
###Markdown
This cell ensures that future shell commands are executed from inside the cloned repository.
###Code
import os
os.chdir(dirname)
!pwd
!cp ~/Downloads/credentials.json .
!poetry install
!cat pyproject.toml
!poetry show
###Output
[36mappdirs [0m [1m1.4.4 [0m A small Python module for determining a...
[36mattrs [0m [1m20.3.0 [0m Classes Without Boilerplate
[36mblack [0m [1m20.8b1 [0m The uncompromising code formatter.
[36mcachetools [0m [1m4.1.1 [0m Extensible memoizing collections and de...
[36mcertifi [0m [1m2020.12.5 [0m Python package for providing Mozilla's ...
[36mchardet [0m [1m3.0.4 [0m Universal encoding detector for Python ...
[36mclick [0m [1m7.1.2 [0m Composable command line interface toolkit
[36mcoverage [0m [1m5.3 [0m Code coverage measurement for Python
[36mdistlib [0m [1m0.3.1 [0m Distribution utilities
[36mdominate [0m [1m2.6.0 [0m Dominate is a Python library for creati...
[36mfilelock [0m [1m3.0.12 [0m A platform independent file lock.
[36mflake8 [0m [1m3.8.4 [0m the modular source code checker: pep8 p...
[36mflask [0m [1m1.1.2 [0m A simple framework for building complex...
[36mflask-bootstrap [0m [1m3.3.7.1 [0m An extension that includes Bootstrap in...
[36mflask-wtf [0m [1m0.14.3 [0m Simple integration of Flask and WTForms.
[36mgoogle-api-core [0m [1m1.23.0 [0m Google API client core library
[36mgoogle-api-python-client[0m [1m1.12.8 [0m Google API Client Library for Python
[36mgoogle-auth [0m [1m1.23.0 [0m Google Authentication Library
[36mgoogle-auth-httplib2 [0m [1m0.0.4 [0m Google Authentication Library: httplib2...
[36mgoogle-auth-oauthlib [0m [1m0.4.2 [0m Google Authentication Library
[36mgoogleapis-common-protos[0m [1m1.52.0 [0m Common protobufs used in Google APIs
[36mhttplib2 [0m [1m0.18.1 [0m A comprehensive HTTP client library.
[36mhu [0m [1m0.9.0 [0m Helpful utility software for open sourc...
[36mid [0m [1m0.1.0 [0m xuexi
[36midna [0m [1m2.10 [0m Internationalized Domain Names in Appli...
[36mimportlib-metadata [0m [1m2.1.1 [0m Read metadata from Python packages
[36miniconfig [0m [1m1.1.1 [0m iniconfig: brain-dead simple config-ini...
[36mitsdangerous [0m [1m1.1.0 [0m Various helpers to pass data to untrust...
[36mjinja2 [0m [1m2.11.2 [0m A very fast and expressive template eng...
[36mmarkupsafe [0m [1m1.1.1 [0m Safely add untrusted strings to HTML/XM...
[36mmccabe [0m [1m0.6.1 [0m McCabe checker, plugin for flake8
[36mmongoengine [0m [1m0.22.1 [0m MongoEngine is a Python Object-Document...
[36mmypy [0m [1m0.790 [0m Optional static typing for Python
[36mmypy-extensions [0m [1m0.4.3 [0m Experimental type system extensions for...
[36moauthlib [0m [1m3.1.0 [0m A generic, spec-compliant, thorough imp...
[36mpackaging [0m [1m20.7 [0m Core utilities for Python packages
[36mpathspec [0m [1m0.8.1 [0m Utility library for gitignore style pat...
[36mpipdeptree [0m [1m2.0.0 [0m Command line utility to show dependency...
[36mpluggy [0m [1m0.13.1 [0m plugin and hook calling mechanisms for ...
[36mprotobuf [0m [1m3.14.0 [0m Protocol Buffers
[36mpy [0m [1m1.9.0 [0m library with cross-python path, ini-par...
[36mpyasn1 [0m [1m0.4.8 [0m ASN.1 types and codecs
[36mpyasn1-modules [0m [1m0.2.8 [0m A collection of ASN.1-based protocols m...
[36mpycodestyle [0m [1m2.6.0 [0m Python style guide checker
[36mpyflakes [0m [1m2.2.0 [0m passive checker of Python programs
[36mpymongo [0m [1m3.11.3 [0m Python driver for MongoDB <http://www.m...
[36mpyparsing [0m [1m2.4.7 [0m Python parsing module
[36mpytest [0m [1m6.1.2 [0m pytest: simple powerful testing with Py...
[36mpytest-cov [0m [1m2.10.1 [0m Pytest plugin for measuring coverage.
[36mpython-dotenv [0m [1m0.15.0 [0m Add .env support to your django/flask a...
[36mpython-slugify [0m [1m4.0.1 [0m A Python Slugify application that handl...
[36mpytz [0m [1m2020.4 [0m World timezone definitions, modern and ...
[36mregex [0m [1m2020.11.13[0m Alternative regular expression module, ...
[36mrequests [0m [1m2.25.0 [0m Python HTTP for Humans.
[36mrequests-oauthlib [0m [1m1.3.0 [0m OAuthlib authentication support for Req...
[36mrsa [0m [1m4.6 [0m Pure-Python RSA implementation
[36msix [0m [1m1.15.0 [0m Python 2 and 3 compatibility utilities
[36mtext-unidecode [0m [1m1.3 [0m The most basic Text::Unidecode port
[36mtoml [0m [1m0.10.2 [0m Python Library for Tom's Obvious, Minim...
[36mtox [0m [1m3.20.1 [0m tox is a generic virtualenv management ...
[36mtyped-ast [0m [1m1.4.1 [0m a fork of Python 2 and 3 ast modules wi...
[36mtyping-extensions [0m [1m3.7.4.3 [0m Backported and Experimental Type Hints ...
[36muritemplate [0m [1m3.0.1 [0m URI templates
[36murllib3 [0m [1m1.26.2 [0m HTTP library with thread-safe connectio...
[36mvirtualenv [0m [1m20.2.1 [0m Virtual Python Environment builder
[36mvisitor [0m [1m0.1.3 [0m A tiny pythonic visitor implementation.
[36mwerkzeug [0m [1m1.0.1 [0m The comprehensive WSGI web application ...
[36mwtforms [0m [1m2.3.3 [0m A flexible forms validation and renderi...
[36mzipp [0m [1m3.4.0 [0m Backport of pathlib-compatible object w...
###Markdown
Bingo! You now have a randomly-named virtual environment containing all necessary dependencies.`poetry` will use this virtual environment when running commands unless it is called from within a virtual environment. In the latter case, the active virtual environment is used.It appears I was using Python 3.7 when I checked in, but `pyproject.toml` defines three different possible environments with `python = "^3.6 || ^3.7 || ^3.8"`.You can find out which environments have been created and which is active.
###Code
!poetry env list
###Output
[34mas-blog-eNS3FgMU-py3.7 (Activated)[0m
###Markdown
That's enough `poetry` for now, let's get on to the prose `;-)`. Obviously you'll need to be sure that the tests all work.Yes, testing is automated in this project!
###Code
!make doctest
!make pytest
###Output
pytest --doctest-modules src/snippets
[1m============================= test session starts ==============================[0m
platform darwin -- Python 3.7.8, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /Users/sholden/Projects
plugins: cov-2.10.1
collected 22 items [0m
src/snippets/sep_concerns1.py [32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m [ 22%][0m
src/snippets/sep_concerns2.py [32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m [ 50%][0m
src/snippets/sep_concerns3.py [32m.[0m[32m.[0m[32m [ 59%][0m
src/snippets/sep_concerns4.py [32m.[0m[32m.[0m[32m [ 68%][0m
src/snippets/sep_concerns5.py [32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m.[0m[32m [100%][0m
[32m============================== [32m[1m22 passed[0m[32m in 0.16s[0m[32m ==============================[0m
###Markdown
You are now in a position to modify the code, and you know how to test it. Congratulations!OK, what about that crummy web server and all the gunk that goes with it? You know, the really ugly code Steve hacked together as a proof-of-concept. Here's the scoop on that.First of all, there are a few environment variables. When these need to be set, instead of using shell escapes as we have up to now we'll use `%%script` magics to run a sequence of commands in the same environment.
###Code
!cat .env
###Output
export FLASK_APP=src/tools/serve.py
export FLASK_DEBUG=1
export PYTHONPATH=src/tools:src/snippets
###Markdown
You can run a single command in the project's active virtual environment by prepending `poetry run` to the command.If you have a bunch of commands to run then it's easier to create a shell with the virtual environment already active with the command `poetry shell`. If the following two commands give an identical result then the project virtual environment was probably active when you ran `jupyter notebook`. That's fine, everything should still work (if it doesn't, a bug report would be appreciated) because poetry is written to work in those circumstances.
###Code
!which python
!poetry run which python
###Output
/Users/sholden/Library/Caches/pypoetry/virtualenvs/as-blog-LcUmjM9R-py3.7/bin/python
/Users/sholden/Library/Caches/pypoetry/virtualenvs/as-blog-eNS3FgMU-py3.7/bin/python
###Markdown
As you can see from the next cell, running `poetry shell` in a `%%script` cell doesn't appear to do what we want. Even after issuing the command, the `python` command does not run from the correct virtual environment.
###Code
%%script /bin/sh
poetry shell
which python
###Output
Virtual environment already activated: /Users/sholden/Library/Caches/pypoetry/virtualenvs/as-blog-eNS3FgMU-py3.7
/Users/sholden/Library/Caches/pypoetry/virtualenvs/as-blog-LcUmjM9R-py3.7/bin/python
###Markdown
Notebooks don't support background processes, so you'll need to run the flask server in a command window. The following commands should do it: first `cd` into the cloned directory, then run `poetry run flask run`. You should then be able to connect to the project server at `http://localhost:5000/` (clicking the link should open it). The server home page produces output that looks something like this. Feel free to modify it! { "python": "3.7.8 (default, Jul 8 2020, 14:18:28) \n[Clang 11.0.3 (clang-1103.0.32.62)]", "author": { "name": "Steve Holden", "email": "[email protected]" }, "date_time": "Tuesday 23, March 2021 19:03:20"}_The rest of this notebook assumes the server is running. We always use the `poetry run` prefix here to avoid the problems indicated in the cell above._ Suppose you want to start working with a document. The first thing you need to do is to pull it down from Google Docs. You do this with the `pull` command, again defined in `pyproject.toml`. It takes the document id as an argument. In the next cell we download the first episode from the series.
###Code
!poetry run pull 1jALRWW76qjrcl12e-umDm-ZbGlm8HcGuaA2jGdl1Zro
###Output
ARGS: ['pull', '1jALRWW76qjrcl12e-umDm-ZbGlm8HcGuaA2jGdl1Zro']
Pulling 1jALRWW76qjrcl12e-umDm-ZbGlm8HcGuaA2jGdl1Zro
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=958632701548-jibvgm82ksnaggv9iuf9o1oc8f132qq3.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A54272%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocuments.readonly&state=6DswiO7T7UPWVsrmdlujDAZDqk6seA&access_type=offline
###Markdown
What is RecoGym?RecoGym is a Python [OpenAI Gym](https://gym.openai.com/) environment for testing recommendation algorithms. It allows for the testing of both offline and reinforcement-learning based agents. It provides a way to quickly test algorithms in a toy environment.In this notebook we will code a simple recommendation agent that suggests an item in proportion to how many times it's been viewed. We hope to inspire you to create your own agents and test them against our baseline models.In order to make the most out of RecoGym, we suggest you have some experience coding in Python, some background knowledge in recommender systems, and familiarity with the reinforcement learning setup. Also, be sure to check out the python-based requirements in the README if something below errors. Reinforcement Learning SetupRecoGym follows the usual reinforcement learning setup. This means there are interactions between the environment (the user's behavior) and the agent (our recommendation algorithm). The agent receives a reward if the user clicks on the recommendation. Organic and BanditEven though our focus is biased towards online advertising, we tried to make RecoGym universal to all types of recommendation. Hence, we introduce the domain-agnostic terms Organic and Bandit sessions. An Organic session is an observation of items the user interacts with. For example, it could be views of products on an e-commerce website, listens to songs while streaming music, or readings of articles on an online newspaper. A Bandit session is one where we have an opportunity to recommend the user an item and observe their behavior. We receive a reward if they click. Offline and Online LearningThis project was born out of a desire to improve Criteo's recommendation system by exploring reinforcement learning algorithms. We quickly realized that we can't just blindly apply RL algorithms in a production system out of the box. The learning period would be too costly. Instead, we need to leverage the vast amounts of offline training examples we already have to make the algorithm perform as well as the current system before releasing it into the online production environment.Thus, RecoGym follows a similar flow. An agent is first given access to many offline training examples produced from a fixed policy. Then, it has access to the online system where it chooses the actions. Let's see some code - Interacting with the environment The code snippet below shows how to initialize the environment and step through it in an 'offline' manner (here offline means that the environment is generating some recommendations for us). We print out the results from the environment at each step.
###Code
import gym, reco_gym
# env_0_args is a dictionary of default parameters (i.e. number of products)
from reco_gym import env_1_args
# you can overwrite environment arguments here:
env_1_args['random_seed'] = 42
# initialize the gym for the first time by calling .make() and .init_gym()
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# .reset() env before each episode (one episode per user)
env.reset()
done = False
# counting how many steps
i = 0
while not done:
action, observation, reward, done, info = env.step_offline()
print(f"Step: {i} - Action: {action} - Observation: {observation} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [('pageview', 4)] - Reward: None
Step: 1 - Action: 9 - Observation: None - Reward: 0
Step: 2 - Action: 3 - Observation: None - Reward: 0
Step: 3 - Action: 6 - Observation: [('pageview', 5), ('pageview', 4), ('pageview', 5)] - Reward: 0
Step: 4 - Action: 0 - Observation: None - Reward: 0
Step: 5 - Action: 4 - Observation: [('pageview', 5)] - Reward: 0
Step: 6 - Action: 2 - Observation: None - Reward: 0
Step: 7 - Action: 3 - Observation: None - Reward: 0
Step: 8 - Action: 6 - Observation: None - Reward: 0
Step: 9 - Action: 9 - Observation: None - Reward: 0
Step: 10 - Action: 5 - Observation: None - Reward: 0
Step: 11 - Action: 8 - Observation: None - Reward: 0
Step: 12 - Action: 4 - Observation: None - Reward: 0
Step: 13 - Action: 5 - Observation: [('pageview', 7)] - Reward: 0
Step: 14 - Action: 2 - Observation: None - Reward: 0
Step: 15 - Action: 6 - Observation: None - Reward: 0
Step: 16 - Action: 0 - Observation: None - Reward: 0
Step: 17 - Action: 2 - Observation: None - Reward: 0
Step: 18 - Action: 2 - Observation: None - Reward: 0
Step: 19 - Action: 7 - Observation: None - Reward: 0
Step: 20 - Action: 1 - Observation: None - Reward: 0
Step: 21 - Action: 3 - Observation: None - Reward: 0
Step: 22 - Action: 2 - Observation: None - Reward: 0
Step: 23 - Action: 5 - Observation: None - Reward: 0
Step: 24 - Action: 1 - Observation: None - Reward: 0
Step: 25 - Action: 2 - Observation: None - Reward: 0
Step: 26 - Action: 9 - Observation: None - Reward: 0
Step: 27 - Action: 4 - Observation: None - Reward: 0
Step: 28 - Action: 9 - Observation: None - Reward: 0
Step: 29 - Action: 1 - Observation: [('pageview', 9), ('pageview', 9), ('pageview', 9), ('pageview', 9)] - Reward: 0
Step: 30 - Action: 4 - Observation: None - Reward: 0
Step: 31 - Action: 1 - Observation: None - Reward: 0
Step: 32 - Action: 6 - Observation: None - Reward: 0
Step: 33 - Action: 8 - Observation: None - Reward: 0
Step: 34 - Action: 6 - Observation: None - Reward: 0
Step: 35 - Action: 0 - Observation: None - Reward: 0
Step: 36 - Action: 9 - Observation: None - Reward: 0
Step: 37 - Action: 6 - Observation: None - Reward: 0
Step: 38 - Action: 7 - Observation: None - Reward: 0
Step: 39 - Action: 3 - Observation: None - Reward: 0
Step: 40 - Action: 9 - Observation: None - Reward: 0
Step: 41 - Action: 7 - Observation: None - Reward: 0
Step: 42 - Action: 5 - Observation: None - Reward: 0
Step: 43 - Action: 4 - Observation: None - Reward: 0
Step: 44 - Action: 4 - Observation: None - Reward: 0
Step: 45 - Action: 4 - Observation: None - Reward: 0
Step: 46 - Action: 0 - Observation: None - Reward: 0
Step: 47 - Action: 8 - Observation: None - Reward: 0
Step: 48 - Action: 0 - Observation: None - Reward: 0
Step: 49 - Action: 4 - Observation: None - Reward: 0
Step: 50 - Action: 0 - Observation: None - Reward: 0
Step: 51 - Action: 5 - Observation: None - Reward: 0
Step: 52 - Action: 6 - Observation: None - Reward: 0
Step: 53 - Action: 1 - Observation: [('pageview', 9)] - Reward: 0
Step: 54 - Action: 8 - Observation: None - Reward: 0
Step: 55 - Action: 1 - Observation: None - Reward: 0
Step: 56 - Action: 5 - Observation: None - Reward: 0
Step: 57 - Action: 0 - Observation: None - Reward: 0
Step: 58 - Action: 5 - Observation: None - Reward: 0
Step: 59 - Action: 1 - Observation: None - Reward: 0
Step: 60 - Action: 8 - Observation: None - Reward: 0
Step: 61 - Action: 1 - Observation: None - Reward: 0
Step: 62 - Action: 8 - Observation: None - Reward: 0
Step: 63 - Action: 3 - Observation: None - Reward: 0
Step: 64 - Action: 8 - Observation: None - Reward: 0
Step: 65 - Action: 0 - Observation: None - Reward: 0
Step: 66 - Action: 0 - Observation: None - Reward: 0
Step: 67 - Action: 2 - Observation: None - Reward: 0
Step: 68 - Action: 5 - Observation: None - Reward: 0
Step: 69 - Action: 9 - Observation: None - Reward: 0
Step: 70 - Action: 2 - Observation: None - Reward: 0
Step: 71 - Action: 7 - Observation: None - Reward: 0
Step: 72 - Action: 0 - Observation: None - Reward: 0
Step: 73 - Action: 9 - Observation: None - Reward: 0
Step: 74 - Action: 3 - Observation: None - Reward: 0
Step: 75 - Action: 9 - Observation: None - Reward: 0
Step: 76 - Action: 7 - Observation: None - Reward: 0
Step: 77 - Action: 9 - Observation: None - Reward: 0
Step: 78 - Action: 0 - Observation: None - Reward: 0
Step: 79 - Action: 8 - Observation: None - Reward: 0
Step: 80 - Action: 2 - Observation: None - Reward: 0
Step: 81 - Action: 0 - Observation: None - Reward: 0
Step: 82 - Action: 0 - Observation: None - Reward: 0
Step: 83 - Action: 7 - Observation: None - Reward: 0
Step: 84 - Action: 5 - Observation: None - Reward: 0
Step: 85 - Action: 0 - Observation: None - Reward: 0
Step: 86 - Action: 7 - Observation: None - Reward: 0
Step: 87 - Action: 7 - Observation: None - Reward: 0
Step: 88 - Action: 0 - Observation: None - Reward: 0
Step: 89 - Action: 9 - Observation: [('pageview', 9), ('pageview', 5), ('pageview', 5)] - Reward: 0
Step: 90 - Action: 1 - Observation: None - Reward: 0
Step: 91 - Action: 8 - Observation: None - Reward: 0
Step: 92 - Action: 0 - Observation: None - Reward: 0
Step: 93 - Action: 0 - Observation: None - Reward: 0
Step: 94 - Action: 3 - Observation: None - Reward: 0
Step: 95 - Action: 8 - Observation: None - Reward: 0
Step: 96 - Action: 0 - Observation: None - Reward: 0
Step: 97 - Action: 7 - Observation: None - Reward: 0
Step: 98 - Action: 9 - Observation: None - Reward: 0
###Markdown
Okay, there's quite a bit going on here: - `action` is a number between `0` and `num_products - 1` that references the index of the product recommended. - `observation` will either be `None` or a session of Organic data, showing the index of products the user views. - `reward` is 0 if the user does not click on the recommended product and 1 if they do. Notice that when a user clicks on a product (wherever the reward is 1), they start a new Organic session.- `done` is a True/False flag indicating if the episode (aka the user's timeline) is over. - `info` is currently not used so it's always an empty dictionary.Also, notice that the first `action` is `None`. In our implementation, the agent observes Organic behavior before recommending anything.Now, we'll show how to call the environment in an online manner, where the agent needs to supply an action. For demonstration purposes, we will create a list of hard-coded actions.
###Code
# create list of hard coded actions
actions = [None] + [1, 2, 3, 4, 5]
# reset env and set done to False
env.reset()
done = False
# counting how many steps
i = 0
while not done and i < len(actions):
action = actions[i]
observation, reward, done, info = env.step(action)
print(f"Step: {i} - Action: {action} - Observation: {observation} - Reward: {reward}")
i += 1
###Output
Step: 0 - Action: None - Observation: [('pageview', 4), ('pageview', 4), ('pageview', 4), ('pageview', 4), ('pageview', 4), ('pageview', 4)] - Reward: None
Step: 1 - Action: 1 - Observation: None - Reward: 0
Step: 2 - Action: 2 - Observation: None - Reward: 0
Step: 3 - Action: 3 - Observation: [('pageview', 4), ('pageview', 9), ('pageview', 9)] - Reward: 0
Step: 4 - Action: 4 - Observation: [('pageview', 9)] - Reward: 0
###Markdown
You'll notice that the offline and online APIs are nearly identical. The only difference is that one calls either env.step_offline() or env.step(action). Creating our first agentNow that we have seen how the offline and online versions of the environment work, it's time to code our first recommendation agent! Technically, an agent can be anything that produces actions for the environment to use. However, we will show you the object-oriented way we like to create agents.Below is the code for a very simple agent - the popularity based agent. The popularity based agent simply records how many times a user sees each product organically, then when required to make a recommendation, the agent chooses a product randomly in proportion to the number of times the user has viewed it.
###Code
import numpy as np
from numpy.random import choice
# define agent class
class PopularityAgent:
def __init__(self, num_products):
# set number of products as an attribute of agent
self.num_products = num_products
# track number of times each item viewed in Organic session
self.organic_views = np.zeros(self.num_products)
def train(self, observation, action, reward, done):
"""train method learns from a tuple of data.
this method can be called for offline or online learning"""
# adding organic session to organic view counts
if observation is not None:
for product in observation.get_views():
self.organic_views[product] += 1
def act(self, observation, reward, done):
"""act method returns an action based on current observation and past
history"""
# choosing action randomly in proportion with number of views
prob = self.organic_views / sum(self.organic_views)
action = choice(self.num_products, p = prob)
return action
###Output
_____no_output_____
###Markdown
The `PopularityAgent` class above demonstrates our preferred way to create agents for reco-gym. Notice how we have both a `train` and `act` method present. The `train` method is designed to take in training data from the environment's `step_offline` method and thus has nothing to return, whilst the `act` method must return an action to pass back into the environment. The code below highlights how one would first train this agent offline and then use the learned knowledge to make recommendations online.
###Code
# instantiate instance of PopularityAgent class
num_products = 10
agent = PopularityAgent(num_products)
# resets random seed back to 42, or whatever we set it to in env_0_args
env.reset_random_seed()
# train on 1000 users offline
num_offline_users = 1000
for _ in range(num_offline_users):
    # reset env and clear state left over from the previous user
    env.reset()
    observation, done = None, False
while not done:
old_observation = observation
action, observation, reward, done, info = env.step_offline()
agent.train(old_observation, action, reward, done)
# train on 100 users online and track click through rate
num_online_users = 100
num_clicks, num_events = 0, 0
for _ in range(num_online_users):
#reset env and set done to False
env.reset()
observation, _, done, _ = env.step(None)
reward = None
done = None
while not done:
action = agent.act(observation, reward, done)
observation, reward, done, info = env.step(action)
# used for calculating click through rate
num_clicks += 1 if reward == 1 and reward is not None else 0
num_events += 1
ctr = num_clicks / num_events
print(f"Click Through Rate: {ctr:.4f}")
###Output
Click Through Rate: 0.0263
###Markdown
Testing our first agentNow that we have created our popularity-based agent, we should test it against an even simpler baseline - one that performs no learning and recommends products uniformly at random. To do this, we will first load a more complex version of the toy data environment called `reco-gym-v1`.Next we will load another agent for ours to compete against. Here you can see we make use of the `RandomAgent` and create an instance of it in addition to our `PopularityAgent`.
###Code
import gym, reco_gym
from reco_gym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
# Import the random agent
from agents import RandomAgent, random_args
# Create the two agents
num_products = env_1_args['num_products']
popularity_agent = PopularityAgent(num_products)
agent_rand = RandomAgent(random_args)
###Output
_____no_output_____
###Markdown
Now that we have instances of our two agents, we can use the `test_agent` method from reco-gym and compare their performance.To use `test_agent`, one must provide a copy of the current env, a copy of the agent class, the number of training users and the number of testing users.
###Code
# credible interval of the ctr median and 0.025 0.975 quantile
reco_gym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# credible interval of the ctr median and 0.025 0.975 quantile
reco_gym.test_agent(deepcopy(env), deepcopy(popularity_agent), 1000, 1000)
###Output
Starting Agent Training
Starting Agent Testing
|
cpp_atcoder100.ipynb | ###Markdown
Notes
###Code
# Write and run the program on Colab
# !g++ temp.cpp; echo |./a.out
%%writefile temp.cpp
# include <bits/stdc++.h>
using namespace std;
int main () {cout << "Hello World!" << endl;}
!g++ temp.cpp; echo |./a.out
###Output
Hello World!
###Markdown
- 厳選!C++ アルゴリズム実装に使える 25 の STL 機能【前編】
- 厳選!C++ アルゴリズム実装に使える 25 の STL 機能【後編】
- たのしい探索アルゴリズムの世界【前編:全探索、bit全探索から半分全列挙まで】
- たのしい探索アルゴリズムの世界【前編:全探索、bit全探索から半分全列挙まで】 の 3 章
- アルゴリズムを勉強するなら二分探索から始めよう! 『なっとく!アルゴリズム』より
- 二分探索アルゴリズムを一般化 〜 めぐる式二分探索法のススメ 〜
- 二分法とは? アルゴリズム・収束・例題
- DFS (深さ優先探索) 超入門! 〜 グラフ・アルゴリズムの世界への入口 〜【前編】
- DFS (深さ優先探索) 超入門! 〜 グラフ・アルゴリズムの世界への入口 〜【後編】
- BFS (幅優先探索) 超入門! 〜 キューを鮮やかに使いこなす 〜
- 典型的な DP (動的計画法) のパターンを整理 Part 1 ~ ナップサック DP 編 ~
- 区間 DP を勉強してみた - Kutimoti の競プロメモ
- ビット演算 (bit 演算) の使い方を総特集! 〜 マスクビットから bit DP まで 〜
- Educational DP Contest | AtCoder
- 最短経路問題総特集!!!~BFSから拡張ダイクストラまで~ by @ageprocpp
- 最短経路問題総特集!!!~BFSから拡張ダイクストラまで~
- 最小全域木問題 (クラスカル法とプリム法)
- Kruskal法をココロから納得する | けんちょんの競プロ精進記録
- 「1000000007 で割ったあまり」の求め方を総特集! 〜 逆元から離散対数まで 〜
- 累積和を何も考えずに書けるようにする!
- いもす法
- グラフ理論の基礎 by @maskot1977
- 木構造|木とは/木の表現/2分探索木
- Union-Find木の解説と例題 by @ofutonfuton
- 分野別 初中級者が解くべき過去問精選 100 問
- AIZU ONLINE JUDGE - Introduction to Algorithms and Data Structure (ALDS)
- 非情報系学生のための C/C++ 入門
- AIZU ONLINE JUDGE - Introduction To Programming I
- アルゴリズムとは何か!? ~ 文系理系問わず楽しめる精選 6 問 ~ by @drken
- 「AtCoder Beginners Selection」
- AtCoder に登録したら次にやること ~ これだけ解けば十分闘える!過去問精選 10 問 ~ by @drken
- 計算量オーダーの求め方を総整理! 〜 どこから log が出て来るか 〜
- ソートを極める! 〜 なぜソートを学ぶのか 〜
---
- ビット演算 (bit 演算) の使い方を総特集! 〜 マスクビットから bit DP まで 〜
- レッドコーダーが教える、競プロ・AtCoder上達のガイドライン【初級編:競プロを始めよう】
- レッドコーダーが教える、競プロ・AtCoder上達のガイドライン【中級編:目指せ水色コーダー!】
- レッドコーダーが教える、競プロ・AtCoder上達のガイドライン【上級編:目指せレッドコーダー!】
###Code
# AtCoder rating tiers
%%html
<style>
tr td:nth-child(2) {
color: red;
text-align: center;
}
</style>
<table border="1">
<thead>
<tr><th>Rating</th><th style="width:10%" align="center">Color</th><th>Relative standing</th><th>Absolute standing</th></tr>
</thead>
<tbody>
<tr><td>2800+</td><td>Red</td><td>Top 0.2%</td><td></td></tr>
<tr><td>2400-2799</td><td>Orange</td><td>Top 0.6%</td><td></td></tr>
<tr><td>2000-2399</td><td>Yellow</td><td>Top 2%</td><td>A level prized in algorithm research and R&D roles</td></tr>
<tr><td>1600-1999</td><td>Blue</td><td>Top 5%</td><td>Algorithm skill beyond what almost any IT company requires</td></tr>
<tr><td>1200-1599</td><td>Light blue</td><td>Top 10%</td><td>Algorithm skill beyond what more than half of IT companies require</td></tr>
<tr><td>800-1199</td><td>Green</td><td>Top 20%</td><td>Quite capable as an engineer</td></tr>
<tr><td>400-799</td><td>Brown</td><td>Top 35%</td><td>Excellent for a student</td></tr>
<tr><td>1-399</td><td>Gray</td><td>Top 100%</td><td>Excellent for a student</td></tr>
</tbody>
</table>
# A handy site for practicing past AtCoder problems => AtCoder Problems
# Why C++ is recommended for competitive programming
# More than half of AtCoder users use C++
# Most source code in official AtCoder editorials and in explanatory articles is written in C++.
# The Programming Contest Challenge Book (the "Ant Book") and
# "Algorithms and Data Structures for Programming Contests" also use C++
# Execution speed is faster than most other languages, which is an advantage in competitive programming
# Problem
# You are given integers A and B.
# Output, as an integer, the perimeter of a rectangle with height A cm and width B cm.
# 1≤ A, B ≤ 100
# Examples: 4 5 => 18
# 1 9 => 20
# 6 7 => 26
# 31 27 => 116
# 100 100 => 400
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
int a, b; cin >> a >> b;
cout << (a + b) * 2 << endl;
}
!g++ temp.cpp; echo 4 5 |./a.out #=> 18
!g++ temp.cpp; echo 1 9 |./a.out #=> 20
!g++ temp.cpp; echo 6 7 |./a.out #=> 26
!g++ temp.cpp; echo 31 27 |./a.out #=> 116
!g++ temp.cpp; echo 100 100 |./a.out #=> 400
# You need to get every test case right
# You also need to think about efficiency
# Problem
# N cards are laid out in a row.
# The i-th card from the left has the integer A_i written on it.
# You choose two of the N cards at the same time and take them.
# Count the number of ways to choose the cards so that the two integers written on them sum to exactly 101.
# 1≤ N ≤ 10^6, 1≤ Ai ≤10^9
# The obvious idea is "exhaustively try which two cards to pick":
# that is, enumerate every pair (i,j) with 1 ≤ i < j ≤ N and
# count the pairs for which A_i + A_j = 101.
# With that approach, however, about N^2 loop iterations are needed. Since N ≤ 10^6,
# that can be up to around 10^12 iterations. In competitive programming,
# once you exceed roughly 10^8 to 10^9 iterations you get Time Limit Exceeded (TLE),
# and even a single TLE case makes the whole submission incorrect.
# You are therefore expected to implement a more efficient algorithm.
# For this problem, for example, the following approach
# finishes in only about N loop iterations.
# Ignore every card with A_i ≥ 101.
# Count how many cards have A_i = 1, A_i = 2, ..., A_i = 100.
# Call the counts c_1, c_2, ..., c_100.
# Then
# (c1×c100)+(c2×c99)+(c3×c98)+...+(c50×c51)
# is the answer.
# Example implementation in C++ <= I don't fully understand it yet, but I got it to run. Moving on.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
int N; cin >> N;
vector<int> A(1000009);
vector<int> cnt(109);
for (int i = 0; i < N; i++) {
cin >> A.at(i);
if (A.at(i) <= 100) cnt.at(A.at(i))++;
}
  long long Answer = 0;  // counts can reach 10^6, so the products need 64-bit arithmetic
  for (int i = 1; i <= 50; i++) Answer += (long long)cnt.at(i) * cnt.at(101 - i);  // pairs (1,100), (2,99), ..., (50,51)
cout << Answer << endl;
}
!g++ temp.cpp; echo 2 1 1 |./a.out #=> 0
!g++ temp.cpp; echo 2 1 100 |./a.out #=> 1
!g++ temp.cpp; echo 5 1 2 3 6 95 |./a.out #=> 1
!g++ temp.cpp; echo 5 1 100 2 3 95 |./a.out #=> 1
!g++ temp.cpp; echo 5 1 2 3 4 95 |./a.out #=> 0
# Problem: AtCoder Beginner Contest 145 A - Circle
# You are given an integer r.
# Output, as an integer, how many times larger the area of a circle of radius r is than the area of a circle of radius 1.
# In other words, output r × r.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
int r; cin >> r;
cout << r * r << endl;
}
# Many people still don't know how to actually write the code.
# As one example, here is a handy site for learning C++ (an introduction to C/C++ for students outside computer science):
# 非情報系学生のための C/C++ 入門
# The meaning of submission verdicts such as WA
%%html
<style>
tr td:nth-child(1) {
color: red;
text-align: center;
}
</style>
<table border="1">
<thead>
<tr><th>Verdict</th><th>Meaning</th></tr>
</thead>
<tbody>
<tr><td>AC</td><td>Correct answer. Nicely done!</td></tr>
<tr><td>WA</td><td>Some test case produced a wrong answer.</td></tr>
<tr><td>TLE</td><td>Some test case did not finish within the execution time limit.</td></tr>
<tr><td>RE</td><td>Some test case caused a runtime error.</td></tr>
<tr><td>WJ</td><td>Judging is currently in progress. Please wait a moment.</td></tr>
</tbody>
</table>
# Get used to exhaustive search!
# Problem: Lottery
# A bag contains N slips of paper, each with a number written on it. You draw a slip from the bag,
# look at its number and put it back; you do this 4 times, and if the 4 numbers sum to M, you win.
# Given that the numbers written on the slips are {K_1, K_2, ..., K_N}, is there any way for you to win?
# Constraints: N ≤ 50, M ≤ 10^8, Ki ≤ 10^8
# Writing the four draws as K_a, K_b, K_c and K_d,
# this is solved by trying every possible tuple (a,b,c,d) with a quadruple loop. That takes about N^4 operations,
# but since N ≤ 50 this is only about 50^4 ≒ 6.25×10^6 iterations, which easily fits within the time limit.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main () {
int n, m; cin >> n >> m;
vector<int> vec(n);
for (int i=0; i < n ; i++) {
cin >> vec.at(i);
}
bool ans = false;
for (int i=0; i < n ; i++) {
for (int j=0; j < n ; j++) {
for (int k=0; k < n ; k++){
for (int l=0; l < n ; l++){
if (vec.at(i)+vec.at(j)+vec.at(k)+vec.at(l) == m) ans = true;
}
}
}
}
if (ans) cout << "Yes" << endl;
else cout << "No" << endl;
}
!g++ temp.cpp; echo 3 10 1 3 5 |./a.out #=> Yes 1 1 3 5
!g++ temp.cpp; echo 3 9 1 3 5 |./a.out #=> No
!g++ temp.cpp; echo 3 9 1 3 2 |./a.out #=> Yes 3 3 2 2
# Experiment: exhaustive search with repetition => an example found online
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
vector<int> buf;
void dfs(int i, const int size, const int range_start, const int range_end)
{
if (i == size) {
    // do whatever work you want here
for(int i = 0; i < buf.size(); ++i){
cout << buf[i] << " ";
}
cout << endl;
}
else{
for(int j = range_start; j <= range_end; ++j){
buf[i] = j;
dfs(i + 1, size, range_start, range_end);
}
}
}
int main(void)
{
int size = 3;
int range_start = 1;
int range_end = 5;
buf.resize(size);
dfs(0, size, range_start, range_end);
}
!g++ temp.cpp; ./a.out
# Four kinds of exhaustive search
# Exhaustive search that literally checks every case; usually solvable with nested (multiple) loops.
# Exhaustive search that cleverly reduces the number of cases to check.
# Bit-mask exhaustive search: a kind of exhaustive search that plain nested loops cannot handle; see the article linked above for details (a minimal sketch is added right below).
# Permutation exhaustive search: another kind that nested loops cannot handle; see the article linked above for details.
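# NOTE: the cell below is an added illustration, not part of the original memo or the linked articles.
# A minimal sketch of bit-mask exhaustive search: every integer from 0 to 2^n - 1 encodes one subset
# of n items, so all subsets can be enumerated with a single loop. As a made-up example, it counts
# how many subsets of the input numbers sum to exactly m.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
  int n, m; cin >> n >> m;
  vector<int> a(n);
  for (int i = 0; i < n; i++) cin >> a.at(i);
  int answer = 0;
  for (int bit = 0; bit < (1 << n); bit++) {      // each bit pattern is one subset
    long long sum = 0;
    for (int i = 0; i < n; i++) {
      if (bit & (1 << i)) sum += a.at(i);         // item i is included in this subset
    }
    if (sum == m) answer++;
  }
  cout << answer << endl;
}
!g++ temp.cpp; echo 3 3 1 2 3 |./a.out #=> 2 (the subsets {3} and {1,2})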
# Exhaustive-search practice problems <= to do later; ABC = AtCoder Beginner Contest
# ABC144 B - 81: the absolute basics.
# ABC150 B - Count ABC: not so much search as simply "check every case".
# ABC122 B - ATCoder: also basic.
# ABC136 B - Uneven Numbers: also basic.
# ABC106 B - 105: also basic.
# ABC120 B - K-th Common Divisors: needs a simple observation, but basic.
# ABC057 C - Digits in Multiplication: work hard to reduce the number of cases searched.
# ABC095 C - Half and Half: work hard to reduce the number of cases searched.
# Sumitomo Mitsui Trust Bank Programming Contest 2019 D - Lucky PIN: work hard to reduce the number of cases searched.
# ABC128 C - Switches: basics of bit-mask exhaustive search.
# ABC147 C - HonestOrUnkind2: basics of bit-mask exhaustive search.
# ABC145 C - Average Length: basics of permutation exhaustive search.
# ABC150 C - Count Order: solvable with permutation exhaustive search.
# ABC054 C - One-stroke Path: solvable with permutation exhaustive search.
# Finally, the most important thing of all: actually enter the contests.
# Keeping your motivation up is what matters most
# For C++, the STL features you want to be able to use are the 25 below.
%%html
<table border="1">
<tbody>
<tr><td>abs</td><td>sin/cos/tan</td><td>string</td><td>min/max</td><td>swap</td></tr>
<tr><td>__gcd</td><td>rand</td><td>clock</td><td>reverse</td><td>sort</td></tr>
<tr><td>vector</td><td>stack</td><td>queue</td><td>priority_queue</td><td>map</td></tr>
<tr><td>lower_bound</td><td>set</td><td>pair</td><td>tuple</td><td>assert</td></tr>
<tr><td>count</td><td>find</td><td>next_permutation</td><td>__builtin_popcount</td><td>bitset</td></tr>
</tbody>
</table>
# Understand all of the basic algorithms and data structures listed below.
# Algorithms (12)
# Exhaustive search (including bit-mask and permutation search)
# Binary search
# Depth-first search (DFS)
# Breadth-first search (BFS)
# Dynamic programming (including bit DP and the like)
# Dijkstra's algorithm (shortest paths)
# Warshall-Floyd algorithm (shortest paths)
# Kruskal's algorithm (minimum spanning tree)
# Fast primality testing
# Fast exponentiation
# Computing modular inverses
# Cumulative sums (prefix sums) -- a small sketch is added right after this list
# Data structures (3)
# Graphs (graph theory)
# Trees
# Union-Find
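# NOTE: the cell below is an added illustration of cumulative sums (prefix sums) from the list above;
# it is not part of the original memo. With s[i] = a[0] + ... + a[i-1] precomputed once,
# any range sum over the half-open interval [l, r) is s[r] - s[l] in O(1).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
  int n, q; cin >> n >> q;                 // n values, q range-sum queries
  vector<long long> a(n), s(n + 1, 0);
  for (int i = 0; i < n; i++) cin >> a.at(i);
  for (int i = 0; i < n; i++) s.at(i + 1) = s.at(i) + a.at(i);  // prefix sums
  while (q--) {
    int l, r; cin >> l >> r;               // 0-indexed, half-open [l, r)
    cout << s.at(r) - s.at(l) << endl;
  }
}
!g++ temp.cpp; echo 5 2 1 2 3 4 5 0 3 1 5 |./a.out #=> 6 and 14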
# Absolute value: abs
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
  // Example 1: read two decimals a and b and print the absolute value of a-b as a decimal.
double a, b;
cin >> a >> b;
  printf("%.12lf\n", abs(a - b)); // lf means double
cout << fixed << setprecision(12) << abs(a - b) << endl;
}
!g++ temp.cpp; echo 2.81 -1.2 |./a.out
# The cout version of the program above
# Using fixed together with setprecision lets you fix the number of digits after the decimal point
# Absolute value: abs
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
  // Example 1: read two decimals a and b and print the absolute value of a-b as a decimal.
double a, b;
cin >> a >> b;
cout << fixed << setprecision(3) << abs(a - b) << endl;
printf("%.3lf\n", abs(a - b));
}
!g++ temp.cpp; echo 2.81 -1.2 |./a.out
# Trigonometric functions: sin/cos/tan
# They take angles in radians
# sin(x) returns a double => sin(π/6) = 0.5
# cos(x) returns a double => cos(π/6) = 0.866025...
# tan(x) returns a double => tan(π/6) = 0.577350...
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
  // read the angle in degrees and convert it before computing
double pi = M_PI;
double x;
cin >> x;
double y = x / 180.0 * pi;
cout << y << endl;
  printf("%.12lf\n", sin(x / 180.0 * pi)); // lf means double (perhaps short for "long float"?)
printf("%.12lf\n", cos(x / 180.0 * pi));
printf("%.12lf\n", tan(x / 180.0 * pi));
cout << fixed << setprecision(12);
cout << sin(y) << endl;
cout << cos(y) << endl;
cout << tan(y) << endl;
}
!g++ temp.cpp; echo 30 |./a.out
# Experiment: does setprecision have a default? => no
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
cout << M_PI << endl;
cout << setprecision(16) << M_PI << endl;
cout << setprecision(16) << M_PI*1000 << endl;
cout << setprecision(16) << M_PI/1000 << endl;
  double x = M_PI;  // fixed a typo: "Mi_PI" does not compile
  cout << fixed << setprecision(12); // combined with fixed, this fixes the digits after the decimal point
cout << sin(M_PI/6) << endl;
cout << cos(M_PI/6) << endl;
cout << tan(M_PI/6) << endl;
}
!g++ temp.cpp; echo |./a.out
# Experiment: how many digits does to_string keep?
# How to fix the number of digits after the decimal point with cout
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
template <typename T>
std::string to_string_with_precision(const T a_value, const int n = 6)
{
std::ostringstream out;
out.precision(n);
out << std::fixed << a_value;
return out.str();
}
int main() {
  cout << M_PI << endl; // 6 significant digits => 3.14159
  cout << M_PI/1000 << endl; // 6 significant digits => 0.00314159
  cout << to_string(M_PI) << endl; // fixed at 6 digits after the decimal point => 3.141593
  cout << to_string(M_PI/1000) << endl; // fixed at 6 digits after the decimal point => 0.003142
  cout << to_string_with_precision(M_PI, 12) << endl; // 12 digits after the decimal point requested => 3.141592653590
  cout << to_string_with_precision(M_PI/1000, 12) << endl; // 12 digits after the decimal point requested => 0.003141592654
}
!g++ temp.cpp; echo |./a.out
# string
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: 入力した 2 つの文字列を連結して、最初の 10 文字を出力します。
string a, b;
cin >> a >> b;
string c = a + b;
if (c.size() <= 10) cout << c << endl;
else cout << c.substr(0, 10) << endl;
// 例 2: 入力した文字列 s の偶数文字目を 'z' に変えて出力します。
string s;
cin >> s;
for (int i = 0; i < s.size(); i += 2) s.at(i) = 'z';
cout << s << endl;
return 0;
}
!g++ temp.cpp; echo "try" "012345" "012345"|./a.out
# Try making the program above handle Japanese (multi-byte UTF-8) text
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int count_byte(char c){
if ((c & 0x80) == 0) return 1;
else if ((c & 0xe0) == 0xc0) return 2;
else if ((c & 0xf0) == 0xe0) return 3;
else return 4;
}
vector<string> str2vec(string &str){
vector<string> vec;
for (int i = 0 ; i < str.size(); i += count_byte(str.at(i))){
vec.push_back(str.substr(i,count_byte(str.at(i))));
}
return vec;
}
int main() {
// 例 1: 入力した 2 つの文字列を連結して、最初の 10 文字を出力します。
string a, b;
cin >> a >> b;
string c = a + b;
vector<string> jvec;
jvec = str2vec(c);
if (jvec.size() <= 10) {
for (int i = 0; i < jvec.size(); i++) cout << jvec.at(i);
cout << endl;
}
else {
for (int i = 0; i < 10; i++) cout << jvec.at(i);
cout << endl;
}
// 例 2: 入力した文字列 s の偶数文字目を 'z' に変えて出力します。
string s;
cin >> s;
vector<string> jvec2;
jvec2 = str2vec(s);
for (int i = 0; i < jvec2.size(); i += 2) jvec2.at(i) = 'z';
for (int i = 0; i < jvec2.size(); i++) cout << jvec2.at(i);
cout << endl;
}
!g++ temp.cpp; echo "施行" "錯誤してみる" "難しいね!!!!"|./a.out
# string
# S.substr(l): returns the substring of S from position l to the last character.
# S.substr(l, r): returns the substring of S from position l through position l+r-1.
# Note that positions in a string start at 0.
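# A minimal sketch of the substr behaviour described above (the sample string and positions are arbitrary):
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    string S = "algorithm";
    cout << S.substr(4) << endl;    // from position 4 to the end => "rithm"
    cout << S.substr(4, 3) << endl; // 3 characters starting at position 4 => "rit"
}
!g++ temp.cpp; echo |./a.out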
# Minimum / maximum: min/max
# min(a, b): returns the smaller of the two values a and b.
# max(a, b): returns the larger of the two values a and b.
# min({a1, a2, ..., an}): returns the smallest of {a1, a2, ..., an}.
# max({a1, a2, ..., an}): returns the largest of {a1, a2, ..., an}.
# *min_element(c + l, c + r): returns the smallest of {c[l], c[l+1], ..., c[r-1]}.
# *max_element(c + l, c + r): returns the largest of {c[l], c[l+1], ..., c[r-1]}.
# min_element and max_element return iterators, so the leading * is needed.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: 103, 168, 103 の中で最も大きい値を出力する : 168 が出力される
cout << max({103, 168, 103}) << endl;
// 例 2: {c[1], c[2], ..., c[N]} の最小値を出力する方法 1 つ目
int N, c[100009], minx = 2147483647;
cin >> N;
for (int i = 1; i <= N; i++) cin >> c[i];
for (int i = 1; i <= N; i++) minx = min(minx, c[i]);
cout << minx << endl;
// 例 3: {c[1], c[2], ..., c[N]} の最小値を出力する方法 2 つ目
cout << *min_element(c + 1, c + N + 1) << endl;
}
!g++ temp.cpp; echo 5 3 1 4 1 5 |./a.out
# Problem: write functions that return the minimum / maximum of a vector
%%writefile temp.cpp
# include <bits/stdc++.h>
using namespace std;
int myMax(const vector<int>& v) {
int max = numeric_limits<int>::min();
for (auto e : v) if (e > max) max = e;
return max;
}
int myMax02(const vector<int>& vec){
return *max_element(vec.begin(), vec.end());
}
int myMin(const vector<int>& v) {
int min = numeric_limits<int>::max();
for (auto e : v) if (e < min) min = e;
return min;
}
int myMin02(const vector<int>& vec){
return *min_element(vec.begin(), vec.end());
}
int main() {
vector<int> vex = {1, 2, 3, 4, 5};
assert(myMax(vex) == 5);
assert(myMax02(vex) == 5);
assert(myMin(vex) == 1);
assert(myMin02(vex) == 1);
}
!g++ temp.cpp; ./a.out
# Swapping values: swap
# swap(a, b) exchanges the values of the variables a and b.
# If you do not know bubble sort, see the article ソートを極める! 〜 なぜソートを学ぶのか 〜
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 2 つの変数 a, b を入れ替え、出力する
int a = 3, b = 8;
swap(a, b);
cout << a << " " << b << endl;
  // bubble sort: rearrange the vector into ascending order and print it
  vector<int> vec {9, 8, 7, 1, 2, 3};
  for (int i = 0; i < vec.size() - 1; i++) {
    for (int j = 0; j < vec.size() - 1 - i; j++) {
      if (vec.at(j) > vec.at(j + 1)) swap(vec.at(j), vec.at(j + 1));
}
}
for (int i = 0; i < vec.size(); i++) cout << vec.at(i) << endl;
}
!g++ temp.cpp; echo |./a.out
# Greatest common divisor: __gcd
# Returns the greatest common divisor of two integers a and b.
# __gcd(8, 16) = 8
# __gcd(234, 217) = 1
# The complexity is O(log a), so it is very fast.
# There is no standard-library function that returns the least common multiple,
# but lcm(a, b) can be computed as a / __gcd(a, b) * b.
# Written this way it is less likely to overflow than a * b / __gcd(a, b).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main () {
cout << __gcd(8,18) << endl;
cout << 8 / __gcd(8, 18) * 18 << endl;
}
!g++ temp.cpp; echo |./a.out
# Random numbers: rand / random
# Basically, remember the following two as a set.
# rand(): returns a random integer between 0 and RAND_MAX (2^31 - 1 with gcc).
# srand((unsigned)time(NULL)); is boilerplate: put it at the top of main and
# the random numbers change on every run.
# The quality of rand() is not perfect, so if you need better random numbers,
# use something like the Mersenne Twister (std::mt19937 in C++).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
srand((unsigned)time(NULL));
// 例 1: 1 以上 6 以下のランダムな整数を出力する
cout << rand() % 6 + 1 << endl;
// 例 2: 90% の確率で "Yay!"、10% の確率で ":(" と出力する
if (rand() % 10 <= 8) cout << "Yay!" << endl;
else cout << ":(" << endl;
return 0;
}
!g++ temp.cpp; echo |./a.out
# Experiment: Mersenne Twister mt19937
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main () {
srand((unsigned)time(NULL));
cout << rand() << endl;
random_device rd;
mt19937 mt(rd());
cout << mt() << std::endl;
}
!g++ temp.cpp; echo |./a.out
# Problem: generate a random 4-digit base-36 string
# Here "base-36" means a string built from the characters 0-9 and a-z
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
const string str = "0123456789abcdefghijklmnopqrstuvwxyz";
string getDigits () {
string digits = "";
for (int i = 0; i < 4; i++) {
digits.push_back(str[rand()%36]);
}
return digits;
}
int main () {
srand((unsigned)time(NULL));
for (int i = 0; i < 4; i++) cout << getDigits() << endl;
}
!g++ temp.cpp; echo |./a.out
# Measuring time: clock
# Basically, remember the following two as a set.
# clock(): returns, as an integer, how many clock ticks have passed since the program started.
# CLOCKS_PER_SEC: a constant. How many ticks make one second differs by environment, so use it to convert to seconds.
# In other words, the number of seconds since the start is clock()/CLOCKS_PER_SEC.
# By the way, CLOCKS_PER_SEC is 1000000 with gcc.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: 実行にかかる時間を計測する
vector<int> vec;
int ti = clock(); // start 時間
for (int i = 1; i <= 100000; i++) vec.push_back(i);
printf("Execution Time: %.4lf sec\n", 1.0 * (clock() - ti) / CLOCKS_PER_SEC);
}
!g++ temp.cpp; echo |./a.out
# Reversing an array: reverse
# The reversing functions are summarized below; here a is an array of some type.
# reverse(a, a + N): reverses a[0], a[1], ..., a[N-1].
# For example, if a = {2, 1, 4, 3, ...}, then reverse(a, a + 3) gives a = {4, 1, 2, 3, ...}.
# reverse(a + l, a + r): reverses a[l], a[l+1], ..., a[r-1].
# The complexity is O(N). To reverse a string str, write reverse(str.begin(), str.end()).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: 配列 a の 2~6 番目の要素を逆順にします。{8, 3, 2, 6, 4, 1, 7, 5} に変化します。
int a[8] = {8, 3, 7, 1, 4, 6, 2, 5};
reverse(a + 2, a + 7);
for (int i = 0; i < 8; i++) cout << a[i] << endl;
// 例 2: {b[0], b[1], ..., b[N-1]} を入力し、逆順にしたうえで出力します。
int N, b[1009];
cin >> N;
for (int i = 0; i < N; i++) cin >> b[i];
reverse(b, b + N);
for (int i = 0; i < N; i++) cout << b[i] << endl;
}
!g++ temp.cpp; echo 3 1 2 3 |./a.out
# Experiment: reverse
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int count_byte(char c){
if ((c & 0x80) == 0) return 1;
else if ((c & 0xe0) == 0xc0) return 2;
else if ((c & 0xf0) == 0xe0) return 3;
else return 4;
}
vector<string> str2vec(string &str){
vector<string> vec;
for (int i = 0 ; i < str.size(); i += count_byte(str.at(i))){
vec.push_back(str.substr(i,count_byte(str.at(i))));
}
return vec;
}
int main() {
string str = "abracadabra";
reverse(str.begin(), str.end());
cout << str << endl;
vector<int> vec = {1,2,3};
reverse(vec.begin(), vec.end());
for (int i = 0; i < vec.size(); i++) {
cout << vec.at(i) << " ";
}
cout << endl;
str = "日本語を含むsentense";
vector<string> jvec;
jvec = str2vec(str);
reverse(jvec.begin(), jvec.end());
for (int i = 0; i < jvec.size(); i++) {
cout << jvec.at(i) ;
} cout << endl;
}
!g++ temp.cpp; echo 3 1 2 3 |./a.out
# Sorting: sort
# Remember the following three forms and they cover most competitive-programming problems.
# sort(a, a + N); sorts a[0], a[1], ..., a[N-1] in ascending order.
# sort(a + l, a + r); sorts a[l], a[l+1], ..., a[r-1] in ascending order.
# sort(a, a + N, greater<int>()); sorts a[0], a[1], ..., a[N-1] in descending order.
# For descending order, use greater<int>() when the elements are int
# and greater<double>() when the elements are double.
# The complexity is O(N log N). For strings, "smaller" means earlier in dictionary order.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
vector<int> vec {8, 3, 7, 1, 4, 6, 2, 5};
sort(vec.begin(), vec.end(), greater<int>());
for (int i = 0; i < vec.size(); i++) cout << vec.at(i) << " " ;
cout << endl;
sort(vec.begin(), vec.end());
for (int i = 0; i < vec.size(); i++) cout << vec.at(i) << " " ;
cout << endl;
}
!g++ temp.cpp; echo |./a.out
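# The cell above only sorts a vector; a short sketch of the plain-array forms described above (sample values are arbitrary):
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    int a[6] = {31, 41, 59, 26, 53, 58};
    sort(a, a + 6); // ascending
    for (int i = 0; i < 6; i++) cout << a[i] << " ";
    cout << endl; // 26 31 41 53 58 59
    sort(a, a + 6, greater<int>()); // descending
    for (int i = 0; i < 6; i++) cout << a[i] << " ";
    cout << endl; // 59 58 53 41 31 26
    string s[3] = {"banana", "apple", "cherry"};
    sort(s, s + 3); // strings sort in dictionary order
    for (int i = 0; i < 3; i++) cout << s[i] << " ";
    cout << endl; // apple banana cherry
}
!g++ temp.cpp; echo |./a.out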
# vector
# vector<int> a; defines a vector variable a.
# a.push_back(x); appends x to the end of a.
# a.pop_back(); removes the last element of a.
# a[i] accesses the i-th element from the front of a.
# a.size() returns, as an integer, how many elements a currently holds.
# push_back and pop_back are both O(1), which is fast, and memory usage is efficient.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: a に色々な操作を行う(x1 = 105, x2 = 2, x3 = 146 となります)
vector<int> a; // その時点で a は空
a.push_back(121); // その時点で a = {121}
a.push_back(105); // その時点で a = {121, 105}
a.push_back(193); // その時点で a = {121, 105, 193}
int x1 = a[1];
a.pop_back(); // その時点で a = {121, 105}
int x2 = a.size();
a.push_back(146); // その時点で a = {121, 105, 146}
int x3 = a[2];
cout << x1 << " " << x2 << " " << x3 << endl;
return 0;
}
!g++ temp.cpp; echo |./a.out
# stack
# A type that manages a stack: a data structure where you pile values up
# and take the top one off.
# A stack supports the following operations.
# stack<int> a; defines a stack variable a.
# a.push(x): puts element x on top of stack a.
# a.pop(): removes the top element of stack a.
# a.top(): returns the top element of stack a.
# For example, if the stack holds {3, 1, 4} from the bottom up, a.top() is 4.
# a.size(): returns the number of elements in stack a.
# a.empty(): returns true if a has 0 elements, false if it has 1 or more.
# push, pop, top and the like are all O(1).
# In practice it is a strict subset of what a vector can do.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: a に色々な操作を行う(x1 = 156, x2 = 202, x3 = 117, x4 = 3, x5 = 0 となります)
stack<int> a;
a.push(179); // その時点で下から順に {179}
a.push(173); // その時点で下から順に {179, 173}
a.push(156); // その時点で下から順に {179, 173, 156}
int x1 = a.top();
a.pop(); // その時点で下から順に {179, 173}
a.push(117); // その時点で下から順に {179, 173, 117}
a.push(202); // その時点で下から順に {179, 173, 117, 202}
int x2 = a.top();
a.pop(); // その時点で下から順に {179, 173, 117}
int x3 = a.top();
int x4 = a.size();
int x5 = 0; if (a.empty()) x5 = 10000;
cout << x1 << " " << x2 << " "<< x3 << " " << x4 << " " << x5 << endl;
}
!g++ temp.cpp; echo |./a.out
# queue
# A type that manages a queue, i.e. a waiting line.
# queue<int> a; defines a queue variable a.
# a.push(x): puts element x at the back of queue a.
# a.pop(): removes the front element of queue a.
# a.front(): returns the front element of queue a.
# For example, if a = {3, 1, 4} from the front, it returns 3.
# a.size(): returns the number of elements in queue a.
# a.empty(): returns true if a has 0 elements, false if it has 1 or more.
# push, pop, front and the like are all O(1).
# This data structure is useful for breadth-first search and shortest-path problems.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: a に色々な操作を行う(x1 = 179, x2 = 173, x3 = 156, x4 = 3, x5 = 0 となります)
queue<int> a;
a.push(179); // その時点で前から順に {179}
a.push(173); // その時点で前から順に {179, 173}
a.push(156); // その時点で前から順に {179, 173, 156}
int x1 = a.front();
a.pop(); // その時点で前から順に {173, 156}
a.push(117); // その時点で前から順に {173, 156, 117}
a.push(202); // その時点で前から順に {173, 156, 117, 202}
int x2 = a.front();
a.pop(); // その時点で前から順に {156, 117, 202}
int x3 = a.front();
int x4 = a.size();
int x5 = 0; if (a.empty()) x5 = 10000;
cout << x1 << " " << x2 << " "<< x3 << " " << x4 << " " << x5 << endl;
}
!g++ temp.cpp; echo |./a.out
# priority_queue
# A type that manages a priority queue.
# a.push(x): adds element x to priority queue a.
# a.pop(): removes the smallest element of a (or the largest, depending on how a is declared).
# a.top(): returns the smallest element of a (or the largest, depending on how a is declared).
# a.size(): returns the number of elements in a.
# a.empty(): returns true if a has 0 elements, false if it has 1 or more.
# If N is the number of elements in the priority queue, push, pop, and top are O(log N).
# The way the variable is defined is a little unusual, so here it is:
# use greater when you want to take out the smallest value, less for the largest value.
# // a priority_queue holding int elements that yields the smallest value
# priority_queue<int, vector<int>, greater<int>> Q1;
# // a priority_queue holding double elements that yields the smallest value
# priority_queue<double, vector<double>, greater<double>> Q2;
# // a priority_queue holding int elements that yields the largest value
# priority_queue<int, vector<int>, less<int>> Q3;
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: Q に色々な操作を行う(x1 = 116, x2 = 110, x3 = 122, x4 = 2 となります)
priority_queue<int, vector<int>, greater<int>> Q;
Q.push(116); // この時点で、小さい順に {116}
Q.push(145); // この時点で、小さい順に {116, 145}
Q.push(122); // この時点で、小さい順に {116, 122, 145}
int x1 = Q.top();
Q.push(110); // この時点で、小さい順に {110, 116, 122, 145}
int x2 = Q.top();
Q.pop(); // この時点で、小さい順に {116, 122, 145}
Q.pop(); // この時点で、小さい順に {122, 145}
int x3 = Q.top();
int x4 = Q.size();
cout << x1 << " " << x2 << " " << x3 << " " << x4 << endl;
}
!g++ temp.cpp; echo |./a.out
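# The cell above only uses the greater<int> (smallest-first) form; a minimal sketch of the less<int> (largest-first) form listed above, with arbitrary sample values:
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    priority_queue<int, vector<int>, less<int>> Q; // the largest value comes out first
    Q.push(116);
    Q.push(145);
    Q.push(122);
    cout << Q.top() << endl; // 145
    Q.pop();
    cout << Q.top() << endl; // 122
}
!g++ temp.cpp; echo |./a.out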
# map: associative array (see also unordered_map)
# A data structure like an infinite array that lets you write to any "address".
# Let a be a map variable and x a variable of a suitable type.
# You can assign and compute with it just like an array. x does not have to be an integer: a string, a vector, anything goes.
# a.clear(): resets map a.
# Initially every value is 0 (or the empty string for string values).
# If N is the number of accesses made to the map so far, accessing one particular address costs about O(log N).
# The definition is a little unusual: basically map<address type, stored type>.
# // storing int values at int addresses (like an int array with 2^31 entries)
# map<int, int> M1;
# // storing string values at int addresses (like a string array with 2^31 entries)
# map<int, string> M2;
# // storing double values at string addresses
# map<string, double> M3;
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
map<string, int> Map;
Map["qiita"] = 777;
Map["algorithm"] = 1111;
Map["competitive_programming"] = 1073741824;
cout << Map["algorithm"] << endl; // 1111 と出力される
cout << Map["tourist"] << endl; // まだ何も書き込まれていないので、0 と出力される
}
!g++ temp.cpp; echo |./a.out
# Experiment: can a map be initialized with an initializer list?
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
map<int, const char*> mapList = { {1, "a"}, {2, "b"}, {3, "c"}};
/*
map<int, const char*>::iterator it = mapList.begin();
while(it != mapList.end()) {
printf("key:%d value:%s\n" , it->first , it->second);
it++;
} */
// 次のように書く方が分かりやすいと思う
for (auto it = mapList.begin(); it != mapList.end(); it++){
printf("key:%d value:%s\n" , it->first , it->second);
}
}
!g++ temp.cpp; echo |./a.out
# Both map and set have unordered_map / unordered_set counterparts, and those perform better
# "map" and "set" are ordinary words and hard to search for, so using the unordered versions may be the better choice
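# A small sketch of the unordered variants mentioned above (average O(1) access, but keys are not kept in order; the keys here are arbitrary):
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    unordered_map<string, int> um;
    um["qiita"] = 777;
    um["algorithm"] = 1111;
    cout << um["algorithm"] << endl; // 1111
    unordered_set<int> us;
    us.insert(15);
    us.insert(37);
    cout << us.count(37) << endl; // 1 (present)
    cout << us.count(99) << endl; // 0 (absent)
}
!g++ temp.cpp; echo |./a.out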
# lower_bound
# A function that performs binary search.
# Suppose the elements of array a from index l to r-1 are sorted in ascending order.
# Then lower_bound(a + l, a + r, x) - a returns the first index in a[l] .. a[r-1]
# whose element is greater than or equal to x.
# That is, it returns the smallest i with l <= i <= r-1 such that x <= a[i].
# For example, if a = {2, 3, 4, 6, 9, ...}, then lower_bound(a, a + 5, 7) - a = 4.
# The complexity is O(log N).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
vector<int> a {2, 3, 100, 2, 3};
sort(a.begin(), a.end());
cout << lower_bound(a.begin(), a.end(), 100) - a.begin() << endl;
// 例 1: a[i] < x となるような i が何個あるのかを O(log N) で計算する
/*
int N, a[100009];
cin >> N;
for (int i = 0; i < N; i++) cin >> a[i];
sort(a, a + N);
int x;
cin >> x;
cout << lower_bound(a, a + N, x) - a << endl;
*/
}
!g++ temp.cpp; echo |./a.out
# set (see also unordered_set, unordered_multiset)
# A data structure that manages a set of values.
# It supports binary search. There are two variants: set and multiset.
# With a set variable a and a value x of a suitable type, the following operations are available.
# Here y is an iterator pointing at an element of the set.
# a.insert(x): adds element x to set a, unless x is already in the set.
# (A multiset adds it even if one or more copies already exist.)
# a.erase(x): removes element x from set a. (A multiset removes every copy of x.)
# a.erase(y): removes the element pointed to by iterator y.
# (This removes just one element, even in a multiset.)
# a.lower_bound(x): returns an iterator to the smallest element of a that is >= x.
# a.clear(): empties the set.
# For a set with N elements, most operations are O(log N). (Same for multiset.)
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: set に対して色々な操作を行う(1 個目は "End"、2 個目は "46" と出力される)
set<int> Set;
Set.insert(37); // その時点での Set の要素は {37}
Set.insert(15); // その時点での Set の要素は {15, 37}
Set.insert(37); // その時点での Set の要素は {15, 37}
auto itr1 = Set.lower_bound(40); // => "End"
// auto itr1 = Set.lower_bound(30); // => 37
if (itr1 == Set.end()) cout << "End" << endl;
else cout << (*itr1) << endl;
Set.erase(37); // その時点での Set の要素は {15}
Set.insert(46); // その時点での Set の要素は {15, 46}
auto itr2 = Set.lower_bound(20);
if (itr2 == Set.end()) cout << "End" << endl;
else cout << (*itr2) << endl;
// 例 2: a[1],a[2],...,a[N] を小さい順に出力する(同じ要素が複数ある場合 1 回だけ出力する)
set<int> b; int N, a[100009];
cin >> N;
for (int i = 1; i <= N; i++) cin >> a[i];
for (int i = 1; i <= N; i++) b.insert(a[i]);
auto itr = b.begin();
while (itr != b.end()) {
cout << (*itr) << endl;
itr++;
}
}
!g++ temp.cpp; echo 3 1 3 8 |./a.out
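# A minimal sketch of the multiset behaviour described above (duplicates are kept; erase(x) removes all copies, erase(iterator) removes one); the values are arbitrary:
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    multiset<int> ms;
    ms.insert(15); ms.insert(37); ms.insert(37); // ms = {15, 37, 37}
    cout << ms.count(37) << endl;                // 2
    ms.erase(ms.find(37));                       // erase via iterator: removes one copy => {15, 37}
    cout << ms.count(37) << endl;                // 1
    ms.erase(37);                                // erase by value: removes all copies => {15}
    cout << ms.count(37) << endl;                // 0
}
!g++ temp.cpp; echo |./a.out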
# pair
# pair is a type that holds a "pair" of two values, possibly of different types, in one variable.
# If a is a pair variable,
# the first element is a.first
# and the second element is a.second.
# If the first element has type v1 and the second has type v2,
# the variable is defined as pair<v1, v2> a;
# For example, to hold two ints: pair<int, int> a;
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int N;
pair<int, string> a[100009];
int main() {
// 例 1: N 人の成績と名前を入力して、成績の大きい順にソートする
cin >> N;
for (int i = 0; i < N; i++) {
cin >> a[i].second; // 名前を入力する
cin >> a[i].first; // 成績を入力する
}
sort(a, a + N, greater<pair<int, string>>());
for (int i = 0; i < N; i++) {
cout << "Name = " << a[i].second << ", Score = " << a[i].first << endl;
}
return 0;
}
!g++ temp.cpp; echo 2 "Bob" 1 "John" 8 |./a.out
# tuple
# A type that can hold a "tuple" of three or more elements. pair holds two; tuple can hold any number.
# (One or two elements also work.)
# Remember the following three constructs and you will rarely be stuck in competitive programming.
# To define a tuple variable with element types v1, v2, ..., vn,
# write tuple<v1, v2, ..., vn> a;
# To access the i-th element of a tuple variable a, write get<i>(a).
# make_tuple(a1, a2, ..., an) builds a tuple from copies of its arguments.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: tuple の基本
tuple<int, int, int> A;
cin >> get<0>(A) >> get<1>(A) >> get<2>(A);
cout << get<0>(A) + get<1>(A) + get<2>(A) << endl;
// 例 2: vector にも tuple を入れられます、この例はソートするプログラムです
vector<tuple<double, int, int>> B; int N;
cin >> N;
for (int i = 1; i <= N; i++) {
double p1; int p2, p3;
cin >> p1 >> p2 >> p3;
B.push_back(make_tuple(p1, p2, p3));
}
sort(B.begin(), B.end());
for (int i = 0; i < N; i++) printf("%.5lf %d %d\n", get<0>(B[i]), get<1>(B[i]), get<2>(B[i]));
}
!g++ temp.cpp; echo 1 2 3 2 4.3 2 3 1.4 5 4 |./a.out
# assert
# Takes a condition and raises a runtime error when it is not met. Useful for debugging.
# Writing assert(condition) triggers a runtime error whenever the condition is false.
# For example, to run only when N <= 20 and raise a runtime error otherwise,
# write assert(N <= 20).
# Tip: in competitive programming the time limit is usually about 2 to 3 seconds.
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    int N, X, cnt = 0;
    vector<int> a(100009);
    cin >> N >> X;
    assert(N <= 10000); // guard before filling the vector
    for (int i = 0; i < N; i++) cin >> a.at(i);
for (int i = 0; i < N - 1; i++) {
for (int j = i + 1; j < N; j++) {
if (a.at(i) + a.at(j) == X) cnt++;
}
}
cout << cnt << endl;
    // Example 1: count the pairs (i, j) with i < j such that a[i] + a[j] = X
    // If N > 10000 the double loop would be too slow (TLE), so raise a runtime error instead
    cnt = 0;
    cin >> N >> X;
    assert(N <= 10000);
    for (int i = 1; i <= N; i++) cin >> a[i];
for (int i = 1; i <= N; i++) {
for (int j = i + 1; j <= N; j++) {
if (a[i] + a[j] == X) cnt++;
}
}
cout << cnt << endl;
}
!g++ temp.cpp; echo 6 8 1 7 2 6 3 5 6 8 1 7 2 6 3 5 |./a.out
# count
# Returns how many elements equal to x occur in a range of an array or vector.
# With an array a, count(a + l, a + r, x) returns, as an integer, how many of
# a[l], a[l+1], ..., a[r-1] are equal to x. For a vector, write count(v.begin(), v.end(), x).
# The complexity is O(r - l).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: 配列 a に含まれる 1 の個数、2 の個数を出力する(それぞれ4, 3 と出力されます)
int a[10] = {1, 2, 3, 4, 1, 2, 3, 1, 2, 1};
cout << count(a, a + 10, 1) << endl;
cout << count(a, a + 10, 2) << endl;
int N, Q;
cin >> N;
vector<int> b(N);
for (int i = 0; i < N; i++) cin >> b.at(i);
cin >> Q;
for (int i = 0; i < Q; i++){
int l, r, x;
cin >> l >> r >> x;
cout << count(b.begin()+l, b.begin() + r+1, x) << endl;;
}
/*
// 例 2: b[1], b[2], ..., b[N] を受け取り、その後 Q 個の質問を受け取る。
// 各質問に対し、b[l], b[l+1], ..., b[r] の中で x が何個あるかを出力する。
int b[1009], N, Q;
cin >> N;
for (int i = 1; i <= N; i++) cin >> b[i];
cin >> Q;
for (int i = 1; i <= Q; i++) {
int l, r, x;
cin >> l >> r >> x;
cout << count(b + l, b + r + 1, x) << endl;
}*/
}
!g++ temp.cpp; echo 10 1 2 3 4 1 2 3 1 2 1 2 0 9 1 0 9 2 |./a.out
# find
# Tells whether x occurs in a range of an array or vector and, if so, where it is.
# With an array a, find(a + l, a + r, x) returns the following iterator:
# if x does not occur in a[l], a[l+1], ..., a[r-1], the iterator a + r;
# otherwise, the iterator to the first a[i] equal to x when scanning from a[l].
# It also works for a vector: find(a.begin(), a.end(), x) returns
# a.end() if x does not occur in a[0], a[1], ..., a[a.size() - 1],
# and otherwise the iterator to the first a[i] equal to x.
# Since find returns an iterator, write find(a + l, a + r, x) - a when you want the position,
# i.e. the index where x first appears in the array. The complexity is O(r - l).
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
// 例 1: a[1], a[2], ..., a[N] を受け取る。その後、Q 個の質問を受け取る。
// 各質問は (l, r, x) の組から成り、a[l], a[l+1], ..., a[r] の中で x が存在しない場合 -1
// そうでない場合、存在する位置(ポインタではない)を出力する。
int N, Q, a[1009];
cin >> N;
for (int i = 1; i <= N; i++) cin >> a[i];
cin >> Q;
for (int i = 1; i <= Q; i++) {
int l, r, x;
cin >> l >> r >> x;
int f = find(a + l, a + r + 1, x) - a;
if (f == r + 1) cout << "-1" << endl; // 存在しない場合
else cout << f << endl; // 存在する場合
}
}
!g++ temp.cpp; echo 3 2 4 5 1 1 3 2 |./a.out
# next_permutation (permutations)
# I did not quite get how it works, so let's experiment
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main(){
vector<string> vec = {"book", "pencil", "eraser"};
sort(vec.begin(), vec.end());
do {
for (int i = 0; i < vec.size(); i++){
if (i) cout << ", ";
cout << vec.at(i);
}
cout<<endl;
} while (next_permutation(vec.begin(), vec.end()));
}
!g++ temp.cpp; echo |./a.out
###Output
book, eraser, pencil
book, pencil, eraser
eraser, book, pencil
eraser, pencil, book
pencil, book, eraser
pencil, eraser, book
###Markdown
Got up to here so far
###Code
# The while loop exits when there is no next permutation (no permutation that is lexicographically larger).
# For example, running it with n = 3 and a = {2, 3, 1} produces the following output:
# 2,3,1
# 3,1,2
# 3,2,1
# To iterate over every permutation, start from the ascending order, e.g. a = {1, 2, 3, ..., n}.
# To use next_permutation with a vector, write next_permutation(a.begin(), a.end()).
# If N is the length of the permutation, generating the next permutation costs at most O(N).
# Starting from the sorted state {1, 2, 3, ..., N}, the while loop body runs N! times.
# %%writefile temp.cpp
# #include <bits/stdc++.h>
# using namespace std;
# int N, A[12][12], B[12], perm[12], ans = 2000000000;
# int main() {
# // N 個の都市があり、都市 i から j まで移動するのにかかる時間は A[i][j] 分
# // 全ての都市を訪れるのに何分かかるか? ただし、どの都市から出発、どの都市に到着してもよい。
# // A[i][j] = A[j][i], A[i][i] = 0
# cin >> N;
# for (int i = 1; i <= N; i++) {
# for (int j = 1; j <= N; j++) cin >> A[i][j];
# }
# for (int i = 0; i < N; i++) B[i] = i + 1;
# do {
# int sum = 0;
# for (int i = 0; i < N - 1; i++) {
# sum += A[B[i]][B[i + 1]];
# }
# ans = min(ans, sum);
# } while(next_permutation(B, B + N));
# cout << ans << endl;
# }
# 3-24. __builtin_popcount
# Returns the number of bits that are 1 in the binary representation of the integer x.
# This function is available in gcc, but not in Visual Studio 2019 and the like.
# Overview
# __builtin_popcount(x) returns the number of 1 bits in x. For example, x = 42 is 101010 in binary,
# which contains three 1s, so it returns 3.
# There is also __builtin_popcountll(x), which handles long long. Both work without any extra include.
# Note 1
# In environments that support C++17, such as Visual Studio 2019, popcnt(x) and popcnt64(x)
# can be used instead, but they require including intrin.h. (From a reader comment.)
# Note 2
# bitset can do the same thing as __builtin_popcount(x); see section 3-25. (From a reader comment.)
# Sample code
# #include <iostream>
# using namespace std;
# int main() {
#     // Example 1: print the number of 1 bits in the binary representation of x.
#     long long x;
#     cin >> x;
#     cout << __builtin_popcountll(x) << endl;
#     return 0;
# }
# 3-25. bitset
# A type (class) representing a set of bits. You can also think of it as an N-digit binary number.
# Useful when you want to perform bit operations fast.
# Overview
# Variables are defined as follows:
# // Example 1: define a bitset of length 250000 (think of it as a 250000-digit binary number).
# bitset<250000> bs1;
# // Example 2: define a bitset of length 8, initialized from an integer.
# bitset<8> bs2(131); // from bit 7 down to bit 0, this is 10000011.
# // Example 3: define a bitset of length 8, initialized from a binary string.
# bitset<8> bs3("10000011"); // from bit 7 down to bit 0, this is 10000011.
# // Example 4: the same as example 3, only the length of the bitset is larger.
# bitset<2000> bs4("10000011"); // from bit 1999 down to bit 0, this is 0...010000011.
# bitset supports the following operations (here a and b are bitset variables):
# Operation        Description
# a = (a ^ b) etc. Bit operations (and, or, xor) work just like on int.
# a.set(x)         Sets bit x of a (the 2^x place) to 1.
# a.reset(x)       Sets bit x of a (the 2^x place) to 0.
# a[i]             Accesses bit i of a (the 2^i place), like an array. a[i] is always 0 or 1.
# a.count()        Returns the number of bits of a that are 1. Similar to __builtin_popcount(x).
# For an N-bit bitset, bit operations such as and, or, xor take about N/32 steps, which is fast.
# Include <bitset> to use it.
# Sample code
# #include <iostream>
# #include <bitset>
# #include <string>
# using namespace std;
# int main() {
#     // Example 1: compute A or B, where A and B are given as binary strings.
#     string A; cin >> A;
#     string B; cin >> B;
#     bitset<2000> A1(A);
#     bitset<2000> B1(B);
#     bitset<2000> ans = (A1 | B1);
#     bool flag = false;
#     for (int i = 1999; i >= 0; i--) {
#         if (ans[i] == 1) flag = true;
#         if (flag == true) cout << ans[i];
#     }
#     cout << endl;
#     return 0;
# }
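# A runnable sketch of the bitset operations listed above, in the same %%writefile style as the earlier cells (the values are arbitrary):
%%writefile temp.cpp
#include <bits/stdc++.h>
using namespace std;
int main() {
    bitset<8> a("10000011"); // bits 7..0 = 10000011
    cout << a.count() << endl; // 3 bits are 1
    a.set(2);                  // set bit 2   => 10000111
    a.reset(0);                // clear bit 0 => 10000110
    cout << a << endl;         // 10000110
    cout << __builtin_popcount(42) << endl; // 42 = 101010 in binary => 3
}
!g++ temp.cpp; echo |./a.out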
# 4. Your question: "So where do these standard library features actually get used?"
# Many of you probably thought exactly that while reading this article:
# "I understand the 25 standard library features now, but in what kinds of problems and algorithm implementations can I use them???"
# If that is you,
# read the second part:
# 厳選!C++ アルゴリズム実装に使える 25 の STL 機能【後編】
# The second part introduces 11 examples of algorithms and situations where the standard library (STL) features explained here can be put to use. Please read it if you are interested.
# To be continued in the second part.
# Footnote 1: for convenience the figures count submissions, but even counting participants per language, C++ is about half.
# Footnote 2: the remaining 10% are list, malloc and so on. They are not used that much in competitive programming (or can be replaced with a similar amount of code), so they were left out of this article.
# Footnote 3: see the linked article for details.
# Footnote 4: strictly speaking, __gcd and __builtin_popcount are not part of the standard library, but they are fairly important in competitive programming and algorithm implementation, so they are included here.
###Output
_____no_output_____ |
tutorials/annotations_image/classification_point_and_pose/chapter.ipynb | ###Markdown
Classification Classify a single item
###Code
# Get item from the platform
item = dataset.items.get(filepath='/your-image-file-path.jpg')
# Create a builder instance
builder = item.annotations.builder()
# Classify
builder.add(annotation_definition=dl.Classification(label=label))
# Upload classification to the item
item.annotations.upload(builder)
###Output
_____no_output_____
###Markdown
Classify Multiple Items Classifying multiple items requires using an Items entity with a filter.
###Code
# multiple items classification using filter
...
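# One possible sketch of the filter-based flow: the filter field and folder path below are
# placeholder values, and 'label' is assumed to be defined as in the single-item example above.
filters = dl.Filters(field='dir', values='/your-folder-path')
pages = dataset.items.list(filters=filters)
for page in pages:
    for item in page:
        builder = item.annotations.builder()
        builder.add(annotation_definition=dl.Classification(label=label))
        item.annotations.upload(builder)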
###Output
_____no_output_____
###Markdown
Create a Point Annotation
###Code
# Get item from the platform
item = dataset.items.get(filepath='/your-image-file-path.jpg')
# Create a builder instance
builder = item.annotations.builder()
# Create point annotation with label and attribute
builder.add(annotation_definition=dl.Point(x=100,
y=100,
label='my-label',
attributes={'color': 'red'}))
# Upload point to the item
item.annotations.upload(builder)
###Output
_____no_output_____
###Markdown
Pose Annotation
###Code
# Pose annotation is based on pose template. Create the pose template from the platform UI and use it in the script by its ID
template_id = recipe.get_annotation_template_id(template_name="my_template_name")
# Get item
item = dataset.items.get(filepath='/your-image-file-path.jpg')
# Define the Pose parent annotation and upload it to the item
parent_annotation = item.annotations.upload(
dl.Annotation.new(annotation_definition=dl.Pose(label='my_parent_label',
template_id=template_id,
# instance_id is optional
instance_id=None)))[0]
# Add child points
builder = item.annotations.builder()
builder.add(annotation_definition=dl.Point(x=x,
y=y,
label='my_point_label'),
parent_id=parent_annotation.id)
builder.upload()
###Output
_____no_output_____ |
examples/howto/charts/donut.ipynb | ###Markdown
Generic Examples Values with implied index
###Code
d = Donut([2, 4, 5, 2, 8])
show(d)
###Output
_____no_output_____
###Markdown
Values with Explicit Index
###Code
d = Donut(pd.Series([2, 4, 5, 2, 8], index=['a', 'b', 'c', 'd', 'e']))
show(d)
###Output
_____no_output_____
###Markdown
Autompg Data Take a look at the data
###Code
autompg.head()
###Output
_____no_output_____
###Markdown
Simple example implies count when object or categorical
###Code
d = Donut(autompg.cyl.astype(str))
show(d)
###Output
_____no_output_____
###Markdown
Equivalent with columns specified
###Code
d = Donut(autompg, label='cyl', agg='count')
show(d)
###Output
_____no_output_____
###Markdown
Given an indexed series of data pre-aggregated
###Code
d = Donut(autompg.groupby('cyl').displ.mean())
show(d)
###Output
_____no_output_____
###Markdown
Equivalent with columns specified
###Code
d = Donut(autompg, label='cyl',
values='displ', agg='mean')
show(d)
###Output
_____no_output_____
###Markdown
Given a multi-indexed series of data pre-aggregated. Since the aggregation type isn't specified, we must provide it to the chart for use in the tooltip, otherwise it will just say "value".
###Code
d = Donut(autompg.groupby(['cyl', 'origin']).displ.mean(), hover_text='mean')
show(d)
###Output
_____no_output_____
###Markdown
Column Labels Produce a Slightly Different Result. In the previous series-input example we do not have the original values, so we cannot size the wedges based on the mean of displacement for Cyl and then size the wedges proportionally inside of the Cyl wedge. This column-labeled example can perform the right sizing, so it would be preferred for any aggregated values.
###Code
d = Donut(autompg, label=['cyl', 'origin'],
values='displ', agg='mean')
show(d)
###Output
_____no_output_____
###Markdown
The spacing between each donut level can be altered. By default, this is applied only to the levels other than the first.
###Code
d = Donut(autompg, label=['cyl', 'origin'],
values='displ', agg='mean', level_spacing=0.15)
show(d)
###Output
_____no_output_____
###Markdown
You can specify the spacing for each level. This is applied to each level individually, including the first.
###Code
d = Donut(autompg, label=['cyl', 'origin'],
values='displ', agg='mean', level_spacing=[0.8, 0.3])
show(d)
###Output
_____no_output_____
###Markdown
Olympics Example Take a look at source data
###Code
print(data.keys())
data['data'][0]
###Output
_____no_output_____
###Markdown
Look at table formatted data
###Code
# utilize utility to make it easy to get json/dict data converted to a dataframe
df = df_from_json(data)
df.head()
###Output
_____no_output_____
###Markdown
Prepare the data. This data is in a "pivoted" format, and since the charts interface is built around referencing columns, it is more convenient to de-pivot the data. - We will sort the data by total medals and select the top rows by total medals. - Use pandas.melt to de-pivot the data.
###Code
# keep countries with more than 8 total medals and sort by total medals
df = df[df['total'] > 8]
df = df.sort_values("total", ascending=False)
olympics = pd.melt(df, id_vars=['abbr'],
value_vars=['bronze', 'silver', 'gold'],
value_name='medal_count', var_name='medal')
olympics.head()
# original example
d0 = Donut(olympics, label=['abbr', 'medal'], values='medal_count',
text_font_size='8pt', hover_text='medal_count')
show(d0)
###Output
_____no_output_____ |
Examples/solow/solow.ipynb | ###Markdown
A notebook with a simple Solow model You can run each cell by pressing the run tool or shift+enter Import Python libraries
###Code
%matplotlib inline
import pandas as pd
from modelclass import model
model.modelflow_auto()
###Output
_____no_output_____
###Markdown
Specify the model The explode function will rewrite the business logic.
###Code
fsolow = '''\
Y = a * k**alfa * l **(1-alfa)
C = (1-SAVING_RATIO) * Y
I = Y - C
diff(K) = I-depreciates_rate * K(-1)
diff(l) = labor_growth * L(-1)
K_intense = K/L
'''
print(fsolow)
###Output
Y = a * k**alfa * l **(1-alfa)
C = (1-SAVING_RATIO) * Y
I = Y - C
diff(K) = I-depreciates_rate * K(-1)
diff(l) = labor_growth * L(-1)
K_intense = K/L
###Markdown
Create a model class instance
###Code
msolow = model.from_eq(fsolow,modelname='Solow model')
print(msolow.equations)
###Output
FRML <> Y = A * K**ALFA * L **(1-ALFA) $
FRML <> C = (1-SAVING_RATIO) * Y $
FRML <> I = Y - C $
FRML <> K=K(-1)+(I-DEPRECIATES_RATE * K(-1))$
FRML <> L=L(-1)+(LABOR_GROWTH * L(-1))$
FRML <> K_INTENSE = K/L $
###Markdown
Show model structure
###Code
msolow.drawmodel(sink = 'K_INTENSE',size=(5,5))
###Output
_____no_output_____
###Markdown
Show solving structure (only current year)
###Code
msolow.drawendo(sink = 'K_INTENSE',source='L',size=(5,5))
msolow.plotadjacency();
###Output
_____no_output_____
###Markdown
Create DataFrame with baseline exogenous
###Code
N = 300
df = pd.DataFrame({'L':[100]*N,'K':[100]*N})
df.loc[:,'ALFA'] = 0.5
df.loc[:,'A'] = 1.
df.loc[:,'DEPRECIATES_RATE'] = 0.05
df.loc[:,'LABOR_GROWTH'] = 0.01
df.loc[:,'SAVING_RATIO'] = 0.05
display(df.head())
###Output
_____no_output_____
###Markdown
Run Baseline
###Code
res1 = msolow(df)
display(res1.head())
###Output
_____no_output_____
###Markdown
Create interactive widgets If you are not familiar with Python and Ipywidgets, don't try to understand the code. Just notice that it is fairly short. You can try different parameter values
###Code
slidedef = {'Productivity' : {'var':'ALFA', 'value': 0.5 ,'min':0.0, 'max':1.0},
'DEPRECIATES_RATE' : {'var':'DEPRECIATES_RATE', 'value': 0.05,'min':0.0, 'max':1.0},
'LABOR_GROWTH' : {'var':'LABOR_GROWTH', 'value': 0.01,'min':0.0, 'max':1.0},
'SAVING_RATIO' : {'var': 'SAVING_RATIO', 'value': 0.05,'min':0.0, 'max':1.0}
}
input = msolow.inputwidget(basedf=res1,slidedef=slidedef,showout=True,varpat='Y C I K L K_INTENSE')
###Output
_____no_output_____ |
JupyterNotebooks/2_Recursion.ipynb | ###Markdown
WEEK 2 Code Analysis, Recursion This notebook demonstrates a few different recursive functions, and alternative approaches to solving the same problem. Section 1: Factorial. Factorials are a good way to start learning about recursion. $n! = 1*2*3*...*n = \prod_{i=1}^n i$, $0! = 1$. It can be written using a recursive definition as well: $n! = n * (n-1)!$, $0! = 1$. To implement it linearly, we just need to iterate over 1...n, multiplying the running product by each value.
###Code
def fact_lin(n):
ret = 1
for i in range(1, n+1):
ret *= i
return ret
for i in range(0,11):
print(f"{i}! = {fact_lin(i)}")
###Output
0! = 1
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
10! = 3628800
###Markdown
But we want to practice with recursion! So let's solve this using an algorithm that implements the recursive definition of factorial.
###Code
def fact_rec(n):
return 1 if n == 0 else n * fact_rec(n-1)
for i in range(0,11):
print(f"{i}! = {fact_rec(i)}")
###Output
0! = 1
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
10! = 3628800
###Markdown
Section 2: Fibonacci. Another fun example is to generate the Fibonacci sequence. The Fibonacci sequence is most commonly defined using a recursive notation, stating that: $F(0) = 0$, $F(1) = 1$, $F(n) = F(n-2) + F(n-1)$, which will yield the following sequence: $0,1,1,2,3,5,8,13,21,34,55...$ Let's implement this using recursion.
###Code
def fib_rec(n):
# base cases
return n if n <= 1 else fib_rec(n-2) + fib_rec(n-1)
print("Fibonacci: ", end="")
for i in range(0,11):
print(f"{fib_rec(i)}", end=",")
print("\b...") # get rid of the last comma
###Output
Fibonacci: 0,1,1,2,3,5,8,13,21,34,55,...
###Markdown
There's a bit of a problem with this approach. We need to look at the time complexity of this recursive algorithm. It actually works out to be *EXPONENTIAL!* $O(2^n)$ -- see course slides for explanation. This means that to calculate the first 10 values, it will take 1024 steps, the first 20 values will take over a million steps, and the first 30 values will take over a billion... and the first 50, over a quadrillion $(10^{15})$. We can clearly do much better than that; it's just a linear sequence:
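To see the blow-up concretely, here is a small counter wrapped around the same recursive definition (the wrapper below is just for illustration; the call counts themselves grow like the Fibonacci numbers, i.e. exponentially).
###Code
def fib_rec_counted(n, counter):
    # same recursive definition as fib_rec, but counts every call made
    counter[0] += 1
    return n if n <= 1 else fib_rec_counted(n-2, counter) + fib_rec_counted(n-1, counter)

for n in (10, 20, 30):
    counter = [0]
    fib_rec_counted(n, counter)
    print(f"fib_rec({n}) made {counter[0]} calls")
###Output
_____no_output_____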
###Code
def fib_lin(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
n2 = 0
n1 = 1
for i in range(1,n):
ret = n2+n1
n2 = n1
n1 = ret
return ret
print("Fibonacci: ", end="")
for i in range(0,11):
    print(f"{fib_lin(i)}", end=",")
print("\b...") # get rid of the last comma
###Output
Fibonacci: 0,1,1,2,3,5,8,13,21,34,55,...
###Markdown
But can we get any better? This is still linear time, $O(n)$. It just so happens that there is a closed form of the Fibonacci sequence! $\frac{\phi^n - \psi^n}{\sqrt{5}}$ Where: $\phi = \frac{1+\sqrt{5}}{2}$ and $\psi = \frac{1-\sqrt{5}}{2}$
###Code
def fib_const(n):
sq5 = 5**.5
psi = (1 - sq5) / 2
phi = (1 + sq5) / 2
approx = (phi**n - psi**n) / sq5
return int(approx)
import math
def fib_const_pow(n):
sq5 = math.pow(5,.5)
psi = (1 - math.pow(5,.5)) / 2
phi = (1 + math.pow(5,.5)) / 2
approx = (math.pow(phi,n) - math.pow(psi,n)) / (phi - psi)
return int(approx)
print("Fibonacci: ", end="")
for i in range(0,11):
    print(f"{fib_const(i)}", end=",")
print("\b...") # get rid of the last comma
###Output
Fibonacci: 0,1,1,2,3,5,8,13,21,34,55,...
###Markdown
Note the approximation and the casting to int. This is required because $\phi$ and $\psi$ are irrational numbers and hence cannot be stored exactly in a computer. However the precision holds it within a rounding error for our purposes, so casting to int returns similar results. Test to make sure!
###Code
error = False
N = 100 # values to test...
for i in range(0, N):
l = fib_lin(i)
c = fib_const(i)
if l != c:
error = True
print(f"There is a difference of {l-c} on F({i}), l={l}, c={c}")
if not error:
print(f"They are identical for the first {N} values")
###Output
There is a difference of -1 on F(72), l=498454011879264, c=498454011879265
There is a difference of -2 on F(73), l=806515533049393, c=806515533049395
There is a difference of -3 on F(74), l=1304969544928657, c=1304969544928660
There is a difference of -5 on F(75), l=2111485077978050, c=2111485077978055
There is a difference of -8 on F(76), l=3416454622906707, c=3416454622906715
There is a difference of -14 on F(77), l=5527939700884757, c=5527939700884771
There is a difference of -24 on F(78), l=8944394323791464, c=8944394323791488
There is a difference of -39 on F(79), l=14472334024676221, c=14472334024676260
There is a difference of -59 on F(80), l=23416728348467685, c=23416728348467744
There is a difference of -102 on F(81), l=37889062373143906, c=37889062373144008
There is a difference of -161 on F(82), l=61305790721611591, c=61305790721611752
There is a difference of -279 on F(83), l=99194853094755497, c=99194853094755776
There is a difference of -464 on F(84), l=160500643816367088, c=160500643816367552
There is a difference of -743 on F(85), l=259695496911122585, c=259695496911123328
There is a difference of -1207 on F(86), l=420196140727489673, c=420196140727490880
There is a difference of -2014 on F(87), l=679891637638612258, c=679891637638614272
There is a difference of -3157 on F(88), l=1100087778366101931, c=1100087778366105088
There is a difference of -5171 on F(89), l=1779979416004714189, c=1779979416004719360
There is a difference of -8584 on F(90), l=2880067194370816120, c=2880067194370824704
There is a difference of -14523 on F(91), l=4660046610375530309, c=4660046610375544832
There is a difference of -22595 on F(92), l=7540113804746346429, c=7540113804746369024
There is a difference of -37118 on F(93), l=12200160415121876738, c=12200160415121913856
There is a difference of -59713 on F(94), l=19740274219868223167, c=19740274219868282880
There is a difference of -98879 on F(95), l=31940434634990099905, c=31940434634990198784
There is a difference of -166784 on F(96), l=51680708854858323072, c=51680708854858489856
There is a difference of -265663 on F(97), l=83621143489848422977, c=83621143489848688640
There is a difference of -440639 on F(98), l=135301852344706746049, c=135301852344707186688
There is a difference of -722686 on F(99), l=218922995834555169026, c=218922995834555891712
|
2_ml_tracking/get_prod_model.ipynb | ###Markdown
Retrieving prod models
###Code
from mlflow.tracking.client import MlflowClient
import mlflow.pyfunc
import mlflow
model_name = "BestModel"
model_stage = "Production"
mlflow.tracking.set_tracking_uri("http://127.0.0.1:5000")
client = MlflowClient()
model_version = client.get_latest_versions(model_name, stages=[model_stage])
model_version
model_version[0].source
model = mlflow.pyfunc.load_model(model_version[0].source)
model
import pandas as pd
pd.plotting.register_matplotlib_converters()
input_data = pd.read_csv("germany.csv", parse_dates=[0], index_col=0)["2015-01"]
X_test = input_data[["windspeed", "temperature", "rad_horizontal", "rad_diffuse"]]
y_test = input_data[["solar_GW", "wind_GW"]]
x = X_test.index
# This is the magic! We have a generic model from mlflow models that we can predict on.
# We don't know or care whether it was linear regression, RF, or something else; we just call predict
y_predict = model.predict(X_test)
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=2, figsize=(9, 5), sharex=True)
axes[0].plot(x, y_test["solar_GW"], label="Actual")
axes[0].plot(x, y_predict[:, 0], label="Predicted")
axes[1].plot(x, y_test["wind_GW"], label="Actual")
axes[1].plot(x, y_predict[:, 1], label="Predicted")
axes[1].legend(), axes[0].set_ylabel("Solar GW"), axes[1].set_ylabel("Wind GW");
###Output
_____no_output_____
###Markdown
Retrieving prod models
###Code
from mlflow.tracking.client import MlflowClient
import mlflow.pyfunc
import mlflow
import warnings
warnings.filterwarnings("ignore")
model_name = "BestModel"
model_stage = "Production"
mlflow.tracking.set_tracking_uri("http://127.0.0.1:5000")
client = MlflowClient()
model_version = client.get_latest_versions(model_name, stages=[model_stage])
model_version
model_version[0].source
model = mlflow.pyfunc.load_model(model_version[0].source)
model
import pandas as pd
pd.plotting.register_matplotlib_converters()
input_data = pd.read_csv("germany.csv", parse_dates=[0], index_col=0)["2015-01"]
X_test = input_data[["windspeed", "temperature", "rad_horizontal", "rad_diffuse"]]
y_test = input_data[["solar_GW", "wind_GW"]]
x = X_test.index
# This is the magic! We have a generic model from mlflow models that we can predict on.
# We don't know or care whether it was linear regression, RF, or something else; we just call predict
y_predict = model.predict(X_test)
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=2, figsize=(9, 5), sharex=True)
axes[0].plot(x, y_test["solar_GW"], label="Actual")
axes[0].plot(x, y_predict[:, 0], label="Predicted")
axes[1].plot(x, y_test["wind_GW"], label="Actual")
axes[1].plot(x, y_predict[:, 1], label="Predicted")
axes[1].legend(), axes[0].set_ylabel("Solar GW"), axes[1].set_ylabel("Wind GW");
###Output
_____no_output_____ |
03_seagrass_distribution_lacbay.ipynb | ###Markdown
Mapping seagrass distribution in Lac Bay Notebook for classifying and analyzing seagrass distribution in Lac Bay, Bonaire with Sentinel-2 images* Decision Tree Classifier (DTC) and Maximum Likelihood Classifier (MLC) are employed* Training sites covering 2 different classes (non-seagrass,seagrass) are used to extract pixel values (training samples) over RGB bands * 80:20 train-test ratio for splitting the training samples* K-Fold cross-validation performed for tuning the DTC model* MLC model developed with 4 different chi-square thresholds: 0% (base), 10%,20%,50%
###Code
import os
import re
import pandas as pd
import numpy as np
import rasterio as rio
import matplotlib.pyplot as plt
import seaborn as sns
from glob import glob
import geopandas as gpd
from joblib import dump,load
from tqdm import tqdm,tqdm_notebook
#custom functions
from Python.prep_raster import stack_bands,clip_raster,pixel_sample,computeIndexStack
from Python.spec_analysis import transpose_df,jmd2df
from Python.data_viz import ridgePlot,validation_curve_plot
from Python.mlc import mlClassifier
from Python.calc_acc import calc_acc
from Python.pred_raster import stack2pred, dtc_pred_stack
#sklearn functions
from sklearn.model_selection import train_test_split,validation_curve
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
#setup IO directories
parent_dir = os.path.join(os.path.abspath('..'),'objective3') #change according to preference
sub_dirs = ['fullstack','clippedstack','indexstack','predicted','stack2pred']
make_dirs = [os.makedirs(os.path.join(parent_dir,name),exist_ok=True) for name in sub_dirs]
###Output
_____no_output_____
###Markdown
Sentinel-2 data preparation* Resample coarse bands to 10m resolution* Stack multiband images * Calculate spectral indices
###Code
#dates considered for classification and analysis
dates = [20190108,20190128,20190212,20190304,20190821,20191129]
#band names
bands = ['B02_10m','B03_10m','B04_10m']
#get product file paths according to dates and tile ID T19PEP (covers Bonaire)
level2_dir = '...' #change according to preference
level2_files = glob(level2_dir+'/*.SAFE')
scene_paths=[file for date in dates for file in level2_files if str(date) in file and 'T19PEP' in file]
#sort multiband image paths according to date
image_collection ={}
for scene in scene_paths:
date = re.findall(r"(\d{8})T", scene)[0]
#collect all .jp2 band images in SAFE directory
all_images = [f for f in glob(scene + "*/**/*.jp2", recursive=True)]
img_paths = [img_path for band in bands for img_path in all_images if band in img_path]
image_collection[date] = img_paths
#check nr. of images per date
for key in image_collection.keys():print(f'Date: {key} Images: {len(image_collection[key])}')
#polygon for cropping image
roi_file = './data/boundaries/objective3/lacbay_roi.geojson'
cm_20190128 = './data/boundaries/objective3/cloudmask_20190128.geojson'
cm_20190212 = './data/boundaries/objective3/cloudmask_20190212.geojson'
#stack multiband images to a geotiff
for date in tqdm(image_collection.keys(),position=0, leave=True):
stack_file = os.path.join(parent_dir,'fullstack',f'stack_{date}.tif')
stack_bands(image_collection[date],image_collection[date][1],stack_file)
clip_outfile = os.path.join(parent_dir,'clippedstack',f'stack_{date}_clipped.tif')
#crop multiband image
if '20190128' in date:
clip_raster(stack_file,cm_20190128,clip_outfile,fill=True,nodat=0)
elif '20190212' in date:
clip_raster(stack_file,cm_20190212,clip_outfile,fill=True,nodat=0)
else:
clip_raster(stack_file,roi_file,clip_outfile,fill=True,nodat=0)
###Output
_____no_output_____
###Markdown
Sample pixel values from multiband images based on training sites * Training scenes from 1 and 28 January and 12 February 2019
###Code
#get training sites and corresponding images
train_sites = glob(r".\data\training_input\objective3\*_lac.geojson")
dates = [20190108,20190128,20190212]
stacked_files = [f for date in dates for f in glob(parent_dir+'/clipped*/*_clipped.tif') if str(date) in f]
#bands
band_names = ['B02','B03','B04']
dataset = []
for i in range(len(train_sites)):
#sample multibands and spectral indices
df_sample = pixel_sample(stacked_files[i], train_sites[i], band_names)
dataset.append(df_sample)
#final dataset
dataset=pd.concat(dataset,sort=False).reset_index(drop=True)
dataset.to_csv(r'./data/training_input/csv/training_samples_20190108_20190212_seagrass.csv',index=False)
###Output
_____no_output_____
###Markdown
Explore spectral signatures * Jeffries-Matusita distance (JMD) used for feature selection ([reference](https://books.google.nl/books?id=RxHbb3enITYC&pg=PA52&lpg=PA52&dq=for+one+feature+and+two+classes+the+Bhattacharyya+distance+is+given+by&source=bl&ots=sTKLGl1POo&sig=ACfU3U2s7tv0LT9vfSUat98l4L9_dyUgeg&hl=nl&sa=X&ved=2ahUKEwiKgeHYwI7lAhWIIlAKHZfJAC0Q6AEwBnoECAkQAQv=onepage&q&f=false)) * RGB (bands 4,3,2) are selected as input features for the classifiers (even though they have the lowest JMD scores)
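The internals of the custom `jmd2df` helper are not shown in this notebook. As a rough sketch of the underlying idea (a minimal, assumed form: univariate Gaussian class distributions and the JMD convention that ranges from 0 to 2), the separability of two classes for a single band can be computed like this:
###Code
def jm_distance(x1, x2):
    """Jeffries-Matusita distance between two 1-D samples,
    assuming each class is normally distributed."""
    m1, m2 = np.mean(x1), np.mean(x2)
    v1, v2 = np.var(x1), np.var(x2)
    # Bhattacharyya distance between two univariate Gaussians
    b = 0.125 * (m1 - m2) ** 2 * (2.0 / (v1 + v2)) \
        + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2)))
    # JMD ranges from 0 (identical classes) to 2 (fully separable)
    return 2.0 * (1.0 - np.exp(-b))
# e.g. separability of the blue band between seagrass and non-seagrass samples
# (df is loaded in the next cell; 'sg' is the seagrass class label):
# jm_distance(df.loc[df['C'] == 'sg', 'B02'], df.loc[df['C'] != 'sg', 'B02'])
###Output
_____no_output_____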
###Code
#load training sample
df = pd.read_csv(r'./data/training_input/csv/training_samples_20190108_20190212_seagrass.csv')
#plot JMD heatmap for each band
jmd_bands = [jmd2df(transpose_df(df,'C',band)) for band in ['B02','B03','B04']]
sns.heatmap(pd.concat(jmd_bands,sort=True),annot=True)
ridgePlot(df[['C','B02','B03','B04']],'C')
###Output
_____no_output_____
###Markdown
Build classifiers
###Code
#load training sample
df = pd.read_csv(r'./data/training_input/csv/training_samples_20190108_20190212_seagrass.csv')
subset_df = df[['C','B02','B03','B04']]
#split into train and test datasets 80:20
train,test = train_test_split(subset_df, train_size = 0.8,random_state=1,shuffle=True,stratify=np.array(subset_df['C']))
train = train.sort_values(by='C',ascending=True) #sort labels
#split pedictors from labels (for DTC)
le = LabelEncoder()
X_train,y_train = train[['B02','B03','B04']],le.fit_transform(train['C'])
X_test,y_test = test[['B02','B03','B04']],le.fit_transform(test['C'])
###Output
_____no_output_____
###Markdown
* Decision Tree Classifier
###Code
#perform k-fold (=10) cross-validation
#parameters considered in this step
max_depth = np.arange(1,40,2)
min_samples_split = list(range(2, 100,10))
max_leaf_nodes= list(range(2, 50,5))
min_samples_leaf= list(range(1, 100,10))
min_impurity_decrease=[0,0.00005,0.0001,0.0002,0.0005,0.001,0.0015,0.002,0.005,0.01,0.02,0.05,0.08]
criterion = ['gini','entropy']
#assign parameters to a dictionary
params = {'max_depth':max_depth,'min_samples_split':min_samples_split,
'max_leaf_nodes':max_leaf_nodes,'min_samples_leaf':min_samples_leaf,
'min_impurity_decrease':min_impurity_decrease,'criterion':criterion}
#plot validation curve
fig,axs = plt.subplots(3,2,figsize=(10,8))
axs = axs.ravel()
dtc = DecisionTreeClassifier(random_state=1,criterion='entropy') #default model
for (param_name,param_range),i in zip(params.items(),range(len(params.items()))):
train_scores,test_scores = validation_curve(dtc,X_train.values,y_train,cv=10,scoring='accuracy',
n_jobs=-1,param_range=param_range,param_name=param_name)
validation_curve_plot(train_scores,test_scores,param_range,param_name,axs[i])
plt.show()
#train dtc model based on best parameters
dtc = DecisionTreeClassifier(max_depth=5,random_state=42,criterion='entropy',
min_samples_split=50,max_leaf_nodes=10,min_samples_leaf=30,min_impurity_decrease=0.02)
dtc = dtc.fit(X_train,y_train)
#export model as joblib file
dump(dtc,r".\data\models\dtc_model_seagrass.joblib")
###Output
_____no_output_____
###Markdown
* Maximum Likelihood Classifier
###Code
#train mlc model
mlc = mlClassifier(train,'C')
#export model as joblib file
dump(mlc,r".\data\models\mlc_model_seagrass.joblib")
###Output
_____no_output_____
###Markdown
* Compute model accuracies (based on test split)
###Code
#load models
dtc = load(r".\data\models\dtc_model_seagrass.joblib")
mlc = load(r".\data\models\mlc_model_seagrass.joblib")
#DTC model accuracy
dtc_y_pred = dtc.predict(X_test)
con_mat_dtc = calc_acc(le.inverse_transform(y_test),le.inverse_transform(dtc_y_pred))
con_mat_dtc['classifier'] = 'DTC'
#MLC model accuracies with chi-square threshold
chi_table = {'MLC base':None,'MLC 10%':7.78,'MLC 20%':5.99,'MLC 50%':3.36}
mlc_conmats = []
for key,value in chi_table.items():
con_mat_mlc = mlc.classify_testdata(test,'C',threshold=value)
con_mat_mlc['classifier'] = key
mlc_conmats.append(con_mat_mlc)
#export model accuracies
mlc_conmats = pd.concat(mlc_conmats)
model_acc = pd.concat([con_mat_dtc,mlc_conmats])
model_acc.to_csv('./data/output/objective3/dtc_mlc_model_acc_obj3.csv')
###Output
_____no_output_____
###Markdown
Classification
###Code
#load models
dtc = load(r".\data\models\dtc_model_seagrass.joblib")
mlc = load(r".\data\models\mlc_model_seagrass.joblib")
#output dir
os.makedirs(os.path.join(parent_dir,'predicted/dtc'),exist_ok=True)
os.makedirs(os.path.join(parent_dir,'predicted/mlc'),exist_ok=True)
clipped_files = glob(parent_dir+'/clippedstack/*_clipped.tif')
dates= [20190108,20190304,20190821,20191129]
clipped_files = [path for path in clipped_files for date in dates if str(date) in path]
for file in clipped_files:
date = re.findall(r"(\d{8})", file)[0]
chi_probs = [None,7.78,5.99,3.36]
with rio.open(file) as src:
stack2pred_img = src.read()
mlc_imgs = np.array([mlc.classify_raster_gx(stack2pred_img,threshold=prob) for prob in chi_probs])
dtc_img = np.array([dtc_pred_stack(dtc,stack2pred_img)])
#export results
mlc_profile = src.profile.copy()
mlc_profile.update({'nodata':None,'dtype':rio.uint16,'count':4})
mlc_out = os.path.join(parent_dir,'predicted/mlc',f'mlc_{date}.tif')
dtc_profile = src.profile.copy()
dtc_profile.update({'nodata':None,'dtype':rio.uint8,'count':1})
dtc_out = os.path.join(parent_dir,'predicted/dtc',f'dtc_{date}.tif')
with rio.open(mlc_out,'w',**mlc_profile) as mlc_dst, rio.open(dtc_out,'w',**dtc_profile) as dtc_dst:
mlc_dst.write(mlc_imgs.astype(rio.uint16))
dtc_dst.write(dtc_img.astype(rio.uint8))
###Output
_____no_output_____
###Markdown
External validity * Classify DTC and MLC results for a scene taken on 2019-03-04* Seagrass pixel value = 2 in the DTC and MLC rasters
###Code
#get file paths
val_samples = gpd.read_file(r'./data/training_input/objective3/sg_validation_2019.geojson')
dtc_file = glob(parent_dir+'/predicted*/dtc/dtc*20190304*.tif')[0]
mlc_file = glob(parent_dir+'/predicted*/mlc/mlc*20190304*.tif')[0]
coords = [(val_samples.geometry[i][0].x,val_samples.geometry[i][0].y) for i in range(len(val_samples))]
with rio.open(dtc_file) as dtc_src, rio.open(mlc_file) as mlc_src:
#sample from dtc raster
val_samples['DTC'] = [pt[0] for pt in dtc_src.sample(coords)]
#sample from multilayer mlc raster
mlc_multi = pd.concat([pd.DataFrame(pt).T for pt in mlc_src.sample(coords)],ignore_index=True)
val_samples[['MLC base','MLC 10%','MLC 20%','MLC 50%']] = mlc_multi
#convert pixel values to 1 if seagrass, else to 0 for others
val_samples[val_samples.columns[-5:]] = (val_samples[val_samples.columns[-5:]]==2).astype(int)
val_samples.drop(['site','mean_cover'],axis=1,inplace=True)
#compute classification (validation) accuracy
df_val = pd.DataFrame(val_samples.drop(columns='geometry'))
acc_val_dfs = []
for pred in df_val.columns[df_val.columns!='label']:
acc = calc_acc(df_val['label'].values, df_val[pred].values)
acc['classifier'] = pred
acc_val_dfs.append(acc)
acc_val_dfs = pd.concat(acc_val_dfs)
acc_val_dfs.to_csv('./data/output/objective3/dtc_mlc_external_val_obj3.csv')
model_df = pd.read_csv('./data/output/objective3/dtc_mlc_model_acc_obj3.csv').set_index('Unnamed: 0')
val_df = pd.read_csv('./data/output/objective3/dtc_mlc_external_val_obj3.csv').set_index('Observed')
acc2plot = {'Model accuracy (2 classes)':model_df.loc['PA','UA'].str[:4].astype(float),
'Model F1-score (Sg)':model_df.loc['sg','F1-score'].astype(float),
'Validation accuracy (2 classes)':val_df.loc['PA','UA'].str[:4].astype(float),
'Validation F1-score (Sg)':val_df.loc['1','F1-score'].astype(float)}
[plt.plot(val_df['classifier'].unique(),value,label=key) for key,value in acc2plot.items()]
plt.legend()
###Output
_____no_output_____
###Markdown
Comparative analysis * Compare the seagrass (Sg) classified area across the different scenes for each model
###Code
#get classification result paths
dtc_paths = glob(parent_dir+'/predicted/dtc/dtc*.tif')
mlc_paths = glob(parent_dir+'/predicted/mlc/mlc*.tif')
data = dict.fromkeys(['Date','Sg MLC Base','Sg MLC 10%','Sg MLC 20%','Sg MLC 50%','Sg DTC'], [])
for i in range(len(mlc_paths)):
date = re.findall(r"(\d{8})", mlc_paths[i])
data['Date'] = data['Date']+ [str(pd.to_datetime(date)[0].date())]
with rio.open(dtc_paths[i]) as dtc_src, rio.open(mlc_paths[i]) as mlc_src:
data['Sg DTC'] = data['Sg DTC'] + [np.unique(dtc_src.read(),return_counts=True)[1][1]]
for k,sf_mlc_key in enumerate(list(data.keys())[1:-1]):
data[sf_mlc_key] = data[sf_mlc_key]+ [np.unique(mlc_src.read([k+1]), return_counts=True)[1][1]]
#export data
data = pd.DataFrame(data)
data.to_csv('./data/output/objective3/classified_area_obj3.csv',index=False)
###Output
_____no_output_____
###Markdown
* Plot seagrass classified area in 2019
###Code
#load data and subset only the 2019 results
data = pd.read_csv('./data/output/objective3/classified_area_obj3.csv',index_col='Date')
#plot seagrass classified area in Lac Bay
plt.ylabel('Classified area (ha)')
plt.plot(data/100)
plt.legend(data.columns,loc='upper left')
###Output
_____no_output_____ |
Project0.ipynb | ###Markdown
Inaugural Project Import and set magics:
###Code
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Question 1 We begin by importing the packages used throughout the project and defining the parameter values given in the project description. We then solve the maximization problem for these parameter values. The optimal levels of consumption and labor are c = 1.24 and l = 0.40, which give a utility of 0.17.
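Concretely (reconstructed from the code below), the household solves

$$\max_{c,\,l}\; \log(c) - v\,\frac{l^{1+1/\epsilon}}{1+1/\epsilon}
\quad\text{s.t.}\quad
c = m + w\,l - \left[\tau_0 w\,l + \tau_1\max(w\,l - \kappa,\,0)\right],$$

where $m$ is exogenous income, $w$ the wage, $\tau_0$ and $\tau_1$ the two labor-income tax rates, and $\kappa$ the threshold above which $\tau_1$ applies.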
###Code
# First the initial variables that are subject to change are defined
w = 1
e = 0.3
# The fixed parameter values are
m = 1
v = 10
t0 = 0.4
t1 = 0.1
k = 0.4
# Now we construct the functions
def c_star(w,l,m,t0,t1,k):
return m+w*l-(t0*w*l+t1*max(w*l-k,0))
def u_star(c,l,v,e):
return np.log(c)-v*l**(1+1/e)/(1+1/e)
# Objective function
def value_of_choice(l,v,e):
c = c_star(w,l,m,t0,t1,k)
return -u_star(c,l,v,e)
# Call solver
sol_case1 = optimize.minimize_scalar(
value_of_choice,method='bounded',
bounds=(0,1),args=(v,e))
# Unpack solution
l = sol_case1.x
c = c_star(w,l,m,t0,t1,k)
u = u_star(c,l,v,e)
# Print solutions
def print_solution(c,l,u):
print(f'c = {c:0.2f}')
print(f'l = {l:0.2f}')
print(f'u = {u:0.2f}')
print_solution(c,l,u)
###Output
c = 1.24
l = 0.40
u = 0.17
###Markdown
Question 2 In this question we show consumption and labor as functions of the wage. Both consumption and labor are increasing in the wage, which is what we expected beforehand.
###Code
# Optimal choice as function of wage
l_val = []
c_val = []
w_val = []
for i in np.linspace(0.5,1.5,100):
w = i
sol_case1 = optimize.minimize_scalar(value_of_choice,
method='bounded',
bounds=(0,1),args=(v,e))
w_val.append(i)
l_val.append(sol_case1.x)
c_val.append(c_star(i,sol_case1.x,m,t0,t1,k))
c_val
l_val
w_val
# Figure
plt.style.use("seaborn")
# Creating the figure
fig = plt.figure(figsize=(10,4))
# The left plot
ax_left = fig.add_subplot(1,2,1)
ax_left.plot(w_val,l_val)
ax_left.set_title('Optimal l given w')
ax_left.set_xlabel('w')
ax_left.set_ylabel('l')
ax_left.grid(True)
# The right plot
ax_right = fig.add_subplot(1,2,2)
ax_right.plot(w_val,c_val)
ax_right.set_title('Optimal c given w')
ax_right.set_xlabel('w')
ax_right.set_ylabel('c')
ax_right.grid(True)
###Output
_____no_output_____
###Markdown
Question 3 Now we solve the utility-maximization problem for 10,000 individuals with wages drawn from a uniform distribution between 0.5 and 1.5, keeping the elasticity of labor supply at 0.3. This gives a total tax revenue of 2000.7.
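The total revenue being computed is (reconstructed from the code below)

$$T = \sum_{i=1}^{N}\left[\tau_0 w_i \ell_i^{\star} + \tau_1\max(w_i \ell_i^{\star} - \kappa,\,0)\right],
\qquad w_i \sim \text{Uniform}(0.5,\,1.5),\; N = 10{,}000,$$

where $\ell_i^{\star}$ is individual $i$'s optimal labor supply.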
###Code
# Drawing a random number
np.random.seed(117)
c_i = []
l_i = []
w_i = []
# Drawing a random wage for each i in the population with 10.000 individuals
for i in range(10000):
w = np.random.uniform(low=0.5,high=1.5)
sol_case1 = optimize.minimize_scalar(value_of_choice,
method='bounded',
bounds=(0,1),args=(v,e))
w_i.append(w)
l_i.append(sol_case1.x)
c_i.append(c_star(w,sol_case1.x,m,t0,t1,k))
def tax_func(w,l,c,t0,t1,k):
return t0*w*l+t1*max(w*l-k,0)
tax = 0
for i in range(10000):
tax += tax_func(w_i[i],l_i[i],c_i[i],t0,t1,k)
print(f'Total tax revenue = {tax:.3f}')
###Output
Total tax revenue = 2000.722
###Markdown
Question 4 We do the same as in Question 3, but with a lower elasticity of labor supply of 0.1. This gives a higher tax revenue of 4608.3.
###Code
# Defining the new elasticity of labor supply
e_new = 0.1
# Defining new functions
def u_star_new(c,l,v,e_new):
return np.log(c)-v*l**(1+1/e_new)/(1+1/e_new)
def value_of_choice_new(l,v,e_new):
c = c_star(w,l,m,t0,t1,k)
return -u_star_new(c,l,v,e_new)
sol_case2 = optimize.minimize_scalar(
value_of_choice_new,method='bounded',
bounds=(0,1),args=(v,e_new))
# Drawing a random number
np.random.seed(117)
c_i_new = []
l_i_new = []
w_i_new = []
# Drawing a random wage for each i in the population with 10.000 individuals
for i in range(10000):
w = np.random.uniform(low=0.5,high=1.5)
sol_case2 = optimize.minimize_scalar(value_of_choice_new,
method='bounded',
bounds=(0,1),args=(v,e_new))
w_i_new.append(w)
l_i_new.append(sol_case2.x)
c_i_new.append(c_star(w,sol_case2.x,m,t0,t1,k))
def tax_func_new(w,l,c,t0,t1,k):
return t0*w*l+t1*max(w*l-k,0)
tax_new = 0
for i in range(10000):
tax_new += tax_func_new(w_i_new[i],l_i_new[i],c_i_new[i],t0,t1,k)
print(f'New total tax revenue = {tax_new:.3f}')
###Output
New total tax revenue = 4608.280
###Markdown
Question 5 Because of the Laffer curve we expect a revenue-maximizing tax rate for both income taxes: tax revenue rises with a higher tax rate up to a certain point, after which raising the tax further makes revenue fall again. If we plot total tax revenue for different values of t0, we trace out the Laffer curve. This is shown in the figure below, where the revenue-maximizing level of the standard income tax is around 0.75.
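The `tax0` values plotted below are hard-coded. As a sketch of how such a curve can be traced out numerically (an assumption: judging from the value 4608.3 at $t_0=0.4$, the numbers appear to come from the low-elasticity setup of Question 4, so `value_of_choice_new` is used here; the loop is slow since it re-solves the problem for every individual and every tax rate):
###Code
np.random.seed(117)
wages = np.random.uniform(low=0.5, high=1.5, size=10000)
t0_grid = np.linspace(0, 1, 11)
revenues = []
for t0 in t0_grid:  # t0 is read as a global inside value_of_choice_new via c_star
    total_revenue = 0
    for w in wages:  # w is also read as a global by the objective function
        sol = optimize.minimize_scalar(value_of_choice_new, method='bounded',
                                       bounds=(0, 1), args=(v, e_new))
        l_opt = sol.x
        total_revenue += t0*w*l_opt + t1*max(w*l_opt - k, 0)
    revenues.append(total_revenue)
###Output
_____no_output_____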
###Code
t0 = [0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]
tax0 = [873.4,2012.5,3016.0,3882.3,4608.3,5188.5,5612.1,5854.6,5837.1,4001.2,0.0]
plt.style.use("seaborn")
# Creating the figure
fig = plt.figure(figsize=(10,4))
# The left plot
ax_left = fig.add_subplot(1,2,1)
ax_left.plot(t0,tax0)
ax_left.set_title('Optimal level of t0')
ax_left.set_xlabel('t0')
ax_left.set_ylabel('tax0')
ax_left.grid(True)
###Output
_____no_output_____
###Markdown
###Code
#@test {"output": "ignore"}
print ('Installing dependencies...')
!apt-get update -qq && apt-get install -qq libfluidsynth1 fluid-soundfont-gm build-essential libasound2-dev libjack-dev
!pip install -qU pyfluidsynth pretty_midi
!pip install -qU magenta
# Hack to allow python to pick up the newly-installed fluidsynth lib.
# This is only needed for the hosted Colab environment.
import ctypes.util
orig_ctypes_util_find_library = ctypes.util.find_library
def proxy_find_library(lib):
if lib == 'fluidsynth':
return 'libfluidsynth.so.1'
else:
return orig_ctypes_util_find_library(lib)
ctypes.util.find_library = proxy_find_library
print( 'Importing libraries and defining some helper functions...')
from google.colab import files
import magenta.music as mm
import magenta
import tensorflow
print ('🎉 Done!')
print (magenta.__version__ )
print (tensorflow.__version__)
from magenta.protobuf import music_pb2
twinkle_twinkle = music_pb2.NoteSequence()
# Add the notes to the sequence.
twinkle_twinkle.notes.add(pitch=60, start_time=0.0, end_time=0.5, velocity=80)
twinkle_twinkle.notes.add(pitch=60, start_time=0.5, end_time=1.0, velocity=80)
twinkle_twinkle.notes.add(pitch=67, start_time=1.0, end_time=1.5, velocity=80)
twinkle_twinkle.notes.add(pitch=67, start_time=1.5, end_time=2.0, velocity=80)
twinkle_twinkle.notes.add(pitch=69, start_time=2.0, end_time=2.5, velocity=80)
twinkle_twinkle.notes.add(pitch=69, start_time=2.5, end_time=3.0, velocity=80)
twinkle_twinkle.notes.add(pitch=67, start_time=3.0, end_time=4.0, velocity=80)
twinkle_twinkle.notes.add(pitch=65, start_time=4.0, end_time=4.5, velocity=80)
twinkle_twinkle.notes.add(pitch=65, start_time=4.5, end_time=5.0, velocity=80)
twinkle_twinkle.notes.add(pitch=64, start_time=5.0, end_time=5.5, velocity=80)
twinkle_twinkle.notes.add(pitch=64, start_time=5.5, end_time=6.0, velocity=80)
twinkle_twinkle.notes.add(pitch=62, start_time=6.0, end_time=6.5, velocity=80)
twinkle_twinkle.notes.add(pitch=62, start_time=6.5, end_time=7.0, velocity=80)
twinkle_twinkle.notes.add(pitch=60, start_time=7.0, end_time=8.0, velocity=80)
twinkle_twinkle.total_time = 8
twinkle_twinkle.tempos.add(qpm=60);
# This is a colab utility method that visualizes a NoteSequence.
mm.plot_sequence(twinkle_twinkle)
# This is a colab utility method that plays a NoteSequence.
mm.play_sequence(twinkle_twinkle,synth=mm.fluidsynth)
# Here's another NoteSequence!
teapot = music_pb2.NoteSequence()
teapot.notes.add(pitch=69, start_time=0, end_time=0.5, velocity=80)
teapot.notes.add(pitch=71, start_time=0.5, end_time=1, velocity=80)
teapot.notes.add(pitch=73, start_time=1, end_time=1.5, velocity=80)
teapot.notes.add(pitch=74, start_time=1.5, end_time=2, velocity=80)
teapot.notes.add(pitch=76, start_time=2, end_time=2.5, velocity=80)
teapot.notes.add(pitch=81, start_time=3, end_time=4, velocity=80)
teapot.notes.add(pitch=78, start_time=4, end_time=5, velocity=80)
teapot.notes.add(pitch=81, start_time=5, end_time=6, velocity=80)
teapot.notes.add(pitch=76, start_time=6, end_time=8, velocity=80)
teapot.total_time = 8
teapot.tempos.add(qpm=60);
mm.plot_sequence(teapot)
mm.play_sequence(teapot,synth=mm.synthesize)
drums = music_pb2.NoteSequence()
drums.notes.add(pitch=36, start_time=0, end_time=0.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=38, start_time=0, end_time=0.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=0, end_time=0.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=46, start_time=0, end_time=0.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=0.25, end_time=0.375, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=0.375, end_time=0.5, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=0.5, end_time=0.625, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=50, start_time=0.5, end_time=0.625, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=36, start_time=0.75, end_time=0.875, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=38, start_time=0.75, end_time=0.875, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=0.75, end_time=0.875, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=45, start_time=0.75, end_time=0.875, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=36, start_time=1, end_time=1.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=1, end_time=1.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=46, start_time=1, end_time=1.125, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=42, start_time=1.25, end_time=1.375, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=48, start_time=1.25, end_time=1.375, is_drum=True, instrument=10, velocity=80)
drums.notes.add(pitch=50, start_time=1.25, end_time=1.375, is_drum=True, instrument=10, velocity=80)
drums.total_time = 1.375
drums.tempos.add(qpm=60)
# This is a colab utility method that visualizes a NoteSequence.
mm.plot_sequence(drums)
# This is a colab utility method that plays a NoteSequence.
mm.play_sequence(drums,synth=mm.fluidsynth)
import os
drums1=mm.midi_file_to_sequence_proto(os.path.join('/POP1.mid'))
for i in range(len(drums1.notes)):
drums1.notes[i].is_drum=True
drums1.notes[i].instrument=9
#drums1.notes[i].velocity=80
#if drums1.notes[i].pitch >50:
# drums1.notes[i].pitch=0
#elif drums1.notes[i].pitch<36:
#drums1.notes[i].pitch=0
mm.plot_sequence(drums1)
# This is a colab utility method that plays a NoteSequence.
mm.play_sequence(drums1,synth=mm.fluidsynth)
drums1
print( 'Downloading model bundle. This will take less than a minute...')
#mm.notebook_utils.download_bundle('/drum_kit.mag')
bundle = mm.sequence_generator_bundle.read_bundle_file('/drum_kit_rnn .mag')
# Import dependencies.
#from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.protobuf import generator_pb2
from magenta.protobuf import music_pb2
from magenta.models.drums_rnn import drums_rnn_sequence_generator
# Initialize the model.
print( "Initializing Drums RNN...")
generator_map = drums_rnn_sequence_generator.get_generator_map()
drums_rnn = generator_map['drum_kit'](checkpoint=None, bundle=bundle)
drums_rnn.initialize()
# Model options. Change these to get different generated sequences!
input_sequence = drums1 # change this to teapot if you want
num_steps = 512 # change this for shorter or longer sequences
temperature = 1.2 # the higher the temperature the more random the sequence.
# Set the start time to begin on the next step after the last note ends.
last_end_time = (max(n.end_time for n in input_sequence.notes)
if input_sequence.notes else 0)
qpm = input_sequence.tempos[0].qpm
seconds_per_step = 42.0 / qpm / drums_rnn.steps_per_quarter
total_seconds = num_steps * seconds_per_step
generator_options = generator_pb2.GeneratorOptions()
generator_options.args['temperature'].float_value = temperature
generate_section = generator_options.generate_sections.add(
start_time=last_end_time + seconds_per_step,
end_time=total_seconds)
# Ask the model to continue the sequence.
sequence = drums_rnn.generate(input_sequence, generator_options)
mm.plot_sequence(sequence)
mm.play_sequence(sequence, synth=mm.fluidsynth)
###Output
_____no_output_____ |
deeplearning1/rnn/tf-rnn-santi.ipynb | ###Markdown
Use Tensorflow to Generate Text for Santi Define some helper functions
###Code
import pickle
import os
import re
def load_text(path):
input_file = os.path.join(path)
with open(input_file, 'r') as f:
text_data = f.read()
return text_data
def preprocess_and_save_data(text, token_lookup, create_lookup_tables):
token_dict = token_lookup()
for key, token in token_dict.items():
text = text.replace(key, '{}'.format(token))
text = list(text)
vocab_to_int, int_to_vocab = create_lookup_tables(text)
int_text = [vocab_to_int[word] for word in text]
pickle.dump((int_text, vocab_to_int, int_to_vocab, token_dict), open('preprocess.p', 'wb'))
def load_preprocess():
return pickle.load(open('preprocess.p', mode='rb'))
def save_params(params):
pickle.dump(params, open('params.p', 'wb'))
def load_params():
return pickle.load(open('params.p', mode='rb'))
import math
import numpy as np
import tensorflow as tf
from tensorflow.python.ops.rnn_cell import RNNCell
class BNLSTMCell(RNNCell):
'''Batch normalized LSTM as described in arxiv.org/abs/1603.09025'''
def __init__(self, num_units, training):
self.num_units = num_units
self.training = training
@property
def state_size(self):
return (self.num_units, self.num_units)
@property
def output_size(self):
return self.num_units
def __call__(self, x, state, scope=None):
with tf.variable_scope(scope or type(self).__name__):
c, h = state
x_size = x.get_shape().as_list()[1]
W_xh = tf.get_variable('W_xh',
[x_size, 4 * self.num_units],
initializer=orthogonal_initializer())
W_hh = tf.get_variable('W_hh',
[self.num_units, 4 * self.num_units],
initializer=bn_lstm_identity_initializer(0.95))
bias = tf.get_variable('bias', [4 * self.num_units])
xh = tf.matmul(x, W_xh)
hh = tf.matmul(h, W_hh)
bn_xh = batch_norm(xh, 'xh', self.training)
bn_hh = batch_norm(hh, 'hh', self.training)
hidden = bn_xh + bn_hh + bias
i, j, f, o = tf.split(hidden, 4, axis=1)  # TF >= 1.0 split signature
new_c = c * tf.sigmoid(f) + tf.sigmoid(i) * tf.tanh(j)
bn_new_c = batch_norm(new_c, 'c', self.training)
new_h = tf.tanh(bn_new_c) * tf.sigmoid(o)
return new_h, (new_c, new_h)
def orthogonal(shape):
flat_shape = (shape[0], np.prod(shape[1:]))
a = np.random.normal(0.0, 1.0, flat_shape)
u, _, v = np.linalg.svd(a, full_matrices=False)
q = u if u.shape == flat_shape else v
return q.reshape(shape)
def bn_lstm_identity_initializer(scale):
def _initializer(shape, dtype=tf.float32, partition_info=None):
'''Ugly cause LSTM params calculated in one matrix multiply'''
size = shape[0]
# gate (j) is identity
t = np.zeros(shape)
t[:, size:size * 2] = np.identity(size) * scale
t[:, :size] = orthogonal([size, size])
t[:, size * 2:size * 3] = orthogonal([size, size])
t[:, size * 3:] = orthogonal([size, size])
return tf.constant(t, dtype)
return _initializer
def orthogonal_initializer():
def _initializer(shape, dtype=tf.float32, partition_info=None):
return tf.constant(orthogonal(shape), dtype)
return _initializer
def batch_norm(x, name_scope, training, epsilon=1e-3, decay=0.999):
'''Assume 2d [batch, values] tensor'''
with tf.variable_scope(name_scope):
size = x.get_shape().as_list()[1]
scale = tf.get_variable('scale', [size], initializer=tf.constant_initializer(0.1))
offset = tf.get_variable('offset', [size])
pop_mean = tf.get_variable('pop_mean', [size], initializer=tf.zeros_initializer, trainable=False)
pop_var = tf.get_variable('pop_var', [size], initializer=tf.ones_initializer, trainable=False)
batch_mean, batch_var = tf.nn.moments(x, [0])
train_mean_op = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_var_op = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
def batch_statistics():
with tf.control_dependencies([train_mean_op, train_var_op]):
return tf.nn.batch_normalization(x, batch_mean, batch_var, offset, scale, epsilon)
def population_statistics():
return tf.nn.batch_normalization(x, pop_mean, pop_var, offset, scale, epsilon)
return tf.cond(training, batch_statistics, population_statistics)
###Output
_____no_output_____
###Markdown
Read in data & Do some cleaning
###Code
data_path = './text/santi.txt'
text = load_text(data_path)
text = text.replace(' ', '')
# reduce 2 or more empty line to 1
text = re.sub(r'\n{2,}', '\n', text)
# replace ...... with 。
text = re.sub(r'\.+', '。', text)
# remove the content in 《》
text = re.sub(r'《.*》', '', text)
# remove ――
text = re.sub(r'――', '', text)
text = re.sub(r'\u3000', '', text)
print(len(text))
num_words_for_training = 500000
text = text[-num_words_for_training:]
lines_of_text = text.split('\n')
print(len(lines_of_text))
print(lines_of_text[:20])
print(lines_of_text[-10:])
###Output
['现在,小宇宙的太空中,只剩下一艘细长的小飞船和漂浮在船边的三个人。', '智子拿着一个金属盒,那是他们要留在小宇宙中的东西,是要送往新宇宙的漂流瓶。它的主体是一台微型电脑,电脑的量子存储器中存储着小宇宙电脑主机的全部信息,这几乎是三体和地球文明的全部记忆了。当新宇宙诞生时,金属盒会收到门发来的信号,然后用自己的小推进器穿过门,进入新宇宙。它会在新宇宙的高维太空中飘浮,等待着被拾取和解读的那一天。同时,它还会用中微子束把自己存储的信息不断地播放出来,如果新宇宙中也有中微子的话。', '程心和关一帆相信,其他的小宇宙,那些响应回归运动呼吁的小宇宙,也在做着同样的事。如果新宇宙真的诞生,其中会有许多来自旧宇宙的漂流瓶。可以相信,相当一部分漂流瓶中的记忆体里存储的信息可能达到这样的程度:记录了那个文明每一个个体的全部记忆和意识,以及每个个体的全部生物学细节,以至于新宇宙中的文明可以根据这些信息复原那个文明。', '“还可以再留下五公斤吗?”程心问道。她在飞船的另一侧,身穿太空服,手中举着一个发光的透明球体,球体直径约半米,里面飘浮着几个水球,有的里面游动着几条小鱼,有的里面生长着绿藻;还有两块漂浮的微型陆地,上面长着嫩绿的青草。光亮是从球体顶部发出的,那里安装着一个小小的发光体,是这个小世界的太阳。这是一个全封闭的生态球,是程心和智子十多天的工作成果,只要球体内的小太阳还能够发光,这个小小的生态系统就能生存下去。只要有它留在这里,647号宇宙就不是一个没有生命的黑暗世界。', '“当然可以,大宇宙不会因为这五公斤就不坍缩了。”关一帆说,他还有一个没说出来的想法:也许大宇宙真的会因为相差一个原子的质量而由封闭转为开放。大自然的精巧有时超出想象,比如生命的诞生,就需要各项宇宙参数在几亿亿分之一精度上的精确配合。但程心仍然可以留下她的生态球,因为在那无数文明创造的无数小宇宙中,肯定有相当一部分不响应回归运动的号召,所以,大宇宙最终被夺走的质量至少有几亿吨,甚至可能是几亿亿亿吨。', '但愿大宇宙能够忽略这个误差。', '程心和关一帆进入了飞船,智子最后也进来了。她早就不再穿那身华丽的和服了,她现在身着迷彩服,再次成为一名轻捷精悍的战士,她的身上佩带着许多武器和生存装备,最引人注目的是那把插在背后的武士刀。', '“放心,我在,你们就在!”智子对两位人类朋友说。', '聚变发动机启动了,推进器发出幽幽的蓝光,飞船缓缓地穿过了宇宙之门。', '小宇宙中只剩下漂流瓶和生态球。漂流瓶隐没于黑暗里,在一千米见方的宇宙中,只有生态球里的小太阳发出一点光芒。在这个小小的生命世界中,几只清澈的水球在零重力环境中静静地飘浮着,有一条小鱼从一只水球中蹦出,跃入另一只水球,轻盈地穿游于绿藻之间。在一小块陆地上的草丛中,有一滴露珠从一片草叶上脱离,旋转着飘起,向太空中折射出一缕晶莹的阳光。']
###Markdown
Preprocess the text data to suit the model
###Code
def create_lookup_tables(input_data):
vocab = set(input_data)
vocab_to_int = {word: idx for idx, word in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
def token_lookup():
""" Lookup tables for Chinese punctuations
"""
symbols = set(['。', ',', '“', "”", ';', '!', '?', '(', ')', '\n'])
tokens = ["P", "C", "Q", "T", "S", "E", "M", "I", "O", "R"]
return dict(zip(symbols, tokens))
# process and save the processed data
preprocess_and_save_data(''.join(lines_of_text), token_lookup, create_lookup_tables)
int_text, vocab_to_int, int_to_vocab, token_dict = load_preprocess()
###Output
_____no_output_____
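###Markdown
A quick sanity check of the loaded lookup tables (this only inspects the objects created above):
###Code
# the two mappings should be inverses of each other
some_char = int_to_vocab[0]
assert vocab_to_int[some_char] == 0
print(len(vocab_to_int), 'unique tokens in the vocabulary')
print(len(int_text), 'encoded characters in the corpus')
###Output
_____no_output_____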
###Markdown
Check Tensorflow environment
###Code
import warnings
import tensorflow as tf
import numpy as np
# Check TensorFlow Version, need > 1
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
TensorFlow Version: 1.2.1
###Markdown
Build Our RNN! Set hyperparameters
###Code
# number of training epochs
num_epochs = 150
# batch size
batch_size = 256
# number of units in each LSTM layer
rnn_size = 512
# size of the embedding layer
embed_dim = 1200
# sequence length (training step length)
seq_length = 60
# learning rate
learning_rate = 0.002
# print training info every this many batches
show_every_n_batches = 100
# where to save the session state
save_dir = './santi/save'
###Output
_____no_output_____
###Markdown
Create the placeholders for inputs, targets and learning_rate
###Code
def get_inputs():
# inputs and targets are both integer-typed
inputs = tf.placeholder(tf.int32, [None, None], name='inputs')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
###Output
_____no_output_____
###Markdown
Create the RNN cell: use LSTM cells, stack the LSTM layers, apply dropout, and initialize the LSTM state
###Code
def get_init_cell(batch_size, rnn_size):
# number of stacked LSTM layers
num_layers = 2
# keep probability used for dropout
keep_prob = 0.8
# create one LSTM cell with rnn_size units per layer (a layer-normalized variant;
# tf.contrib.rnn.BasicLSTMCell(rnn_size) is the plain alternative), each wrapped
# with dropout to reduce overfitting; every layer gets its own cell instance
cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LayerNormBasicLSTMCell(rnn_size), output_keep_prob=keep_prob) for _ in range(num_layers)]
# stack the LSTM layers
cell = tf.contrib.rnn.MultiRNNCell(cells)
# initialize the state to zeros
init_state = cell.zero_state(batch_size, tf.float32)
# name the initial state via tf.identity, so it can be looked up in the graph
# by name later, when generating text
init_state = tf.identity(init_state, name='init_state')
return cell, init_state
###Output
_____no_output_____
###Markdown
Create embedding layers
###Code
def get_embed(input_data, vocab_size, embed_dim):
# create tf variable based on embedding layer size and vocab size
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim)), dtype=tf.float32)
return tf.nn.embedding_lookup(embedding, input_data)
###Output
_____no_output_____
###Markdown
Create the RNN nodes and use the `dynamic_rnn` method to compute the outputs and the final state
###Code
def build_rnn(cell, inputs):
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
###Output
_____no_output_____
###Markdown
Define get_batches to train batch by batch
###Code
def get_batches(int_text, batch_size, seq_length):
# work out how many batches can be created
n_batches = (len(int_text) // (batch_size * seq_length))
# original data for each step, and the same data shifted by one position (the targets)
batch_origin = np.array(int_text[: n_batches * batch_size * seq_length])
batch_shifted = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
# set the last element of the shifted data to the first element of the original data, i.e. wrap around
batch_shifted[-1] = batch_origin[0]
batch_origin_reshape = np.split(batch_origin.reshape(batch_size, -1), n_batches, 1)
batch_shifted_reshape = np.split(batch_shifted.reshape(batch_size, -1), n_batches, 1)
batches = np.array(list(zip(batch_origin_reshape, batch_shifted_reshape)))
return batches
###Output
_____no_output_____
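###Markdown
A quick sanity check of `get_batches` with small, made-up numbers:
###Code
demo = get_batches(list(range(20)), batch_size=2, seq_length=3)
# shape is (n_batches, 2, batch_size, seq_length); the 2 holds (inputs, targets)
print(demo.shape)   # (3, 2, 2, 3)
print(demo[0][0])   # first batch of inputs:  [[0 1 2], [9 10 11]]
print(demo[0][1])   # first batch of targets (inputs shifted by one): [[1 2 3], [10 11 12]]
###Output
_____no_output_____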
###Markdown
Now, let's build the whole RNN model!
###Code
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
# create the RNN cell and the initial-state node; the cell already includes LSTM and dropout
# rnn_size is the number of units contained in each LSTM cell
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
# create the nodes that compute the logits and the final state
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# use softmax to compute the final prediction probabilities
probs = tf.nn.softmax(logits, name='probs')
# compute the loss
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# use the Adam optimizer for gradient descent
optimizer = tf.train.AdamOptimizer(lr)
# clip the gradients so that they all stay within [-1, 1]
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
###Output
_____no_output_____
###Markdown
Now, let's train the model
###Code
# get all the batches used for training
batches = get_batches(int_text, batch_size, seq_length)
# open a session and start training, passing the graph object created above to the session
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# print training progress
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# save the model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
save_params((seq_length, save_dir))
# Save the trained model
save_params((seq_length, save_dir))
###Output
_____no_output_____
###Markdown
Let's Generate Some Text!
###Code
import tensorflow as tf
import numpy as np
# %cd santi
_, vocab_to_int, int_to_vocab, token_dict = load_preprocess()
seq_length, load_dir = load_params()
# %cd ..
###Output
_____no_output_____
###Markdown
To use the saved model, we retrieve the saved variables (tensors) from the graph by their assigned names
###Code
def get_tensors(loaded_graph):
inputs = loaded_graph.get_tensor_by_name("inputs:0")
initial_state = loaded_graph.get_tensor_by_name("init_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initial_state, final_state, probs
def pick_word(probabilities, int_to_vocab):
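# sampling strategy: keep every character whose predicted probability is at
# least 5% and pick one of them uniformly at random; if no character passes
# the threshold, fall back to the single most likely character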
chances = []
for idx, prob in enumerate(probabilities):
if prob >= 0.05:
chances.append(int_to_vocab[idx])
if len(chances) < 1:
return str(int_to_vocab[np.argmax(probabilities)])
else:
rand = np.random.randint(0, len(chances))
return str(chances[rand])
# length of the text to generate
gen_length = 500
# the first character of the generated text; specify a single character, which must appear in the training vocabulary
prime_word = '从'
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# load the saved session
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# fetch the cached tensors by name
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# prepare to generate text
gen_sentences = [prime_word]
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# start generating text
for n in range(gen_length):
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
# print(probabilities)
# print(len(probabilities))
pred_word = pick_word(probabilities[0][dyn_seq_length - 1], int_to_vocab)
gen_sentences.append(pred_word)
# restore the punctuation marks
novel = ''.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '“'] else ''
novel = novel.replace(token.upper(), key)
novel = novel.replace('\n ', '\n')
novel = novel.replace('( ', '(')
print(novel)
vocab_size
len(probabilities)
max(probabilities[0][0])
###Output
_____no_output_____ |
technologies/sqlite/comparison_with_hdf5_and_numpy_files/Compare loading saving speed with numpy files.ipynb | ###Markdown
Compare data saving and loading performance of QCoDeS SQLite backend vs HDF5 (h5py) and numpy npy files This notebook measures the time it takes to save and load measurement data using the qcodes dataset versus other ways of storing data: HDF5 and numpy npy files. The reason for such a study is that qcodes users should not be limited in the performance of their experiments by the performance of data saving (and loading). HDF5 and numpy npy storage solutions are widely used in the scientific community, and are known for their efficiency. In this notebook, we are going to define convenient functions that generate data, load and save that data with each storage backend of interest, and some infrastructure that allows us to measure the time the loading and saving takes. Preparations Imports
###Code
import time
import os
from tempfile import TemporaryFile
from functools import partial
import numpy
import h5py
from git import Repo
import qcodes
from qcodes import (
initialise_or_create_database_at, load_or_create_experiment,
Measurement, Parameter,
load_by_id
)
from qcodes.dataset.data_export import get_data_by_id
###Output
_____no_output_____
###Markdown
Relevant environment information
###Code
qcodes.version.__version__
# in case the qcodes is installed from local git repository
qcodes_repo_path = os.sep.join(qcodes.__path__[0].split(os.sep)[:-1])
qcodes_repo = Repo(qcodes_repo_path)
print(qcodes_repo.head.commit)
print(h5py.version.info)
###Output
Summary of the h5py configuration
---------------------------------
h5py 2.8.0
HDF5 1.10.2
Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 11:27:44) [MSC v.1900 64 bit (AMD64)]
sys.platform win32
sys.maxsize 9223372036854775807
numpy 1.15.2
###Markdown
Simulated measurementFor this study, we are going to take the case of sweeping 2 independent parameters (s1, s2) and measuring 2 dependent parameters (magnitude and phase). For simplicity, the number of datapoints per parameter is the same, and it is set in a variable. We are going to use the same generator function throughout the study for generating dummy data that we will be saving and loading.
###Code
# number of data points per parameter
n_pts_per_param = 20
def make_data_producer(n_pts_per_param):
def produce_measurement_data():
"""
This iterator represents the code that obtains
measurement data. For the sake of example, it
just returns random dummy data: 4 parameters/dimensions,
`n_pts_per_param` per each dimension (which becomes
`n_pts_per_param**4` data points in total).
Args:
n_pts_per_param
number of points per each parameter/dimension
Returns:
tuple of values of the 4 dimensions obtained
at a single "measurement" iteration
"""
for s1_val in range(n_pts_per_param):
for s2_val in range(n_pts_per_param):
magn_vals, phas_vals = numpy.meshgrid(
numpy.random.rand(n_pts_per_param),
numpy.random.rand(n_pts_per_param),
)
magn_vals = numpy.reshape(magn_vals, -1)
phas_vals = numpy.reshape(phas_vals, -1)
yield s1_val, s2_val, magn_vals, phas_vals
return produce_measurement_data
###Output
_____no_output_____
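###Markdown
As a quick size check of the generated dummy data (using the `n_pts_per_param` defined above):
###Code
# 20 x 20 sweep iterations, each yielding 20 * 20 points per dependent parameter,
# i.e. n_pts_per_param**4 = 160,000 data points per parameter in total
print(n_pts_per_param**2, 'iterations,', n_pts_per_param**4, 'points per parameter')
###Output
_____no_output_____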
###Markdown
Measuring execution time In most cases, we are going to use `timeit` to measure time. In some cases, however, the `timeit` interface is not flexible enough: it does not let you measure the "start" and "stop" time moments __within__ the code that is under test. Below is a custom decorator that overcomes this limitation.
###Code
import timeit
from IPython.core.magics.execution import TimeitResult
from copy import deepcopy, copy
def time_it(number=None,
repeat=timeit.default_repeat):
"""
Sometimes it is needed to define in the code itself
where you want to start measuring the execution time
of that piece of code and when you want to stop the
measurement. Unfortunately, `timeit` module does not
support that out-of-the-box. Hence, this decorator.
This decorator uses `timeit` infrastructure, but allows
to profile a function that returns its execution time.
This allows developers to define the start and stop moments
in the code itself, and the `timeit` infrastructure will
do the rest.
To use this decorator, follow these steps:
* implement a piece of code that you'd like to profile
as a function
* in the code of the function find the start and stop
points where the time needs to be measured
* use `time.perf_counter()` to get the time in seconds
at those places
* make the function return the difference between stop
and start moments as its first return value
* the function signature is not restricted to its input
arguments, and is not restricted to its return values
except for the first return value
* decorate the function with this decorator
* call your decorated function to see the results of
the profiling
Args:
number
the function gets executed this `number` of times,
and the average of the collected individual execution
times is used (same as for `timeit`); if None, then
the necessary number of execution times will be
inferred (see `timeit` module for more info)
repeat
the profiling measurement gets repeated `repeat`
number of times (same as for `timeit`)
"""
def time_sut(sut):
"""
This is the actual decorator. "sut" stands for "system
under test".
"""
def wrapper(*args, **kwargs):
"""
This wrapper function uses `timeit` infrastructure
from `timeit` module and its implementation in Jupyter
magics.
Returns the `TimeitResult` object that contains all the
information about the profiling results.
"""
t = timeit.Timer()
# define a function that the Timer class
# can consume for profiling
def inner(_it, _timer):
"""
see the internals of the `timeit.Timer` class
for more information
"""
total_time = 0
for _ in _it:
args_ = copy(args)
kwargs_ = copy(kwargs)
returned_vals = sut(*args_, **kwargs_)
total_time += returned_vals[0] \
if isinstance(returned_vals, tuple) \
else returned_vals
return total_time
t.inner = inner
# execute the profiling
try:
if number is None:
number_, __ = t.autorange()
else:
number_ = number
all_runs = t.repeat(repeat, number_)
except:
t.print_exc()
raise
# pretty print the results
best = min(all_runs) / number_
worst = max(all_runs) / number_
timeit_result = TimeitResult(number_, repeat, best, worst, all_runs, 0, 3)
print(timeit_result)
return timeit_result
return wrapper
return time_sut
###Output
_____no_output_____
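###Markdown
A minimal usage sketch of the decorator (the toy workload below is made up; the decorated function must return its own measured duration, as described in the docstring):
###Code
@time_it(number=3, repeat=2)
def toy_workload():
    t0 = time.perf_counter()
    _ = sum(range(100000))  # the piece of code being profiled
    return time.perf_counter() - t0
# calling toy_workload() prints a timeit-style summary and returns a TimeitResult
###Output
_____no_output_____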
###Markdown
Defining test routinesNow lets define all the test routines for saving and loading data for testing the performance of different backends. These routines will use the data generation function that is defined above, and some of them will conform to the interface that is required by the custom `time_it` decorator. QCoDeS dataset First, we need to initialize a database file.
###Code
# initialize the database file for qcodes dataset
temp_db_file = TemporaryFile(suffix='.db')
temp_db_file.close()
initialise_or_create_database_at(temp_db_file.name)
load_or_create_experiment('save_load_speed_study', 'sqlite3_from_qcodes')
###Output
_____no_output_____
###Markdown
Next, we define a convenient function that performs all the usual steps that are necessary for a qcodes measurement and data saving. Note that we exclude from the time measurement the parts related to setting up the `Measurement` object and starting the actual measurement. We do include the exiting of the `measurement.run()` context, though, because the last pieces of data are flushed then. We decorate it with our custom `time_it` decorator presented above (note that we want to keep the original function as well, hence the `@` syntax is not used for decoration).
###Code
def save_to_sqlite(create_data_generator,
paramtype='numeric',
write_period=10):
"""
Use qcodes dataset with its sqlite backend to save dummy
data, and measure the time this takes. The data that is being
saved is 2 dependent and 2 independent parameters. The data
for the measurement is generated by an iterator that is returned
by calling the `create_data_generator` function.
Args:
create_data_generator
a callable with no arguments that returns an iterator
that in turn generates dummy data for 4 parameters
paramtype
controls the way the data of the 2 dependent parameters is stored
in the sqlite database,
see `Measurement.register_parameter` for more information
(useful values in the context of this notebook are 'numeric'
and 'array')
write_period
the data is written to the data base at least every
`write_period` number of seconds
Returns:
saving_time
measured time it took to save the data, in seconds
dataset
the qcodes dataset object where the data was saved to;
it is useful for accessing the data and measuring the
time it takes to load it
"""
data_generator = create_data_generator()
# define parameters
s1 = Parameter('s1', label='Setting 1', unit='V', get_cmd=None, set_cmd=None)
s2 = Parameter('s2', label='Setting 2', unit='V', get_cmd=None, set_cmd=None)
magn = Parameter('magn', label='Magnitude', unit='V', get_cmd=None, set_cmd=None)
phas = Parameter('phas', label='Phase', unit='deg', get_cmd=None, set_cmd=None)
meas = Measurement()
# register parameters in the measurement object
meas.register_parameter(s1)
meas.register_parameter(s2)
meas.register_parameter(magn, setpoints=(s1, s2), paramtype=paramtype)
meas.register_parameter(phas, setpoints=(s1, s2), paramtype=paramtype)
# set the write period to a large value, so that actual writing
# to the database happens at the end of the "measurement"
meas.write_period = write_period
# perform the measurement
with meas.run() as datasaver:
t0 = time.perf_counter() # <-----
for s1_val, s2_val, magn_vals, phas_vals \
in data_generator:
datasaver.add_result((s1, s1_val),
(s2, s2_val),
(magn, magn_vals),
(phas, phas_vals))
t1 = time.perf_counter() # <-----
saving_time = t1 - t0
dataset = datasaver.dataset
return saving_time, dataset
# decorate the function, and leave the original one intact
time_save_to_sqlite_numeric = time_it(number=3, repeat=2)(
partial(save_to_sqlite, paramtype='numeric'))
time_save_to_sqlite_array = time_it()(
partial(save_to_sqlite, paramtype='array'))
###Output
_____no_output_____
###Markdown
HDF5 file HDF5 files (thanks to `h5py`) behave very similarly to `numpy` arrays, so interfacing with them feels familiar. We are not going to use the custom `time_it` decorator here, because plain `timeit` does not limit us in this case.
###Code
def save_to_hdf5(create_data_generator, filename):
"""
Use HDF5 file to save dummy data, and measure the time
this takes. The data that is being saved is 2 dependent
and 2 independent parameters. The resulting HDF5 file
is going to contain a single 'dataset' with the name
"results".
The data for the measurement is generated by an iterator
that is returned by calling the `create_data_generator`
function.
Args:
create_data_generator
a callable with no arguments that returns an iterator
that in turn generates dummy data for 4 parameters
filename
the name of the HDF5 file with the full path
Returns:
None; the timing is measured externally (with the %%timeit magic below)
"""
data_generator = create_data_generator()
with h5py.File(filename, 'w') as f:
ds = f.create_dataset('results', shape=(4, 0), maxshape=(4, None))
for s1_val, s2_val, magn_vals, phas_vals in data_generator:
n_pts = len(magn_vals)
# we simulate the fact that we don't
# know the full amount of data
# that needs to be saved, hence
# we need to resize while saving
n_cols, n_rows = ds.shape
ds.resize((n_cols, n_rows + n_pts))
ds[0, n_rows:n_rows+n_pts] = s1_val
ds[1, n_rows:n_rows+n_pts] = s2_val
ds[2, n_rows:n_rows+n_pts] = magn_vals
ds[3, n_rows:n_rows+n_pts] = phas_vals
###Output
_____no_output_____
###Markdown
Numpy npy file We are going to use `numpy`'s `.npy` files together with the handy `open_memmap` function in order to save data that is being spit out of the iterator that generates data.
###Code
def save_to_npy(create_data_generator, filename):
"""
Use numpy npy file to save dummy data, and measure the time
this takes. The data that is being saved is 2 dependent
and 2 independent parameters. The data for the measurement
is generated by an iterator that is returned
by calling the `create_data_generator` function.
Args:
create_data_generator
a callable with no arguments that returns an iterator
that in turn generates dummy data for 4 parameters
filename
the name of the npy file with the full path; it has to
contain '.npy' extension, otherwise `numpy` will add it
when saving data, and it will be impossible to refer
to the actual file without manually appending the
'.npy' extension to the `filename` in the code
outside of this function
Returns:
None; the timing is measured externally (with the %%timeit magic below)
"""
data_generator = create_data_generator()
npy_mm = numpy.lib.format.open_memmap(
filename, mode='w+', shape=(4, 0))
# this (possibly dangerous?) hack is needed to allow
# resizing during adding data
npy_mm = numpy.require(npy_mm, requirements=['OWNDATA'])
for s1_val, s2_val, magn_vals, phas_vals in data_generator:
n_pts = len(magn_vals)
# we simulate the fact that we don't
# know the full amount of data
# that needs to be saved, hence
# we need to resize while saving
n_cols, n_rows = npy_mm.shape
npy_mm.resize((n_cols, n_rows + n_pts))
npy_mm[0, n_rows:n_rows+n_pts] = s1_val
npy_mm[1, n_rows:n_rows+n_pts] = s2_val
npy_mm[2, n_rows:n_rows+n_pts] = magn_vals
npy_mm[3, n_rows:n_rows+n_pts] = phas_vals
del npy_mm # closes the file and performs final flushing
###Output
_____no_output_____
###Markdown
Measure saving times Save time of QCoDeS dataset with 'numeric' type
###Code
save_time_dataset_numeric = time_save_to_sqlite_numeric(
make_data_producer(n_pts_per_param))
print("Data saving to dataset with 'numeric' paramtype took:")
print(save_time_dataset_numeric)
###Output
Data saving to dataset with 'numeric' paramtype took:
4.81 s ± 203 ms per loop (mean ± std. dev. of 2 runs, 3 loops each)
###Markdown
with 'array' type
###Code
save_time_dataset_array = time_save_to_sqlite_array(
make_data_producer(n_pts_per_param))
print("Data saving to dataset with 'array' paramtype took:")
print(save_time_dataset_array)
###Output
Data saving to dataset with 'array' paramtype took:
392 ms ± 51.6 ms per loop (mean ± std. dev. of 3 runs, 10 loops each)
###Markdown
Save time of HDF5
###Code
%%timeit outfile = TemporaryFile(); outfile.close()
save_to_hdf5(make_data_producer(n_pts_per_param), outfile.name)
###Output
381 ms ± 27.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Save time of npy
###Code
%%timeit outfile = TemporaryFile(suffix='.npy'); outfile.close()
save_to_npy(make_data_producer(n_pts_per_param), outfile.name)
###Output
784 ms ± 124 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Measure loading times Load time of QCoDeS dataset QCoDeS dataset has two ways of just loading the data: via `DataSet.get_data` method, and via `get_data_by_id` function.We are going to use both, but note that `get_data_by_id` does a bit more than just loading the data, hence it is supposedly more popular among users.A third way is to use `DataSet.get_values` and obtain values of each parameter one by one. `get_data_by_id` is already using it internally, hence we are not going to profile it. of 'numeric' type
###Code
_, dataset_numeric = save_to_sqlite(
make_data_producer(n_pts_per_param),
paramtype='numeric')
%%timeit parameter_names = dataset_numeric.parameters.split(',')
data = dataset_numeric.get_data(*parameter_names)
%%timeit
data = get_data_by_id(dataset_numeric.run_id)
###Output
6.05 s ± 185 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
of 'array' type
###Code
_, dataset_array = save_to_sqlite(
make_data_producer(n_pts_per_param),
paramtype='array')
%%timeit parameter_names = dataset_array.parameters.split(',')
data = dataset_array.get_data(*parameter_names)
%%timeit
data = get_data_by_id(dataset_array.run_id)
###Output
351 ms ± 57.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Load time from HDF5
###Code
hdf5file = TemporaryFile()
hdf5file.close()
hdf5filename = hdf5file.name
_ = save_to_hdf5(make_data_producer(n_pts_per_param), hdf5filename)
%%timeit
with h5py.File(hdf5filename, 'r') as f:
data = numpy.array(f['results'], copy=True)
###Output
14.6 ms ± 1.33 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Load time from npy
###Code
npyfile = TemporaryFile(suffix='.npy')
npyfile.close()
npyfilename = npyfile.name
_ = save_to_npy(make_data_producer(n_pts_per_param), npyfilename)
%%timeit
data = numpy.load(npyfilename)
###Output
516 µs ± 44.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Compare data saving and loading performance of QCoDeS SQLite backend vs HDF5 (h5py) and numpy npy files This notebook measures the time it takes to save and load measurement data using the qcodes dataset versus other ways of storing data: HDF5 and numpy npy files. The reason for such a study is that qcodes users should not be limited in the performance of their experiments by the performance of data saving (and loading). HDF5 and numpy npy storage solutions are widely used in the scientific community, and are known for their efficiency. In this notebook, we are going to define convenient functions that generate data, load and save that data with each storage backend of interest, and some infrastructure that allows us to measure the time the loading and saving takes. Preparations Imports
###Code
import time
import os
from tempfile import TemporaryFile
from functools import partial
import numpy
import h5py
from git import Repo
import qcodes
from qcodes import (
initialise_or_create_database_at, load_or_create_experiment,
Measurement, Parameter,
load_by_id
)
from qcodes.dataset.data_export import get_data_by_id
###Output
Logging hadn't been started.
Activating auto-logging. Current session state plus future input saved.
Filename : C:\Users\Jens-Work\.qcodes\logs\command_history.log
Mode : append
Output logging : True
Raw input log : False
Timestamping : True
State : active
Qcodes Logfile : C:\Users\Jens-Work\.qcodes\logs\200602-13884-qcodes.log
###Markdown
Relevant environment information
###Code
qcodes.version.__version__
# in case the qcodes is installed from local git repository
qcodes_repo_path = os.sep.join(qcodes.__path__[0].split(os.sep)[:-1])
qcodes_repo = Repo(qcodes_repo_path)
print(qcodes_repo.head.commit)
print(h5py.version.info)
###Output
Summary of the h5py configuration
---------------------------------
h5py 2.10.0
HDF5 1.10.5
Python 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]
sys.platform win32
sys.maxsize 9223372036854775807
numpy 1.17.5
###Markdown
Simulated measurementFor this study, we are going to take the case of sweeping 2 independent parameters (s1, s2) and measuring 2 dependent parameters (magnitude and phase). For simplicity, the number of datapoints per parameter is the same, and it is set in a variable. We are going to use the same generator function throughout the study for generating dummy data that we will be saving and loading.
###Code
# number of data points per parameter
n_pts_per_param = 20
def make_data_producer(n_pts_per_param):
def produce_measurement_data():
"""
This iterator represents the code that obtains
measurement data. For the sake of example, it
just returns random dummy data: 4 parameters/dimensions,
`n_pts_per_param` per each dimension (which becomes
`n_pts_per_param**4` data points in total).
Args:
n_pts_per_param
number of points per each parameter/dimension
Returns:
tuple of values of the 4 dimensions obtained
at a single "measurement" iteration
"""
for s1_val in range(n_pts_per_param):
for s2_val in range(n_pts_per_param):
magn_vals, phas_vals = numpy.meshgrid(
numpy.random.rand(n_pts_per_param),
numpy.random.rand(n_pts_per_param),
)
magn_vals = numpy.reshape(magn_vals, -1)
phas_vals = numpy.reshape(phas_vals, -1)
yield s1_val, s2_val, magn_vals, phas_vals
return produce_measurement_data
###Output
_____no_output_____
###Markdown
Measuring execution time In most cases, we are going to use `timeit` to measure time. In some cases, however, the `timeit` interface is not flexible enough: it does not let you measure the "start" and "stop" time moments __within__ the code that is under test. Below is a custom decorator that allows us to overcome this limitation.
###Code
import timeit
from IPython.core.magics.execution import TimeitResult
from copy import deepcopy, copy
def time_it(number=None,
repeat=timeit.default_repeat):
"""
    Sometimes it is necessary to define in the code itself
    where the measurement of the execution time should start
    and where it should stop. Unfortunately, the `timeit` module
    does not support that out-of-the-box. Hence, this decorator.
    This decorator uses the `timeit` infrastructure, but allows
    profiling a function that returns its own execution time.
This allows developers to define the start and stop moments
in the code itself, and the `timeit` infrastructure will
do the rest.
To use this decorator, follow these steps:
* implement a piece of code that you'd like to profile
as a function
* in the code of the function find the start and stop
points where the time needs to be measured
* use `time.perf_counter()` to get the time in seconds
at those places
    * make the function return the difference between the stop
      and start moments as its first return value
    * the function signature is not restricted in its input
      arguments, nor in its other return values beyond that
      first one
* decorate the function with this decorator
* call your decorated function to see the results of
the profiling
Args:
number
the function gets executed this `number` of times,
and the average of the collected individual execution
times is used (same as for `timeit`); if None, then
the necessary number of execution times will be
inferred (see `timeit` module for more info)
repeat
the profiling measurement gets repeated `repeat`
number of times (same as for `timeit`)
"""
def time_sut(sut):
"""
This is the actual decorator. "sut" stands for "system
under test".
"""
def wrapper(*args, **kwargs):
"""
This wrapper function uses `timeit` infrastructure
from `timeit` module and its implementation in Jupyter
magics.
Returns the `TimeitResult` object that contains all the
information about the profiling results.
"""
t = timeit.Timer()
# define a function that the Timer class
# can consume for profiling
def inner(_it, _timer):
"""
see the internals of the `timeit.Timer` class
for more information
"""
total_time = 0
for _ in _it:
args_ = copy(args)
kwargs_ = copy(kwargs)
returned_vals = sut(*args_, **kwargs_)
total_time += returned_vals[0] \
if isinstance(returned_vals, tuple) \
else returned_vals
return total_time
t.inner = inner
# execute the profiling
try:
if number is None:
number_, __ = t.autorange()
else:
number_ = number
all_runs = t.repeat(repeat, number_)
except:
t.print_exc()
raise
# pretty print the results
best = min(all_runs) / number_
worst = max(all_runs) / number_
timeit_result = TimeitResult(number_, repeat, best, worst, all_runs, 0, 3)
print(timeit_result)
return timeit_result
return wrapper
return time_sut
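# Illustrative usage sketch (an assumption based on the docstring above and on
# how the decorator is applied further below): the decorated function must
# return the measured time in seconds as its first return value, e.g.
#
#     def my_sut():
#         t0 = time.perf_counter()
#         ...  # code under test
#         t1 = time.perf_counter()
#         return t1 - t0
#
#     timed_sut = time_it(number=3, repeat=2)(my_sut)
#     result = timed_sut()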
###Output
_____no_output_____
###Markdown
Defining test routines. Now let's define all the test routines for saving and loading data, for testing the performance of the different backends. These routines will use the data generation function that is defined above, and some of them will conform to the interface that is required by the custom `time_it` decorator. QCoDeS dataset First, we need to initialize a database file.
###Code
# initialize the database file for qcodes dataset
temp_db_file = TemporaryFile(suffix='.db')
temp_db_file.close()
initialise_or_create_database_at(temp_db_file.name)
load_or_create_experiment('save_load_speed_study', 'sqlite3_from_qcodes')
###Output
Upgrading database; v0 -> v1: : 0it [00:00, ?it/s]
Upgrading database; v1 -> v2: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 499.50it/s]
Upgrading database; v2 -> v3: : 0it [00:00, ?it/s]
Upgrading database; v3 -> v4: : 0it [00:00, ?it/s]
Upgrading database; v4 -> v5: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 499.20it/s]
Upgrading database; v5 -> v6: : 0it [00:00, ?it/s]
Upgrading database; v6 -> v7: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 200.01it/s]
Upgrading database; v7 -> v8: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 499.68it/s]
Upgrading database; v8 -> v9: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 500.10it/s]
###Markdown
Next, we define a convenient function that performs all the usual steps necessary for a qcodes measurement and data saving. Note that we exclude from the time measurement the parts related to setting up the `Measurement` object and starting the actual measurement. We do include exiting the `measurement.run()` context, though, because the last pieces of data are flushed then. We decorate it with our custom `time_it` decorator presented above (note that we want to keep the original function as well, hence the `@` syntax is not used for decoration).
###Code
def save_to_sqlite(create_data_generator,
paramtype='numeric',
write_period=10):
"""
Use qcodes dataset with its sqlite backend to save dummy
data, and measure the time this takes. The data that is being
saved is 2 dependent and 2 independent parameters. The data
for the measurement is generated by an iterator that is returned
by calling the `create_data_generator` function.
Args:
create_data_generator
            a callable with no arguments that returns an iterator
that in turn generates dummy data for 4 parameters
paramtype
            controls the way the data of the 2 dependent parameters is stored
in the sqlite database,
see `Measurement.register_parameter` for more information
(useful values in the context of this notebook are 'numeric'
and 'array')
write_period
            the data is written to the database at least every
            `write_period` seconds
Returns:
saving_time
measured time it took to save the data, in seconds
dataset
the qcodes dataset object where the data was saved to;
it is useful for accessing the data and measuring the
time it takes to load it
"""
data_generator = create_data_generator()
# define parameters
s1 = Parameter('s1', label='Setting 1', unit='V', get_cmd=None, set_cmd=None)
s2 = Parameter('s2', label='Setting 2', unit='V', get_cmd=None, set_cmd=None)
magn = Parameter('magn', label='Magnitude', unit='V', get_cmd=None, set_cmd=None)
phas = Parameter('phas', label='Phase', unit='deg', get_cmd=None, set_cmd=None)
meas = Measurement()
# register parameters in the measurement object
meas.register_parameter(s1)
meas.register_parameter(s2)
meas.register_parameter(magn, setpoints=(s1, s2), paramtype=paramtype)
meas.register_parameter(phas, setpoints=(s1, s2), paramtype=paramtype)
# set the write period to a large value, so that actual writing
# to the database happens at the end of the "measurement"
meas.write_period = write_period
# perform the measurement
with meas.run() as datasaver:
t0 = time.perf_counter() # <-----
for s1_val, s2_val, magn_vals, phas_vals \
in data_generator:
datasaver.add_result((s1, s1_val),
(s2, s2_val),
(magn, magn_vals),
(phas, phas_vals))
t1 = time.perf_counter() # <-----
saving_time = t1 - t0
dataset = datasaver.dataset
return saving_time, dataset
# decorate the function, and leave the original one intact
time_save_to_sqlite_numeric = time_it(number=3, repeat=2)(
partial(save_to_sqlite, paramtype='numeric'))
time_save_to_sqlite_array = time_it()(
partial(save_to_sqlite, paramtype='array'))
###Output
_____no_output_____
###Markdown
HDF5 file HDF5 files (thanks to `h5py`) behave very similarly to `numpy` arrays, so interfacing with them feels familiar. We are not going to use the custom `time_it` decorator here, because `timeit` itself is not limiting us.
###Code
def save_to_hdf5(create_data_generator, filename):
"""
Use HDF5 file to save dummy data, and measure the time
this takes. The data that is being saved is 2 dependent
and 2 independent parameters. The resulting HDF5 file
is going to contain a single 'dataset' with the name
"results".
The data for the measurement is generated by an iterator
that is returned by calling the `create_data_generator`
function.
Args:
create_data_generator
            a callable with no arguments that returns an iterator
that in turn generates dummy data for 4 parameters
filename
the name of the HDF5 file with the full path
Returns:
saving_time
measured time it took to save the data, in seconds
"""
data_generator = create_data_generator()
with h5py.File(filename, 'w') as f:
ds = f.create_dataset('results', shape=(4, 0), maxshape=(4, None))
for s1_val, s2_val, magn_vals, phas_vals in data_generator:
n_pts = len(magn_vals)
# we simulate the fact that we don't
# know the full amount of data
# that needs to be saved, hence
# we need to resize while saving
n_cols, n_rows = ds.shape
ds.resize((n_cols, n_rows + n_pts))
ds[0, n_rows:n_rows+n_pts] = s1_val
ds[1, n_rows:n_rows+n_pts] = s2_val
ds[2, n_rows:n_rows+n_pts] = magn_vals
ds[3, n_rows:n_rows+n_pts] = phas_vals
###Output
_____no_output_____
###Markdown
Numpy npy file We are going to use `numpy`'s `.npy` files together with the handy `open_memmap` function in order to save the data produced by the iterator that generates it.
###Code
def save_to_npy(create_data_generator, filename):
"""
Use numpy npy file to save dummy data, and measure the time
this takes. The data that is being saved is 2 dependent
and 2 independent parameters. The data for the measurement
is generated by an iterator that is returned
by calling the `create_data_generator` function.
Args:
create_data_generator
            a callable with no arguments that returns an iterator
that in turn generates dummy data for 4 parameters
filename
the name of the npy file with the full path; it has to
contain '.npy' extension, otherwise `numpy` will add it
when saving data, and it will be impossible to refer
to the actual file without manually appending the
'.npy' extension to the `filename` in the code
outside of this function
Returns:
saving_time
measured time it took to save the data, in seconds
"""
data_generator = create_data_generator()
npy_mm = numpy.lib.format.open_memmap(
filename, mode='w+', shape=(4, 0))
# this (possibly dangerous?) hack is needed to allow
# resizing during adding data
npy_mm = numpy.require(npy_mm, requirements=['OWNDATA'])
for s1_val, s2_val, magn_vals, phas_vals in data_generator:
n_pts = len(magn_vals)
# we simulate the fact that we don't
# know the full amount of data
# that needs to be saved, hence
# we need to resize while saving
n_cols, n_rows = npy_mm.shape
npy_mm.resize((n_cols, n_rows + n_pts))
npy_mm[0, n_rows:n_rows+n_pts] = s1_val
npy_mm[1, n_rows:n_rows+n_pts] = s2_val
npy_mm[2, n_rows:n_rows+n_pts] = magn_vals
npy_mm[3, n_rows:n_rows+n_pts] = phas_vals
del npy_mm # closes the file and performs final flushing
###Output
_____no_output_____
###Markdown
Measure saving times Save time of QCoDeS dataset with 'numeric' type
###Code
save_time_dataset_numeric = time_save_to_sqlite_numeric(
make_data_producer(n_pts_per_param))
print("Data saving to dataset with 'numeric' paramtype took:")
print(save_time_dataset_numeric)
###Output
Data saving to dataset with 'numeric' paramtype took:
1.86 s ± 1.26 ms per loop (mean ± std. dev. of 2 runs, 3 loops each)
###Markdown
with 'array' type
###Code
save_time_dataset_array = time_save_to_sqlite_array(
make_data_producer(n_pts_per_param))
print("Data saving to dataset with 'array' paramtype took:")
print(save_time_dataset_array)
###Output
Data saving to dataset with 'array' paramtype took:
155 ms ± 4.06 ms per loop (mean ± std. dev. of 5 runs, 2 loops each)
###Markdown
Save time of HDF5
###Code
%%timeit outfile = TemporaryFile(); outfile.close()
save_to_hdf5(make_data_producer(n_pts_per_param), outfile.name)
###Output
178 ms ± 2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Save time of npy
###Code
%%timeit outfile = TemporaryFile(suffix='.npy'); outfile.close()
save_to_npy(make_data_producer(n_pts_per_param), outfile.name)
###Output
398 ms ± 5.39 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Measure loading times Load time of QCoDeS dataset QCoDeS dataset has two ways of just loading the data: via `DataSet.get_data` method, and via `get_data_by_id` function.We are going to use both, but note that `get_data_by_id` does a bit more than just loading the data, hence it is supposedly more popular among users.A third way is to use `DataSet.get_values` and obtain values of each parameter one by one. `get_data_by_id` is already using it internally, hence we are not going to profile it. of 'numeric' type
###Code
_, dataset_numeric = save_to_sqlite(
make_data_producer(n_pts_per_param),
paramtype='numeric')
%%timeit parameter_names = dataset_numeric.parameters.split(',')
data = dataset_numeric.get_data(*parameter_names)
%%timeit
data = get_data_by_id(dataset_numeric.run_id)
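# (not profiled) the third way mentioned above is to load the values parameter
# by parameter via DataSet.get_values, e.g. (a sketch):
#   names = dataset_numeric.parameters.split(',')
#   data_per_parameter = {n: dataset_numeric.get_values(n) for n in names}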
###Output
2.56 s ± 22.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
of 'array' type
###Code
_, dataset_array = save_to_sqlite(
make_data_producer(n_pts_per_param),
paramtype='array')
%%timeit parameter_names = dataset_array.parameters.split(',')
data = dataset_array.get_data(*parameter_names)
%%timeit
data = get_data_by_id(dataset_array.run_id)
###Output
158 ms ± 648 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Load time from HDF5
###Code
hdf5file = TemporaryFile()
hdf5file.close()
hdf5filename = hdf5file.name
_ = save_to_hdf5(make_data_producer(n_pts_per_param), hdf5filename)
%%timeit
with h5py.File(hdf5filename, 'r') as f:
data = numpy.array(f['results'], copy=True)
###Output
5.24 ms ± 35.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Load time from npy
###Code
npyfile = TemporaryFile(suffix='.npy')
npyfile.close()
npyfilename = npyfile.name
_ = save_to_npy(make_data_producer(n_pts_per_param), npyfilename)
%%timeit
data = numpy.load(npyfilename)
###Output
251 µs ± 610 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
|
.ipynb_aml_checkpoints/Assignment-2-3-checkpoint2021-2-28-10-23-14.ipynb | ###Markdown
Socio-Demographic Effect on Attrition
###Code
age_effect = employee.groupby('Attrition').mean()['Age']
age_effect
# The average age of employees who are lost by attrition is 33.61
dist_effect = employee.groupby('Attrition').mean()['DistanceFromHome']
dist_effect
# The average distance from home for those who exited the company through attrition was higher (10.63)
###Output
_____no_output_____
###Markdown
Effect of remuneration and other benefits on attrition
###Code
BusinessTravel = pd.crosstab(employee['BusinessTravel'],employee['Attrition'], margins = False, normalize='columns')
BusinessTravel
# a higher proportion of those who travel for business left the company through attrition.
MonthlyIncome_effect = employee.groupby('Attrition').mean()['MonthlyIncome']
MonthlyIncome_effect
#Employees who exited the company through atrrition had lower average mothly Income.
###Output
_____no_output_____ |
doc/source/tutorial/02_evaporation/02_evaporation_explicit.ipynb | ###Markdown
Tutorial 3: Evaporation tutorial Explicit solvent

In this tutorial, we will set up an evaporation simulation of a mixture of a solvent with some solutes. For simplicity, we'll use spherical colloids (green), which are larger than the solvent particles (grey), with a diameter of `s_A`$=\sigma_A=4$ and a mass of $m_A=4^3$. The solvent particle diameter `s_S`$=\sigma_s=1$ defines the unit of length, and its mass is set to $m=1$. The simulation box will be periodic in $x$ and $y$, and have a wall (black) in the $z$ direction, the evaporation direction. Because the system will be set up as shown in the sketch, we will have a liquid film on top of the wall with a coexisting vapor above it. Evaporation is realized by deleting particles periodically in the deletion zone (slab in $z$ direction at the top, yellow). Because the system might exhibit evaporative cooling, we will also thermostat a small region (also a slab in $z$ direction, red) close to the wall.

System setup
We initialized and equilibrated the particles in a box with densities $\rho_s=0.3$ `rho_S` and $\rho_A=0.01$ `rho_A`. The temperature `kT` is set to 1.0. The initial liquid film height `height` is 40 in a box of total height `Lz=60`, at temperature `kT`. Particle type `A` or typeid `0` is the larger colloid A, and `S` or typeid `1` is the solvent.

```Python
import numpy as np
import sys
import hoomd
from hoomd import md
from hoomd import data
import azplugins

kT = 1.0
Lz = 60

hoomd.context.initialize()
hoomd.context.SimulationContext()
system = hoomd.init.read_gsd(filename='tutorial_02_explicit_evaporation_init.gsd',time_step=0)

s_S = 1.0
s_A = 4.0
s_AS = 0.5*(s_A+s_S)
```

You can see that in addition to the colloid `A` and the solvent `S`, we defined some particle types `T` and `Z`, which will be used for thermostatting and deleting later on. For now we need to define all interactions in the system. All pair interactions with the solvent are described by attractive LJ potentials with a cut off ``r_cut``$=3\sigma_i$ and onset of smoothing at ``r_on``$=2.5\sigma_i$. All colloid interactions are purely repulsive WCA pair potentials, cut at $2^{1/6}\sigma_i$.

```Python
nl = hoomd.md.nlist.tree()
lj = hoomd.md.pair.lj(nlist=nl,r_cut =3.0*s_AS,name='n')
lj.set_params(mode="xplor")
lj.pair_coeff.set(['S','T'], ['S','T'], epsilon=1.0, sigma=s_S, r_cut=3.0*s_S, r_on=2.5*s_S)
lj.pair_coeff.set(['S','T'], 'A', epsilon=1.0, sigma=s_AS,r_cut=3.0*s_AS, r_on=2.5*s_AS)
lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=s_A, r_cut=2**(1/6.)*s_A,r_on=2.5*s_A)
lj.pair_coeff.set('Z', ['A','S','Z','T'], epsilon=0, sigma=0, r_cut=False)

lower_wall=hoomd.md.wall.group()
lower_wall.add_plane((0,0,-Lz/2.), (0,0,1))
lj_wall_lo=azplugins.wall.lj93(lower_wall, r_cut=3.0*s_A,name='wall')
lj_wall_lo.force_coeff.set('A', epsilon=2.0, sigma=s_A, r_cut=3.0*s_A)
lj_wall_lo.force_coeff.set(['S','T','Z'], epsilon=2.0, sigma=s_S, r_cut=3.0*s_S)

upper_wall=hoomd.md.wall.group()
upper_wall.add_plane((0,0,Lz/2.), (0,0,-1))
lj_wall_up=azplugins.wall.lj93(upper_wall, r_cut=s_A*(2/5.)**(1/6.))
lj_wall_up.force_coeff.set('A', epsilon=2.0, sigma=s_A, r_cut=(2/5.)**(1/6.)*s_A)
lj_wall_up.force_coeff.set(['S','T','Z'], epsilon=2.0, sigma=s_S, r_cut=(2/5.)**(1/6.)*s_S)
```

We use a Lennard-Jones 9-3 potential for the walls, attractive at the bottom, purely repulsive at the top (cut at $(2/5)^{1/6}\sigma_i$). `S` and `T` particles are identical, except for the fact that `T` will be coupled to the thermostat later on, so all their pair interactions are the same.
The larger colloids `A` are four times as big as the solvent `S` particles, and all pair potential values are adjusted according to their respective diameters. The `Z` particles represent deleted particles, so they do not interact with any particle in the system.

Evaporation
Now, we need to set up the evaporation.

```Python
azplugins.update.types(inside='T', outside='S', lo=-Lz/2., hi=-Lz/2.+2, period=1)
langevin.set_gamma('T', gamma=0.1)
langevin.set_gamma('S', gamma=0.0)

evap = azplugins.evaporate.particles(solvent='S', evaporated='Z', lo=Lz/2.-2, hi=Lz/2., seed=77, period=1)
```

Here, we use ``azplugins.update.types`` to define a region where the particle types will be switched from `S` to `T` if solvent particles enter it, and switched back when they leave it. Then, we set the friction coefficients $\gamma=$ `gamma` of the two particle types such that only `T` particles are weakly coupled to the thermostat and `S` particles are not. The actual evaporation is also a particle type switch, this time non-reversible, from `S` to `Z`, a ghost particle. Because ghost particles don't interact with any particle in the system, they are effectively removed from the system. If you have very long or big evaporation simulations, it can be beneficial to periodically remove all `Z` particles by stopping the simulation, taking a snapshot with `system.take_snapshot()`, deleting all `Z`, and then reading it back in via `system.restore_snapshot()`. See the `hoomd.data` [interface](https://hoomd-blue.readthedocs.io/en/stable/module-hoomd-data.html) for more details.

Because evaporated particles are not removed from the simulation box, the temperature, pressure, etc. reported by `hoomd.compute.thermo` will not be meaningful. (Their degrees of freedom are still included in calculations.) This is OK because evaporation is a nonequilibrium process, and care should be taken in defining these quantities anyway. If necessary, make sure that you compute these properties in post-processing.

Both ``azplugins.update.types`` and ``azplugins.evaporate.particles`` take ``lo`` and ``hi`` as parameters, defining the lower and upper boundary of the region. They have to be inside the simulation box, and if the regions overlap, it is the user's responsibility to ensure that the system makes physical sense. The parameter `period` is used to determine how frequently the update should be performed.

Evaporation parameters
`solvent` is the solvent particle type and `evaporated` is the evaporated, or ghost, particle type. The region is defined by `lo`, the lower bound $z$ coordinate of the region, and `hi`, the upper bound $z$ coordinate of the region. `seed` is an integer seed for the pseudo-random number generator. `Nmax` is the maximum number of particles to evaporate per event; if you leave it at the default `False`, all particles in the evaporation region will be removed. The `period` determines the frequency of evaporating particles. `phase` triggers the execution: when -1 (default), execution starts on the current time step; otherwise, it happens on steps where ``(step + phase) % period`` is 0. The maximum attainable flux $j$ out of the box is given by $j = \frac{N_\text{max}}{A \Delta t \,\text{period}}$. If there are fewer than $N_\text{max}=$ `Nmax` particles in the evaporation region, then the actual flux will be lower and the simulation will be diffusion-limited. We can measure the actual flux during the simulation and calculate the interface speed $v$ from the density histograms.
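As a quick illustration of the flux formula above (not part of the original run script; `Nmax` here is just an example value, while the box edge length and time step match the numbers used in the analysis cells further below):

```Python
# maximum attainable flux j = Nmax / (A * dt * period)
Lx = Ly = 16.0   # box edge lengths in x and y
dt = 0.005       # integration time step
period = 1       # evaporation period used above
Nmax = 10        # example cap on the number of particles removed per event

A = Lx * Ly
j_max = Nmax / (A * dt * period)
print(j_max)     # ~7.8 deleted particles per unit area and unit time
```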
From that and the solute diffusion, we can define the Peclet numbers $Pe_i = v H_\text{init}/D_i$.

We are writing out a gsd trajectory during the run:

```Python
hoomd.dump.gsd(filename="tutorial_02_explicit_evaporation_trajectory.gsd", overwrite=True, period=1e4, group=all,dynamic=['attribute','property','momentum'])
```

The only thing left to do is run the simulation:

```Python
hoomd.run(1e6)
```

Analyzing the results
First, have a look at the generated trajectory. Use either [vmd](https://www.ks.uiuc.edu/Research/vmd/) with the [gsd plugin](https://github.com/mphoward/gsd-vmd), [ovito](https://www.ovito.org/), or your favorite configuration viewer. You'll be able to see the thermostatted `T` particles close to the wall (identical to `S` particles), as well as the ghost particles `Z`, which you can ignore or remove from view. You should be able to see the liquid-vapor interface moving downwards as the simulation progresses. We can calculate density and temperature histograms from the saved gsd file:
###Code
import numpy as np
import gsd.hoomd
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2, figsize=(6,8))
# read gsd file
trajectory = gsd.hoomd.open(name='tutorial_02_explicit_evaporation_trajectory.gsd')
for frame in trajectory[::20]:
# density
pos = frame.particles.position
box = frame.configuration.box[0:3]
hist, bins = np.histogram(pos[:,2],bins=50,range=(-0.5*box[2],0.5*box[2]))
center = (bins[:-1] + bins[1:]) / 2
binsize = bins[1]-bins[0]
volume_bin = binsize*box[0]*box[1]
ax[0].plot(center,hist/volume_bin,label=frame.configuration.step)
#temperature
temp = np.zeros(len(center))
for i,c in enumerate(center):
slab_vel = frame.particles.velocity[np.abs(pos[:,2]-c)<binsize]
l = len(slab_vel)
if l>0:
v_squared = slab_vel[:,0]**2 + slab_vel[:,1]**2 + slab_vel[:,2]**2
T = 1/(3*l)*np.sum(v_squared)
else:
T=0
temp[i]=T
ax[1].plot(center,temp,label=frame.configuration.step)
ax[1].set_xlabel('$z$')
ax[0].set_ylabel(r'$\rho$')
ax[1].set_ylabel(r'$T$')
ax[0].legend()
plt.show()
###Output
_____no_output_____
###Markdown
The histograms are fairly noisy and you should repeat the same simulation with different seeds and then average them for better statistics. For this set of parameters and initial film height, no significant evaporative cooling or accumulation can be observed. We can also check the flux during the simulation by monitoring the number of solvent particles:
###Code
timesteps = []
n_solvent =[]
inital_number_solvent = len(trajectory[0].particles.typeid[trajectory[0].particles.typeid==1])
for frame in trajectory:
# particle type 'Z' = 3 = deleted particles
ghosts = len(frame.particles.typeid[frame.particles.typeid==3])
timesteps.append(frame.configuration.step)
n_solvent.append(inital_number_solvent-ghosts)
timesteps = np.asarray(timesteps)
n_solvent = np.asarray(n_solvent)
# plotting
fig, ax = plt.subplots(2)
dt= 0.005
L = 16.0
t = timesteps*dt
ax[0].plot(t, n_solvent)
z = np.polyfit(t, n_solvent, 1)
ax[0].plot(t, z[0]*t+z[1])
ax[0].set_xlabel('time')
ax[0].set_ylabel('$N_{sol}$')
flux = -np.gradient(n_solvent,timesteps)/(L**2*dt)
ax[1].plot(t, flux)
ax[1].plot(t, -z[0]*np.ones(len(t))/L**2)
ax[1].set_xlabel('time')
ax[1].set_ylabel('$j$')
plt.show()
###Output
_____no_output_____ |
ds01-notebook.ipynb | ###Markdown
Analyzing Data and Building a Dashboard Extracting essential data from a data set and displaying it is integral to data science. Understanding analytics and trends through displayed data is the purpose of a dashboard. In this workshop, you will extract some essential economic indicators from data; you will then display these economic indicators in a dashboard. You can then share the dashboard via a URL to show off your new skills :) What is GDP? Gross domestic product (GDP) is a comprehensive measure of U.S. economic activity. GDP is the value of the goods and services produced in the United States. The growth rate of GDP is the most popular indicator of the nation's overall economic health. Step 1 - Define Function that Makes a Dashboard Importing relevant libraries
###Code
import pandas as pd
from bokeh.plotting import figure, output_file, show,output_notebook
output_notebook()
###Output
_____no_output_____
###Markdown
We have defined a function make_dashboard() for you. If you are interested, here's the code so you can look into how the function works. But for this workshop, you should only care about the inputs. The function will produce a dashboard and an html file as output. You can then use this html file to share your dashboard. If you do not know what an html file is, don't worry: everything you need to know will be provided in the notebook.
###Code
def make_dashboard(x, gdp_change, unemployment, title, file_name):
output_file(file_name)
p = figure(title=title, x_axis_label='year', y_axis_label='%')
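    # draw the GDP change and unemployment series as two lines over the same year axis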
p.line(x.squeeze(), gdp_change.squeeze(), color="firebrick", line_width=4, legend="% GDP change")
p.line(x.squeeze(), unemployment.squeeze(), line_width=4, legend="% unemployed")
show(p)
###Output
_____no_output_____
###Markdown
The repository contains the CSV files with all the data that you will need for this project. We have gone ahead and cleaned the data for you so that it's easier to derive analysis from it. - `./data/gdp_data.csv` or https://shorturl.at/orG57 - `./data/unemployment_data.csv` or https://shorturl.at/cfryO Task 1: Create a dataframe that contains the GDP data and display the first five rows of the dataframe. **Hint 1:** Use the function `pd.read_csv()` to create a Pandas dataframe. **Hint 2:** On your local machine the path for your file can be found using the command `pwd`
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
**Hint 3:** Use the method head() to display the first five rows of the GDP data
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
Task 2: Create a dataframe that contains the unemployment data. Display the first five rows of the dataframe.
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
Use the method head() to display the first five rows of the unemployment data
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
Task 3: Display a dataframe where unemployment was greater than 8.5%
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
Task 4: Use the function make_dashboard to make a dashboard In this section, you will call the function make_dashboard to produce a dashboard. We will use the convention of giving each variable the same name as the function parameter. Create a new dataframe with the column 'date' called x from the dataframe that contains the GDP data.
###Code
x = # Create your dataframe with column date
###Output
_____no_output_____
###Markdown
Create a new dataframe with the column 'change-current' called gdp_change from the dataframe that contains the GDP data.
###Code
gdp_change = # Create your dataframe with column change-current
###Output
_____no_output_____
###Markdown
Create a new dataframe with the column 'unemployment' called unemployment from the dataframe that contains the unemployment data.
###Code
unemployment = # Create your dataframe with column unemployment
###Output
_____no_output_____
###Markdown
Give your dashboard a string title, and assign it to the variable title
###Code
title = # Give your dashboard a string title
###Output
_____no_output_____
###Markdown
Finally, the function make_dashboard will output an .html file in your directory, just like a csv file. The name of the file is "index.html" and it will be stored in the variable file_name.
###Code
file_name = "index.html"
###Output
_____no_output_____
###Markdown
Call the function make_dashboard to produce a dashboard.
###Code
# Fill up the parameters in the following function:
make_dashboard(x=, gdp_change=, unemployment=, title=, file_name=)
###Output
_____no_output_____
###Markdown
Bells and Whistles: Save the dashboard on IBM cloud and display it Follow the tutorial Provisioning an object storage instance on IBM Cloud copy the JSON object containing the credentials you created while creating your IBM Watson Studio Project and Notebook. You’ll want to store everything you see in a credentials variable like the one below (NOTE: replace the placeholder values with your own). - DO NOT make your access_key_id and secret_access_key. - DO NOT delete @hidden_cell as this will not allow people to see your credentials when you share your notebook. credentials = { "apikey": "your-api-key", "cos_hmac_keys": { "access_key_id": "your-access-key-here", "secret_access_key": "your-secret-access-key-here" }, "endpoints": "your-endpoints", "iam_apikey_description": "your-iam_apikey_description", "iam_apikey_name": "your-iam_apikey_name", "iam_role_crn": "your-iam_apikey_name", "iam_serviceid_crn": "your-iam_serviceid_crn", "resource_instance_id": "your-resource_instance_id"}
###Code
# @hidden_cell
#
###Output
_____no_output_____
###Markdown
You will need the endpoint; make sure the settings are the same as in PROVISIONING AN OBJECT STORAGE INSTANCE ON IBM CLOUD.
###Code
endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
###Output
_____no_output_____
###Markdown
Follow the tutorial Provisioning an object storage instance on IBM Cloud and assign the name of your bucket to the variable bucket_name
###Code
bucket_name =
###Output
_____no_output_____
###Markdown
We can access IBM Cloud Object Storage with Python using the boto3 library, which we’ll import below:
###Code
! pip3 install boto3
import boto3
###Output
_____no_output_____
###Markdown
We can interact with IBM Cloud Object Storage through a boto3 resource object.
###Code
resource = boto3.resource(
's3',
aws_access_key_id = credentials["cos_hmac_keys"]['access_key_id'],
aws_secret_access_key = credentials["cos_hmac_keys"]["secret_access_key"],
endpoint_url = endpoint,
)
###Output
_____no_output_____
###Markdown
We are going to use open to create a file object. To get the path of the file, you are going to concatenate the directory stored in the variable directory and the name of the file stored in the variable file_name using the + operator, and assign the result to the variable html_path. We will use the function getcwd() to find the current working directory.
###Code
import os
directory = os.getcwd()
html_path = directory + "/" + file_name
html_path
###Output
_____no_output_____
###Markdown
Now you must read the html file: use the function f = open(html_path, mode) to create a file object and assign it to the variable f. The parameter file should be the variable html_path, and the mode should be "r" for read.
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
To load your file into the bucket we will use the method put_object. You must set the parameter name to the name of the bucket, the parameter Key to the name of the HTML file, and the parameter Body to f.read().
###Code
# Fill up the parameters in the following function:
resource.Bucket(name=).put_object(Key=, Body=)
###Output
_____no_output_____
###Markdown
In the dictionary Params, provide the bucket name as the value for the key 'Bucket'. Also, for the value of the key 'Key', add the name of the html file; both values should be strings.
###Code
# Fill in the value for each key
# Params = {'Bucket': ,'Key': }
###Output
_____no_output_____
###Markdown
The following lines of code will generate a URL to share your dashboard. The URL only lasts seven days, but don't worry: you will get full marks if the URL is visible in your notebook.
###Code
import sys
time = 7*24*60**2
client = boto3.client(
's3',
aws_access_key_id = credentials["cos_hmac_keys"]['access_key_id'],
aws_secret_access_key = credentials["cos_hmac_keys"]["secret_access_key"],
endpoint_url=endpoint,
)
url = client.generate_presigned_url('get_object',Params=Params,ExpiresIn=time)
print(url)
###Output
_____no_output_____ |
feature_engineering/make-bert-feature_cls.ipynb | ###Markdown
Loading the data
###Code
all_title_abstract = pd.read_feather(os.path.join(INPUT_DIR, 'all_title_abstract_df.feather'))
all_title_abstract_exists_cites = all_title_abstract[all_title_abstract['cites'].notnull()].reset_index(drop=True)
all_title_abstract_df = pd.concat([all_title_abstract_exists_cites, all_title_abstract.iloc[851524:, :]]).reset_index(drop=True)
print(all_title_abstract_df.shape)
all_title_abstract_df.head()
tqdm.pandas()
class BertSequenceVectorizer:
def __init__(self):
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        self.model_name = '/content/drive/MyDrive/citation_prediction/scibert_scivocab_uncased' # specify the path where the model is saved
self.tokenizer = BertTokenizer.from_pretrained(self.model_name)
self.bert_model = transformers.BertModel.from_pretrained(self.model_name)
self.bert_model = self.bert_model.to(self.device)
self.max_len = 128
print(self.device)
def vectorize(self, sentence : str) -> np.array:
inp = self.tokenizer.encode(sentence)
len_inp = len(inp)
if len_inp >= self.max_len:
inputs = inp[:self.max_len]
masks = [1] * self.max_len
else:
inputs = inp + [0] * (self.max_len - len_inp)
masks = [1] * len_inp + [0] * (self.max_len - len_inp)
inputs_tensor = torch.tensor([inputs], dtype=torch.long).to(self.device)
masks_tensor = torch.tensor([masks], dtype=torch.long).to(self.device)
bert_out = self.bert_model(inputs_tensor, masks_tensor)
seq_out, pooled_out = bert_out['last_hidden_state'], bert_out['pooler_output']
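        # keep the pooled output (the [CLS] token passed through BERT's pooler) as the
        # sentence vector, which is why the saved features are named "_cls"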
if torch.cuda.is_available():
target = pooled_out[0].cpu().detach().numpy()
return target
else:
target = pooled_out[0].detach().numpy()
return target
BSV = BertSequenceVectorizer()
for col in ['clean_title', 'clean_abstract']:
all_title_abstract_bert \
= all_title_abstract_df[col].fillna('nan').progress_apply(lambda x: BSV.vectorize(x))
pd.DataFrame(all_title_abstract_bert.tolist(),columns=[f'{col}_scibert_{i}' for i in range(768)]).to_feather(os.path.join(INPUT_DIR, 'scibert_{0}_cls_df.feather'.format(col)))
# device = 'cuda' if torch.cuda.is_available() else 'cpu'
# model_name = '/content/drive/MyDrive/citation_prediction/scibert_scivocab_uncased' # specify the path where the model is saved
# tokenizer = BertTokenizer.from_pretrained(model_name)
# bert_model = transformers.BertModel.from_pretrained(model_name)
# bert_model = bert_model.to(device)
# max_len = 128
# print(device)
# sentence = all_title_abstract_df['clean_title'].iloc[0]
# inp = tokenizer.encode(sentence)
# len_inp = len(inp)
# if len_inp >= max_len:
# inputs = inp[:max_len]
# masks = [1] * max_len
# else:
# inputs = inp + [0] * (max_len - len_inp)
# masks = [1] * len_inp + [0] * (max_len - len_inp)
# inputs_tensor = torch.tensor([inputs], dtype=torch.long).to(device)
# masks_tensor = torch.tensor([masks], dtype=torch.long).to(device)
# bert_out = bert_model(inputs_tensor, masks_tensor)
# seq_out, pooled_out = bert_out['last_hidden_state'], bert_out['pooler_output']
# sentence
# input_ids = torch.tensor(inp)
# tokenizer.convert_ids_to_tokens(inp)
###Output
_____no_output_____ |
testing/jaak-it_demo/09_JaakIt_RDD_based_API.ipynb | ###Markdown
Summary statistics.
###Code
mat = sc.parallelize(
[
np.array([1.0, 10.0, 100.0]),
np.array([2.0, 20.0, 200.0]),
np.array([3.0, 30.0, 300.0])
]
)
summary = Statistics.colStats(mat)
###Output
_____no_output_____
###Markdown
The results are, respectively:
- A dense vector containing the mean value for each column
- Column-wise variance
- Number of nonzeros in each column
###Code
print(summary.mean())
print(summary.variance())
print(summary.numNonzeros())
###Output
[ 2. 20. 200.]
[1.e+00 1.e+02 1.e+04]
[3. 3. 3.]
###Markdown
_SeriesX_ is defined as a simple series of elements. On the other hand, _SeriesY_ must have the same number of partitions and cardinality as _SeriesX_.
###Code
seriesX = sc.parallelize([1.0, 2.0, 3.0, 3.0, 5.0])
seriesY = sc.parallelize([11.0, 22.0, 33.0, 33.0, 555.0])
###Output
_____no_output_____
###Markdown
Compute the correlation using Pearson's method. Enter _"spearman"_ for Spearman's method. If a method is not specified, Pearson's method will be used by default.
###Code
print("Correlation is: " + str(Statistics.corr(seriesX, seriesY, method="pearson")))
###Output
Correlation is: 0.8500286768773001
###Markdown
Now let's create another RDD of vectors.
###Code
data = sc.parallelize(
[
np.array([1.0, 10.0, 100.0]),
np.array([2.0, 20.0, 200.0]),
np.array([5.0, 33.0, 366.0])
]
)
###Output
_____no_output_____
###Markdown
Calculate the correlation matrix using Pearson's method. Use "spearman" for Spearman's method. If a method is not specified, Pearson's method will be used by default.
###Code
print(Statistics.corr(data, method="pearson"))
###Output
[[1. 0.97888347 0.99038957]
[0.97888347 1. 0.99774832]
[0.99038957 0.99774832 1. ]]
###Markdown
Python's RDD API works with many kinds of sampling methods; let's get hands-on with _Stratified Sampling_ by creating an RDD of key-value pairs.
###Code
data = sc.parallelize(
[
(1, 'a'),
(1, 'b'),
(2, 'c'),
(2, 'd'),
(2, 'e'),
(3, 'f')
]
)
###Output
_____no_output_____
###Markdown
Specify the exact fraction desired from each key as a dictionary.
###Code
fractions = {
1: 0.1,
2: 0.6,
3: 0.3
}
approxSample = data.sampleByKey(False, fractions)
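# sampleByKey(False, fractions) samples without replacement, keeping roughly the
# requested fraction of elements for each key; as a quick check one could, for
# example, inspect the per-key counts with approxSample.countByKey()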
###Output
_____no_output_____ |
nb1.ipynb | ###Markdown
Notebook 1 To begin with we need a game design and a map on which to play it. We start off with something very simple; a room with a goal G in one corner and the agent A in the other, with some walls w between and around the outside, and a hole H that the agent must not fall into. In particular we have:
###Code
# Define level
level = """
wwwwwwwwwwwww
wA w w
w w ww
www w wwwww
w wwG w
w H w w
wwwwwwwwwwwww
"""
# Define game
game = """
BasicGame
LevelMapping
G > goal
A > avatar
H > hole
w > wall
InteractionSet
avatar wall > stepBack
goal avatar > killSprite
avatar hole > killSprite
SpriteSet
structure > Immovable
goal > color=GREEN
hole > color=RED
wall > color=BROWN
TerminationSet
# SpriteCounter stype=goal limit=0 win=True
SpriteCounter stype=goal win=True
SpriteCounter stype=avatar win=False
"""
# Import necessary functions
import sys
sys.path.insert(0, 'pyvgdlmaster/vgdl')
from mdpmap import MDPconverter
from core import VGDLParser
from rlenvironment import RLEnvironment
import pygame
import numpy as np
# Start game and produce image
g = VGDLParser().parseGame(game)
g.buildLevel(level)
rle = RLEnvironment(game, level, observationType='global', visualize=True)
rle._game._drawAll()
pygame.image.save(rle._game.screen, "example.png")
###Output
_____no_output_____
###Markdown
The agent gets +10 points for reaching the goal, -10 points for falling in the hole, and -1 point every turn that it has not reached either the goal or the hole (so it has an incentive to reach the goal as quickly as possible). This ends up giving us the following game: We now want to be able to extract information in the form of (state, reward, action)\_t for each time step t so as to be able to learn schemas. Each state can be extracted from the VGDL in the form of a list that describes the world in terms of objects and their positions, with a possible action and reward at each state, as follows:
###Code
# Set up RLE
rle.actionDelay = 200
rle.recordingEnabled = True
rle.reset()
# Get initial state information
initState = rle._obstypes.copy()
state = rle.getState()
initState['agent'] = [(state[0], state[1])]
initReward = 0
# Initialise parameters
numSteps = 1
actions = np.array([0,0,1,0])
ended = False
rStates = []
rRewards = []
rActions = []
rStates.append(initState)
rRewards.append(initReward)
# Perform sequence of actions
for i in range(numSteps):
if ended == False:
# Take and record action
rle._performAction(actions)
action = rle._allEvents[-1][1]
rActions.append(action)
# Get and record new state information
newState = rle._obstypes.copy()
state = rle.getState()
newState['agent'] = [(state[0], state[1])]
rStates.append(newState)
# Get and record new reward information
(ended, won) = rle._isDone()
if ended:
if won:
newReward = 10
else:
newReward = -10
else:
newReward = -1
rRewards.append(newReward)
# Record final action as None
rActions.append(None)
###Output
_____no_output_____
###Markdown
From the raw state data above we binary-encode sets of matrices for learning schemas. See https://www.overleaf.com/read/nrfnchwgmpdg for details on the form of the matrices required. The schemas are learnt as columns of weight matrices and then converted to logical representations for use in planning and policy formation.
###Code
# Encode raw data to binary matrices for learning
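# (encodeRecording, formMatrices and learnSchemas are assumed to come from the
# accompanying schema-learning code; the argument order below follows that assumption)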
bStates, bRewards, bActions, objectMap = encodeRecording("vgdl", rStates, rRewards, rActions, fileName=None)
X, Y = formMatrices(bStates, bRewards, bActions, objectMap)
# Learn a set (list) of schemas for each object attribute i
schemas = [None] * len(Y)
for i in range(len(Y)):
    schemas[i] = learnSchemas(X, Y[i], regConst=10, L=10)
# Convert these vector representations of schemas into logical notation that can be used for planning
###Output
_____no_output_____
###Markdown
Visualizing BigQuery data in a Jupyter notebook. [BigQuery](https://cloud.google.com/bigquery/docs/) is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near real time. Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table. Using Jupyter magics to query BigQuery data The BigQuery Python client library provides a magic command that allows you to run queries with minimal code. The BigQuery client library provides a cell magic, `%%bigquery`. The `%%bigquery` magic runs a SQL query and returns the results as a pandas `DataFrame`. The following cell executes a query of the BigQuery natality public dataset and returns the total births by year.
###Code
%%bigquery
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
###Output
_____no_output_____
###Markdown
The following command runs the same query, but this time the results are saved to a variable. The variable name, `total_births`, is given as an argument to the `%%bigquery` magic. The results can then be used for further analysis and visualization.
###Code
%%bigquery total_births
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
###Output
_____no_output_____
###Markdown
The next cell uses the pandas `DataFrame.plot` method to visualize the query results as a bar chart. See the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/visualization.html) to learn more about data visualization with pandas.
###Code
total_births.plot(kind='bar', x='year', y='birth_count');
###Output
_____no_output_____
###Markdown
Run the following query to retrieve the number of births by weekday. Because the `wday` (weekday) field allows null values, the query excludes records where wday is null.
###Code
%%bigquery births_by_weekday
SELECT
wday,
SUM(CASE WHEN is_male THEN 1 ELSE 0 END) AS male_births,
SUM(CASE WHEN is_male THEN 0 ELSE 1 END) AS female_births
FROM `bigquery-public-data.samples.natality`
WHERE wday IS NOT NULL
GROUP BY wday
ORDER BY wday ASC
###Output
_____no_output_____
###Markdown
Visualize the query results using a line chart.
###Code
births_by_weekday.plot(x='wday');
###Output
_____no_output_____
###Markdown
Using Python to query BigQuery data. Magic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, `%%bigquery` uses the BigQuery Python client library to run the given query, convert the results to a pandas `DataFrame`, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks. To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API.
###Code
from google.cloud import bigquery
client = bigquery.Client()
###Output
_____no_output_____
###Markdown
Use the [`Client.query`](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.html#google.cloud.bigquery.client.Client.query) method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.).
###Code
sql = """
SELECT
plurality,
COUNT(1) AS count,
year
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(plurality) AND plurality > 1
GROUP BY
plurality, year
ORDER BY
count DESC
"""
df = client.query(sql).to_dataframe()
df.head()
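# (illustrative, not used in this tutorial) the client also accepts an explicit
# job configuration for more control over the query, e.g.:
#   job_config = bigquery.QueryJobConfig()
#   job_config.use_query_cache = False
#   df = client.query(sql, job_config=job_config).to_dataframe()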
###Output
_____no_output_____
###Markdown
To chart the query results in your `DataFrame`, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time.
###Code
pivot_table = df.pivot(index='year', columns='plurality', values='count')
pivot_table.plot(kind='bar', stacked=True, figsize=(15, 7));
###Output
_____no_output_____
###Markdown
Run the following query to retrieve the count of births by the number of gestation weeks.
###Code
sql = """
SELECT
gestation_weeks,
COUNT(1) AS count
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(gestation_weeks) AND gestation_weeks <> 99
GROUP BY
gestation_weeks
ORDER BY
gestation_weeks
"""
df = client.query(sql).to_dataframe()
###Output
_____no_output_____
###Markdown
Finally, chart the query results in your `DataFrame`.
###Code
ax = df.plot(kind='bar', x='gestation_weeks', y='count', figsize=(15,7))
ax.set_title('Count of Births by Gestation Weeks')
ax.set_xlabel('Gestation Weeks')
ax.set_ylabel('Count');
###Output
_____no_output_____
###Markdown
Introduction This is a Data Science project for Udacity's Data Scientist Nanodegree. The project is divided into steps, following the CRISP-DM methodology. --- 1. Business Understanding In this project, we dive into the Boston AirBnb Dataset in order to answer 3 business questions:
1. What was the housestay that earned the most? Is it in the most expensive street?
2. What are the busiest times of the year to visit Boston? By how much do prices spike?
3. Is it possible to predict the prices using the other information?
The dataset covers the AirBnb listings from September/2016 till September/2017, and is available [here](https://www.kaggle.com/airbnb/boston). --- Import all the libraries.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import re
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.preprocessing import StandardScaler
from copy import deepcopy
###Output
_____no_output_____
###Markdown
Set the plot style to dark grid and remove the pandas display limit.
###Code
sns.set_style("darkgrid")
pd.set_option('display.max_columns', None)
###Output
_____no_output_____
###Markdown
--- 2. Data understanding In this step, we will go through the dataset in order to understand its meaning. We can see the data is in three .csv files. First, we list all the data in order to acquire some general information about it.
###Code
df_cal = pd.read_csv('calendar.csv')
df_list = pd.read_csv('listings.csv')
df_rev = pd.read_csv('reviews.csv')
###Output
_____no_output_____
###Markdown
The `calendar.csv` file has data on the listings. It has the listings' dates, availability and prices.
###Code
df_cal
###Output
_____no_output_____
###Markdown
The `listings.csv` file has lots of information about the listings, such as host information, housestay descriptions, address, etc. It comes from a web scraping performed on the AirBnB website.
###Code
df_list.head()
###Output
_____no_output_____
###Markdown
The `reviews.csv` file has user reviews on the listings. It has information on the users, the reviewed listings, and the comments text.
###Code
df_rev
df_rev.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 68275 entries, 0 to 68274
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 listing_id 68275 non-null int64
1 id 68275 non-null int64
2 date 68275 non-null object
3 reviewer_id 68275 non-null int64
4 reviewer_name 68275 non-null object
5 comments 68222 non-null object
dtypes: int64(3), object(3)
memory usage: 3.1+ MB
###Markdown
Exploring the data in calendar.csv
###Code
df_cal.info()
df_cal.describe()
print(df_cal.listing_id.nunique())
print(df_cal.available.nunique())
df_cal.isnull().mean()
df_cal.available.unique()
# price is null only when the housestay is booked
df_cal[df_cal.available == 't']
# we check the unique prices in order to have a better grasp of how the price strings are formatted
df_cal.price.unique()
###Output
_____no_output_____
###Markdown
--- 3. Data preparation In this step, we process and wrangle the data in order to acquire the insights we want, and make the data understandable for the machine learning model we are going to use. Change price from string to float
###Code
def price_to_float(val):
"""Change the prices strings to float
values so we can process them as numerically.
"""
dols, cents = val.split('.')
dols = dols.replace(',', '')
dols = float(dols[1:])
cents = float(cents) / 100
return dols + cents
mask = df_cal.price.notnull()
df_cal.price[mask] = df_cal.price[mask].apply(price_to_float)
###Output
<ipython-input-15-d19e4e3aaf20>:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df_cal.price[mask] = df_cal.price[mask].apply(price_to_float)
###Markdown
What was the housestay that earned the most? Is it in the most expensive street?---It's time to answer our first business question! Let's estimate the housestay earnings by multiplying the number of times it was booked by its price. First, let's obtain the number of times the listed places were booked.
###Code
# get the registers for when the housestays are not available.
# we estimate they are booked when they are not available.
df_most_rented = df_cal[df_cal.available == 'f']
# group the listings by their ids, in order to get the amount of times
# each homestay was booked
df_most_rented = df_most_rented.groupby(by='listing_id').count()
df_most_rented = df_most_rented.reset_index()
df_most_rented = df_most_rented.rename(columns={'available': 'times_booked'})
df_most_rented = df_most_rented[['listing_id', 'times_booked']]
df_most_rented = df_most_rented.sort_values(by='times_booked', ascending=False)
df_most_rented
###Output
_____no_output_____
###Markdown
Now, let's take the listed prices.
###Code
# get the prices and convert them to float
prices_df = df_list[['id', 'price']].copy()
prices_df.price = prices_df.price.apply(price_to_float)
prices_df = prices_df.rename(columns={'id': 'listing_id'})
# merge the dataframes in order to obtain the
# housestays' prices and amount of bookings
earnings_df = pd.merge(left=prices_df, right=df_most_rented, how='inner', on='listing_id')
# by multiplying the prices and the amount of bookings,
# we can estimate the housestays earnings
earnings_df['earnings'] = np.multiply(earnings_df.price.values,
earnings_df.times_booked.values)
earnings_df = earnings_df.sort_values(by='earnings', ascending=False)
TOP_ID = earnings_df['listing_id'].values[0]
earnings_df
###Output
_____no_output_____
###Markdown
Housestays' earnings plot
###Code
plt.figure(figsize=(10,7))
sns.barplot(y='listing_id', x='earnings',
data=earnings_df.iloc[:10], orient='h',
order=earnings_df.listing_id[:10])
plt.xlabel('Earnings [US$]', fontdict={'size': 18})
plt.ylabel('Listing ID', fontdict={'size': 18})
plt.xticks(size=14)
plt.yticks(size=14)
plt.show()
###Output
_____no_output_____
###Markdown
Dropping columns from `df_list` that are useless for the analysis
###Code
d_columns = [i for i in df_list.columns if re.search('_url', i)]
d_columns += ['scrape_id', 'last_scraped', 'country', 'latitude',
'longitude', 'calendar_updated', 'calendar_last_scraped']
df_list2 = df_list.drop(columns=d_columns)
df_list2.price = df_list2.price.apply(price_to_float)
df_list2.head()
###Output
_____no_output_____
###Markdown
Listing for the housestay that earned the most.
###Code
table = df_list2[df_list2.id == TOP_ID][[
'id', 'name', 'summary', 'description', 'price',
'street', 'neighbourhood', 'amenities'
]]
table.to_excel('most_expensive_listings.xlsx')
table
df_list2[df_list2.id == earnings_df.listing_id.values[0]][['price', 'neighbourhood', 'street']]
###Output
_____no_output_____
###Markdown
Housestay reviews----There are no reviews for this housestay, which is weird.
###Code
df_rev[df_rev.id == TOP_ID]
###Output
_____no_output_____
###Markdown
Let's check how many times this place was rented. It seems the place was booked for most of the year, 336 days, which makes the absence of reviews even stranger: good reviews would be expected for such a successful place.
###Code
aux = df_cal[df_cal.listing_id == TOP_ID]
aux = aux[aux.available == 'f']
aux.count()
###Output
_____no_output_____
###Markdown
Finding the most expensive neighbourhoods---We consider the most expensive areas to be the ones with the highest average price.
###Code
df_hood = df_list2.groupby(by='neighbourhood').mean()
df_hood = df_hood.reset_index()
df_hood = df_hood.sort_values(by='price', ascending=False)
# df_hood[['neighbourhood', 'price']].iloc[:20]
plt.figure(figsize=(10,7))
sns.barplot(y='neighbourhood', x='price',
data=df_hood.iloc[:10], orient='h')
plt.xlabel('Average neighbourhood price [US$]', fontdict={'size': 18})
plt.ylabel('Neighbourhood', fontdict={'size': 18})
plt.xticks(size=14)
plt.yticks(size=14)
plt.show()
###Output
_____no_output_____
###Markdown
We conclude that the highest-earning housestay is located in one of the top 10 most expensive neighbourhoods (9th). Finding the most expensive streets
###Code
df_hood = df_list2.groupby(by='street').mean()
df_hood = df_hood.reset_index()
df_hood = df_hood.sort_values(by='price', ascending=False)
df_hood[['street', 'price']].iloc[:10]
plt.figure(figsize=(10,7))
sns.barplot(y='street', x='price',
data=df_hood.iloc[:10], orient='h')
plt.xlabel('Average street price [US$]', fontdict={'size': 18})
plt.ylabel('Street', fontdict={'size': 18})
plt.xticks(size=14)
plt.yticks(size=14)
plt.show()
###Output
_____no_output_____
###Markdown
We conclude that the highest-earning housestay is also located in one of the top 10 most expensive streets (3rd). What are the busiest times of the year to visit Boston? By how much do prices spike?---Now, we answer our second business question. First, we need to convert the dates from `str` to `datetime`.
###Code
df_cal2 = df_cal.copy()
df_cal2.date = pd.to_datetime(df_cal2.date)
###Output
_____no_output_____
###Markdown
Now, we create a column with only the months.
###Code
df_cal2['month'] = df_cal2.date.apply(lambda x: x.month)
df_cal2
###Output
_____no_output_____
###Markdown
Split available and non-available homestays.
###Code
mask.values.sum()
df_cal2f = df_cal2[df_cal2.price.isnull()]
df_cal2f.head()
###Output
_____no_output_____
###Markdown
Count how many places are rented, in order to see the renting trend.
###Code
# in order to estimate the prices, we will work with
# the available homestays, since we do not have
# the booked places' prices. Therefore, we drop
# the NaN values, since they refer to rows
# without price information.
df_cal3 = df_cal2.dropna()[['month', 'price']]
df_cal3.price = df_cal3.price.astype(float)
df_cal3 = df_cal3.groupby(by='month').mean()
df_cal3 = df_cal3.reset_index()
df_cal3.head()
df_cal3f = df_cal2f[['month', 'date']].groupby(by='month').count()
df_cal3f = df_cal3f.rename(columns={'date': 'count'})
df_cal3f = df_cal3f.reset_index()
df_cal3f.head()
df_cal4 = df_cal2.dropna()
df_cal4['price'] = df_cal4['price'].copy().astype(float)
df_cal4 = df_cal4.groupby(by='month').mean().reset_index()
df_cal4
###Output
<ipython-input-32-157060c2a3f2>:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df_cal4['price'] = df_cal4['price'].copy().astype(float)
###Markdown
Get the average prices for each month from the vacant homestays, since the rented ones do not have price information.
###Code
def nice_lineplot(x_label, y_label, *kargs, **kwargs):
"""
Function for not repeating plot specifications.
"""
plt.figure(figsize=(10,7))
plt.xlabel(x_label, fontdict={'size': 18})
plt.ylabel(y_label, fontdict={'size': 18})
plt.xticks(size=14)
plt.yticks(size=14)
sns.lineplot(*kargs, **kwargs)
plt.show()
prices_spike = df_cal4.price.max() - df_cal4.price.min()
prices_spike_pct = (df_cal4.price.max() - df_cal4.price.min()) / df_cal4.price.min() * 100
reserves_spike = df_cal3f['count'].max() - df_cal3f['count'].min()
reserves_spike_pct = (df_cal3f['count'].max() - df_cal3f['count'].min()) / df_cal3f['count'].min() * 100
prices_spike_avg = df_cal4.price.max() - df_cal4.price.mean()
prices_spike_avg_pct = (df_cal4.price.max() - df_cal4.price.mean()) / df_cal4.price.mean() * 100
reserves_spike_avg = df_cal3f['count'].max() - df_cal3f['count'].mean()
reserves_spike_avg_pct = (df_cal3f['count'].max() - df_cal3f['count'].mean()) / df_cal3f['count'].mean() * 100
print(f'Prices spike - Min: US${prices_spike:.2f}, {prices_spike_pct:.2f}%.')
print(f'Prices spike - Avg: US${prices_spike_avg:.2f}, {prices_spike_avg_pct:.2f}%.')
print(f'Reserves spike - Min: {reserves_spike} reservations, {reserves_spike_pct:.2f}%.')
print(f'Reserves spike - Avg: {reserves_spike_avg} reservations, {reserves_spike_avg_pct:.2f}%.')
###Output
Prices spike - Min: US$56.09, 30.99%.
Prices spike - Avg: US$36.64, 18.28%.
Reserves spike - Min: 30584 reservations, 70.15%.
Reserves spike - Avg: 18694.25 reservations, 33.69%.
###Markdown
Let's plot the prices and the number of reservations over the months, so we may have a better understanding of the trends. In the first plot, we see the prices spike during September and October (months 9 and 10) and drop from January until March. Compared to the low season, prices and reservations spike by 31% and 70%, respectively, in the high season. Compared to the average measurements, prices and reservations spike by 18% and 34%, respectively, in the high season.
###Code
nice_lineplot(x_label='Month', y_label='Price', data=df_cal4,
x='month', y='price')
nice_lineplot(x_label='Month', y_label='Reservations amount', data=df_cal3f,
x='month', y='count')
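# Quick check (assumes the df_cal4 and df_cal3f frames built above):
# print the months with the highest average price and the most reservations.
peak_price_month = df_cal4.loc[df_cal4['price'].idxmax(), 'month']
peak_booking_month = df_cal3f.loc[df_cal3f['count'].idxmax(), 'month']
print(f'Highest average price in month {peak_price_month}, '
      f'most reservations in month {peak_booking_month}')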
###Output
_____no_output_____
###Markdown
Is it possible to predict the prices using the other information? This is our last business question. So far, we have used only descriptive statistics. Now, we will need some machine learning to get the answers we want. Therefore, we continue with the Data Preparation step, in order to make the data suitable for a machine learning algorithm.
###Code
df_list2.head()
###Output
_____no_output_____
###Markdown
Since we will not perform NLP in this project, we drop the free-text description columns and keep only the columns we consider useful. Note that this manual selection may introduce bias into the model.
###Code
desc_columns = [
'id',
'host_location', 'host_response_time', 'host_response_rate', 'host_acceptance_rate',
'host_is_superhost', 'host_neighbourhood', 'host_listings_count', 'host_has_profile_pic',
'host_identity_verified', 'street', 'neighbourhood', 'neighbourhood_cleansed', 'city', 'market',
'is_location_exact', 'property_type', 'room_type', 'accommodates', 'bathrooms', 'bedrooms', 'beds',
'bed_type', 'amenities', 'security_deposit', 'cleaning_fee', 'guests_included', 'extra_people',
'minimum_nights', 'maximum_nights', 'availability_30', 'availability_60', 'availability_90',
'availability_365', 'review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness',
'review_scores_checkin', 'review_scores_communication', 'review_scores_location',
'review_scores_value', 'requires_license', 'instant_bookable', 'cancellation_policy',
'require_guest_profile_picture', 'require_guest_phone_verification', 'calculated_host_listings_count',
'reviews_per_month',
'price',
# 'weekly_price', 'monthly_price'
]
df_clf = df_list2[desc_columns]
df_clf.head()
df_clf.shape
###Output
_____no_output_____
###Markdown
Let's check which columns have NaN values. Since we have only 3585 rows in our data, let's drop the columns with more than 200 NaN values. For simplicity, we also drop the remaining rows with NaN values. In a real project, we should apply a more careful treatment (e.g. imputation) to these rows.
###Code
na_cols = {c: df_clf[c].isnull().sum() for c in df_clf.columns if df_clf[c].isnull().sum() > 0}
print(na_cols)
drop_cols = [c for c in df_clf.columns if df_clf[c].isnull().sum() > 200]
df_clf2 = df_clf.drop(columns=drop_cols)
df_clf2 = df_clf2.dropna()
print(df_clf2.shape)
df_clf2.head()
df_clf2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3527 entries, 0 to 3584
Data columns (total 34 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 3527 non-null int64
1 host_location 3527 non-null object
2 host_is_superhost 3527 non-null object
3 host_listings_count 3527 non-null int64
4 host_has_profile_pic 3527 non-null object
5 host_identity_verified 3527 non-null object
6 street 3527 non-null object
7 neighbourhood_cleansed 3527 non-null object
8 city 3527 non-null object
9 market 3527 non-null object
10 is_location_exact 3527 non-null object
11 property_type 3527 non-null object
12 room_type 3527 non-null object
13 accommodates 3527 non-null int64
14 bathrooms 3527 non-null float64
15 bedrooms 3527 non-null float64
16 beds 3527 non-null float64
17 bed_type 3527 non-null object
18 amenities 3527 non-null object
19 guests_included 3527 non-null int64
20 extra_people 3527 non-null object
21 minimum_nights 3527 non-null int64
22 maximum_nights 3527 non-null int64
23 availability_30 3527 non-null int64
24 availability_60 3527 non-null int64
25 availability_90 3527 non-null int64
26 availability_365 3527 non-null int64
27 requires_license 3527 non-null object
28 instant_bookable 3527 non-null object
29 cancellation_policy 3527 non-null object
30 require_guest_profile_picture 3527 non-null object
31 require_guest_phone_verification 3527 non-null object
32 calculated_host_listings_count 3527 non-null int64
33 price 3527 non-null float64
dtypes: float64(4), int64(11), object(19)
memory usage: 964.4+ KB
###Markdown
The variable types need some fixing. We will turn the `t/f` variables into booleans and the text variables into dummy variables, while the numerical variables stay as they are. The only variable that requires a more complex treatment is the `amenities` variable. Let's deal with it right now.
###Code
#
def list_ammenities(val):
"""
Helper function that splits the amenities values
into strings.
"""
x = val[1:-1].split(',')
y = []
for am in x:
am = am.replace("'", "")
am = am.replace('"', "")
y.append(am)
return y
list_ammenities(df_clf2['amenities'][0])
amenities = {}
for ind, i in enumerate(df_clf2['amenities']):
ams = list_ammenities(i)
for j in ams:
if len(j) == 0:
continue
if j not in amenities.keys():
amenities[j] = np.zeros(df_clf2.shape[0], dtype=np.bool_)
amenities[j][ind] = True
amenities['id'] = df_clf2['id']
pd.DataFrame(amenities)
df_clf3 = pd.merge(on='id', how='inner', left=df_clf2, right=pd.DataFrame(amenities))
print(df_clf3.shape)
df_clf3.head()
###Output
(3527, 79)
###Markdown
Changing the t/f columns to True and False
###Code
def fix_tf(val):
"""
Helper function for changing t/f labels
to True/False boolean values.
"""
if val == 't':
return True
elif val == 'f':
return False
else:
return val
df_clf4 = df_clf3.copy()
for c in df_clf4.columns:
df_clf4[c] = df_clf4[c].apply(fix_tf)
df_clf4
###Output
_____no_output_____
###Markdown
Let's take out the categorical variables with more than 38 unique values, since we have only 3527 rows of data. Had we more rows, we could use more categorical variables.
###Code
un_cols = {c: df_clf4[c].nunique() for c in df_clf4.columns if df_clf4[c].dtype not in ('float', 'int', 'bool')}
print(un_cols)
drop_cols = [c for c in un_cols if un_cols[c] > 38]
df_clf5 = df_clf4.drop(columns=drop_cols)
print(df_clf5.shape)
df_clf5.head()
df_clf6 = pd.get_dummies(df_clf5)
df_clf6 = df_clf6.drop(columns=['id'])
df_clf6
###Output
_____no_output_____
###Markdown
--- 4. ModellingIn this step, we choose our ML model and train it. Since we want to predict the listings' prices, which is a continuous variable, we need a regressor. Therefore, we choose the Decision Tree Regressor, which is a well-established model.
###Code
# make input (X) and output(y) data
X = df_clf6.drop(columns=['price'])
y = df_clf6['price']
# rescale the data
scaler_x = StandardScaler()
scaler_y = StandardScaler()
X_scaled = scaler_x.fit_transform(X.to_numpy())
y_scaled = scaler_y.fit_transform(y.to_numpy().reshape(-1, 1))
best_reg = 0
# classifier
# we perform a search for the best model
# initialization.
rnd_states = range(500)
best_rnd = 0
min_loss = float('inf')
best_test = 0
for rnd_state in rnd_states:
X_train, X_test, y_train, y_test = \
train_test_split(X_scaled, y_scaled, test_size=0.2,
random_state=rnd_state)
regressor = DecisionTreeRegressor(random_state=rnd_state)
regressor.fit(X_train, y_train.reshape(-1, 1))
y_pred = regressor.predict(X_test)
err = mean_squared_error(y_pred, y_test.reshape(-1))
if err < min_loss:
best_rnd = rnd_state
min_loss = err
best_reg = deepcopy(regressor)
best_test = (deepcopy(X_test), deepcopy(y_test))
min_loss
X_test, y_test = best_test
y_pred = best_reg.predict(X_test)
err = mean_squared_error(y_pred, y_test.reshape(-1))
results = pd.DataFrame(
{
'pred': scaler_y.inverse_transform(y_pred),
'real_value': scaler_y.inverse_transform(y_test.reshape(-1)),
})
results['err'] = results['pred'] - results['real_value']
results['err'] = results['err'].abs()
results
table = results.describe()
table.to_excel('ml_performance.xlsx')
table
###Output
_____no_output_____
###Markdown
--- 5. EvaluationIn this last CRISP-DM step, we evaluate the model's performance. How well was it able to predict the prices?Our price regression model did not achieve optimal performance. However, 50\% of the obtained errors are below US\\$30.00, which is only 21\% of the price standard deviation, and 75\% of the errors are below US\\$70.00, which is 51\% of the price standard deviation. Therefore, we still achieved a fairly good model. Next, we plot the real and predicted values for the test dataset, to better visualize the model's performance, followed by a plot of the errors. The closer the errors are to zero, the better.
###Code
plt.figure(figsize=(10,10))
plt.plot(results['real_value'], '+', label='Real value')
plt.plot(results['pred'], '*', label='Predicted value')
plt.xlabel('Sample', fontdict={'size': 16})
plt.ylabel('Price', fontdict={'size': 16})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.legend(fontsize=14)
plt.show()
plt.figure(figsize=(10,10))
plt.plot(results['err'], '.')
plt.xlabel('Sample', fontdict={'size': 16})
plt.ylabel('Price Error', fontdict={'size': 16})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
###Output
_____no_output_____
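###Markdown
As a quick sanity check on the percentages quoted above, the short sketch below recomputes the error quantiles and compares them with the standard deviation of the listed prices (a minimal sketch; it assumes the `results` and `df_clf6` dataframes built earlier in this notebook).
###Code
# Hedged sketch: relate the absolute prediction errors to the price spread.
price_std = df_clf6['price'].std()
print(f'Price standard deviation: US${price_std:.2f}')
for q in (0.5, 0.75):
    err_q = results['err'].quantile(q)
    print(f'{int(q*100)}% of errors are below US${err_q:.2f} '
          f'({err_q / price_std * 100:.0f}% of the price std)')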
###Markdown
Next, we inspect the feature importances extracted from the Decision Tree Regressor. We may notice the most important features are:* Whether Fenway is the homestay's cleansed neighbourhood.* Whether the room is of type Home/Apt.* The presence of cats. (This is actually quite funny.)* The number of bathrooms.* The availability.
###Code
s_idx = best_reg.feature_importances_.argsort()[::-1]
pcts_acc = \
np.array([np.sum(best_reg.feature_importances_[s_idx][:i]) for i in range(len(s_idx))])
s75_idx = s_idx[pcts_acc < .75][::-1]
x = X.columns.values[s75_idx]
y = best_reg.feature_importances_[s75_idx]*100
plt.figure(figsize=(12,10))
plt.barh(x, y)
plt.title('Feature Importance - 75%', fontsize=14)
plt.ylabel("Feature", fontsize=14)
plt.xlabel("Importance [%]", fontsize=14)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Example 1 foo baz bim
###Code
%%html
<pre><code>foo baz bim
</code></pre>
###Output
_____no_output_____
###Markdown
Example 2 foo baz bim
###Code
%%html
<pre><code>foo baz bim
</code></pre>
###Output
_____no_output_____
###Markdown
Example 3 a a ὐ a
###Code
%%html
<pre><code>a a
ὐ a
</code></pre>
###Output
_____no_output_____
###Markdown
Example 4 - foo bar
###Code
%%html
<ul>
<li>
<p>foo</p>
<p>bar</p>
</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 5 - foo bar
###Code
%%html
<ul>
<li>
<p>foo</p>
<pre><code> bar
</code></pre>
</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 6 > foo
###Code
%%html
<blockquote>
<pre><code> foo
</code></pre>
</blockquote>
###Output
_____no_output_____
###Markdown
Example 7 - foo
###Code
%%html
<ul>
<li>
<pre><code> foo
</code></pre>
</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 8 foo bar
###Code
%%html
<pre><code>foo
bar
</code></pre>
###Output
_____no_output_____
###Markdown
Example 9 - foo - bar - baz
###Code
%%html
<ul>
<li>foo
<ul>
<li>bar
<ul>
<li>baz</li>
</ul>
</li>
</ul>
</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 10 Foo
###Code
%%html
<h1>Foo</h1>
###Output
_____no_output_____
###Markdown
Example 11 * * *
###Code
%%html
<hr />
###Output
_____no_output_____
###Markdown
Example 12 - `one- two`
###Code
%%html
<ul>
<li>`one</li>
<li>two`</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 13 ***---___
###Code
%%html
<hr />
<hr />
<hr />
###Output
_____no_output_____
###Markdown
Example 14 +++
###Code
%%html
<p>+++</p>
###Output
_____no_output_____
###Markdown
Example 15 ===
###Code
%%html
<p>===</p>
###Output
_____no_output_____
###Markdown
Example 16 --**__
###Code
%%html
<p>--
**
__</p>
###Output
_____no_output_____
###Markdown
Example 17 *** *** ***
###Code
%%html
<hr />
<hr />
<hr />
###Output
_____no_output_____
###Markdown
Example 18 ***
###Code
%%html
<pre><code>***
</code></pre>
###Output
_____no_output_____
###Markdown
Example 19 Foo ***
###Code
%%html
<p>Foo
***</p>
###Output
_____no_output_____
###Markdown
Example 20 _____________________________________
###Code
%%html
<hr />
###Output
_____no_output_____
###Markdown
Example 21 - - -
###Code
%%html
<hr />
###Output
_____no_output_____
###Markdown
Example 22 ** * ** * ** * **
###Code
%%html
<hr />
###Output
_____no_output_____
###Markdown
Example 23 - - - -
###Code
%%html
<hr />
###Output
_____no_output_____
###Markdown
Example 24 - - - -
###Code
%%html
<hr />
###Output
_____no_output_____
###Markdown
Example 25 _ _ _ _ aa---------a---
###Code
%%html
<p>_ _ _ _ a</p>
<p>a------</p>
<p>---a---</p>
###Output
_____no_output_____
###Markdown
Example 26 *-*
###Code
%%html
<p><em>-</em></p>
###Output
_____no_output_____
###Markdown
Example 27 - foo***- bar
###Code
%%html
<ul>
<li>foo</li>
</ul>
<hr />
<ul>
<li>bar</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 28 Foo***bar
###Code
%%html
<p>Foo</p>
<hr />
<p>bar</p>
###Output
_____no_output_____
###Markdown
Example 29 Foo---bar
###Code
%%html
<h2>Foo</h2>
<p>bar</p>
###Output
_____no_output_____
###Markdown
Example 30 * Foo* * ** Bar
###Code
%%html
<ul>
<li>Foo</li>
</ul>
<hr />
<ul>
<li>Bar</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 31 - Foo- * * *
###Code
%%html
<ul>
<li>Foo</li>
<li>
<hr />
</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 32 foo foo foo foo foo foo
###Code
%%html
<h1>foo</h1>
<h2>foo</h2>
<h3>foo</h3>
<h4>foo</h4>
<h5>foo</h5>
<h6>foo</h6>
###Output
_____no_output_____
###Markdown
Example 33 foo
###Code
%%html
<p>####### foo</p>
###Output
_____no_output_____
###Markdown
Example 34 5 bolthashtag
###Code
%%html
<p>#5 bolt</p>
<p>#hashtag</p>
###Output
_____no_output_____
###Markdown
Example 35 \ foo
###Code
%%html
<p>## foo</p>
###Output
_____no_output_____
###Markdown
Example 36 foo *bar* \*baz\*
###Code
%%html
<h1>foo <em>bar</em> *baz*</h1>
###Output
_____no_output_____
###Markdown
Example 37 foo
###Code
%%html
<h1>foo</h1>
###Output
_____no_output_____
###Markdown
Example 38 foo foo foo
###Code
%%html
<h3>foo</h3>
<h2>foo</h2>
<h1>foo</h1>
###Output
_____no_output_____
###Markdown
Example 39 foo
###Code
%%html
<pre><code># foo
</code></pre>
###Output
_____no_output_____
###Markdown
Example 40 foo bar
###Code
%%html
<p>foo
# bar</p>
###Output
_____no_output_____
###Markdown
Example 41 foo bar
###Code
%%html
<h2>foo</h2>
<h3>bar</h3>
###Output
_____no_output_____
###Markdown
Example 42 foo foo
###Code
%%html
<h1>foo</h1>
<h5>foo</h5>
###Output
_____no_output_____
###Markdown
Example 43 foo
###Code
%%html
<h3>foo</h3>
###Output
_____no_output_____
###Markdown
Example 44 foo b
###Code
%%html
<h3>foo ### b</h3>
###Output
_____no_output_____
###Markdown
Example 45 foo
###Code
%%html
<h1>foo#</h1>
###Output
_____no_output_____
###Markdown
Example 46 foo \ foo \ foo \
###Code
%%html
<h3>foo ###</h3>
<h2>foo ###</h2>
<h1>foo #</h1>
###Output
_____no_output_____
###Markdown
Example 47 **** foo****
###Code
%%html
<hr />
<h2>foo</h2>
<hr />
###Output
_____no_output_____
###Markdown
Example 48 Foo bar bazBar foo
###Code
%%html
<p>Foo bar</p>
<h1>baz</h1>
<p>Bar foo</p>
###Output
_____no_output_____
###Markdown
Example 49
###Code
%%html
<h2></h2>
<h1></h1>
<h3></h3>
###Output
_____no_output_____
###Markdown
Example 50 Foo *bar*=========Foo *bar*---------
###Code
%%html
<h1>Foo <em>bar</em></h1>
<h2>Foo <em>bar</em></h2>
###Output
_____no_output_____
###Markdown
Example 51 Foo *barbaz*====
###Code
%%html
<h1>Foo <em>bar
baz</em></h1>
###Output
_____no_output_____
###Markdown
Example 52 Foo-------------------------Foo=
###Code
%%html
<h2>Foo</h2>
<h1>Foo</h1>
###Output
_____no_output_____
###Markdown
Example 53 Foo--- Foo----- Foo ===
###Code
%%html
<h2>Foo</h2>
<h2>Foo</h2>
<h1>Foo</h1>
###Output
_____no_output_____
###Markdown
Example 54 Foo --- Foo---
###Code
%%html
<pre><code>Foo
---
Foo
</code></pre>
<hr />
###Output
_____no_output_____
###Markdown
Example 55 Foo ----
###Code
%%html
<h2>Foo</h2>
###Output
_____no_output_____
###Markdown
Example 56 Foo ---
###Code
%%html
<p>Foo
---</p>
###Output
_____no_output_____
###Markdown
Example 57 Foo= =Foo--- -
###Code
%%html
<p>Foo
= =</p>
<p>Foo</p>
<hr />
###Output
_____no_output_____
###Markdown
Example 58 Foo -----
###Code
%%html
<h2>Foo</h2>
###Output
_____no_output_____
###Markdown
Example 59 Foo\----
###Code
%%html
<h2>Foo\</h2>
###Output
_____no_output_____
###Markdown
Example 60 `Foo----`<a title="a lot---of dashes"/>
###Code
%%html
<h2>`Foo</h2>
<p>`</p>
<h2><a title="a lot</h2>
<p>of dashes"/></p>
###Output
_____no_output_____
###Markdown
Example 61 > Foo---
###Code
%%html
<blockquote>
<p>Foo</p>
</blockquote>
<hr />
###Output
_____no_output_____
###Markdown
Example 62 > foobar===
###Code
%%html
<blockquote>
<p>foo
bar
===</p>
</blockquote>
###Output
_____no_output_____
###Markdown
Example 63 - Foo---
###Code
%%html
<ul>
<li>Foo</li>
</ul>
<hr />
###Output
_____no_output_____
###Markdown
Example 64 FooBar---
###Code
%%html
<h2>Foo
Bar</h2>
###Output
_____no_output_____
###Markdown
Example 65 ---Foo---Bar---Baz
###Code
%%html
<hr />
<h2>Foo</h2>
<h2>Bar</h2>
<p>Baz</p>
###Output
_____no_output_____
###Markdown
Example 66 ====
###Code
%%html
<p>====</p>
###Output
_____no_output_____
###Markdown
Example 67 ------
###Code
%%html
<hr />
<hr />
###Output
_____no_output_____
###Markdown
Example 68 - foo-----
###Code
%%html
<ul>
<li>foo</li>
</ul>
<hr />
###Output
_____no_output_____
###Markdown
Example 69 foo---
###Code
%%html
<pre><code>foo
</code></pre>
<hr />
###Output
_____no_output_____
###Markdown
Example 70 > foo-----
###Code
%%html
<blockquote>
<p>foo</p>
</blockquote>
<hr />
###Output
_____no_output_____
###Markdown
Example 71 \> foo------
###Code
%%html
<h2>> foo</h2>
###Output
_____no_output_____
###Markdown
Example 72 Foobar---baz
###Code
%%html
<p>Foo</p>
<h2>bar</h2>
<p>baz</p>
###Output
_____no_output_____
###Markdown
Example 73 Foobar---baz
###Code
%%html
<p>Foo
bar</p>
<hr />
<p>baz</p>
###Output
_____no_output_____
###Markdown
Example 74 Foobar* * *baz
###Code
%%html
<p>Foo
bar</p>
<hr />
<p>baz</p>
###Output
_____no_output_____
###Markdown
Example 75 Foobar\---baz
###Code
%%html
<p>Foo
bar
---
baz</p>
###Output
_____no_output_____
###Markdown
Example 76 a simple indented code block
###Code
%%html
<pre><code>a simple
indented code block
</code></pre>
###Output
_____no_output_____
###Markdown
Example 77 - foo bar
###Code
%%html
<ul>
<li>
<p>foo</p>
<p>bar</p>
</li>
</ul>
###Output
_____no_output_____
###Markdown
Example 78 1. foo - bar
###Code
%%html
<ol>
<li>
<p>foo</p>
<ul>
<li>bar</li>
</ul>
</li>
</ol>
###Output
_____no_output_____
###Markdown
Example 79 *hi* - one
###Code
%%html
<pre><code><a/>
*hi*
- one
</code></pre>
###Output
_____no_output_____
###Markdown
Example 80 chunk1 chunk2 chunk3
###Code
%%html
<pre><code>chunk1
chunk2
chunk3
</code></pre>
###Output
_____no_output_____
###Markdown
Example 81 chunk1 chunk2
###Code
%%html
<pre><code>chunk1
chunk2
</code></pre>
###Output
_____no_output_____
###Markdown
Example 82 Foo bar
###Code
%%html
<p>Foo
bar</p>
###Output
_____no_output_____
###Markdown
Example 83 foobar
###Code
%%html
<pre><code>foo
</code></pre>
<p>bar</p>
###Output
_____no_output_____
###Markdown
Example 84 Heading fooHeading------ foo----
###Code
%%html
<h1>Heading</h1>
<pre><code>foo
</code></pre>
<h2>Heading</h2>
<pre><code>foo
</code></pre>
<hr />
###Output
_____no_output_____
###Markdown
Example 85 foo bar
###Code
%%html
<pre><code> foo
bar
</code></pre>
###Output
_____no_output_____
###Markdown
Example 86 foo
###Code
%%html
<pre><code>foo
</code></pre>
###Output
_____no_output_____
###Markdown
Example 87 foo
###Code
%%html
<pre><code>foo
</code></pre>
###Output
_____no_output_____
###Markdown
Example 88 ```< >```
###Code
%%html
<pre><code><
>
</code></pre>
###Output
_____no_output_____
###Markdown
Example 89 ~~~< >~~~
###Code
%%html
<pre><code><
>
</code></pre>
###Output
_____no_output_____
###Markdown
Example 90 ```aaa~~~```
###Code
%%html
<pre><code>aaa
~~~
</code></pre>
###Output
_____no_output_____
###Markdown
Example 91 ~~~aaa```~~~
###Code
%%html
<pre><code>aaa
```
</code></pre>
###Output
_____no_output_____
###Markdown
Example 92 ````aaa`````````
###Code
%%html
<pre><code>aaa
```
</code></pre>
###Output
_____no_output_____
###Markdown
Example 93 ~~~~aaa~~~~~~~
###Code
%%html
<pre><code>aaa
~~~
</code></pre>
###Output
_____no_output_____
###Markdown
Example 94 ```
###Code
%%html
<pre><code></code></pre>
###Output
_____no_output_____
###Markdown
Example 95 ````````aaa
###Code
%%html
<pre><code>
```
aaa
</code></pre>
###Output
_____no_output_____
###Markdown
Example 96 > ```> aaabbb
###Code
%%html
<blockquote>
<pre><code>aaa
</code></pre>
</blockquote>
<p>bbb</p>
###Output
_____no_output_____
###Markdown
Example 97 ``` ```
###Code
%%html
<pre><code>
</code></pre>
###Output
_____no_output_____
###Markdown
Example 98 ``````
###Code
%%html
<pre><code></code></pre>
###Output
_____no_output_____
###Markdown
Example 99 ``` aaaaaa```
###Code
%%html
<pre><code>aaa
aaa
</code></pre>
###Output
_____no_output_____
###Markdown
Example 100 ```aaa aaaaaa ```
###Code
%%html
<pre><code>aaa
aaa
aaa
</code></pre>
###Output
_____no_output_____ |
content-analytics/sentiment-analysis.ipynb | ###Markdown
Sentiment AnalysisThis notebook is a tutorial on sentiment analysis models.**Data:** This notebook uses datasets from TF/Keras distribution.
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from matplotlib import pyplot as plt
from matplotlib import colors
from matplotlib import rc, cm
plt.rcParams.update({'pdf.fonttype': 'truetype'})
###Output
_____no_output_____
###Markdown
Sentiment Classification using TransformerWe use a dataset of 25,000 movies reviews from IMDB, labeled by sentiment (positive/negative).
###Code
#
# Load the dataset
#
vocab_size = 20000 # Only consider the top 20k words
maxlen = 200 # Only consider the first 200 words of each movie review
(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size)
print(len(x_train), "Training sequences")
print(len(x_val), "Validation sequences")
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen)
#
# Preview the dataset
#
movie_index = 0
# Retrieve the word index file mapping words to indices
word_index = keras.datasets.imdb.get_word_index()
# Reverse the word index to obtain a dict mapping indices to words
inverted_word_index = dict((i, word) for (word, i) in word_index.items())
# Decode the first sequence in the dataset
decoded_sequence = " ".join(inverted_word_index[i] for i in x_train[movie_index])
sentiment = 'Positive' if y_train[movie_index] == 1 else 'Negative'
print(f'Review [{decoded_sequence}] -> {sentiment}')
#
# Model components
#
class TransformerBlock(layers.Layer):
def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
super(TransformerBlock, self).__init__()
self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
self.ffn = Sequential(
[layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),]
)
self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = layers.Dropout(rate)
self.dropout2 = layers.Dropout(rate)
def call(self, inputs, training):
attn_output = self.att(inputs, inputs)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(inputs + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
return self.layernorm2(out1 + ffn_output)
class TokenAndPositionEmbedding(layers.Layer):
def __init__(self, maxlen, vocab_size, embed_dim):
super(TokenAndPositionEmbedding, self).__init__()
self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
def call(self, x):
maxlen = tf.shape(x)[-1]
positions = tf.range(start=0, limit=maxlen, delta=1)
positions = self.pos_emb(positions)
x = self.token_emb(x)
return x + positions
#
# Model specification
#
embed_dim = 32 # Embedding size for each token
num_heads = 2 # Number of attention heads
ff_dim = 32 # Hidden layer size in feed forward network inside transformer
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
x = embedding_layer(inputs)
print(x)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation="relu")(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(2, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
#
# Model training
#
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(
x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val)
)
###Output
25000 Training sequences
25000 Validation sequences
Review [to have after out atmosphere never more room and it so heart shows to years of every never going and help moments or of every chest visual movie except her was several of enough more with is now current film as you of mine potentially unfortunately of you than him that with out themselves her get for was camp of you movie sometimes movie that with scary but pratfalls to story wonderful that in seeing in character to of 70s musicians with heart had shadows they of here that with her serious to have does when from why what have critics they is you that isn't one will very to as itself with other tricky in of seen over landed for anyone of and br show's to whether from than out themselves history he name half some br of 'n odd was two most of mean for 1 any an boat she he should is thought frog but of script you not while history he heart to real at barrel but when from one bit then have two of script their with her nobody most that with wasn't to with armed acting watch an for with heartfelt film want an] -> Positive
KerasTensor(type_spec=TensorSpec(shape=(None, 200, 32), dtype=tf.float32, name=None), name='token_and_position_embedding_6/add:0', description="created by layer 'token_and_position_embedding_6'")
Epoch 1/2
93/782 [==>...........................] - ETA: 42s - loss: 0.6829 - accuracy: 0.5491
###Markdown
Sentiment AnalysisThis notebook is a tutorial on sentiment analysis. We develop a model that predicts the sentiment of a movie review. The implementation is based on [1]. DataThis notebook uses the IMDB dataset from the TF/Keras distribution. References1. https://keras.io/examples/nlp/text_classification_with_transformer/
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from matplotlib import pyplot as plt
from matplotlib import colors
from matplotlib import rc, cm
plt.rcParams.update({'pdf.fonttype': 'truetype'})
###Output
_____no_output_____
###Markdown
Sentiment Classification using TransformerWe use a dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative).
###Code
#
# Load the dataset
#
vocab_size = 20000 # Only consider the top 20k words
maxlen = 200 # Only consider the first 200 words of each movie review
(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size)
print(len(x_train), "Training sequences")
print(len(x_val), "Validation sequences")
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen)
#
# Preview the dataset
#
movie_index = 0
# Retrieve the word index file mapping words to indices
word_index = keras.datasets.imdb.get_word_index()
# Reverse the word index to obtain a dict mapping indices to words
inverted_word_index = dict((i, word) for (word, i) in word_index.items())
# Decode the first sequence in the dataset
decoded_sequence = " ".join(inverted_word_index[i] for i in x_train[movie_index])
sentiment = 'Positive' if y_train[movie_index] == 1 else 'Negative'
print(f'Review [{decoded_sequence}] -> {sentiment}')
#
# Model components
#
class TransformerBlock(layers.Layer):
def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
super(TransformerBlock, self).__init__()
self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
self.ffn = Sequential(
[layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),]
)
self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = layers.Dropout(rate)
self.dropout2 = layers.Dropout(rate)
def call(self, inputs, training):
attn_output = self.att(inputs, inputs)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(inputs + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
return self.layernorm2(out1 + ffn_output)
class TokenAndPositionEmbedding(layers.Layer):
def __init__(self, maxlen, vocab_size, embed_dim):
super(TokenAndPositionEmbedding, self).__init__()
self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
def call(self, x):
maxlen = tf.shape(x)[-1]
positions = tf.range(start=0, limit=maxlen, delta=1)
positions = self.pos_emb(positions)
x = self.token_emb(x)
return x + positions
#
# Model specification
#
embed_dim = 32 # Embedding size for each token
num_heads = 2 # Number of attention heads
ff_dim = 32 # Hidden layer size in feed forward network inside transformer
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
x = embedding_layer(inputs)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation="relu")(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(2, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
#
# Model training
#
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(
x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val)
)
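#
# Optional: visualize the training curves (a small sketch; the history key
# names assume TF2's default metric naming for metrics=["accuracy"]).
#
plt.figure(figsize=(6, 4))
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()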
#
# Test scoring
#
x = x_train[movie_index]
y = y_train[movie_index]
class_p = model.predict(np.atleast_2d(x))
decoded_sequence = " ".join(inverted_word_index[i] for i in x)
true_sentiment = 'Positive' if y == 1 else 'Negative'
print(f'Review [{decoded_sequence}] -> {class_p} ({true_sentiment})')
###Output
Review [to have after out atmosphere never more room and it so heart shows to years of every never going and help moments or of every chest visual movie except her was several of enough more with is now current film as you of mine potentially unfortunately of you than him that with out themselves her get for was camp of you movie sometimes movie that with scary but pratfalls to story wonderful that in seeing in character to of 70s musicians with heart had shadows they of here that with her serious to have does when from why what have critics they is you that isn't one will very to as itself with other tricky in of seen over landed for anyone of and br show's to whether from than out themselves history he name half some br of 'n odd was two most of mean for 1 any an boat she he should is thought frog but of script you not while history he heart to real at barrel but when from one bit then have two of script their with her nobody most that with wasn't to with armed acting watch an for with heartfelt film want an] -> [[0.00153198 0.998468 ]] (Positive)
|
assigment 1/Assignment 1.ipynb | ###Markdown
Assignment 1 COMP 652: Machine Learning Samin Yeasar Arnob McGill ID: 260800927
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. regression, overfitting and regularization Answer 1(a)
###Code
df1 = pd.read_csv('hw1-q1x.csv',delim_whitespace =True,header=None,dtype=np.float)
df2 = pd.read_csv('hw1-q1y.csv',delim_whitespace =True,header=None,dtype=np.float)
Data=pd.concat([df1,df2],axis=1)
Data.columns=['X1','X2','Y']
no_data = df1.shape[0]
#Data centering
Cen_Data = Data - Data.mean(axis=0) #axis=0 means row wise, rows are example here
# splitting training-testing data
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(Cen_Data, test_size=0.2,random_state=34) #randomly split train-test
feat_no = train_data.shape[1] - 1 # last column is for labels
Xm_train = train_data.iloc[:,:feat_no].values
y_train = train_data.iloc[:,feat_no:].values
Xm_test = test_data.iloc[:,:feat_no].values
y_test = test_data.iloc[:,feat_no:].values
print(Xm_train.shape)
print(Xm_test.shape)
###Output
(80, 2)
(20, 2)
###Markdown
Report:* I have centered the data, which basically removes the means from the inputs and outputs: $\{(x_i-\overline{x},y_i-\overline{y})\}^m_{i=1}$* As centering makes the bias term zero, I didn't add a bias term. Answer 1(b)
###Code
# manual code to compute W
def computeW (Xm_train,y_train,feat_no,lmbda):
a = np.dot(Xm_train.T,Xm_train) + lmbda*np.eye(feat_no)
a = np.linalg.pinv(a)
W = np.dot(a, np.dot(Xm_train.T,y_train))
return W
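# Sanity check (sketch): for centred data the closed form above should match
# sklearn's Ridge with fit_intercept=False at the same lambda.
from sklearn.linear_model import Ridge
w_manual = computeW(Xm_train, y_train, feat_no, 1.0)
w_ridge = Ridge(alpha=1.0, fit_intercept=False).fit(Xm_train, y_train).coef_.reshape(-1, 1)
print(np.allclose(w_manual, w_ridge))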
# create lambda parameter vector
lmbda = []
l = 0.001
while l <= 10**1:
l = l*10
if l==0 :
break
else :
lmbda = np.append(lmbda, l)
lmbda = lmbda.reshape(1,-1)
print('lmbda:',lmbda)
# create lambda parameter vector
lmbda = np.array([])
l = 0.01
while l <= 10**5:
lmbda = np.append(lmbda, l)
l = l*10
lmbda = lmbda.reshape(1,-1)
print('lmbda:',lmbda)
lmbda[0,0]
# lmbda has to be numpy array
def RMSEresult(Xm_train,y_train,Xm_test,y_test,lmbda,norm):
from sklearn.metrics import mean_squared_error
from sklearn import datasets, linear_model
feat_no = Xm_train.shape[1]
#W_vec = np.zeros( ( 2,len(lmbda) ) )
W_vec = pd.DataFrame( np.zeros(( feat_no,lmbda.shape[1] )) )
RMSE_traindata = []
RMSE_testdata = []
Hypothesis_train = {}
Hypothesis_test = {}
for i in range (lmbda.shape[1]):
##### manual code to compute W ########
#W = computeW (Xm_train,y_train,feat_no,lmbda[i])
#W_vec[i] = W
#Hypothesis_train = np.dot(Xm_train,W)
#Hypothesis_test = np.dot(Xm_test,W)
##### using built-in model ######
if norm == 'l2':
clf = linear_model.Ridge(alpha=lmbda[0,i])
if norm == 'l1' :
clf = linear_model.Lasso(alpha=lmbda[0,i])
clf.fit(Xm_train, y_train)
H_train = clf.predict (Xm_train)
H_test = clf.predict (Xm_test)
Hypothesis_train['lmbda_%d'%i] = H_train.reshape(-1,1)
Hypothesis_test['lmbda_%d'%i] = H_test.reshape(-1,1)
W=np.reshape(clf.coef_,(-1,1))
W_vec[i] = W
RMSE_traindata = np.append ( RMSE_traindata , np.sqrt( mean_squared_error(y_train, H_train) ) )
RMSE_testdata = np.append ( RMSE_testdata , np.sqrt( mean_squared_error(y_test, H_test) ) )
return (RMSE_traindata,RMSE_testdata, W_vec,Hypothesis_train,Hypothesis_test)
RMSE_traindata, RMSE_testdata,W_vec,Hypothesis_train,Hypothesis_test = RMSEresult(Xm_train,y_train,Xm_test,y_test,lmbda,'l2')
# Graph1: RMSE for training data and testing data
#plt.loglog(lmbda,RMSE_testdata,label='testing data')
#plt.loglog(lmbda,RMSE_traindata,label='training data')
plt.semilogx(RMSE_traindata,label='training data')
plt.semilogx(RMSE_testdata,label='testing data')
plt.xlabel(' lambda parameter ')
plt.ylabel('RMSE')
plt.legend()
#plt.savefig('D:\\University materials\\Winter 2018\\Applied ML\\winter 2018\\Assignments\\Kaggle\\my codes\\train_validation_H3.png')
plt.show()
# Graph2: L2 norm of weight vectors
W_L2norm = np.sum(W_vec**2,axis=0)
plt.semilogx( W_L2norm.values )
plt.xlabel ('lambda parameter')
plt.ylabel ('L2 norm of Weight vector')
plt.show()
#Graph3: Actual values of weight obtained
plt.semilogx( W_vec.loc[[0], : ].values[0] ,label = 'W1')
plt.semilogx( W_vec.loc[[1], : ].values[0] ,label = 'W2')
plt.xlabel ('lambda parameter')
plt.ylabel ('Weight values')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Report:* For lambda, $\lambda=0$ means no regularization. * We see that at $\lambda=0$ we get the minimum training error, and as we increase $\lambda$ the training error increases; for the testing data, the error decreases as we increase $\lambda$ and starts to increase after $\lambda=10$. Thus at $\lambda=0$, when no regularization was used, the training data was overfitted.
###Code
print('Training data gets least RMSE for lambda:',lmbda[0,np.argwhere(RMSE_traindata == min(RMSE_traindata)) ])
print('Testing data gets least RMSE for lambda:',lmbda[0,np.argwhere(RMSE_testdata == min(RMSE_testdata)) ])
###Output
Training data gets least RMSE for lambda: [[0.01]]
Testing data gets least RMSE for lambda: [[0.01]]
###Markdown
Answer 1(c)
###Code
def ComputeValid_VS_lmbda(Xm_train,y_train,kfold,lmbda,norm):
dlen = Xm_train.shape[0]
inc = np.int(dlen/kfold)
RMSE_traindata = np.zeros([kfold,lmbda.shape[1]])
RMSE_validdata = np.zeros([kfold,lmbda.shape[1]])
Hypothesis_valid = []
Hypothesis_train = []
valid_score = np.zeros([])
for i in range (kfold):
# mask the data to be used as validation
#############################################################################
msk = np.isin (np.linspace(1,dlen,dlen), np.linspace(1+inc*i,inc+inc*i,inc))
#############################################################################
train_X = Xm_train[~msk]
train_y = y_train[~msk]
valid_X = Xm_train[msk]
valid_y = y_train[msk]
feat_on = train_X.shape[1]
RMSE_traindata[i,:], RMSE_validdata[i,:], W_vec , h_train , h_valid = RMSEresult(train_X,train_y,valid_X,valid_y,lmbda,norm)
#Hypothesis_valid.update(h_valid)
#Hypothesis_train.update(h_train)
# for k in range (len(h_train)):
# Hypothesis_valid['lmbda_%d'%k].append (h_valid['lmbda_%d'%k])
# Hypothesis_train['lmbda_%d'%k].append (h_train['lmbda_%d'%k])
Hypothesis_valid.append ( h_valid)
Hypothesis_train.append ( h_valid)
#Hypothesis_train = np.append ( Hypothesis_train , h_train)
#train_score = np.append(train_score ,RMSE_traindata)
#valid_score = valid_score + RMSE_validdata
result = np.zeros((y_train.shape[0],lmbda.shape[1]))
result = np.matrix(result)
for k in range(lmbda.shape[1]):
r = 0
for m in range(kfold):
result[m*inc:inc+r,k] = Hypothesis_valid[m]['lmbda_%d'%k]
r+=inc
#valid_score = np.mean(RMSE_validdata,axis=0)
#print('Mean validation score for different lambda',valid_score)
#return RMSE_traindata , RMSE_validdata , Hypothesis_train.reshape(dlen-np.int(inc),kfold), Hypothesis_valid.reshape(np.int(inc),kfold)
return RMSE_traindata , RMSE_validdata , result
kfold=5
RMSE_traindata, RMSE_validdata,result = ComputeValid_VS_lmbda(Xm_train,y_train,kfold,lmbda,norm='l1')
####################################################
# plot RMSE for 5 fold validation data
####################################################
for i in range (RMSE_validdata.shape[0]):
plt.plot(RMSE_validdata[i,:],label='valid data batch %.1f' %(i+1) )
plt.xlabel(' lambda parameter ')
plt.ylabel('RMSE')
plt.legend()
plt.show()
################################################
#plot RMSE for 5 fold training data
################################################
for i in range (RMSE_traindata.shape[0]):
plt.plot(RMSE_traindata[i,:],label='train data batch %.1f' %(i+1) )
plt.xlabel(' lambda parameter ')
plt.ylabel('RMSE')
plt.legend()
plt.show()
valid_score = np.mean(RMSE_validdata,axis=0)
print('Average Validation RMSE Score',valid_score)
for i in range (RMSE_traindata.shape[0]):
print('Best Value of Lambda for Training data batch %d = ' %i, lmbda[ 0, np.argmin(RMSE_traindata[i,:]) ] )
for i in range (RMSE_validdata.shape[0]):
print('Best Value of Lambda for Validation data batch %d = ' %i, lmbda[ 0, np.argmin(RMSE_validdata[i,:]) ] )
###Output
Best Value of Lambda for Validation data batch 0 = 0.01
Best Value of Lambda for Validation data batch 1 = 10.0
Best Value of Lambda for Validation data batch 2 = 10.0
Best Value of Lambda for Validation data batch 3 = 10.0
Best Value of Lambda for Validation data batch 4 = 0.01
###Markdown
Report:* The best value of lambda for the training data is the same as in 1(b), but it varies across the different validation batches. Answer 1(d)
###Code
y_train_sorted = np.sort(y_train,axis=0)
msk = y_train.argsort(axis=0)
Xm_train_sorted = Xm_train[msk] # dimension of Xm_sort becomes (80,1,2)
Xm_train_sorted = Xm_train_sorted.reshape(80,2) # we correct the dimension
kfold=5
sorted_RMSE_traindata, sorted_RMSE_validdata , _ = ComputeValid_VS_lmbda(Xm_train_sorted,y_train_sorted,kfold,lmbda,'l2')
################################################
#plot RMSE for 5 fold training data
################################################
for i in range (sorted_RMSE_traindata.shape[0]):
plt.plot(sorted_RMSE_traindata[i,:],label='train data batch %.1f' %(i+1) )
plt.xlabel(' lambda parameter ')
plt.ylabel('RMSE')
plt.legend()
plt.show()
####################################################
# plot RMSE for 5 fold validation data
####################################################
for i in range (sorted_RMSE_validdata.shape[0]):
plt.plot(sorted_RMSE_validdata[i,:],label='valid data batch %.1f' %(i+1) )
plt.xlabel(' lambda parameter ')
plt.ylabel('RMSE')
plt.legend()
plt.show()
for i in range (sorted_RMSE_traindata.shape[0]):
print('Best Value of Lambda for Training data batch %d = ' %i, lmbda[ 0, np.argmin(sorted_RMSE_traindata[i,:]) ] )
for i in range (sorted_RMSE_validdata.shape[0]):
print('Best Value of Lambda for Validation data batch %d = ' %i, lmbda[ 0, np.argmin(sorted_RMSE_validdata[i,:]) ] )
###Output
Best Value of Lambda for Validation data batch 0 = 0.01
Best Value of Lambda for Validation data batch 1 = 100.0
Best Value of Lambda for Validation data batch 2 = 10000.0
Best Value of Lambda for Validation data batch 3 = 100.0
Best Value of Lambda for Validation data batch 4 = 0.01
###Markdown
Report * From the training and validation output it is clear that the model was most overfitted on the first batch: for batch 1, training gives the lowest RMSE while validation gives the highest RMSE across the lambda range.* So we can say that when the data X take low values they are more scattered (have higher variance). Answer 1(e)
###Code
# given
mu=np.linspace(-1,1,5).reshape(1,-1)
print(mu)
print(mu.shape)
# given
sigma_sqr = np.array([0.1,0.5,1,5]).reshape(1,-1)
###Output
_____no_output_____
###Markdown
$ \phi_j(X) = \exp\left(-\frac{(X-\mu_j)^2}{\sigma^2}\right)$* $\mu_j$ controls the position along the x-axis and varies over (-1,1). * $\sigma$ controls the width/spacing
###Code
np.tile(mu,Xm_train.shape[1])
def gaussian_basis (X,mu,sigma_sqr):
x = X.repeat(mu.shape[1],axis=1)
mu = np.tile(mu,X.shape[1])
psi = np.exp ( (-1)*( (x-mu)**2/(sigma_sqr) ) )# or X[:,np.newaxis] - w
return psi
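# Quick shape check (sketch): each original feature is expanded with one column
# per basis centre, so 2 features x 5 centres should give 10 columns.
phi_example = gaussian_basis(Xm_train, mu, sigma_sqr[0, 0])
print(phi_example.shape)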
###Output
_____no_output_____
###Markdown
Answer 1(f)
###Code
# no regularization
lmbda = np.array([0])
lmbda = lmbda.reshape(1,-1)
print(lmbda.shape)
RMSE_GaussBasis_train = []
RMSE_GaussBasis_test = []
W_vec = {}
for l in range (sigma_sqr.shape[1]):
phi_train = gaussian_basis (Xm_train,mu,sigma_sqr[0,l])
phi_test = gaussian_basis (Xm_test,mu,sigma_sqr[0,l])
train, test,W_vec[str(sigma_sqr[0,l])],_,_ = RMSEresult(phi_train,y_train,phi_test,y_test,lmbda,norm='l2')
RMSE_GaussBasis_train = np.append(RMSE_GaussBasis_train,train)
RMSE_GaussBasis_test = np.append(RMSE_GaussBasis_test,test)
RMSE_GaussBasis_train = RMSE_GaussBasis_train
RMSE_GaussBasis_test = RMSE_GaussBasis_test
RMSE_traindata, RMSE_testdata,W_vec , _ ,_ = RMSEresult(Xm_train,y_train,Xm_test,y_test,lmbda,norm='l2')
'''
RMSE_GaussBasis_train = {}
RMSE_GaussBasis_test = {}
W_vec = {}
for l in range (sigma_sqr.shape[1]):
phi_train = gaussian_basis (Xm_train,mu,sigma_sqr[0,l])
phi_test = gaussian_basis (Xm_test,mu,sigma_sqr[0,l])
RMSE_GaussBasis_train[str(sigma_sqr[0,l])], RMSE_GaussBasis_test[str(sigma_sqr[0,l])],W_vec[str(sigma_sqr[0,l])] = RMSEresult(phi_train,y_train,phi_test,y_test,lmbda)
#RMSE_validdata[str(l)] = ComputeValid_VS_lmbda(phi_train,y_train,kfold,lmbda)
'''
plt.plot(RMSE_GaussBasis_train,label='Train error for gaussina basis')
plt.plot(RMSE_GaussBasis_test,label='Test error for gaussina basis')
plt.plot(RMSE_traindata[0].repeat(len(sigma_sqr[0])),label='Train error for 1(b)')
plt.plot(RMSE_testdata[0].repeat(len(sigma_sqr[0])),label='Test error for 1(b)')
plt.xlabel(' sigma^2 ')
plt.ylabel('RMSE')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Report* The two constant lines drawn are the Train and test RMSE ($\lambda$=0, no regularization) calculated at 1(b).* more complex (fits well = higher $\sigma^2$) == > high variance * simple (doesn't fit well ) == > biased solution
###Code
print('Higher variance as we increse sigma^2',RMSE_GaussBasis_test - RMSE_GaussBasis_train )
###Output
Higher variance as we increse sigma^2 [-0.22235882 -0.75606063 0.74072019 0.46067255]
###Markdown
Answer 1(g) In order to derive a learning algorithm that computes both the placement of the mean $\mu$of the basis function, we need to update the weight $w$ and $\mu$ iteratively so as to performingtwo gradient descent algorithms. The cost function that we need to minimize withrespect to $\mu$ is derived as follows. Considering the cost function of logistic regression,\begin{align} J(W) = & - \frac{1}{m} \Bigg[ \sum_1^m y_i log(h_i) + (1-y_i) log(1-h_i) \Bigg]\end{align}The derivative of the cost function J (w) with respect to is calculated as follows. \begin{align} \frac{\partial J}{\partial \mu} = & - \frac{1}{m} \Bigg[ \sum_1^m y_i w \frac{\partial \phi(x_i)}{\partial \mu} + \frac{(1-y_i)}{1-h} (-w) \frac{\partial \phi(x_i)}{\partial \mu} \Bigg] \\\end{align}Now,\begin{align} \frac{\partial \phi(x)}{\partial \mu} & = \frac{\partial e^{-\frac{(x-\mu)^2}{2\sigma^2}} } {\partial \mu} \nonumber \\ & = e^{-\frac{(x-\mu)^2}{2\sigma^2}} \times 2 \times \frac{\partial -\frac{(x-\mu)}{2\sigma^2} } {\partial \mu} \nonumber \\ & = \frac {e^{-\frac{(x-\mu)^2}{2\sigma^2}} }{\sigma^2} \nonumber \\ & = \frac{\phi(x)}{\sigma^2}\end{align}let $L_i = \frac{\partial \phi(x_i)}{\partial \mu} $\begin{align} \frac{\partial J}{\partial \mu} = & - \frac{1}{m} \Bigg[ \sum_1^m y_i w \frac{\partial \phi(x_i)}{\partial \mu} + \frac{(1-y_i)}{1-h} (-w) \frac{\partial \phi(x_i)}{\partial \mu} \Bigg] \nonumber \\ = & - \frac{1}{m} \Bigg[ \sum_1^m \frac{y_i}{h_i} w L_i - \frac{(1-y_i)}{1-h} w L_i \Bigg] \nonumber \\ = & - \frac{1}{m} \Bigg[ \sum_1^m (h_i-y_i) w L_i \Bigg] \nonumber \\ = & - \frac{1}{m} \Bigg[ \sum_1^m (h_i-y_i) w \frac{\partial \phi(x_i)}{\partial \mu} \Bigg] \end{align}$\alpha$ = stepsize to update w $\beta$ = stepsize to update $\mu$Repeat until convergence:\{\begin{align} w_i =: & w_i - \alpha \frac{\partial J(w)}{\partial w_i} \\ \mu_i =: & \mu_i + \beta \frac{1}{m} \Bigg[ \sum_1^m (h_i-y_i) w \frac{\partial \phi(x_i)}{\partial \mu} \Bigg]\end{align}\} Answer 1(h) Report:The algorithm derived is iterative and updates the weight vector as well as the value of$\mu$ simultaneously. The algorithm can converge to a local minima but not on the globalminima since we are adjusting the weight vector and the mean vector simultaneously. Asa result the it would be quite difficult to optimise the cost function for both the parametersresulting in a convergence at the local minima but not the global minima. Answer 1(i)
###Code
Xnew_train = Xm_train[:,1].reshape(-1,1)
Xnew_test = Xm_test[:,1].reshape(-1,1)
# includes 1 bias for each example data
def getfeaturematrix (X,polynomial):
Xm = []
for i in range(0,polynomial+1):
a = np.power(X,i)
Xm.append(a)
Xm = np.array(Xm)
Xm = np.squeeze(Xm, axis=(2,)).T
return Xm
###Output
_____no_output_____
###Markdown
Approach 1
###Code
d = np.array( [ [1,2,3,5,9]] )
valid_score = []
# set lambda to zero for "no regularization"
lmbda = np.array([0])
lmbda = lmbda.reshape(1,-1)
kfold=5
for i in range(d.shape[1]):
polynomial = d[0,i]
Xp_train = getfeaturematrix (Xnew_train,polynomial)
Xp_valid = getfeaturematrix (Xnew_test,polynomial)
RMSE_polytraindata, RMSE_polyvaliddata, result= ComputeValid_VS_lmbda(Xp_train,y_train,kfold,lmbda,norm='l2')
valid_score = np.append( valid_score, np.mean(RMSE_polyvaliddata,axis=0))
plt.figure()
plt.plot(Xnew_train,result,'*',label='prediction for d = %s'%d[0,i])
plt.plot(Xnew_train,y_train,'o',label='y')
plt.xlabel('X')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Report: the best value of degree $d \in {1,2,3,5,9}$ for non-regularized linear regression
###Code
print('best value of degree,d :', d[0,valid_score.argmin()])
# create lambda parameter vector
lmbda = []
l = 0.01
while l <= 10**1:
if l==0 :
break
else :
lmbda = np.append(lmbda, l)
l = l*10
lmbda = lmbda.reshape(1,-1)
print('lmbda:',lmbda)
d = np.array( [ [9]] )
d.shape
d[0,0]
d = np.array( [ [9]] )
polynomial = d[0,0]
Xp_train = getfeaturematrix (Xnew_train,polynomial)
Xp_test = getfeaturematrix (Xnew_test,polynomial)
kfold=5
###Output
_____no_output_____
###Markdown
Approach 2
###Code
RMSE_polytraindata, RMSE_polyvaliddata, result = ComputeValid_VS_lmbda(Xp_train,y_train,kfold,lmbda,'l2')
###Output
_____no_output_____
###Markdown
Report: best $\lambda$ value using five-fold cross-validation for d=9 and an L2 regularizer
###Code
lmbda[0, RMSE_polytraindata.mean(0).argmin()]
x_train_sorted = np.sort(Xnew_train,axis=0)
msk = Xnew_train.argsort(axis=0)#.repeat(lmbda.shape[1],axis=1)
result = result[msk].reshape(80,4) # dimension of Xm_sort becomes (80,1,2)
plt.plot(Xnew_train,y_train,'<',label='y')
for i in range (lmbda.shape[1]):
#plt.figure(1%1)
plt.plot(x_train_sorted,result[:,i],label='prediction for lambda %s'%lmbda[0,i])
plt.xlabel('X')
#plt.plot(x_train_sorted,y_train,'<',label='y')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Approach 3
###Code
RMSE_polytraindata, RMSE_polyvaliddata, result = ComputeValid_VS_lmbda(Xp_train,y_train,kfold,lmbda,'l1')
###Output
/home/samin/environments/py36_env/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
###Markdown
Report: best $\lambda$ value using five-fold cross-validation for d=9 and an L1 regularizer
###Code
lmbda[0, RMSE_polytraindata.mean(0).argmin()]
x_train_sorted = np.sort(Xnew_train,axis=0)
msk = Xnew_train.argsort(axis=0)#.repeat(lmbda.shape[1],axis=1)
result = result[msk].reshape(80,4) # dimension of Xm_sort becomes (80,1,2)
plt.plot(Xnew_train,y_train,'<',label='y')
for i in range (lmbda.shape[1]):
#plt.figure()
plt.plot(x_train_sorted,result[:,i],label='prediction for lambda %s'%lmbda[0,i])
plt.xlabel('X')
#plt.plot(x_train_sorted,y_train,'<',label='y')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Test error for optimal parameters for 3 approaches Approach 1 * No regularization $\lambda$=0 * As we got best result for d = 2
###Code
d = np.array( [ [2]] )
polynomial = d[0,0]
Xp_train = getfeaturematrix (Xnew_train,polynomial)
Xp_test = getfeaturematrix (Xnew_test,polynomial)
# set lambda to zero for "no regularization"
lmbda = np.array([0])
lmbda = lmbda.reshape(1,-1)
# Test error for optimal parameters
W_vec1 = []
_ , error_ap1 ,W_vec1,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l2')
error_ap1
###Output
_____no_output_____
###Markdown
Approach 2* Given: d=9 and an L2 regularizer * best lambda = 0.01
###Code
d = np.array( [ [9]] )
polynomial = d[0,0]
Xp_train = getfeaturematrix (Xnew_train,polynomial)
Xp_test = getfeaturematrix (Xnew_test,polynomial)
lmbda = np.array([0.01]).reshape(1,-1)
lmbda.shape
_ , error_ap2 ,W_vec2,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l2')
error_ap2
###Output
_____no_output_____
###Markdown
Approach 3* Given: d=9 and an L1 regularizer * best lambda = 0.1
###Code
lmbda = np.array([0.1]).reshape(1,-1)
lmbda.shape
_ , error_ap3 ,W_vec3,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l1')
error_ap3
###Output
_____no_output_____
###Markdown
Report: weight associated with each polynomial degree for L1 and L2 regression* As the value of lambda is not specified, we use the optimal lambda values for L1 and L2 found in the last experiment
###Code
d = np.array( [ [1,2,3,5,9]] )
lmbda = np.array([0.01]).reshape(1,-1)
for i in range(d.shape[1]):
polynomial = d[0,i]
Xp_train = getfeaturematrix (Xnew_train,polynomial)
Xp_test = getfeaturematrix (Xnew_test,polynomial)
lmbda = np.array([0.01]).reshape(1,-1)
_ , _ ,W_vec2,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l2')
lmbda = np.array([0.1]).reshape(1,-1)
_ , _ ,W_vec3,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l1')
#print(W_vec2)
plt.figure()
plt.title ('weights for polynomial d=%d'%polynomial)
plt.ylabel('weights')
plt.plot(W_vec2,label='L2 regression')
plt.plot(W_vec3,label='L1 regression')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Report* For $X'^0 = 1$ the bias term is set to zero for both L1 and L2 regression* For lower polynomial degrees the weights are similar, but for degree 9 the weights differ* At d = 9, L1 regression penalizes the higher-degree polynomial terms of the data X' more strongly Compare the result with no regularization at the optimal degree d = 2
###Code
polynomial = 2
Xp_train = getfeaturematrix (Xnew_train,polynomial)
Xp_test = getfeaturematrix (Xnew_test,polynomial)
lmbda = np.array([0.01]).reshape(1,-1)
_ , _ ,W_vec2,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l2')
lmbda = np.array([0.1]).reshape(1,-1)
_ , _ ,W_vec3,_,_ = RMSEresult(Xp_train,y_train,Xp_test,y_test,lmbda,'l1')
plt.plot(W_vec1,label='optimal polynomial')
plt.plot(W_vec2,label = 'l2 optimal lambda')
plt.plot(W_vec3,label = 'l1 optimal lambda')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Report* Weights are similar for the optimal degree d and for L1, L2 regression with optimal lambda* So no regularizer is needed for d=2 2. Maximum likelihood
###Code
from scipy.stats import beta
#plot posterior density
a = 50
b = 50
theta = np.arange(0.01,1,0.01)
y = beta.pdf (theta,a,b)
plt.plot(theta,y)
plt.title('Beta: a=%.1f .b=%.1f' % (a,b))
plt.xlabel('$\\theta$')
plt.ylabel('Probability density')
plt.show()
###Output
_____no_output_____
###Markdown
posterior mean $\mathbb{E}[\theta] = \frac{\alpha}{\alpha+\beta}$
###Code
mean = beta.stats(a,b, moments='m')
print('Posterior Mean',mean)
###Output
Posterior Mean 0.5
###Markdown
* For infinitely large data sets Bayesian and maximum likelihood results will agree. * For finite data sets, the posterior mean for $\theta$ always lies between the prior mean and the maximum likelihood estimate of $\theta$. Thus, the maximum likelihood estimator is a good summary of the distribution Derive maximum likelihood estimate of W $y_i = h_w(x_i) +\epsilon_i$ where $\epsilon_i \sim N (0,\sigma^2_i) $ We can say $y$ follows a normal distribution with mean $XW$ and variance $\sigma^2$$$y \sim N(XW,\sigma^2)$$\begin{align} L(XW,\sigma^2 |x) =& (\frac{1}{2\pi\sigma^2})^{\frac{N}{2}} e^{ - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (y_i - x_i w)^2 }\\ = & (\frac{1}{2\pi\sigma^2})^{\frac{N}{2}} e^{ - \frac{1}{2\sigma^2} (Y-XW)^T(Y-XW) } \\ ln ( L(XW,\sigma^2 |x) ) = & - \frac{N}{2} ln(2\pi) - \frac{N}{2} ln(\sigma^2) - \frac{1}{2\sigma^2} (Y-XW)^T(Y-XW) \nonumber \\ \frac{\partial ln ( L(XW,\sigma^2 |x) ) }{\partial w} = & -\frac{1}{2\sigma^2} \frac{\partial (Y-XW)^T(Y-XW)}{\partial w} \nonumber \end{align}Optimal value for W\begin{align} \frac{\partial ln ( L(XW,\sigma^2 |x) ) }{\partial w} = & 0 \nonumber \\ \frac{1}{2\sigma^2} \frac{\partial (Y-XW)^T(Y-XW)}{\partial w} = & 0 \nonumber \\ X^T X W = & X^T Y \nonumber \\ W = & (X^TX)^{-1} X^T Y \end{align} 3. Bayesian analysis and a biased coin Let the unknown bias be $\theta$: $P(x = H|\theta) = \theta$, thus $P(x = T|\theta) = 1 - \theta$. The probability distribution over $x$ can be written as$$ bern(x|\theta) = \theta^x (1-\theta)^{1-x} $$$D = \{x_1,x_2, \dots, x_N\}$ We can construct a likelihood function which is a function of $\theta$\begin{align} P(D|\theta) = & \prod_{n=1}^N P(x_n|\theta) = \prod_{n=1}^N \theta^{x_n} (1-\theta)^{1-x_n} \\ ln (P(D|\theta)) = & \sum_{n=1}^N ln (P(x_n|\theta)) = \sum_{n=1}^N \left[ x_n ln(\theta) +(1-x_n) ln(1-\theta) \right] \nonumber \\ \frac{\partial ln (P(D|\theta))}{\partial \theta} = & \sum_{n=1}^N \left[ \frac{x_n}{\theta} - \frac{1-x_n}{1-\theta} \right] = 0 \nonumber \\ \theta_{ML} = & \frac{1}{N} \sum_{n=1}^N x_n\end{align}Now the outcomes are $\{ H , H , H\}$, equivalent to $ x_n = \{1, 1, 1\}$. Thus\begin{align} \theta_{ML} = & \frac{1}{3} \times [1+1+1] \nonumber \\ = & 1\end{align} Report: This is not a good estimator. We don't have enough data to make a good prediction; as the dataset is very small, the model overfits to the observed samples.
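As a quick numerical illustration (an added sketch, reusing the Beta(a=50, b=50) prior from above), we can compare the maximum-likelihood estimate with the Beta posterior mean after observing $\{H, H, H\}$:
###Code
import numpy as np
# Observed data: three heads, encoded as 1s
D = np.array([1, 1, 1])
# Maximum-likelihood estimate: the sample mean of the outcomes
theta_ml = D.mean()
# With a Beta(a, b) prior, the posterior after Bernoulli observations is
# Beta(a + number of heads, b + number of tails), so its mean stays near 0.5
a, b = 50, 50
a_post = a + D.sum()
b_post = b + len(D) - D.sum()
print('theta_ML =', theta_ml)
print('posterior mean =', a_post / (a_post + b_post))
###Output
_____no_output_____
###Markdown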
4. Multivariate Regression Answer 4(a) input-output training data $ \{(x_i,y_i) \}^m_{i=1} $ where each $x_i$ is a $d$-dimensional vector and each $y_i$ is a $p$-dimensional vector, meaning for each example of features $x_i$ we get $p$ possible outputs. Thus $X$ is $m\times d $ dimensional and $Y$ is $m \times p$ dimensional. As we know $$Y=\phi(X) W$$Assuming $\phi(X)$ keeps the same dimension as $X$, $W$ would be $d \times p$ dimensional. For the single-output case, the squared error loss function was\begin{align} J(W) = & \sum_{i=1}^m || W^T x_i - y_i ||^2_2 \\ J(W) = & || X W - y ||^2_2 \\ \bigtriangledown_W J= & \bigtriangledown_W \Big[ (XW-y)^T (XW-y) \Big]\end{align}where $W$ was $d \times 1$ dimensional. But now, for multiple outputs $y_i$, assuming the outputs are independent of any prior, the squared error loss function for each of the $p$ possible outputs will be:\begin{align} J(W_p) = & || X W_p - y_p ||^2_2 \\ \bigtriangledown_{W_p} J= & \bigtriangledown_{W_p} \Big[ (XW_p-y_p)^T (XW_p-y_p) \Big] \\ = & X^TX W_p -X^T y_p = 0 \\ thus, \, W_p = & (X^TX)^{-1} X^T y_p \end{align} Answer 4(b) From the proof above we can say that if we independently solve $p$ classical linear regression problems and obtain the $W$ parameter $p$ times, we solve the multivariate regression task. In this problem we assumed the outputs are independent of any prior, but in a Bayesian approximation where we have a prior on the output, for multivariate output we need to consider all the output priors for future $W$ parameter updates and thus cannot calculate the posterior output independently. Consider the noisy observation $y = f(x) + \epsilon = \phi(x)^T w + \epsilon$; according to Bayes' rule: $ P_\phi(w|y,X) = \frac{ P_\phi(y|X,w)P(w) }{P_\phi(y|X) } $
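As a quick numerical check of 4(b) (an added sketch, not part of the original derivation), we can verify with NumPy that solving the least-squares problem jointly for a multi-output $Y$ gives the same $W$ as solving the $p$ single-output regressions column by column:
###Code
import numpy as np
rng = np.random.RandomState(0)
m, d_feat, p_out = 50, 4, 3
X = rng.randn(m, d_feat)
Y = X @ rng.randn(d_feat, p_out) + 0.1 * rng.randn(m, p_out)
# Joint least-squares solution: W is (d_feat x p_out)
W_joint = np.linalg.lstsq(X, Y, rcond=None)[0]
# Column-wise solution: p_out independent single-output regressions
W_cols = np.column_stack([np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(p_out)])
print('column-wise solutions match the joint solution:', np.allclose(W_joint, W_cols))
###Output
_____no_output_____
###Markdown
 5. Kernels and RKHS Answer 5 (a)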
###Code
import numpy as np
from sklearn.metrics.pairwise import linear_kernel as lk
import matplotlib.pyplot as plt
# defined input space
X1 = np.linspace(-1,1,10).reshape(-1,1)
def linear_kernel (X,X1):
lk = np.dot(X,X1)
return lk
k = linear_kernel (1,X1)
k_func1 = 3 * k
k_func2 = 3 * k + 4
k_func3 = 3 * k + 4 * k
plt.plot(X1,k_func1,label='generated func : $ g(k) = 3k $')
plt.plot(X1,k_func2,label='generated func : $ g(k) = 3k+4 $')
plt.plot(X1,k_func3,label='generated func : $ g(k) = 3k+4k $')
plt.plot(X1,k,'.',label = 'linear kernel function')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Report $||f||_k$: $||f||^2_k = \langle f, f \rangle_k = \langle k(\cdot,w), k(\cdot,w) \rangle_k = k(w,w)$ function 1: $ k_1(w,w) = 3 k(w,w) = 3 w^2 < \infty $ function 2: $ k_2(w,w) = 3 k(w,w)+4 = 3 w^2 +4 < \infty $ function 3: $ k_3(w,w) = 3 k(w,w)+4 k(w,w) = 3 w^2 + 4 w^2 < \infty$ Answer 5 (b)
###Code
def gaussian_kernel (X,X1,p):
gk = np.exp (-1 *( (X-X1)**2 ) / (2*(p**2) ) )
return gk
###Output
_____no_output_____
###Markdown
For each bandwidth $\rho \in \{ 0.1, 0.5, 1 \} $ the kernel function has been generated
###Code
# given
p = np.matrix([0.1,0.5,1])
# defined input space for given limit X = [-1,1]
X1 = np.linspace(-1,1,100).reshape(-1,1)
X2 = X1
for i in range(p.shape[1]):
k = gaussian_kernel (X1[-1,0],X2,p[0,i])
plt.plot(X1,k , label = 'p = %s'%p[0,i])
plt.xlabel('$\mathbb{X2}$')
plt.title('kernel function k(X1,X2) plotted as a functino of X2 dor X1=1')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We're approximating the feature mapping $\phi(.)$ using a Taylor series expansion up to d=150 features $\gamma = \frac{1}{2 \sigma^2}$\begin{align} e^{- \gamma ||x_i - x_j ||^2} = & e^{- \gamma(x_i-x_j)^2} = e^{-\gamma x_i^2 + 2\gamma x_i x_j-\gamma x_j^2} \\ = & e^{-\gamma x_i^2 - \gamma x_j^2} \Bigg(1 \times 1 + \sqrt{\frac{2\gamma}{1!}}x_i \times \sqrt{\frac{2\gamma}{1!}}x_j + \sqrt{\frac{(2\gamma)^2}{2!}}x_i^2 \times \sqrt{\frac{(2\gamma)^2}{2!}}x_j^2 \nonumber \\ & + \sqrt{\frac{(2\gamma)^3}{3!}}x_i^3 \times \sqrt{\frac{(2\gamma)^3}{3!}}x_j^3 + \dots \Bigg) \\ = & \phi(x_i)^T\phi(x_j)\end{align}here $\phi(x) = e^{-\gamma x^2} \Bigg[ 1, \sqrt{\frac{(2\gamma)^1}{1!}}x^1 , \sqrt{\frac{(2\gamma)^2}{2!}}x^2 \dots \sqrt{\frac{(2\gamma)^{150} }{150!}}x^{150} \Bigg]^T$ Plotting the Gaussian basis function $\phi(x_i)$, where $x_i$ = 1
###Code
def gaussian_basis(x,p,d):
gamma = 1/(2*(p**2))
g = np.matrix(np.zeros([d+1,1]))
for i in range(d+1):
from scipy.special import factorial
g[i,0] = np.sqrt(((2*gamma)**i)/factorial(i, exact=True))*(x**i)
g_basis = np.exp(-gamma*(x)**2)*g
return g_basis
d = 150
g_basis=gaussian_basis(X1[-1,0],p[0,1],d) #gaussian basis for Xi = 1
plt.plot(g_basis)
###Output
_____no_output_____
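###Markdown
As a small sanity check (added sketch, using the functions defined above), we can compare the exact Gaussian kernel value with the truncated (d=150) feature-map inner product for a pair of points:
###Code
x1, x2, rho = 1.0, -0.3, 0.5
exact = gaussian_kernel(x1, x2, rho)
approx = float(np.dot(gaussian_basis(x1, rho, 150).T, gaussian_basis(x2, rho, 150)))
print('exact k(x1, x2):', exact, ' truncated expansion:', approx)
###Output
_____no_output_____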
###Markdown
Plotting function for fixed $\rho$=0.5
###Code
d = 150
x1=X1[-1,0]
g_basis1=gaussian_basis(x1,p[0,1],d) # for fixed p=0.5, X1=1
func = np.matrix(np.zeros([X1.shape[0],1]))
for k in range(X1.shape[0]):
x2 = X2[k,0]
g_basis2=gaussian_basis(x2,p[0,1],d) # for fixed p
func[k,:]=np.dot(g_basis1.T,g_basis2)
plt.plot(X1,func,label='p = %s'%p[0,1])
plt.xlabel('$\mathbb{X2}$')
plt.ylabel('k(x1,x2)')
plt.title('x1 = 1')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Plotting functions for $\rho \in \{ 0.1, 0.5, 1 \} $
###Code
x1=X1[-1,0]
d = 120
# evaluate kernel function for different bandwidths
for m in range (p.shape[1]):
g_basis1=gaussian_basis(x1,p[0,m],d)
func = np.matrix(np.zeros([X1.shape[0],1]))
# get kernel function as a function of X2 (when bandwidth P and X1 fixed)
for k in range(X1.shape[0]):
x2 = X2[k,0]
g_basis2=gaussian_basis(x2,p[0,m],d)
func[k,:]=np.dot(g_basis1.T,g_basis2)
plt.plot(X1,func,label='p = %s'%p[0,m])
plt.xlabel('$\mathbb{X2}$')
plt.ylabel('k(x1,x2)')
plt.title('kernel function k(X1,X2) plotted as a functino of X2 dor X1=1')
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/OS2020/sog-stats-for-poster.ipynb | ###Markdown
sog stats
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import netCDF4 as nc
import seaborn as sns
import matplotlib.colors as mcolors
import glob
import os
import xarray as xr
import datetime
from salishsea_tools import viz_tools, tidetools, geo_tools, gsw_calls, wind_tools
import pickle
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%matplotlib inline
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
wind_grid = nc.Dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSaAtmosphereGridV1')
geo_tools.find_closest_model_point(-123.67, 49.21, wind_grid['longitude'][:]-360, wind_grid['latitude'][:],
grid = 'GEM2.5')
wind_data = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSaSurfaceAtmosphereFieldsV1')
time_slice = slice('2015-01-01 00:00:00', '2019-01-01 00:00:00')
u_winds = wind_data.u_wind.isel(gridY=143, gridX=141).sel(time=time_slice).data
v_winds = wind_data.v_wind.isel(gridY=143, gridX=141).sel(time=time_slice).data
wind_speed, wind_dir = wind_tools.wind_speed_dir(u_winds, v_winds)
wnd_dir_avg = np.array([])
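# 7-day (168-hour) running mean of wind direction, advanced one day (24 hourly samples) per step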
for i in range(1450):
start = 24*i
end = start + 168
wnd_dir_avg = np.append(wnd_dir_avg, wind_dir[start:end].mean())
wnd_spd_avg = np.array([])
for i in range(1450):
start = 24*i
end = start + 168
wnd_spd_avg = np.append(wnd_spd_avg, wind_speed[start:end].mean())
pickle_in1 = open("/home/abhudia/Desktop/current speed/hourly/mag2015.pickle","rb")
pickle_in2 = open("/home/abhudia/Desktop/current speed/hourly/mag2016.pickle","rb")
pickle_in3 = open("/home/abhudia/Desktop/current speed/hourly/mag2017.pickle","rb")
pickle_in4 = open("/home/abhudia/Desktop/current speed/hourly/mag2018.pickle","rb")
example1 = pickle.load(pickle_in1)
example2 = pickle.load(pickle_in2)
example3 = pickle.load(pickle_in3)
example4 = pickle.load(pickle_in4)
two = np.append(example1[:,274,242], example2[:,274,242])
three = np.append(two, example3[:,274,242])
fullc = np.append(three, example4[:,274,242])
fullc.shape
cur_avg = np.array([])
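# Same 7-day running mean, applied to the surface current speed at grid point (274, 242)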
for i in range(1450):
start = 24*i
end = start + 168
cur_avg = np.append(cur_avg, fullc[start:end].mean())
dates = np.array([datetime.date(2015,1,1) + datetime.timedelta(i) for i in range(1450)])
dates.shape
fig, ax = plt.subplots(3,1, figsize = (35,15))
ax[0].plot(dates, wnd_dir_avg)
ax[0].set_title('averaged wind direction', fontsize = 24)
#ax[0].hlines(fulls.mean(), dates[0], dates[-1])
ax[1].plot(dates,wnd_spd_avg)
#ax[1].hlines(fullc.mean(), dates[0], dates[-1])
ax[1].set_title('averaged wind speed (m/s)', fontsize = 24)
ax[2].plot(dates,cur_avg)
ax[2].set_title('averaged surface current speed (m/s)', fontsize = 24)
#ax[2].hlines(full.mean(), dates[0], dates[-1])
for ax in ax:
ax.set_xlim(dates[0], dates[-1])
ax.axvline(datetime.date(2017,8,1), color='r', ls='--')
ax.axvline(datetime.date(2017,1,1), color='r', ls='--');
ax.axvline(datetime.date(2015,6,5), color='r', ls='--');
ax.axvline(datetime.date(2018,1,15), color='r', ls='--');
ax.axvline(datetime.date(2017,6,15), color='r', ls='--');
ax.axvline(datetime.date(2017,11,21), color='r', ls='--');
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(24);
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(24);
plt.tight_layout()
#fig.savefig('/home/vdo/Pictures/salmon-choices.png', bbox_inches='tight');
###Output
_____no_output_____ |
class_exercises/.ipynb_checkpoints/D09-Seaborn-checkpoint.ipynb | ###Markdown
Days 9 Class Exercises: Seaborn For these class exercises, we will be using a wine quality dataset which was obtained from this URL:http://mlr.cs.umass.edu/ml/machine-learning-databases/wine-quality. The data for these exercises can be found in the `data` directory of this repository. Additionally, with these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right: Get StartedImport the Numpy, Pandas, Matplotlib (matplotlib magic) and Seaborn.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Exercise 1. Explore the dataFirst, read about this dataset from the file [../data/winequality.names](../data/winequality.names) Next, read in the file named `winequality-red.csv`. This data, despite the `csv` suffix, is separated using a semicolon.
###Code
wine = pd.read_csv('../data/winequality-red.csv', sep=';')
wine.head()
###Output
_____no_output_____
###Markdown
How many samples (observations) do we have?
###Code
wine.shape
###Output
_____no_output_____
###Markdown
Are the data types for the columns in the dataframe appropriate for the type of data in each column?
###Code
wine.dtypes
###Output
_____no_output_____
###Markdown
Any missing values?
###Code
wine.isna().sum()
wine.duplicated().sum()
###Output
_____no_output_____
###Markdown
Exercise 2: Explore the Data The quality column contains our expected outcome. Wines scored as 0 are considered very bad and wines scored as 10 are very excellent. Plot a bargraph to see how many samples there are for each quality of wine. **Hints**: - Use the [pd.Series.value_counts()](https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html) function to count the number of values - Pandas DataFrames and Series have built-in plotting functions that use Matplotlib. Therefore, we can use the [pd.Series.plot.bar()](https://pandas.pydata.org/docs/reference/api/pandas.Series.plot.bar.html) function to simplify use of matplotlib.
###Code
counts = wine['quality'].value_counts(sort = False)
counts.plot.bar()
###Output
_____no_output_____
###Markdown
Now use Matplotlib functionality to recreate the plot (no need to color each bar)
###Code
qcounts = wine['quality'].value_counts(sort=False)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.bar(x=qcounts.index, height=qcounts)
plt.show()
###Output
_____no_output_____
###Markdown
Recreate the bargraph using Seaborn
###Code
sns.barplot(x=counts.index, y = counts)
###Output
_____no_output_____
###Markdown
Describe the data for all of the columns in the dataframe. This includes our physicochemical measurements (independent data) as well as the quality data (dependent).
###Code
wine.describe()
###Output
_____no_output_____
###Markdown
Visualizing the data can sometimes help us better understand its limits. Create a single figure that contains boxplots for each of the data columns. Use the [seaborn.boxplot()](https://seaborn.pydata.org/generated/seaborn.boxplot.html) function to do this:
###Code
sns.boxplot(data=wine, palette = 'colorblind')
###Output
_____no_output_____
###Markdown
In our plot, the axis labels are squished together and many of the box plots are too hard to see because all of them share the same y-axis coordinate system. Unfortunately, not all Seaborn functions provide arguments to control the height and width of a plot, and the `boxplot` function is one of them. However, remember that Seaborn uses Matplotlib! So, we can use Matplotlib functions to set the figure size using a command such as:```python plt.figure(figsize=(10, 6))```where the first number is the width and the second number is the height. Repeat the plot from the previous cell but add this line of code just above the figure.
###Code
plt.figure(figsize=(10,6))
sns.boxplot(data=wine, palette = 'colorblind')
###Output
_____no_output_____
###Markdown
Unfortunately, we are still unable to read some of the x-axis labels. But we can use Matplotlib to correct this. When calling a Seaborn plot function, it returns the Matplotlib axis object. We can then call functions on the axis such as the [set_xticklabels](https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xticklabels.html) function. That function allows us to set a rotation on the axis tick labels and it takes a `rotation` argument. For example, the following function call on an axis object named `g` will reset the tick labels (using the `get_xticklabels()` function) and set a rotation of 45 degrees.```python g.set_xticklabels(g.get_xticklabels(), rotation=45);```Try it on the wine data boxplot:
###Code
g = sns.boxplot(data=wine, palette = 'colorblind')
g.set_xticklabels(g.get_xticklabels(), rotation=45)
###Output
_____no_output_____
###Markdown
Days 9 Class Exercises: Seaborn For these class exercises, we will be using a wine quality dataset which was obtained from this URL:http://mlr.cs.umass.edu/ml/machine-learning-databases/wine-quality. The data for these exercises can be found in the `data` directory of this repository. Additionally, with these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right: Get StartedImport the Numpy, Pandas, Matplotlib (matplotlib magic) and Seaborn.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Exercise 1. Explore the dataFirst, read about this dataset from the file [../data/winequality.names](../data/winequality.names) Next, read in the file named `winequality-red.csv`. This data, despite the `csv` suffix, is separated using a semicolon.
###Code
wine = pd.read_csv("..//data/winequality-red.csv", sep=';')
wine.head()
###Output
_____no_output_____
###Markdown
How many samples (observations) do we have?
###Code
wine.shape
###Output
_____no_output_____
###Markdown
Are the data types for the columns in the dataframe appropriate for the type of data in each column?
###Code
wine.dtypes
###Output
_____no_output_____
###Markdown
Any missing values?
###Code
wine.isna().sum()
wine.duplicated().sum()
###Output
_____no_output_____
###Markdown
Exercise 2: Explore the Data The quality column contains our expected outcome. Wines scored as 0 are considered very bad and wines scored as 10 are very excellent. Plot a bargraph to see how many samples there are for each quality of wine. **Hints**: - Use the [pd.Series.value_counts()](https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html) function to count the number of values - Pandas DataFrames and Series have built-in plotting functions that use Matplotlib. Therefore, we can use the [pd.Series.plot.bar()](https://pandas.pydata.org/docs/reference/api/pandas.Series.plot.bar.html) function to simplify use of matplotlib.
###Code
wine['quality'].value_counts(sort = False).plot.bar();
###Output
_____no_output_____
###Markdown
Now use Matplotlib functionality to recreate the plot (no need to color each bar)
###Code
qcounts = wine['quality'].value_counts(sort = False)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.bar(x=qcounts.index, height=qcounts)
###Output
_____no_output_____
###Markdown
Recreate the bargraph using Seaborn
###Code
sns.countplot(x='quality', data=wine)
###Output
_____no_output_____ |
examples/notebooks/14_legends.ipynb | ###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
import ee
import geemap
geemap.show_youtube('NwnW_qOkNRw')
###Output
_____no_output_____
###Markdown
Add builtin legends from geemap Python packagehttps://github.com/giswqs/geemap/blob/master/geemap/legends.py Available builtin legends:
###Code
legends = geemap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Available Land Cover Datasets in Earth Enginehttps://developers.google.com/earth-engine/datasets/tags/landcover National Land Cover Database (NLCD)https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map
###Output
_____no_output_____
###Markdown
National Wetlands Inventory (NWI)https://www.fws.gov/wetlands/data/mapper.html
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
Map.add_basemap('FWS NWI Wetlands')
Map.add_legend(builtin_legend='NWI')
Map
###Output
_____no_output_____
###Markdown
MODIS Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01') \
.select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
Map.add_legend(builtin_legend='MODIS/051/MCD12Q1')
Map
###Output
_____no_output_____
###Markdown
Add customized legends for Earth Engine dataThere are three ways you can add customized legends for Earth Engine data1. Define legend keys and colors2. Define legend dictionary3. Convert Earth Engine class table to legend dictionary Define legend keys and colors
###Code
Map = geemap.Map()
legend_keys = ['One', 'Two', 'Three', 'Four', 'ect']
#colorS can be defined using either hex code or RGB (0-255, 0-255, 0-255)
legend_colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# legend_colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68 123)]
Map.add_legend(legend_keys=legend_keys, legend_colors=legend_colors, position='bottomleft')
Map
###Output
_____no_output_____
###Markdown
Define a legend dictionary
###Code
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8'
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(legend_title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Convert an Earth Engine class table to legendFor example: MCD12Q1.051 Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
2 086a10 Evergreen broadleaf forest
3 54a708 Deciduous needleleaf forest
4 78d203 Deciduous broadleaf forest
5 009900 Mixed forest
6 c6b044 Closed shrublands
7 dcd159 Open shrublands
8 dade48 Woody savannas
9 fbff13 Savannas
10 b6ff05 Grasslands
11 27ff87 Permanent wetlands
12 c24f44 Croplands
13 a5a5a5 Urban and built-up
14 ff6d4c Cropland/natural vegetation mosaic
15 69fff8 Snow and ice
16 f9ffa4 Barren or sparsely vegetated
254 ffffff Unclassified
"""
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01') \
.select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
legend_dict = geemap.legend_from_ee(ee_class_table)
Map.add_legend(legend_title="MODIS Global Land Cover", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
import ee
import geemap
geemap.show_youtube('NwnW_qOkNRw')
###Output
_____no_output_____
###Markdown
Add builtin legends from geemap Python packagehttps://github.com/giswqs/geemap/blob/master/geemap/legends.py Available builtin legends:
###Code
legends = geemap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Available Land Cover Datasets in Earth Enginehttps://developers.google.com/earth-engine/datasets/tags/landcover National Land Cover Database (NLCD)https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map
###Output
_____no_output_____
###Markdown
National Wetlands Inventory (NWI)https://www.fws.gov/wetlands/data/mapper.html
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
Map.add_basemap('FWS NWI Wetlands')
Map.add_legend(builtin_legend='NWI')
Map
###Output
_____no_output_____
###Markdown
MODIS Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01') \
.select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
Map.add_legend(builtin_legend='MODIS/051/MCD12Q1')
Map
###Output
_____no_output_____
###Markdown
Add customized legends for Earth Engine dataThere are three ways you can add customized legends for Earth Engine data1. Define legend keys and colors2. Define legend dictionary3. Convert Earth Engine class table to legend dictionary Define legend keys and colors
###Code
Map = geemap.Map()
legend_keys = ['One', 'Two', 'Three', 'Four', 'ect']
#colorS can be defined using either hex code or RGB (0-255, 0-255, 0-255)
legend_colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# legend_colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68 123)]
Map.add_legend(legend_keys=legend_keys, legend_colors=legend_colors, position='bottomleft')
Map
###Output
_____no_output_____
###Markdown
Define a legend dictionary
###Code
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8'
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(legend_title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Convert an Earth Engine class table to legendFor example: MCD12Q1.051 Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
2 086a10 Evergreen broadleaf forest
3 54a708 Deciduous needleleaf forest
4 78d203 Deciduous broadleaf forest
5 009900 Mixed forest
6 c6b044 Closed shrublands
7 dcd159 Open shrublands
8 dade48 Woody savannas
9 fbff13 Savannas
10 b6ff05 Grasslands
11 27ff87 Permanent wetlands
12 c24f44 Croplands
13 a5a5a5 Urban and built-up
14 ff6d4c Cropland/natural vegetation mosaic
15 69fff8 Snow and ice
16 f9ffa4 Barren or sparsely vegetated
254 ffffff Unclassified
"""
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01') \
.select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
legend_dict = geemap.legend_from_ee(ee_class_table)
Map.add_legend(legend_title="MODIS Global Land Cover", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
import ee
import geemap
geemap.show_youtube('NwnW_qOkNRw')
###Output
_____no_output_____
###Markdown
Add builtin legends from geemap Python packagehttps://github.com/giswqs/geemap/blob/master/geemap/legends.py Available builtin legends:
###Code
legends = geemap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Available Land Cover Datasets in Earth Enginehttps://developers.google.com/earth-engine/datasets/tags/landcover National Land Cover Database (NLCD)https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD_RELEASES_2019_REL_NLCD
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('USGS/NLCD_RELEASES/2019_REL/NLCD/2019').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map
###Output
_____no_output_____
###Markdown
National Wetlands Inventory (NWI)https://www.fws.gov/wetlands/data/mapper.html
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
Map.add_basemap('FWS NWI Wetlands')
Map.add_legend(builtin_legend='NWI')
Map
###Output
_____no_output_____
###Markdown
MODIS Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
Map.add_legend(builtin_legend='MODIS/051/MCD12Q1')
Map
###Output
_____no_output_____
###Markdown
Add customized legends for Earth Engine dataThere are three ways you can add customized legends for Earth Engine data1. Define legend keys and colors2. Define legend dictionary3. Convert Earth Engine class table to legend dictionary Define legend keys and colors
###Code
Map = geemap.Map()
legend_keys = ['One', 'Two', 'Three', 'Four', 'ect']
# colorS can be defined using either hex code or RGB (0-255, 0-255, 0-255)
legend_colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# legend_colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68 123)]
Map.add_legend(
legend_keys=legend_keys, legend_colors=legend_colors, position='bottomleft'
)
Map
###Output
_____no_output_____
###Markdown
Define a legend dictionary
###Code
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(legend_title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Convert an Earth Engine class table to legendFor example: MCD12Q1.051 Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
2 086a10 Evergreen broadleaf forest
3 54a708 Deciduous needleleaf forest
4 78d203 Deciduous broadleaf forest
5 009900 Mixed forest
6 c6b044 Closed shrublands
7 dcd159 Open shrublands
8 dade48 Woody savannas
9 fbff13 Savannas
10 b6ff05 Grasslands
11 27ff87 Permanent wetlands
12 c24f44 Croplands
13 a5a5a5 Urban and built-up
14 ff6d4c Cropland/natural vegetation mosaic
15 69fff8 Snow and ice
16 f9ffa4 Barren or sparsely vegetated
254 ffffff Unclassified
"""
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
legend_dict = geemap.legend_from_ee(ee_class_table)
Map.add_legend(legend_title="MODIS Global Land Cover", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
import ee
import geemap
geemap.show_youtube('NwnW_qOkNRw')
###Output
_____no_output_____
###Markdown
Add builtin legends from geemap Python packagehttps://github.com/giswqs/geemap/blob/master/geemap/legends.py Available builtin legends:
###Code
legends = geemap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Available Land Cover Datasets in Earth Enginehttps://developers.google.com/earth-engine/datasets/tags/landcover National Land Cover Database (NLCD)https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map
###Output
_____no_output_____
###Markdown
National Wetlands Inventory (NWI)https://www.fws.gov/wetlands/data/mapper.html
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
Map.add_basemap('FWS NWI Wetlands')
Map.add_legend(builtin_legend='NWI')
Map
###Output
_____no_output_____
###Markdown
MODIS Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
Map.add_legend(builtin_legend='MODIS/051/MCD12Q1')
Map
###Output
_____no_output_____
###Markdown
Add customized legends for Earth Engine dataThere are three ways you can add customized legends for Earth Engine data1. Define legend keys and colors2. Define legend dictionary3. Convert Earth Engine class table to legend dictionary Define legend keys and colors
###Code
Map = geemap.Map()
legend_keys = ['One', 'Two', 'Three', 'Four', 'ect']
# colorS can be defined using either hex code or RGB (0-255, 0-255, 0-255)
legend_colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# legend_colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68 123)]
Map.add_legend(
legend_keys=legend_keys, legend_colors=legend_colors, position='bottomleft'
)
Map
###Output
_____no_output_____
###Markdown
Define a legend dictionary
###Code
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(legend_title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
###Output
_____no_output_____
###Markdown
Convert an Earth Engine class table to legendFor example: MCD12Q1.051 Land Cover Type Yearly Global 500mhttps://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
###Code
Map = geemap.Map()
ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
2 086a10 Evergreen broadleaf forest
3 54a708 Deciduous needleleaf forest
4 78d203 Deciduous broadleaf forest
5 009900 Mixed forest
6 c6b044 Closed shrublands
7 dcd159 Open shrublands
8 dade48 Woody savannas
9 fbff13 Savannas
10 b6ff05 Grasslands
11 27ff87 Permanent wetlands
12 c24f44 Croplands
13 a5a5a5 Urban and built-up
14 ff6d4c Cropland/natural vegetation mosaic
15 69fff8 Snow and ice
16 f9ffa4 Barren or sparsely vegetated
254 ffffff Unclassified
"""
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
legend_dict = geemap.legend_from_ee(ee_class_table)
Map.add_legend(legend_title="MODIS Global Land Cover", legend_dict=legend_dict)
Map
###Output
_____no_output_____ |
examples/experimental/sliderPlugin.ipynb | ###Markdown
Adding HTML elements to figures This notebook contains examples of how to add HTML elements to figures and create interaction between JavaScript and Python code. **Note**: this notebook makes interactive calculations when the slider position is changed, so you need to download this notebook to see any changes in the plot.
###Code
%matplotlib inline
import matplotlib.pylab as plt
import mpld3
mpld3.enable_notebook()
###Output
_____no_output_____
###Markdown
Simple example: slider plugin We add a simple slider HTML element (``<input type="range">``) to our figure. When the slider position is changed, we call ``kernel.execute()`` and pass the updated value to the Python function ``updateSlider()``. In this simple example we just update the frequency $\omega$ of $\sin(\omega x)$.
###Code
class SliderView(mpld3.plugins.PluginBase):
""" Add slider and JavaScript / Python interaction. """
JAVASCRIPT = """
mpld3.register_plugin("sliderview", SliderViewPlugin);
SliderViewPlugin.prototype = Object.create(mpld3.Plugin.prototype);
SliderViewPlugin.prototype.constructor = SliderViewPlugin;
SliderViewPlugin.prototype.requiredProps = ["idline", "callback_func"];
SliderViewPlugin.prototype.defaultProps = {}
function SliderViewPlugin(fig, props){
mpld3.Plugin.call(this, fig, props);
};
SliderViewPlugin.prototype.draw = function(){
var line = mpld3.get_element(this.props.idline);
var callback_func = this.props.callback_func;
var div = d3.select("#" + this.fig.figid);
// Create slider
div.append("input").attr("type", "range").attr("min", 0).attr("max", 10).attr("step", 0.1).attr("value", 1)
.on("change", function() {
var command = callback_func + "(" + this.value + ")";
console.log("running "+command);
var callbacks = { 'iopub' : {'output' : handle_output}};
var kernel = IPython.notebook.kernel;
kernel.execute(command, callbacks, {silent:false});
});
function handle_output(out){
//console.log(out);
var res = null;
// if output is a print statement
if (out.msg_type == "stream"){
res = out.content.data;
}
// if output is a python object
else if(out.msg_type === "pyout"){
res = out.content.data["text/plain"];
}
// if output is a python error
else if(out.msg_type == "pyerr"){
res = out.content.ename + ": " + out.content.evalue;
alert(res);
}
// if output is something we haven't thought of
else{
res = "[out type not implemented]";
}
// Update line data
line.data = JSON.parse(res);
line.elements()
.attr("d", line.datafunc(line.data))
.style("stroke", "black");
}
};
"""
def __init__(self, line, callback_func):
self.dict_ = {"type": "sliderview",
"idline": mpld3.utils.get_id(line),
"callback_func": callback_func}
import numpy as np
def updateSlider(val1):
t = np.linspace(0, 10, 500)
y = np.sin(val1*t)
return map(list, list(zip(list(t), list(y))))
fig, ax = plt.subplots(figsize=(8, 4))
t = np.linspace(0, 10, 500)
y = np.sin(t)
ax.set_xlabel('Time')
ax.set_ylabel('Amplitude')
# create the line object
line, = ax.plot(t, y, '-k', lw=3, alpha=0.5)
ax.set_ylim(-1.2, 1.2)
ax.set_title("Slider demo")
mpld3.plugins.connect(fig, SliderView(line, callback_func="updateSlider"))
###Output
_____no_output_____
###Markdown
**Note**: this notebook makes interactive calculations when the slider position is changed, so you need to download this notebook to see any changes in the plot. Complex example: beam deflection When creating more interaction between JavaScript and Python, things easily get quite complicated. Therefore one should consider using e.g. Backbone or similar to get more structured code on the JavaScript side. The IPython notebook seems to be using Backbone internally already. In the next example we add more inputs and use Backbone to handle synchronizing Python and JavaScript. In this example we calculate the deflection line $v(x)$ for a simply supported Euler-Bernoulli beam and update the visualization when the user changes the force location $x \in [0, 1]$. The formula for the deflection $v(x)$ is \begin{equation}v(x) = \frac{FL^2}{6EI}\left[ \frac{ab}{L^2}(L+b)\frac{x}{L} - b\left(\frac{x}{L}\right)^3 + \frac{1}{L^2}\langle x-a\rangle^3 \right],\end{equation}where\begin{equation}\langle x-a\rangle=\begin{cases}0 & ,x<a\\x-a & ,x\geq a\end{cases}\end{equation}and $a$ is the distance from the left support, $a+b=L$. More information about deflection: http://en.wikipedia.org/wiki/Deflection_%28engineering%29 First we define the template we use in our plugin:
###Code
from IPython import display
display.HTML("""
<script type="text/template" id="tools-template">
<h3>Tools</h3>
<p><strong>Force location</strong></p>
<input id="slider1" type="range" min="0" max="1" step="0.01" value="0.50" style="display: inline-block;">
<label id="slider1label" for="slider1" style="display: inline-block; width: 40px;">0.50</label>
<p><strong>Boundary conditions</strong></p>
<select id="boundary_conditions">
<option value="simple-simple">Simple support-Simple support</option>
<option value="clamp-simple">Clamp-Simple support</option>
<option value="clamp-clamp">Clamp-Clamp</option>
</select>
<p><strong>Young's modulus (GPa)</strong></p>
<input id="young" type="number" value="210"/>
<p><strong>Other options</strong></p>
<div>
<label><span style="vertical-align: middle">Use FEM to calculate deflection line?</span>
<input id="useFEM" type="checkbox" style="vertical-align: middle" /></label>
</div>
</script>
""")
###Output
_____no_output_____
###Markdown
Our plugin code comes next. Note that we now have a Backbone model ``LineModel`` to handle the Python-JavaScript interaction, and Backbone views ``ToolsView`` and ``CanvasView`` to take care of the visualization when the data changes. Note that not all input elements are "connected" to the visualization; they are more like placeholders ready for your own coding experiments. They are not implemented on purpose, to keep the number of lines of code as low as possible. To get them to work, modify ``this.notImplemented`` $\rightarrow$ ``this.modelChanged`` in the ``initialize`` function and change ``var command = ... `` in ``modelChanged`` to pass more parameters to the notebook server. Don't forget to change the Python-side function accordingly.
###Code
class MyUserInterface(mpld3.plugins.PluginBase):
""" Here we use Backbone to create more structured Javascript. """
JAVASCRIPT = """
var LineModel = Backbone.Model.extend({
initialize: function(options) {
this.options = options || {};
this.on("change:sliderPosition", this.modelChanged);
this.on("change:boundaryCondition", this.notImplemented);
this.on("change:youngsModulus", this.notImplemented);
this.on("change:useFEM", this.notImplemented);
},
/**
This example should be quite easy to extend to use more inputs. You
just have to pass more model.get('...') things to kernel execute command below.
*/
notImplemented: function(model) {
alert("This function is not implemented in the example on purpose.");
},
/**
Model changed, execute notebook kernel and update model data.
*/
modelChanged: function(model) {
var command = this.options.callback_func + "(" + model.get('sliderPosition') + ")";
console.log("IPython kernel execute "+command);
var callbacks = {
'iopub' : {
'output' : function(out) {
//console.log(out);
var res = null;
// if output is a print statement
if (out.msg_type == "stream"){
res = out.content.data;
}
// if output is a python object
else if(out.msg_type === "pyout"){
res = out.content.data["text/plain"];
}
// if output is a python error
else if(out.msg_type == "pyerr"){
res = out.content.ename + ": " + out.content.evalue;
alert(res);
}
// if output is something we haven't thought of
else{
res = "[out type not implemented]";
alert(res);
}
model.set("line", JSON.parse(res));
}
}
};
IPython.notebook.kernel.execute(command, callbacks, {silent:false});
}
});
var ToolsView = Backbone.View.extend({
/**
This view renders toolbar with slider and other html elements.
*/
initialize: function(options) {
this.options = options || {};
_.bindAll(this, 'render');
},
render: function() {
var template = _.template($("#tools-template").html(), {});
$(this.el).append(template);
return this;
},
/**
Listen event changes.
*/
events: {
"change #slider1": "changeSlider1",
"change #boundary_conditions": "changeBoundaryConditions",
"change #young": "changeModulus",
"change #useFEM": "changeUseFEM"
},
changeSlider1: function(ev) {
var sliderPosition = $(ev.currentTarget).val();
this.model.set('sliderPosition', sliderPosition);
$(this.el).find("#slider1label").text(parseFloat(sliderPosition).toFixed(2));
},
changeBoundaryConditions: function(ev) {
this.model.set('boundaryCondition', $(ev.currentTarget).val());
},
changeModulus: function(ev) {
this.model.set('youngsModulus', $(ev.currentTarget).val());
},
changeUseFEM: function(ev) {
var isChecked = $(ev.currentTarget).is(":checked");
this.model.set('useFEM', isChecked);
}
});
var CanvasView = Backbone.View.extend({
initialize: function(options) {
this.options = options || {};
this.line = mpld3.get_element(this.options.props.idline);
_.bindAll(this, 'render');
this.model.bind('change:line', this.render);
},
/**
Update line when model changes, f.e. new data is calculated
inside notebook and updated to Backbone model.
*/
render: function() {
this.line.elements().transition()
.attr("d", this.line.datafunc(this.model.get('line')))
.style("stroke", "black");
}
});
// PLUGIN START
mpld3.register_plugin("myuserinterface", MyUserInterfacePlugin);
MyUserInterfacePlugin.prototype = Object.create(mpld3.Plugin.prototype);
MyUserInterfacePlugin.prototype.constructor = MyUserInterfacePlugin;
MyUserInterfacePlugin.prototype.requiredProps = ["idline", "callback_func"];
MyUserInterfacePlugin.prototype.defaultProps = {}
function MyUserInterfacePlugin(fig, props){
mpld3.Plugin.call(this, fig, props);
};
MyUserInterfacePlugin.prototype.draw = function() {
// Some hacking to get proper layout.
var div = $("#" + this.fig.figid).attr("style", "border: 1px solid;");
var figdiv = div.find("div");
figdiv.attr("style", "display: inline;");
// Create LineModel
var lineModel = new LineModel({
callback_func: this.props.callback_func
});
// Create tools view
var myel = $('<div style="float: left; margin: 10px 30px;" id="tools"></div>');
div.append(myel);
var toolsView = new ToolsView({
el: myel,
model: lineModel
});
toolsView.render();
// Create canvas view which updates line visualization when the model is changed
var canvasView = new CanvasView({
el: figdiv,
model: lineModel,
props: this.props
});
};
"""
def __init__(self, line, callback_func):
self.dict_ = {"type": "myuserinterface",
"idline": mpld3.utils.get_id(line),
"callback_func": callback_func}
###Output
_____no_output_____
###Markdown
Next we do the actual calculation of the deflection using Python and display results:
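For reference, the ``v_`` function below evaluates the classical Euler-Bernoulli deflection line of a simply supported beam of length $L$ with a point load $F$ at $x=a$ (writing $b=L-a$, with flexural rigidity $EI$): $v(x)=\frac{F\,b\,x}{6\,L\,E\,I}\left(L^{2}-b^{2}-x^{2}\right)$, plus $\frac{F\,(x-a)^{3}}{6\,E\,I}$ for $x>a$. The plotting code flips the sign and rescales the result to millimetres.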
###Code
import numpy as np
L = 1.0
F = 3.0
E = 100.0
I = 0.1
def v_(x, a):
b = L - a
v = a*b/L**2*(L+b)*x/L - b*(x/L)**3
if x-a > 0.0:
v += 1.0/L**2*(x-a)**3
v *= F*L**2/(6.0*E*I)
return v
v = np.vectorize(v_)
def runCalculation(a):
    """
    Evaluate the deflection line for a point load at position a and return it
    as a list of [x, y] pairs, so that the printed result can be JSON-parsed
    on the Javascript side.
    """
    x = np.linspace(0, L, 500)
    y = -v(x, a)*1000.0
    # tolist() yields plain Python floats, and the outer list() keeps this
    # working under Python 3, where map() returns an iterator
    return list(map(list, zip(x.tolist(), y.tolist())))
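# --- Hypothetical extension (illustration only) ------------------------------
# As described above, hooking up the remaining inputs just means the Javascript
# modelChanged handler appends extra values to the kernel command and the Python
# callback grows matching parameters. The name runCalculationExtended and the
# parameter E_user are assumptions for illustration, not part of the original example.
def runCalculationExtended(a, E_user=E):
    x = np.linspace(0, L, 500)
    # the analytical deflection scales with 1/E, so rescale for the requested modulus
    y = -v(x, a) * (E / E_user) * 1000.0
    return list(map(list, zip(x.tolist(), y.tolist())))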
fig, ax = plt.subplots(figsize=(8, 4))
t = np.linspace(0, 1, 200)
y = np.sin(t)
ax.set_xlabel('x [m]')
ax.set_ylabel('Deflection [mm]')
ax.set_title('Euler-Bernoulli beam deflection line')
# create the line object
initial_data = np.array(runCalculation(0.5))
line, = ax.plot(initial_data[:, 0], initial_data[:, 1], '-k', lw=3, alpha=0.5)
ax.plot([0.975, 1.025, 1.00, 0.975], [-1, -1, 0, -1], '-k', lw=1)
ax.plot([-0.025, 0.025, 0.000, -0.025], [-1, -1, 0, -1], '-k', lw=1)
ax.set_ylim(-10, 5)
ax.grid(lw=0.1, alpha=0.2)
mpld3.plugins.connect(fig, MyUserInterface(line, callback_func="runCalculation"))
###Output
_____no_output_____ |
examples/embeddings/Zero-shot_classification.ipynb | ###Markdown
Zero-shot classification using the embeddingsIn this notebook we will classify the sentiment of reviews using embeddings and zero labeled data! The dataset is created in the [Obtain_dataset Notebook](Obtain_dataset.ipynb).We'll define positive sentiment to be 4 and 5-star reviews, and negative sentiment to be 1 and 2-star reviews. 3-star reviews are considered neutral and we won't use them for this example.We will perform zero-shot classification by embedding descriptions of each class and then comparing new samples to those class embeddings.
###Code
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
df = pd.read_csv('output/embedded_1k_reviews.csv')
df['babbage_similarity'] = df.babbage_similarity.apply(eval).apply(np.array)
df['babbage_search'] = df.babbage_search.apply(eval).apply(np.array)
df= df[df.Score!=3]
df['sentiment'] = df.Score.replace({1:'negative', 2:'negative', 4:'positive', 5:'positive'})
###Output
_____no_output_____
###Markdown
Zero-Shot ClassificationTo perform zero shot classification, we want to predict labels for our samples without any training. To do this, we can simply embed short descriptions of each label, such as positive and negative, and then compare the cosine distance between embeddings of samples and label descriptions. The highest similarity label to the sample input is the predicted label. We can also define a prediction score to be the difference between the cosine distance to the positive and to the negative label. This score can be used for plotting a precision-recall curve, which can be used to select a different tradeoff between precision and recall, by selecting a different threshold.
###Code
from openai.embeddings_utils import cosine_similarity, get_embedding
from sklearn.metrics import PrecisionRecallDisplay
def evaluate_emeddings_approach(
labels = ['negative', 'positive'],
engine = 'babbage-similarity',
):
label_embeddings = [get_embedding(label, engine=engine) for label in labels]
def label_score(review_embedding, label_embeddings):
return cosine_similarity(review_embedding, label_embeddings[1]) - cosine_similarity(review_embedding, label_embeddings[0])
engine_col_name = engine.replace('-','_').replace('_query','')
probas = df[engine_col_name].apply(lambda x: label_score(x, label_embeddings))
preds = probas.apply(lambda x: 'positive' if x>0 else 'negative')
report = classification_report(df.sentiment, preds)
print(report)
display = PrecisionRecallDisplay.from_predictions(df.sentiment, probas, pos_label='positive')
_ = display.ax_.set_title("2-class Precision-Recall curve")
evaluate_emeddings_approach(labels=['negative', 'positive'], engine='babbage-similarity')
###Output
precision recall f1-score support
negative 0.67 0.88 0.76 136
positive 0.98 0.93 0.95 789
accuracy 0.92 925
macro avg 0.82 0.90 0.86 925
weighted avg 0.93 0.92 0.92 925
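###Markdown
The prediction score described above can also be thresholded directly instead of using its sign. A minimal sketch (assuming the `df` loaded above; the label scores are recomputed here because `probas` is local to `evaluate_emeddings_approach`) of picking the smallest threshold that reaches 90% precision on the positive class:
###Code
from sklearn.metrics import precision_recall_curve
label_embeddings = [get_embedding(label, engine='babbage-similarity') for label in ['negative', 'positive']]
probas = df['babbage_similarity'].apply(
    lambda x: cosine_similarity(x, label_embeddings[1]) - cosine_similarity(x, label_embeddings[0]))
precision, recall, thresholds = precision_recall_curve(df.sentiment, probas, pos_label='positive')
# precision/recall have one more entry than thresholds, so align before filtering
candidates = [t for p, t in zip(precision[:-1], thresholds) if p >= 0.9]
print(min(candidates) if candidates else 'no threshold reaches 90% precision')
###Output
_____no_output_____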
###Markdown
We can see that this classifier already performs extremely well. We used similarity embeddings, and the simplest possible label name. Let's try to improve on this by using more descriptive label names, and search embeddings.
###Code
evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='babbage-similarity')
###Output
precision recall f1-score support
negative 0.65 0.93 0.76 136
positive 0.99 0.91 0.95 789
accuracy 0.92 925
macro avg 0.82 0.92 0.86 925
weighted avg 0.94 0.92 0.92 925
###Markdown
Using the search embeddings and descriptive names leads to an additional improvement in performance.
###Code
evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='babbage-search-query')
###Output
precision recall f1-score support
negative 0.77 0.79 0.78 136
positive 0.96 0.96 0.96 789
accuracy 0.94 925
macro avg 0.87 0.88 0.87 925
weighted avg 0.94 0.94 0.94 925
###Markdown
Zero-shot classification using the embeddingsIn this notebook we will classify the sentiment of reviews using embeddings and zero labeled data! The dataset is created in the [Obtain_dataset Notebook](Obtain_dataset.ipynb).We'll define positive sentiment to be 4 and 5-star reviews, and negative sentiment to be 1 and 2-star reviews. 3-star reviews are considered neutral and we won't use them for this example.We will perform zero-shot classification by embedding descriptions of each class and then comparing new samples to those class embeddings.
###Code
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
df = pd.read_csv('output/embedded_1k_reviews.csv')
df['babbage_similarity'] = df.babbage_similarity.apply(eval).apply(np.array)
df['babbage_search'] = df.babbage_search.apply(eval).apply(np.array)
df= df[df.Score!=3]
df['sentiment'] = df.Score.replace({1:'negative', 2:'negative', 4:'positive', 5:'positive'})
###Output
_____no_output_____
###Markdown
Zero-Shot ClassificationTo perform zero shot classification, we want to predict labels for our samples without any training. To do this, we can simply embed short descriptions of each label, such as positive and negative, and then compare the cosine distance between embeddings of samples and label descriptions. The highest similarity label to the sample input is the predicted label. We can also define a prediction score to be the difference between the cosine distance to the positive and to the negative label. This score can be used for plotting a precision-recall curve, which can be used to select a different tradeoff between precision and recall, by selecting a different threshold.
###Code
from openai.embeddings_utils import cosine_similarity, get_embedding
from sklearn.metrics import PrecisionRecallDisplay
def evaluate_emeddings_approach(
labels = ['negative', 'positive'],
engine = 'text-similarity-babbage-001',
):
label_embeddings = [get_embedding(label, engine=engine) for label in labels]
def label_score(review_embedding, label_embeddings):
return cosine_similarity(review_embedding, label_embeddings[1]) - cosine_similarity(review_embedding, label_embeddings[0])
engine_col_name = engine.replace('-','_').replace('_query','')
probas = df[engine_col_name].apply(lambda x: label_score(x, label_embeddings))
preds = probas.apply(lambda x: 'positive' if x>0 else 'negative')
report = classification_report(df.sentiment, preds)
print(report)
display = PrecisionRecallDisplay.from_predictions(df.sentiment, probas, pos_label='positive')
_ = display.ax_.set_title("2-class Precision-Recall curve")
evaluate_emeddings_approach(labels=['negative', 'positive'], engine='text-similarity-babbage-001')
###Output
precision recall f1-score support
negative 0.67 0.88 0.76 136
positive 0.98 0.93 0.95 789
accuracy 0.92 925
macro avg 0.82 0.90 0.86 925
weighted avg 0.93 0.92 0.92 925
###Markdown
We can see that this classifier already performs extremely well. We used similarity embeddings, and the simplest possible label name. Let's try to improve on this by using more descriptive label names, and search embeddings.
###Code
evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='text-similarity-babbage-001')
###Output
precision recall f1-score support
negative 0.65 0.93 0.76 136
positive 0.99 0.91 0.95 789
accuracy 0.92 925
macro avg 0.82 0.92 0.86 925
weighted avg 0.94 0.92 0.92 925
###Markdown
Using the search embeddings and descriptive names leads to an additional improvement in performance.
###Code
evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='text-similarity-babbage-001')
###Output
precision recall f1-score support
negative 0.77 0.79 0.78 136
positive 0.96 0.96 0.96 789
accuracy 0.94 925
macro avg 0.87 0.88 0.87 925
weighted avg 0.94 0.94 0.94 925
###Markdown
Zero-shot classification using the embeddingsIn this notebook we will classify the sentiment of reviews using embeddings and zero labeled data! The dataset is created in the [Obtain_dataset Notebook](Obtain_dataset.ipynb).We'll define positive sentiment to be 4 and 5-star reviews, and negative sentiment to be 1 and 2-star reviews. 3-star reviews are considered neutral and we won't use them for this example.We will perform zero-shot classification by embedding descriptions of each class and then comparing new samples to those class embeddings.
###Code
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
df = pd.read_csv('output/embedded_1k_reviews.csv')
df['babbage_similarity'] = df.babbage_similarity.apply(eval).apply(np.array)
df['babbage_search'] = df.babbage_search.apply(eval).apply(np.array)
df= df[df.Score!=3]
df['sentiment'] = df.Score.replace({1:'negative', 2:'negative', 4:'positive', 5:'positive'})
###Output
_____no_output_____
###Markdown
Zero-Shot ClassificationTo perform zero shot classification, we want to predict labels for our samples without any training. To do this, we can simply embed short descriptions of each label, such as positive and negative, and then compare the cosine distance between embeddings of samples and label descriptions. The highest similarity label to the sample input is the predicted label. We can also define a prediction score to be the difference between the cosine distance to the positive and to the negative label. This score can be used for plotting a precision-recall curve, which can be used to select a different tradeoff between precision and recall, by selecting a different threshold.
###Code
from utils import cosine_similarity, get_embedding
from sklearn.metrics import PrecisionRecallDisplay
def evaluate_emeddings_approach(
labels = ['negative', 'positive'],
engine = 'babbage-similarity',
):
label_embeddings = [get_embedding(label, engine=engine) for label in labels]
def label_score(review_embedding, label_embeddings):
return cosine_similarity(review_embedding, label_embeddings[1]) - cosine_similarity(review_embedding, label_embeddings[0])
engine_col_name = engine.replace('-','_').replace('_query','')
probas = df[engine_col_name].apply(lambda x: label_score(x, label_embeddings))
preds = probas.apply(lambda x: 'positive' if x>0 else 'negative')
report = classification_report(df.sentiment, preds)
print(report)
display = PrecisionRecallDisplay.from_predictions(df.sentiment, probas, pos_label='positive')
_ = display.ax_.set_title("2-class Precision-Recall curve")
evaluate_emeddings_approach(labels=['negative', 'positive'], engine='babbage-similarity')
###Output
precision recall f1-score support
negative 0.67 0.88 0.76 136
positive 0.98 0.93 0.95 789
accuracy 0.92 925
macro avg 0.82 0.90 0.86 925
weighted avg 0.93 0.92 0.92 925
###Markdown
We can see that this classifier already performs extremely well. We used similarity embeddings, and the simplest possible label name. Let's try to improve on this by using more descriptive label names, and search embeddings.
###Code
evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='babbage-similarity')
###Output
precision recall f1-score support
negative 0.65 0.93 0.76 136
positive 0.99 0.91 0.95 789
accuracy 0.92 925
macro avg 0.82 0.92 0.86 925
weighted avg 0.94 0.92 0.92 925
###Markdown
Using the search embeddings and descriptive names leads to an additional improvement in performance.
###Code
evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='babbage-search-query')
###Output
precision recall f1-score support
negative 0.77 0.79 0.78 136
positive 0.96 0.96 0.96 789
accuracy 0.94 925
macro avg 0.87 0.88 0.87 925
weighted avg 0.94 0.94 0.94 925
|
Functions/3.0 Global SUS score.ipynb | ###Markdown
Dataset selection
###Code
_dataName, _inputData, _dataNameSUSNormalized, _inputDataSUSNormalized = selectDataset("data20190703")
stackedBarPlotsFilenamePathStem = graphsSavePathStem + "/SUS-matrices"
tryCreateFolder(stackedBarPlotsFilenamePathStem)
###Output
_____no_output_____
###Markdown
SUS score functions
###Code
def getSUSScore(question, game, data=_inputDataSUSNormalized, printResult=True):
_score = 0
_maxScore = 0
_gameQuestionData = data.loc[data[gameQuestion]==game, question]
for _answer in _gameQuestionData:
if pd.notna(_answer):
_score += _answer
_maxScore += 4
_percentageScoreResult = _score*100/_maxScore
if printResult:
print("'" + question + "': " + game + " " + '{:1.1f}%'.format(_percentageScoreResult))
return _percentageScoreResult
###Output
_____no_output_____
###Markdown
Best scores per question, including non-SUS Q11
###Code
print("Best scores:")
for question in indexedLikertQuestions:
bestGame = ""
bestScore = 0
for game in games:
questionScore = getSUSScore(question, game, printResult=False)
if questionScore > bestScore:
bestGame = game
bestScore = questionScore
print("'" + question + "': " + bestGame + " " + '{:1.1f}%'.format(bestScore))
###Output
_____no_output_____
###Markdown
Worst scores per question, including non-SUS Q11
###Code
print("Worst scores:")
for question in indexedLikertQuestions:
worstGame = ""
worstScore = 100
for game in games:
questionScore = getSUSScore(question, game, printResult=False)
if questionScore < worstScore:
worstGame = game
worstScore = questionScore
print("'" + question + "': " + worstGame + " " + '{:1.1f}%'.format(worstScore))
###Output
_____no_output_____
###Markdown
SUS scores matrices
###Code
allSUSScores = pd.DataFrame(index=indexedSUSQuestions, columns=games)
for game in games:
for question in indexedSUSQuestions:
allSUSScores.loc[question, game] = getSUSScore(question, game, printResult=False)
allSUSScores = allSUSScores.astype(float)
saveFig = True
# normalize using minimum
fig = plt.figure()
ax = fig.add_subplot(111)
h = sns.heatmap(allSUSScores,
ax=ax,
cmap=plt.cm.jet,
vmin=0,
vmax=100,
square=True,
cbar_kws = dict(use_gridspec=False,location="left")
)
ax.yaxis.tick_right()
plt.yticks(rotation=0);
# manually moves x labels closer to their ticks
h.set_xticklabels(ax.get_xticklabels(), rotation=30)
indexer = 0
for label in ax.xaxis.get_majorticklabels():
dx = (50 - indexer * 9)/72.; dy = 0/72.
offset = tr.ScaledTranslation(dx, dy, fig.dpi_scale_trans)
label.set_transform(label.get_transform() - offset)
indexer+=1
if saveFig:
path = stackedBarPlotsFilenamePathStem + "/" + _dataName
tryCreateFolder(path)
fig.savefig(path + "/SUS-score-matrix")
pd.concat((allSUSScores.idxmin(), allSUSScores.min()), 1)
pd.concat((allSUSScores.idxmax(), allSUSScores.max()), 1)
###Output
_____no_output_____
###Markdown
SUS scores variance
###Code
def getVarianceOnQG(question, game, data=_inputData, printResult=True):
# print("Q='"+question+"'; G="+game)
gameQuestionData = data.loc[data[gameQuestion]==game, question]
gameQuestionData = [int(v) for v in gameQuestionData if pd.notna(v)]
return np.var(gameQuestionData)
allVars = pd.DataFrame(index=indexedSUSQuestions, columns=games)
for game in games:
for question in indexedSUSQuestions:
allVars.loc[question, game] = getVarianceOnQG(question, game, printResult=False)
allVars = allVars.astype(float)
saveFig = True
fig = plt.figure()
ax = fig.add_subplot(111)
h = sns.heatmap(allVars,
ax=ax,
cmap=plt.cm.jet,
vmin=0.4,
vmax=2.2,
square=True,
cbar_kws = dict(use_gridspec=False,location="left")
)
ax.yaxis.tick_right()
plt.yticks(rotation=0);
# manually moves x labels closer to their ticks
h.set_xticklabels(ax.get_xticklabels(), rotation=30)
indexer = 0
for label in ax.xaxis.get_majorticklabels():
dx = (50 - indexer * 9)/72.
dy = 0/72.
offset = tr.ScaledTranslation(dx, dy, fig.dpi_scale_trans)
label.set_transform(label.get_transform() - offset)
indexer+=1
if saveFig:
path = stackedBarPlotsFilenamePathStem + "/" + _dataName
tryCreateFolder(path)
fig.savefig(path + "/SUS-score-variance-matrix")
# recommendations:
# - best game: which questions?
#
pd.concat((allVars.idxmin(), allVars.min()), 1)
pd.concat((allVars.idxmax(), allVars.max()), 1)
###Output
_____no_output_____
###Markdown
Global SUS score
###Code
getSUSScore(question, game, data=_inputDataSUSNormalized, printResult=False)
totalSUSScore = 0
questionSUSScore = 0
for game in games:
    print("---------------------------------------------------------------------------")
    print(game)
    # reset the accumulator for every game, so one game's average does not leak into the next
    totalSUSScore = 0
    for question in indexedSUSQuestions:
        questionSUSScore = getSUSScore(question, game, data=_inputDataSUSNormalized, printResult=False)
        totalSUSScore += questionSUSScore
        print('{:1.1f}%'.format(questionSUSScore) + " " + question)
    totalSUSScore = totalSUSScore / len(indexedSUSQuestions)
    print('{:1.1f}%'.format(totalSUSScore))
totalSUSScore = 0
questionSUSScore = 0
for game in games:
print("---------------------------------------------------------------------------")
print(game)
question = shortLikertQuestions[lastLikertQuestionIndex-1]
questionSUSScore = getSUSScore(question, game, data=_inputDataSUSNormalized, printResult=False)
print('{:1.1f}%'.format(questionSUSScore) + " " + question)
###Output
_____no_output_____ |
sports_eda.ipynb | ###Markdown
--- **EXPLORATORY DATA ANALYSIS - SPORTS (INDIAN PREMIER LEAGUE)** **AUTHOR - AKSHAYA RAJ S A**--- **IMPORTING LIBRARIES AND LOADING THE DATASET**---
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from google.colab import files
uploaded = files.upload()
import io
sample_match = pd.read_csv(io.BytesIO(uploaded['matches.csv']))
sample_del = pd.read_csv(io.BytesIO(uploaded['deliveries.csv']))
sample_match = sample_match.drop(columns=["umpire3"],axis=1)
sample_match.head(2)
sample_del.head(2)
###Output
_____no_output_____
###Markdown
--- **EXPLORATORY DATA ANALYSIS**---
###Code
print(sample_match.info())
print(sample_del.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 756 entries, 0 to 755
Data columns (total 17 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 756 non-null int64
1 season 756 non-null int64
2 city 749 non-null object
3 date 756 non-null object
4 team1 756 non-null object
5 team2 756 non-null object
6 toss_winner 756 non-null object
7 toss_decision 756 non-null object
8 result 756 non-null object
9 dl_applied 756 non-null int64
10 winner 752 non-null object
11 win_by_runs 756 non-null int64
12 win_by_wickets 756 non-null int64
13 player_of_match 752 non-null object
14 venue 756 non-null object
15 umpire1 754 non-null object
16 umpire2 754 non-null object
dtypes: int64(5), object(12)
memory usage: 100.5+ KB
None
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 179078 entries, 0 to 179077
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 match_id 179078 non-null int64
1 inning 179078 non-null int64
2 batting_team 179078 non-null object
3 bowling_team 179078 non-null object
4 over 179078 non-null int64
5 ball 179078 non-null int64
6 batsman 179078 non-null object
7 non_striker 179078 non-null object
8 bowler 179078 non-null object
9 is_super_over 179078 non-null int64
10 wide_runs 179078 non-null int64
11 bye_runs 179078 non-null int64
12 legbye_runs 179078 non-null int64
13 noball_runs 179078 non-null int64
14 penalty_runs 179078 non-null int64
15 batsman_runs 179078 non-null int64
16 extra_runs 179078 non-null int64
17 total_runs 179078 non-null int64
18 player_dismissed 8834 non-null object
19 dismissal_kind 8834 non-null object
20 fielder 6448 non-null object
dtypes: int64(13), object(8)
memory usage: 28.7+ MB
None
###Markdown
--- **SEASON ANALYSIS** ---
###Code
plt.figure(figsize = (18,6))
sns.countplot(x='venue', data=sample_match,palette="rocket", order = sample_match['venue'].value_counts().index)
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
sns.countplot('season',data=sample_match,palette="mako",order = sample_match['season'].value_counts().index)
plt.title("SEASONS WITH HEIGHEST NUMBER OF MATCHES")
plt.xlabel("SEASON")
plt.ylabel("FREQUENCY OF MATCHES")
plt.show()
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
**OBSERVATION FROM SEASON ANALYSIS*** Eden Gardens is the most frequently chosen stadium for matches, while the ACA-VDCA Stadium is the least chosen one.* The 2013 season had the highest number of matches, followed by 2012 and 2011, while 2009 had the fewest. --- **TEAM ANALYSIS**---
###Code
team_match = pd.melt(sample_match, id_vars=['id','season'], value_vars=['team1', 'team2'])
plt.figure(figsize = (18,6))
sns.countplot(x='value', data=team_match, palette="flare", order = team_match['value'].value_counts().index)
plt.title("TEAMS WITH HEIGHEST NUMBER OF MATCHES PLAYED")
plt.xlabel("TEAM")
plt.ylabel("FREQUENCY OF MATCHES")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
sns.countplot(x='winner',data=sample_match, palette='crest', order = sample_match['winner'].value_counts().index)
plt.title("TEAMS WITH HEIGHEST WINS")
plt.xticks(rotation=90)
plt.xlabel("TEAMS")
plt.ylabel("NUMBER OF WINS")
plt.show()
overall_win=sample_match.drop_duplicates(subset=['season'], keep='last')
plt.figure(figsize = (18,6))
sns.countplot(x='winner',data=overall_win, palette='magma', order = overall_win['winner'].value_counts().index)
plt.title("TEAMS WITH OVERALL WINS")
plt.xticks(rotation=90)
plt.xlabel("TEAMS")
plt.ylabel("NUMBER OF WINS")
plt.show()
###Output
_____no_output_____
###Markdown
**OBSERVATION FROM TEAM ANALYSIS*** Mumbai Indians have played the most matches, followed by Royal Challengers Bangalore, while teams like Rising Pune Supergiants and Kochi Tuskers Kerala have played the fewest.* Mumbai Indians have also won the most matches, followed by Chennai Super Kings. Rising Pune Supergiants and Kochi Tuskers Kerala have the fewest wins, although they are not bad relative to the number of matches they have played.* In terms of overall season wins, Mumbai Indians lead the board with four titles, followed by Chennai Super Kings with three. --- **TOSS ANALYSIS**---
###Code
sns.countplot(x=sample_match["toss_decision"], palette="icefire")
plt.figure(figsize=(18,6))
sns.countplot(x='toss_winner', hue='toss_decision', data=sample_match, palette="viridis")
plt.xlabel("TEAMS")
plt.ylabel("FREQUENCY OF OPTING BAT/BALL")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize=(18,6))
sns.countplot(x='season', hue='toss_decision', data=sample_match, palette="viridis")
plt.xlabel("SEASON")
plt.ylabel("FREQUENCY OF OPTING BAT/BALL")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
**OBSERVATION FROM TOSS ANALYSIS*** Overall, a toss winner is more likely to opt to field than to bat.* Each team seems to have its own toss strategy: Chennai Super Kings, Deccan Chargers and Pune Warriors have chosen to bat first more often than to field first, while all other teams have preferred fielding first.* From the 2014 season onwards there is a shift back towards preferring to field first, as in 2008 and 2011; in 2012, bat and field were chosen equally often. --- **MOST FREQUENTLY CHOSEN UMPIRES**---
###Code
sample_umpire = pd.melt(sample_match, id_vars=['id'], value_vars=['umpire1', 'umpire2'])
sample_umpire['value'].unique()
plt.figure(figsize=(18,6))
sns.countplot(x='value', data=sample_umpire, palette="icefire", order = sample_umpire['value'].value_counts().index)
plt.xlabel("UMPIRE")
plt.ylabel("FREQUENCY OF CHOOSING UMPIRE")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
--- **PLAYER OF MATCH**---
###Code
plt.figure(figsize = (18,6))
plt.ylim([0,30])
plt.title("PLAYER OF THE MATCH")
sns.barplot(x = sample_match["player_of_match"].value_counts()[:30].index, y = sample_match["player_of_match"].value_counts()[:30], palette="viridis")
plt.ylabel("NUMBER OF TIMES A PLAYER IS PLAYER OF THE MATCH")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
**OBSERVATION FROM PLAYER OF THE MATCH*** CH Gayle leads the board, having been player of the match slightly more often than AB de Villiers.* It is also interesting to see that A Nehra and RR Pant were player of the match in a few matches. --- **BATSMAN ANALYSIS**---
###Code
plt.figure(figsize = (18,6))
high_run = sample_del.groupby('batsman')['batsman_runs'].agg('sum').reset_index().sort_values(by='batsman_runs', ascending=False).reset_index(drop=True)
plt.title("BATSMAN WITH HEIGHEST NUMBER OF RUNS")
sns.barplot(x = high_run["batsman"].iloc[:20], y = high_run["batsman_runs"].iloc[:20], data= high_run, palette="cubehelix")
plt.xlabel("BATSMAN")
plt.ylabel("NUMBER OF RUNS")
plt.xticks(rotation=90)
plt.show()
least_run=high_run.tail(20)
plt.figure(figsize = (18,6))
plt.title("BATSMAN WITH LEAST NUMBER OF RUNS")
sns.barplot(x = least_run["batsman"], y = least_run["batsman_runs"], data= least_run, palette="autumn")
plt.ylabel("NUMBER OF RUNS")
plt.xlabel("BATSMAN")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
high_bound = sample_del.groupby('batsman')['batsman_runs'].agg(lambda x: (x==4).sum()).reset_index().sort_values(by='batsman_runs', ascending=False).reset_index(drop=True)
plt.title("BATSMAN WITH HEIGHEST NUMBER OF BOUNDARIES")
sns.barplot(x = high_bound["batsman"].iloc[:20], y = high_bound["batsman_runs"].iloc[:20], data= high_bound, palette="autumn")
plt.xlabel("BATSMAN")
plt.ylabel("NUMBER OF BOUNDARIES")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
high_six = sample_del.groupby('batsman')['batsman_runs'].agg(lambda x: (x==6).sum()).reset_index().sort_values(by='batsman_runs', ascending=False).reset_index(drop=True)
plt.title("BATSMAN WITH HEIGHEST NUMBER OF SIX")
sns.barplot(x = high_six["batsman"].iloc[:20], y = high_six["batsman_runs"].iloc[:20], data= high_six, palette="coolwarm")
plt.xlabel("BATSMAN")
plt.ylabel("NUMBER OF SIX")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
high_dot = sample_del.groupby('batsman')['batsman_runs'].agg(lambda x: (x==0).sum()).reset_index().sort_values(by='batsman_runs', ascending=False).reset_index(drop=True)
plt.title("BATSMAN WHO PLAYED HEIGHEST NUMBER OF DOT BALLS")
sns.barplot(x = high_dot["batsman"].iloc[:20], y = high_dot["batsman_runs"].iloc[:20], data= high_dot, palette="inferno")
plt.xlabel("BATSMAN")
plt.ylabel("NUMBER OF DOT BALLS")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
**OBSERVATION FROM BATSMAN ANALYSIS*** V Kohli, followed by SK Raina, has scored the most runs, while nearly 16 batsmen have not scored a single run for their team.* S Dhawan, SK Raina, G Gambhir and V Kohli top the leaderboard for the most boundaries.* CH Gayle holds a clear first place for hitting the most sixes; S Dhawan and RR Pant have the fewest sixes among the top 20.* V Kohli has faced the highest number of dot balls. --- **DISMISSAL TYPE ANALYSIS**---
###Code
plt.figure(figsize = (18,6))
sns.countplot(x=sample_del['dismissal_kind'], palette="Wistia", order = sample_del['dismissal_kind'].value_counts().index)
plt.xlabel("DISMISSAL KIND")
plt.ylabel("FREQUENCY OF DISMISSAL")
plt.title("MOST DISMISSAL TYPE")
plt.xticks(rotation='vertical')
plt.show()
###Output
_____no_output_____
###Markdown
**Caught is the most common dismissal type in IPL followed by bowled** --- **BOWLERS ANALYSIS**---
###Code
plt.figure(figsize = (18,6))
high_bowl = sample_del.groupby('bowler')['ball'].agg('count').reset_index().sort_values(by='ball', ascending=False).reset_index(drop=True)
plt.title("BOWLERS WHO BOWLED HEIGHEST NUMBER OF BALLS")
sns.barplot(x = high_bowl["bowler"].iloc[:20], y= high_bowl["ball"].iloc[:20], data= high_bowl, palette="copper")
plt.xlabel("BOWLERS")
plt.ylabel("NUMBER OF BALLS BOWLED")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
high_bowl_extra = sample_del.groupby('bowler')['extra_runs'].agg(lambda x: (x>0).sum()).reset_index().sort_values(by='extra_runs', ascending=False).reset_index(drop=True)
plt.title("BOWLERS WHO BOWLED HEIGHEST NUMBER OF EXTRA BALLS")
sns.barplot(x = high_bowl_extra["bowler"].iloc[:20], y= high_bowl_extra["extra_runs"].iloc[:20], data= high_bowl_extra, palette="gist_earth")
plt.xlabel("BOWLERS")
plt.ylabel("NUMBER OF EXTRA BALLS BOWLED")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize = (18,6))
high_bowl_dot = sample_del.groupby('bowler')['total_runs'].agg(lambda x: (x==0).sum()).reset_index().sort_values(by='total_runs', ascending=False).reset_index(drop=True)
plt.title("BOWLERS WHO BOWLED HEIGHEST NUMBER OF DOT BALLS")
sns.barplot(x = high_bowl_dot["bowler"].iloc[:20], y= high_bowl_dot["total_runs"].iloc[:20], data= high_bowl_dot, palette="gist_rainbow")
plt.xlabel("BOWLERS")
plt.ylabel("NUMBER OF DOT BALLS BOWLED")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____ |
notebooks/Update_ID_in_AV_to_reflect_divided_AO.ipynb | ###Markdown
Update IDs in the annotation volume according to the divided anatomical ontology. This notebook updates the IDs of divided nodes from their original IDs.- inputs - annotation_100.nrrd - dividedIDs.csv- output - annotation_100_divided.nrrd Set variables
###Code
dir_data = 'data'
fn_input_AV = 'annotation_100.nrrd'
fn_input_ID = 'dividedIDs.csv'
fn_output_AV = 'AVbase.nrrd'#'annotation_100_divided.nrrd'
import os
import nrrd
import numpy as np
import pandas as pd
import copy
###Output
_____no_output_____
###Markdown
Load data
###Code
AV, Header = nrrd.read(os.path.join(dir_data, fn_input_AV))
df_dividedIDs = pd.read_csv(os.path.join(dir_data, fn_input_ID))
###Output
_____no_output_____
###Markdown
Modify IDs for divided ROIs
###Code
AV_modified = copy.deepcopy(AV)
for idx, originalID in enumerate(df_dividedIDs.divided_ID):
AV_modified[AV_modified == originalID] = df_dividedIDs.iloc[idx,:].new_ID
###Output
_____no_output_____
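###Markdown
Note that the loop above matches IDs against the partially updated volume, so a `new_ID` that happened to coincide with a `divided_ID` processed later would be remapped a second time. A minimal sketch of a safer variant (an assumption about the intended behaviour, not part of the original pipeline) builds every mask from the untouched `AV` instead:
###Code
# Hypothetical safer remapping: masks come from the original AV, so earlier
# assignments can never be picked up again by later iterations.
AV_safe = copy.deepcopy(AV)
for originalID, newID in zip(df_dividedIDs.divided_ID, df_dividedIDs.new_ID):
    AV_safe[AV == originalID] = newID
###Output
_____no_output_____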
###Markdown
Save divided AV
###Code
nrrd.write(os.path.join(dir_data, fn_output_AV), AV_modified, Header)
###Output
_____no_output_____
###Markdown
Check data
###Code
df_dividedIDs.head() # This is the same as dividedIDs.csv
###Output
_____no_output_____ |
case-studies/near-duplicates-in-touche-22-task-02/touche-version-0-0-2.ipynb | ###Markdown
Touché 2022: Collection Version 0.0.2 for Task 2 We discussed the following improvements of version 0.0.2 over 0.0.1:- Remove duplicated Ids (they were in version 0.0.1 due to some pooling problem)- Remove passages that are too short (less than 10 terms) - concerns 5083 of the 1222231 passages- Remove passages that are too long (more than 1024 terms) - concerns 584 of the 1222231 passages- Remove near-duplicate passages - concerns 348512 of the 1222231 passages- Combine everything into a single file
###Code
def all_lines(file_name):
import json
from tqdm import tqdm
with open('/mnt/ceph/storage/data-in-progress/data-research/arguana/touche-shared-tasks/data/2022-task2/data-cleaning/' + file_name) as f:
for i in tqdm(f):
try:
yield json.loads(i)
except:
pass
too_short = []
too_long = []
docs = {}
for year in ['2020', '2021']:
for i in all_lines(year + '-task2-passages-of-top100-docs.jsonl'):
docs[i['id']] = i
length = len(i['fullyCanonicalizedContent'].split())
if length > 1024:
too_long += [i['id']]
if length < 10:
too_short += [i['id']]
too_short = set(too_short)
too_long = set(too_long)
len(docs)
len(too_short)
len(too_long)
duplicates = []
for year in ['2020', '2021']:
for i in all_lines('s3-scores-' + year + '-task2-passages-of-top100-docs.jsonl'):
firstId = i['idPair']['left']
secondId = i['idPair']['right']
if firstId == secondId:
continue
if firstId > secondId:
raise ValueError('')
# this near-duplicate threshold is from previous studies
if i['s3Score'] > 0.82:
duplicates += [secondId]
duplicates = set(duplicates)
print(len(duplicates))
###Output
348512
###Markdown
Some example duplicates
###Code
#clueweb12-0915wb-42-00127___124 , clueweb12-0915wb-93-17218___124 --> 1.0
print(docs['clueweb12-0915wb-42-00127___124']['content'])
print('\n\n')
print(docs['clueweb12-0915wb-93-17218___124']['content'])
#clueweb12-0803wb-25-35631___2 , clueweb12-0808wb-93-02306___2 --> 0.9166666666666666
print(docs['clueweb12-0803wb-25-35631___2']['content'])
print('\n\n')
print(docs['clueweb12-0808wb-93-02306___2']['content'])
#clueweb12-0803wb-67-20451___7 , clueweb12-0805wb-36-03834___7 --> 0.847926267281106
print(docs['clueweb12-0803wb-67-20451___7']['content'])
print('\n\n')
print(docs['clueweb12-0805wb-36-03834___7']['content'])
###Output
Rather than mailing a card, send friends and family a virtual card instead. The festive season does not have to be a time of over consumption and waste. By planning ahead you can minimise your carbon footprint and encourage others around you to do the same. Photo: Marju Randmer Write the News! Know something we don't? Help set the local agenda by writing your own news stories. Write news About the author Writer: Nicole Articles Written: 49 Joined: 9 August 2011 Related Articles Save the environment and money with our tips for a green Christmas Has your food waste expanded over Christmas? Would you like to know how you can minimise your carbon... Add your Comment Your name: * Email address: * Postcode: Comment: *(max 1200 characters) Verify: (type word into box on right)
Rather than mailing a card, send friends and family a virtual card instead. The festive season does not have to be a time of over consumption and waste. By planning ahead you can minimise your carbon footprint and encourage others around you to do the same. Photo: Marju Randmer Write the News! Know something we don't? Help set the local agenda by writing your own news stories. Write news About the author Writer: Nicole Articles Written: 49 Joined: 9 August 2011 Related Articles Save the environment and money with our tips for a green Christmas Has your food waste expanded over Christmas? Would you like to know how you can minimise your carbon... Helping Bring Back The Bush at Plough & Harrow Reserve On the last Sunday of each month since July this year a group of interested local residents have... Add your Comment Your name: * Email address: * Postcode: Comment: *(max 1200 characters) Verify: (type word into box on right)
###Markdown
Verify too short or too long
###Code
docs[[i for i in too_short][0]]['content']
docs[[i for i in too_short][10]]['content']
docs[[i for i in too_long][0]]['content']
docs[[i for i in too_long][10]]['content']
###Output
_____no_output_____
###Markdown
Create Version 0.0.2 from 0.0.1 using the above information
###Code
ids_already_covered = set()
with open('/mnt/ceph/storage/data-in-progress/data-research/arguana/touche-shared-tasks/data/2022-task2/touche-task2-passages-version-002.jsonl', 'w') as out_file:
import json
for year in ['2020', '2021']:
for i in all_lines('../../' + year + '-task2-passages-of-top100-docs.jsonl'):
doc_id = i['id']
if doc_id in ids_already_covered or doc_id in duplicates or doc_id in too_short or doc_id in too_long:
continue
ids_already_covered.add(doc_id)
out_file.write(json.dumps(i) + '\n')
print(len(ids_already_covered))
!gzip -c /mnt/ceph/storage/data-in-progress/data-research/arguana/touche-shared-tasks/data/2022-task2/touche-task2-passages-version-002.jsonl > /mnt/ceph/storage/data-in-progress/data-research/arguana/touche-shared-tasks/data/2022-task2/touche-task2-passages-version-002.jsonl.gz
###Output
_____no_output_____ |
Chapter 4 - Evaluation and Optimization.ipynb | ###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout method. We divide the data into randomized training and test sets:
###Code
N = features.shape[0]
N_train = floor(0.7 * N)
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print features_train.shape
print features_test.shape
print target_train.shape
print target_test.shape
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print nonzero(folds == idx)[0]
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[16 23 33 34 35 39 44 48 99]
[ 3 10 29 37 45 52 54 62 68 83 98]
[ 4 12 20 22 28 50 51 53 55 56 59 60 64 77 84 86 92 97]
[ 6 7 9 11 18 27 41 49 57 61 70 73 80]
[42 67 96]
[30 63 66 78 90 91]
[ 2 5 26 40 46 58 72 74 87 95]
[ 0 8 14 24 32 36 38 43 85 88 89]
[ 1 13 17 19 21 47 71 79 81 82 93 94]
[15 25 31 65 69 75 76]
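###Markdown
For comparison, a minimal sketch of the same procedure using scikit-learn's `KFold` splitter (available from `sklearn.model_selection` in scikit-learn 0.18 and later), which guarantees equally sized folds, unlike the random fold assignment above:
###Code
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, shuffle=True)
for idx_train, idx_test in kf.split(features):
    features_train, features_test = features[idx_train], features[idx_test]
    target_train, target_test = target[idx_train], target[idx_test]
    # train, predict and evaluate here, exactly as in the manual loop above
###Output
_____no_output_____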
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
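The AUC is $\int_0^1 \mathrm{TPR}\,d(\mathrm{FPR})$; below it is approximated with the trapezoidal rule, and the minus sign compensates for the fact that the FPR decreases as the threshold increases.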
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
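###Markdown
A couple of summary numbers can be read straight off the confusion matrix computed above (rows are true labels, columns are predictions), for example the overall accuracy and the per-class recall:
###Code
# assumes cm from the cell above
overall_accuracy = cm.trace() / float(cm.sum())
per_class_recall = cm.diagonal() / cm.sum(axis=1).astype(float)
print(overall_accuracy)
print(per_class_recall)
###Output
_____no_output_____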
###Markdown
The root-mean-square error
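It is defined as $\mathrm{RMSE}=\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}$; the function below computes it directly from this definition.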
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
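It is defined as $R^2 = 1 - \sum_i (y_i-\hat{y}_i)^2 \big/ \sum_i (y_i-\bar{y})^2$, i.e. one minus the ratio of the residual sum of squares to the total sum of squares.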
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM model. Importing modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading the data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = map(lambda x: 1 if x=="male" else 0, X['Sex'])
X['Embarked-Q'] = map(lambda x: 1 if x=="Q" else 0, X['Embarked'])
X['Embarked-C'] = map(lambda x: 1 if x=="C" else 0, X['Embarked'])
X['Embarked-S'] = map(lambda x: 1 if x=="S" else 0, X['Embarked'])
X = X.drop(["Embarked", "Sex"], axis=1)
X = X.fillna(-1)
###Output
_____no_output_____
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.linspace(0.01, 10, 11),
np.linspace(0.01, 10, 11))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
X_train = X.ix[folds != ii,:]
y_train = y.ix[folds != ii]
X_test = X.ix[folds == ii,:]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print "Maximum = %.3f" % (np.max(AUC_all))
print "Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax])
###Output
Maximum = 0.674
Tuning Parameters: (gamma = 0.01, C = 4.01)
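###Markdown
For reference, a minimal sketch of the same search using scikit-learn's built-in `GridSearchCV` (from `sklearn.model_selection` in scikit-learn 0.18 and later); the folds and scoring are handled internally and the grid mirrors the values above:
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {'gamma': np.linspace(0.01, 10, 11), 'C': np.linspace(0.01, 10, 11)}
gs = GridSearchCV(SVC(), param_grid, scoring='roc_auc', cv=10)
gs.fit(X, y)
print(gs.best_score_)
print(gs.best_params_)
###Output
_____no_output_____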
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout method. We divide the data into randomized training and test sets:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[5 8 5 7 2 9 3 8 5 1 0 8 3 0 3 5 8 3 2 0 3 8 0 8 2 4 8 3 6 3 5 4 6 7 6 1 0
9 6 1 5 6 6 8 6 2 1 6 6 3 2 0 5 4 0 8 8 7 4 5 8 2 4 1 8 6 6 2 5 2 2 0 2 7
2 8 1 8 9 2 2 3 8 0 7 3 7 0 4 1 4 6 8 1 3 2 7 7 2 3]
Positions of 0 in fold array: [10 13 19 22 36 51 54 71 83 87]
Positions of 1 in fold array: [ 9 35 39 46 63 76 89 93]
Positions of 2 in fold array: [ 4 18 24 45 50 61 67 69 70 72 74 79 80 95 98]
Positions of 3 in fold array: [ 6 12 14 17 20 27 29 49 81 85 94 99]
Positions of 4 in fold array: [25 31 53 58 62 88 90]
Positions of 5 in fold array: [ 0 2 8 15 30 40 52 59 68]
Positions of 6 in fold array: [28 32 34 38 41 42 44 47 48 65 66 91]
Positions of 7 in fold array: [ 3 33 57 73 84 86 96 97]
Positions of 8 in fold array: [ 1 7 11 16 21 23 26 43 55 56 60 64 75 77 82 92]
Positions of 9 in fold array: [ 5 37 78]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM model. Importing modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading the data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked"], axis=1)
X = X.fillna(-1)
print(X[:5])
###Output
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 1 22.0 1 0 0 0 1
1 1 0 38.0 1 0 0 1 0
2 3 0 26.0 0 0 0 0 1
3 1 0 35.0 1 0 0 0 1
4 3 1 35.0 0 0 0 0 1
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
        # break your data into training and testing subsets
        # (.iloc with boolean masks replaces the deprecated .ix indexer)
        X_train = X.iloc[folds != ii,:]
        y_train = y.iloc[folds != ii]
        X_test = X.iloc[folds == ii,:]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Train subset taking all rows except the ones with index == to the positions of ix in the folds array
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[9 3 1 0 6 3 9 6 7 4 5 1 3 4 0 0 9 9 4 3 9 1 9 8 3 8 2 2 3 1 7 6 8 8 8 1 2
1 5 0 1 2 4 9 3 2 9 9 1 7 3 0 9 3 2 6 8 9 1 9 4 9 8 3 3 0 3 1 8 7 0 8 2 7
9 8 4 7 5 1 1 9 0 8 0 0 1 0 4 9 7 5 1 1 4 0 2 7 8 5 2 0 7 0 2 6 6 1 9 5 8
6 5 4 3 9 5 4 0 5 3 6 5 0 6 4 2 9 3 5 3 3 7 3 5 3 7 7 8 3 5 3 1 6 8 4 6 6
4 7 8 2 1 8 6 4 6 4 6 3 3 4 9 5 1 1 3 6 2 3 7 2 5 8 4 7 0 3 3 1 6 9 1 7 6
6 1 2 9 3 8 4 9 1 0 7 7 9 4 0 6 3 7 8 2 0 4 1 9 5 7 7 4 2 0 1 2 8 4 4 0 8
8 1 0 3 3 3 5 9 3 7 5 9 8 7 0 4 1 3 0 7 9 5 2 1 9 9 6 2 5 6 8 1 8 5 6 3 5
0 4 0 2 6 8 0 0 8 5 7 5 1 6 5 8 1 4 0 5 0 0 1 2 7 0 5 5 5 8 3 7 1 0 1 3 5
7 6 4 0 3 4 0 9 0 5 5 9 5 9 7 5 3 8 9 1 5 4 2 2 7 6 9 0 8 0 4 7 8 5 9 4 6
9 9 9 3 7 7 2 5 6 0 7 6 6 9 9 1 1 6 2 3 9 0 6 5 6 3 7 1 2 9 3 6 7 0 9 0 4
6 0 8 0 6 5 5 4 3 3 4 9 6 1 3 1 0 7 6 6 4 7 6 7 2 7 7 7 3 4 1 0 0 8 4 0 0
8 3 3 2 6 4 7 2 8 5 5 8 2 2 0 7 0 4 3 9 3 3 3 7 0 2 1 2 3 7 8 3 1 1 1 8 6
1 2 2 8 9 7 7 7 6 3 7 8 6 1 9 2 6 3 2 9 6 6 6 9 6 3 8 1 7 3 7 7 2 0 4 4 1
7 7 9 9 8 8 2 5 8 2 8 8 3 8 7 1 9 5 3 2 8 6 7 4 8 2 4 9 2 2 2 0 4 3 0 0 7
3 3 1 5 0 0 4 9 7 7 1 1 2 1 1 6 4 8 3 9 2 0 8 1 9 8 1 0 1 1 0 3 7 7 0 1 0
6 6 2 4 2 8 9 3 6 3 9 5 0 6 6 4 0 7 6 2 4 2 2 4 1 6 8 8 2 1 0 3 3 4 5 1 0
7 9 8 9 9 9 4 4 5 3 2 8 0 4 8 8 2 8 5 1 8 9 8 9 7 3 9 9 1 7 6 0 1 1 6 9 3
7 7 2 5 7 3 7 0 3 3 3 2 1 8 2 9 4 3 5 8 1 0 9 4 8 8 2 2 4 2 1 9 0 4 0 8 1
2 9 1 9 8 5 0 2 2 6 8 4 4 3 9 8 1 4 8 0 8 8 0 6 7 1 8 1 5 3 2 8 3 6 5 8 0
7 2 8 7 8 6 2 6 2 0 9 3 9 7 8 4 9 7 1 1 5 5 3 8 5 3 2 2 9 4 3 6 7 5 6 4 7
8 5 9 7 3 5 7 3 8 7 8 8 0 6 7 3 5 5 5 5 2 2 6 3 1 4 6 8 6 7 8 1 2 8 5 5 6
2 0 2 6 3 8 5 0 5 7 9 8 7 1 4 8 6 4 6 1 2 5 4 5 3 6 1 6 7 3 3 0 9 7 7 7 2
6 0 8 8 4 5 2 0 4 8 3 5 5 7 3 2 9 1 6 1 5 1 6 9 2 5 2 1 3 5 6 7 4 8 2 8 8
7 8 3 7 9 7 4 9 4 5 9 4 4 8 0 2 9 9 1 1 7 8 4 8 0 0 1 8 1 7 6 6 5 3 3 0 3
9 1 0]
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 1 22.0 1 0 0 0 1
1 1 0 38.0 1 0 0 1 0
2 3 0 26.0 0 0 0 0 1
3 1 0 35.0 1 0 0 0 1
4 3 1 35.0 0 0 0 0 1
5 3 1 -1.0 0 0 1 0 0
6 1 1 54.0 0 0 0 0 1
7 3 1 2.0 3 1 0 0 1
8 3 0 27.0 0 2 0 0 1
9 2 0 14.0 1 0 0 1 0
10 3 0 4.0 1 1 0 0 1
11 1 0 58.0 0 0 0 0 1
12 3 1 20.0 0 0 0 0 1
13 3 1 39.0 1 5 0 0 1
14 3 0 14.0 0 0 0 0 1
15 2 0 55.0 0 0 0 0 1
16 3 1 2.0 4 1 1 0 0
17 2 1 -1.0 0 0 0 0 1
18 3 0 31.0 1 0 0 0 1
19 3 0 -1.0 0 0 0 1 0
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
26 3 1 -1.0 0 0 0 1 0
27 1 1 19.0 3 2 0 0 1
36 3 1 -1.0 0 0 0 1 0
41 2 0 27.0 1 0 0 0 1
45 3 1 -1.0 0 0 0 0 1
54 1 1 65.0 0 1 0 1 0
72 2 1 21.0 0 0 0 0 1
96 1 1 71.0 0 0 0 1 0
100 3 0 28.0 0 0 0 0 1
104 3 1 37.0 2 0 0 0 1
126 3 1 -1.0 0 0 1 0 0
151 1 0 22.0 1 0 0 0 1
168 1 1 -1.0 0 0 0 0 1
171 3 1 4.0 4 1 1 0 0
187 1 1 45.0 0 0 0 0 1
204 3 1 18.0 0 0 0 0 1
213 2 1 30.0 0 0 0 0 1
216 3 0 27.0 0 0 0 0 1
244 3 1 30.0 0 0 0 1 0
249 2 1 54.0 1 0 0 0 1
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
# savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
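###Markdown
A minimal sketch of how the commented-out train / predict / evaluate placeholders above might be filled in. LogisticRegression is only an assumed stand-in model (the chapter leaves the model abstract), and features_train, target_train, features_test, target_test come from the holdout split in the cell above.
###Code
import numpy as np
from sklearn.linear_model import LogisticRegression
# Assumed stand-in for the abstract train()/predict()/evaluate_acc() calls
model = LogisticRegression()
model.fit(features_train, target_train)
preds_test = model.predict(features_test)
# Holdout accuracy: fraction of correct predictions on the 30% test split
accuracy = np.mean(preds_test == target_test)
print(accuracy)
###Output
_____no_output_____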
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[0 5 5 9 0 0 8 1 0 2 6 2 4 8 6 0 9 7 8 2 8 6 1 7 6 6 9 3 6 0 0 8 9 2 5 8 8
4 6 7 5 9 2 9 6 1 0 3 1 4 8 2 6 7 3 3 3 6 8 6 7 1 5 4 2 5 1 4 6 0 6 8 9 1
5 9 2 9 6 2 7 2 0 1 3 1 5 4 8 3 9 5 9 4 6 1 1 7 3 2]
Positions of 0 in fold array: [ 0 4 5 8 15 29 30 46 69 82]
Positions of 1 in fold array: [ 7 22 45 48 61 66 73 83 85 95 96]
Positions of 2 in fold array: [ 9 11 19 33 42 51 64 76 79 81 99]
Positions of 3 in fold array: [27 47 54 55 56 84 89 98]
Positions of 4 in fold array: [12 37 49 63 67 87 93]
Positions of 5 in fold array: [ 1 2 34 40 62 65 74 86 91]
Positions of 6 in fold array: [10 14 21 24 25 28 38 44 52 57 59 68 70 78 94]
Positions of 7 in fold array: [17 23 39 53 60 80 97]
Positions of 8 in fold array: [ 6 13 18 20 31 35 36 50 58 71 88]
Positions of 9 in fold array: [ 3 16 26 32 41 43 72 75 77 90 92]
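###Markdown
A minimal sketch of how the commented-out per-fold train / predict steps above might be filled in, again with LogisticRegression as an assumed stand-in model; features, target, folds, N and K come from the cell above. Each fold's predictions are written into the out-of-fold positions of preds_kfold, so every row ends up with exactly one out-of-sample prediction.
###Code
import numpy as np
from sklearn.linear_model import LogisticRegression
# Assumed stand-in model; reuses features, target, folds, N, K from above
preds_kfold = np.empty(N)
for idx in np.arange(K):
    features_train = features[folds != idx,:]
    target_train = target[folds != idx]
    features_test = features[folds == idx,:]
    model = LogisticRegression()
    model.fit(features_train, target_train)
    # Store predictions in the positions belonging to the held-out fold
    preds_kfold[folds == idx] = model.predict(features_test)
# Cross-validated accuracy over all N out-of-fold predictions
accuracy = np.mean(preds_kfold == target)
print(accuracy)
###Output
_____no_output_____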
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
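###Markdown
The hand-rolled roc_curve above can be sanity-checked against scikit-learn's roc_curve; a minimal sketch, reusing target, preds, fpr and tpr from the cells above. The two curves will not match point for point, because sklearn only returns thresholds where the curve actually changes, but they should trace the same shape.
###Code
from sklearn.metrics import roc_curve as sk_roc_curve
import matplotlib.pyplot as plt
# sklearn returns (fpr, tpr, thresholds) only at points where the curve changes
fpr_sk, tpr_sk, thr_sk = sk_roc_curve(target, preds)
plt.plot(fpr, tpr, label="hand-rolled")
plt.plot(fpr_sk, tpr_sk, linestyle="--", label="sklearn")
plt.legend()
###Output
_____no_output_____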
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
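###Markdown
As a cross-check, the trapezoid-rule auc above should closely agree with scikit-learn's roc_auc_score (small differences come from the 100-point threshold grid). A minimal sketch, reusing target and preds from above.
###Code
from sklearn.metrics import roc_auc_score
# Both values should be close to 0.5 for random predictions
print(auc(target, preds, pos_class=True))
print(roc_auc_score(target, preds))
###Output
_____no_output_____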
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
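###Markdown
Beyond eyeballing the confusion matrix, per-class recall can be read straight off its rows; a minimal sketch, assuming cm from the cell above with rows indexed by the true digit label.
###Code
import numpy as np
# Row i collects the true-label-i instances, so diagonal / row sum = recall
per_class_recall = cm.diagonal() / cm.sum(axis=1)
for label, recall in enumerate(per_class_recall):
    print("digit %d: recall = %.3f" % (label, recall))
###Output
_____no_output_____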
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
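###Markdown
The loop-based rmse above is easy to follow; the same quantity can be computed in a single vectorized numpy expression. A minimal sketch of an equivalent implementation.
###Code
import numpy as np
from numpy.random import rand
def rmse_vec(true_values, predicted_values):
    # Vectorized equivalent of the loop-based rmse above
    t = np.asarray(true_values, dtype=float)
    p = np.asarray(predicted_values, dtype=float)
    return np.sqrt(np.mean((t - p) ** 2))
rmse_vec(rand(10), rand(10))
###Output
_____no_output_____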
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
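###Markdown
Similarly, r2 can be vectorized and cross-checked against scikit-learn's r2_score; a minimal sketch using the same style of inputs as above.
###Code
import numpy as np
from numpy import arange
from numpy.random import rand
from sklearn.metrics import r2_score
def r2_vec(true_values, predicted_values):
    # Vectorized equivalent of the loop-based r2 above
    t = np.asarray(true_values, dtype=float)
    p = np.asarray(predicted_values, dtype=float)
    ss_res = np.sum((t - p) ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
y_true = arange(10) + rand()
y_pred = arange(10) + rand(10)
print(r2_vec(y_true, y_pred))
print(r2_score(y_true, y_pred))  # should agree with the hand-rolled value
###Output
_____no_output_____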
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked", "Sex"], axis=1)
X = X.fillna(-1)
###Output
_____no_output_____
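###Markdown
The per-column lambdas above are a workable "poor man's" one-hot encoding; pandas.get_dummies is a more idiomatic alternative that yields an equivalent design matrix (column names and order will differ slightly). A minimal sketch, assuming the same data/titanic.csv file; separate names (d2, X2, y2) are used so the X and y fed to the grid search below are left untouched.
###Code
import pandas as pd
d2 = pd.read_csv("data/titanic.csv")
y2 = d2["Survived"]
X2 = d2.drop(["Survived", "PassengerId", "Cabin", "Ticket", "Name", "Fare"], axis=1)
# One-hot encode Sex and Embarked in a single call instead of per-column lambdas
X2 = pd.get_dummies(X2, columns=["Sex", "Embarked"])
X2 = X2.fillna(-1)
X2.head()
###Output
_____no_output_____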
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Training subset: all rows whose fold label is not equal to ix (the held-out fold)
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[4 3 4 5 3 9 4 0 8 4 6 0 3 7 5 3 2 0 5 3 7 9 2 0 8 4 8 2 0 6 8 9 8 4 1 2 6
3 8 3 1 0 3 3 9 7 6 8 0 6 5 0 3 1 2 2 9 6 1 0 2 3 5 6 5 7 1 1 7 0 4 0 6 8
7 9 6 4 7 4 0 1 2 9 5 8 7 5 2 5 6 1 5 3 6 0 0 8 2 0 8 8 6 2 1 3 2 0 2 1 5
1 6 9 9 0 0 5 9 1 9 5 2 0 2 9 5 8 1 5 2 3 6 4 0 9 9 6 6 4 5 5 4 9 5 4 3 2
7 2 6 7 9 1 7 7 1 8 8 3 2 5 3 4 3 0 5 8 6 5 9 1 5 3 1 6 1 7 0 1 7 7 9 3 7
8 9 6 1 8 8 3 1 9 0 9 1 8 7 8 9 4 4 3 4 5 6 2 4 8 7 5 6 2 7 8 5 4 8 2 1 3
1 6 7 5 1 2 0 0 2 6 3 3 5 0 5 5 5 5 1 5 4 2 1 9 3 6 1 2 0 4 4 5 4 2 6 5 4
1 4 6 3 2 5 8 3 9 0 6 6 3 3 0 9 6 2 3 1 4 1 9 1 6 9 2 2 0 5 4 8 8 4 7 4 5
1 0 7 4 7 0 8 0 4 7 2 6 1 0 3 4 2 3 0 3 6 9 8 3 6 5 2 7 3 6 4 4 2 3 0 5 9
7 0 8 6 2 5 9 3 3 8 5 2 7 4 0 6 6 0 1 8 7 8 9 1 9 1 2 8 5 4 6 7 9 9 0 8 6
8 9 6 0 8 1 7 8 2 5 0 3 1 7 4 7 6 4 7 2 2 3 5 0 0 5 2 1 4 5 1 5 3 7 2 3 4
9 0 1 0 0 0 2 1 8 1 9 2 5 8 5 8 1 9 7 2 2 5 2 4 6 3 0 1 0 1 0 3 9 1 0 4 4
2 7 9 8 2 4 9 4 3 6 7 1 3 5 8 1 8 3 2 4 8 6 0 5 4 4 2 8 6 7 6 3 5 3 9 3 2
7 0 6 2 3 3 1 2 9 8 9 8 5 3 6 9 9 3 3 2 5 1 3 9 5 7 5 0 0 8 6 4 4 4 4 8 9
4 6 3 9 5 5 8 8 2 4 2 7 4 0 0 4 1 5 9 0 7 9 6 0 2 4 8 9 8 8 1 7 4 7 6 2 5
9 7 7 4 7 9 7 4 3 3 6 0 9 6 7 1 9 7 2 1 7 7 0 7 3 5 4 4 0 8 0 5 1 4 0 3 7
5 7 0 7 2 3 5 2 6 5 4 7 6 6 8 8 0 1 2 9 0 3 9 3 1 3 7 1 3 7 5 9 4 0 5 7 1
8 4 8 9 5 0 2 0 9 9 4 8 2 2 6 7 4 4 2 6 7 1 7 6 9 7 7 1 9 5 7 1 8 4 4 9 9
2 9 2 7 3 9 9 2 2 2 1 0 8 4 0 4 5 1 4 2 2 6 6 3 2 0 0 1 9 6 3 9 8 2 6 7 5
5 8 8 8 2 9 8 9 7 7 7 4 5 6 4 0 6 7 5 5 9 0 1 4 0 0 7 5 1 5 9 0 6 2 4 9 4
9 7 7 4 1 0 1 5 7 3 7 6 2 3 6 5 1 1 6 5 7 4 2 5 0 6 0 1 5 9 3 7 7 3 4 2 4
1 7 0 2 8 3 0 9 4 9 9 4 4 1 9 6 8 0 3 5 5 1 8 6 5 0 9 3 5 0 7 7 1 1 0 5 7
3 8 2 7 2 3 1 3 4 5 4 9 5 1 0 2 7 3 3 7 4 7 9 9 9 7 0 8 9 3 3 5 7 0 3 7 0
9 6 0 5 4 2 8 4 3 8 2 4 6 2 2 2 6 3 1 3 6 5 1 8 2 4 9 4 6 0 4 2 7 1 5 5 0
0 5 0]
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 22.0 1 0 0 0 1
1 1 38.0 1 0 0 1 0
2 3 26.0 0 0 0 0 1
3 1 35.0 1 0 0 0 1
4 3 35.0 0 0 0 0 1
5 3 -1.0 0 0 1 0 0
6 1 54.0 0 0 0 0 1
7 3 2.0 3 1 0 0 1
8 3 27.0 0 2 0 0 1
9 2 14.0 1 0 0 1 0
10 3 4.0 1 1 0 0 1
11 1 58.0 0 0 0 0 1
12 3 20.0 0 0 0 0 1
13 3 39.0 1 5 0 0 1
14 3 14.0 0 0 0 0 1
15 2 55.0 0 0 0 0 1
17 2 -1.0 0 0 0 0 1
18 3 31.0 1 0 0 0 1
19 3 -1.0 0 0 0 1 0
20 2 35.0 0 0 0 0 1
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
16 3 2.0 4 1 1 0 0
22 3 15.0 0 0 1 0 0
27 1 19.0 3 2 0 0 1
35 1 42.0 1 0 0 0 1
54 1 65.0 0 1 0 1 0
55 1 -1.0 0 0 0 0 1
60 3 22.0 0 0 0 1 0
82 3 -1.0 0 0 1 0 0
88 1 23.0 3 2 0 0 1
98 2 34.0 0 1 0 0 1
103 3 33.0 0 0 0 0 1
106 3 21.0 0 0 0 0 1
108 3 38.0 0 0 0 0 1
122 2 32.5 1 0 0 1 0
124 1 54.0 0 1 0 0 1
130 3 33.0 0 0 0 1 0
147 3 9.0 2 2 0 0 1
149 2 42.0 0 0 0 0 1
160 3 44.0 0 1 0 0 1
207 3 26.0 0 0 0 1 0
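###Markdown
The hand-rolled loop above makes the cross-validation bookkeeping explicit; scikit-learn's GridSearchCV wraps the same grid-plus-CV idea. A minimal sketch over the same (gamma, C) grid, reusing X and y from above; the scores will not match the loop exactly, because GridSearchCV draws its own folds and scores the SVM decision function rather than hard 0/1 predictions.
###Code
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
param_grid = {"gamma": np.logspace(0.01, 0.1, 11),
              "C": np.linspace(1, 5, 10)}
# 10-fold CV, scored by ROC AUC, over the same grid as the manual search
search = GridSearchCV(SVC(), param_grid, scoring="roc_auc", cv=10)
search.fit(X, y)
print(search.best_score_)
print(search.best_params_)
###Output
_____no_output_____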
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
target
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[0 9 0 3 2 6 4 0 2 6 2 5 8 7 0 1 8 8 4 0 4 2 0 6 2 1 4 5 0 4 9 0 0 8 3 9 7
4 8 6 5 9 7 2 0 4 7 1 4 4 8 8 9 8 9 7 3 6 1 4 4 7 7 6 1 5 5 3 7 6 8 8 1 8
0 1 6 3 0 2 5 7 5 2 9 0 9 8 4 0 8 6 0 3 6 9 5 6 7 6]
Positions of 0 in fold array: [ 0 2 7 14 19 22 28 31 32 44 74 78 85 89 92]
Positions of 1 in fold array: [15 25 47 58 64 72 75]
Positions of 2 in fold array: [ 4 8 10 21 24 43 79 83]
Positions of 3 in fold array: [ 3 34 56 67 77 93]
Positions of 4 in fold array: [ 6 18 20 26 29 37 45 48 49 59 60 88]
Positions of 5 in fold array: [11 27 40 65 66 80 82 96]
Positions of 6 in fold array: [ 5 9 23 39 57 63 69 76 91 94 97 99]
Positions of 7 in fold array: [13 36 42 46 55 61 62 68 81 98]
Positions of 8 in fold array: [12 16 17 33 38 50 51 53 70 71 73 87 90]
Positions of 9 in fold array: [ 1 30 35 41 52 54 84 86 95]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
#savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked"], axis=1)
X = X.fillna(-1)
X.head(5)
###Output
_____no_output_____
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
# grid of (gamma, C) values to try
#gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
# np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Training subset: all rows whose fold label is not equal to ix (the held-out fold)
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
final_model = SVC(gamma=1.02, C=5.0)
X_train = X[:int(0.8*len(X))]
Y_train = y[:int(0.8*len(y))]
X_test = X[int(0.8*len(X)):]
Y_test = y[int(0.8*len(y)):]
final_model.fit(X_train, Y_train)
y_pred = final_model.predict(X_test)
roc_auc_score(Y_test, y_pred)
###Output
_____no_output_____
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
# use 5 random feature columns
# and use random numbers as the (boolean) target
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
# assign each data row a random FOLD label, so the data is not split
# contiguously but at random, creating 10 "folds", i.e. ten clusters
# with randint the folds will not be homogeneous (10 folds of 10 each)
# but rather 10 folds with e.g. 8, 13, 10, 6 elements each, etc.
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
# loop over all folds, putting (k-1) of them into training and holding out the k-th for testing
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[6 6 4 1 2 8 1 0 9 5 1 0 9 4 1 5 7 8 5 4 0 4 3 4 7 2 2 1 6 6 0 8 6 7 2 2 1
0 9 3 6 3 4 8 0 4 1 0 5 4 2 4 7 1 4 1 6 8 0 8 3 4 1 6 8 0 5 4 3 9 0 9 3 3
2 1 7 3 8 0 0 5 7 2 4 9 6 5 2 2 4 1 0 0 4 1 0 1 8 7]
Positions of 0 in fold array: [ 7 11 20 30 37 44 47 58 65 70 79 80 92 93 96]
Positions of 1 in fold array: [ 3 6 10 14 27 36 46 53 55 62 75 91 95 97]
Positions of 2 in fold array: [ 4 25 26 34 35 50 74 83 88 89]
Positions of 3 in fold array: [22 39 41 60 68 72 73 77]
Positions of 4 in fold array: [ 2 13 19 21 23 42 45 49 51 54 61 67 84 90 94]
Positions of 5 in fold array: [ 9 15 18 48 66 81 87]
Positions of 6 in fold array: [ 0 1 28 29 32 40 56 63 86]
Positions of 7 in fold array: [16 24 33 52 76 82 99]
Positions of 8 in fold array: [ 5 17 31 43 57 59 64 78 98]
Positions of 9 in fold array: [ 8 12 38 69 71 85]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
# use scikit-learn's tools to compute the AUC
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Feature selection and cleaning, done on the fly with lambda functions
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
#X = X.drop(["Embarked", "Sex"], axis=1)
X = X.drop(["Embarked"], axis=1)
X = X.fillna(-1)
###Output
_____no_output_____
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
# the 'ravel' call flattens the matrix into a 1-D array
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Training subset: all rows whose fold label is not equal to ix (the held-out fold)
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[6 8 7 9 7 7 0 9 1 2 4 6 3 5 4 6 7 6 2 9 0 4 1 3 6 2 8 1 7 1 2 2 1 3 4 0 8
2 5 3 4 1 5 7 4 8 0 2 0 3 6 0 0 5 2 2 8 3 6 9 6 4 7 2 3 6 8 8 3 4 2 8 5 3
6 5 5 2 3 5 3 4 3 0 5 4 9 7 2 5 4 6 3 9 4 2 1 4 3 9 3 7 1 2 2 6 4 2 1 1 7
6 0 9 6 6 7 3 8 1 2 7 6 0 1 1 9 6 0 8 8 3 2 5 7 2 5 2 6 4 4 8 8 9 1 4 1 3
5 4 2 7 4 5 9 8 7 6 5 6 7 2 2 7 2 9 4 3 3 1 5 3 7 9 8 6 8 5 3 3 4 0 3 0 5
6 2 0 2 7 3 2 4 1 5 5 8 2 7 2 6 3 3 7 9 7 3 1 2 9 4 1 2 2 0 2 5 6 6 1 6 1
8 3 3 0 2 0 4 3 2 2 3 2 2 2 0 4 5 1 0 5 8 4 3 8 3 6 2 0 6 3 3 8 5 7 9 2 4
9 9 8 7 4 9 3 3 9 0 5 8 3 3 7 2 8 9 7 6 6 1 4 6 7 1 9 6 1 2 6 4 1 8 7 6 5
6 5 8 0 8 4 4 8 2 8 9 4 2 0 8 2 4 4 1 7 5 6 7 4 5 7 1 3 9 6 2 7 4 8 8 0 2
1 7 4 9 3 4 4 3 3 4 4 0 3 5 6 1 9 8 0 4 7 5 4 7 4 5 5 3 3 8 3 0 0 4 0 8 7
6 5 7 8 1 7 7 8 3 3 7 2 4 1 8 5 8 5 9 0 1 2 7 8 2 5 3 3 7 3 7 6 1 3 8 5 7
1 5 2 5 6 5 4 2 3 0 9 8 0 3 0 6 9 2 8 2 1 3 1 6 7 7 4 5 0 1 2 7 9 1 0 0 1
0 5 2 0 9 1 5 3 0 2 5 9 8 9 2 3 6 0 9 9 0 9 1 5 6 2 3 9 5 7 5 8 7 8 5 7 0
1 1 2 5 1 2 1 9 1 9 1 0 9 8 1 2 2 9 2 9 3 7 3 1 6 2 4 2 2 3 4 9 7 1 6 6 2
7 8 3 4 7 7 6 0 1 7 8 6 6 0 9 3 2 3 8 3 3 2 2 1 5 3 5 9 0 5 5 4 6 1 4 3 6
2 2 9 5 9 0 1 9 3 3 3 4 0 2 8 2 8 8 2 1 0 8 5 2 5 2 9 7 0 4 4 9 8 7 4 8 9
8 5 6 6 1 2 5 9 2 3 6 4 2 4 7 9 6 4 4 7 7 5 1 6 7 7 8 6 1 5 7 0 1 2 2 1 3
8 1 1 5 1 4 6 7 5 4 3 2 0 2 4 5 6 8 2 8 9 8 4 8 5 4 4 4 4 7 8 1 8 1 7 8 6
8 6 8 0 0 1 8 1 1 5 0 0 9 5 8 6 5 7 4 8 7 8 5 6 5 6 1 5 1 9 1 4 0 2 2 5 3
0 4 3 5 3 7 6 0 0 7 0 0 4 4 7 2 3 8 4 9 8 1 2 9 2 2 1 8 4 3 7 2 8 8 5 8 4
3 4 8 6 3 9 2 2 9 7 2 0 2 5 8 7 6 8 1 1 7 6 8 6 5 7 2 6 5 9 4 9 1 8 9 9 1
9 1 7 5 3 4 9 3 5 2 2 6 7 0 9 0 9 8 0 0 5 7 2 6 3 3 1 1 8 9 8 3 9 3 7 4 5
4 4 7 3 1 7 3 3 9 0 2 5 0 6 7 5 9 0 5 6 0 4 7 2 4 4 7 6 7 6 8 5 6 4 0 9 5
1 6 2 4 5 1 0 5 4 5 2 5 1 6 5 5 2 1 1 3 2 4 1 5 5 0 5 5 6 7 2 8 4 5 2 0 4
2 2 7]
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 1 22.0 1 0 0 0 1
1 1 0 38.0 1 0 0 1 0
2 3 0 26.0 0 0 0 0 1
3 1 0 35.0 1 0 0 0 1
4 3 1 35.0 0 0 0 0 1
5 3 1 -1.0 0 0 1 0 0
6 1 1 54.0 0 0 0 0 1
7 3 1 2.0 3 1 0 0 1
8 3 0 27.0 0 2 0 0 1
10 3 0 4.0 1 1 0 0 1
11 1 0 58.0 0 0 0 0 1
12 3 1 20.0 0 0 0 0 1
13 3 1 39.0 1 5 0 0 1
14 3 0 14.0 0 0 0 0 1
15 2 0 55.0 0 0 0 0 1
16 3 1 2.0 4 1 1 0 0
17 2 1 -1.0 0 0 0 0 1
19 3 0 -1.0 0 0 0 1 0
20 2 1 35.0 0 0 0 0 1
21 2 1 34.0 0 0 0 0 1
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
9 2 0 14.0 1 0 0 1 0
18 3 0 31.0 1 0 0 0 1
25 3 0 38.0 1 5 0 0 1
30 1 1 40.0 0 0 0 1 0
31 1 0 -1.0 1 0 0 1 0
37 3 1 21.0 0 0 0 0 1
47 3 0 -1.0 0 0 1 0 0
54 1 1 65.0 0 1 0 1 0
55 1 1 -1.0 0 0 0 0 1
63 3 1 4.0 3 2 0 0 1
70 2 1 32.0 0 0 0 0 1
77 3 1 -1.0 0 0 0 0 1
88 1 0 23.0 3 2 0 0 1
95 3 1 -1.0 0 0 0 0 1
103 3 1 33.0 0 0 0 0 1
104 3 1 37.0 2 0 0 0 1
107 3 1 -1.0 0 0 0 0 1
120 2 1 21.0 2 0 0 0 1
132 3 0 47.0 1 0 0 0 1
135 2 1 23.0 0 0 0 1 0
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[8 2 5 7 0 8 8 8 7 7 2 9 4 4 9 3 8 8 8 4 9 7 2 6 0 9 2 2 3 0 6 2 3 2 7 6 0
9 4 5 6 1 9 6 2 7 9 0 9 4 4 8 7 6 0 8 5 1 7 4 2 1 1 4 1 5 0 9 8 4 4 1 2 9
5 3 7 0 2 1 3 6 0 3 7 9 4 6 7 2 7 3 9 9 5 9 1 9 8 7]
Positions of 0 in fold array: [ 4 24 29 36 47 54 66 77 82]
Positions of 1 in fold array: [41 57 61 62 64 71 79 96]
Positions of 2 in fold array: [ 1 10 22 26 27 31 33 44 60 72 78 89]
Positions of 3 in fold array: [15 28 32 75 80 83 91]
Positions of 4 in fold array: [12 13 19 38 49 50 59 63 69 70 86]
Positions of 5 in fold array: [ 2 39 56 65 74 94]
Positions of 6 in fold array: [23 30 35 40 43 53 81 87]
Positions of 7 in fold array: [ 3 8 9 21 34 45 52 58 76 84 88 90 99]
Positions of 8 in fold array: [ 0 5 6 7 16 17 18 51 55 68 98]
Positions of 9 in fold array: [11 14 20 25 37 42 46 48 67 73 85 92 93 95 97]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked"], axis=1)
X = X.fillna(-1)
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
print(gam_vec)
print(cost_vec)
###Output
[[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]]
[[1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. ]
[1.44444444 1.44444444 1.44444444 1.44444444 1.44444444 1.44444444
1.44444444 1.44444444 1.44444444 1.44444444 1.44444444]
[1.88888889 1.88888889 1.88888889 1.88888889 1.88888889 1.88888889
1.88888889 1.88888889 1.88888889 1.88888889 1.88888889]
[2.33333333 2.33333333 2.33333333 2.33333333 2.33333333 2.33333333
2.33333333 2.33333333 2.33333333 2.33333333 2.33333333]
[2.77777778 2.77777778 2.77777778 2.77777778 2.77777778 2.77777778
2.77777778 2.77777778 2.77777778 2.77777778 2.77777778]
[3.22222222 3.22222222 3.22222222 3.22222222 3.22222222 3.22222222
3.22222222 3.22222222 3.22222222 3.22222222 3.22222222]
[3.66666667 3.66666667 3.66666667 3.66666667 3.66666667 3.66666667
3.66666667 3.66666667 3.66666667 3.66666667 3.66666667]
[4.11111111 4.11111111 4.11111111 4.11111111 4.11111111 4.11111111
4.11111111 4.11111111 4.11111111 4.11111111 4.11111111]
[4.55555556 4.55555556 4.55555556 4.55555556 4.55555556 4.55555556
4.55555556 4.55555556 4.55555556 4.55555556 4.55555556]
[5. 5. 5. 5. 5. 5.
5. 5. 5. 5. 5. ]]
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Training subset: all rows whose fold label is not equal to ix (the held-out fold)
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[6 5 9 4 6 0 6 0 6 8 6 0 2 5 1 3 5 0 0 1 3 5 5 7 5 0 2 2 8 0 6 5 7 5 1 2 1
5 3 3 3 0 1 3 6 6 0 1 2 8 2 4 6 8 6 9 1 7 7 3 8 5 8 3 8 0 3 1 5 7 4 2 9 5
3 3 3 7 8 1 1 5 4 4 7 7 8 9 2 9 8 5 5 4 2 5 2 0 0 2 8 5 9 8 6 1 6 2 4 6 8
6 8 8 1 8 3 8 8 0 1 8 3 6 0 3 3 7 1 0 0 8 7 5 9 8 4 8 4 3 0 4 6 8 8 5 2 6
4 0 9 1 0 2 4 7 2 8 6 2 8 7 7 6 5 1 9 9 5 1 2 4 3 5 8 9 6 8 9 4 6 2 1 0 2
8 0 3 0 3 9 8 9 1 5 0 3 4 9 1 5 0 9 6 5 6 9 2 8 3 0 2 7 9 5 8 2 8 1 4 7 7
2 9 0 3 8 4 5 2 5 9 0 9 8 6 0 8 6 6 7 1 1 8 3 9 4 6 4 3 2 8 0 3 9 8 0 4 0
9 9 4 2 8 2 6 5 5 7 2 3 4 1 5 6 2 3 1 1 9 0 4 8 1 4 9 8 0 9 6 0 5 6 4 6 3
7 7 1 7 4 4 6 6 0 9 1 9 5 6 3 0 7 4 9 5 7 6 7 6 1 5 1 3 2 0 7 3 3 7 7 0 6
9 7 5 9 8 7 1 7 4 1 9 2 7 6 7 7 5 4 1 6 3 9 9 2 8 4 2 6 1 2 9 6 9 3 6 3 7
9 3 4 3 8 7 2 4 9 5 3 5 3 3 2 5 2 8 2 0 9 7 5 1 1 8 4 9 2 9 7 7 2 4 6 5 8
0 5 5 8 2 0 5 4 9 1 2 2 9 3 2 4 9 9 2 2 6 0 8 5 6 3 3 4 3 4 2 3 6 9 4 3 0
6 7 2 6 1 0 8 9 0 6 5 9 4 8 8 3 1 9 5 1 6 0 2 7 0 5 6 5 5 2 1 7 1 8 5 4 0
1 9 3 8 0 4 3 2 4 8 5 8 1 8 8 7 2 3 0 4 4 7 1 5 6 9 3 9 0 6 3 5 5 1 2 9 0
0 3 5 8 4 5 6 4 9 6 0 0 0 6 5 2 3 0 8 0 8 6 0 9 7 1 6 5 0 3 7 7 6 3 9 6 5
7 3 8 8 9 7 9 0 6 7 4 9 0 7 6 7 6 1 4 3 3 6 8 0 0 1 6 9 8 6 6 4 5 6 3 0 8
9 9 5 4 9 4 8 0 4 8 9 5 1 0 9 8 6 6 4 6 2 7 2 7 7 2 8 1 6 7 3 0 9 0 3 1 1
2 0 6 0 6 4 4 2 0 2 3 7 9 9 7 5 9 8 6 4 4 4 9 9 5 2 1 2 9 3 2 2 7 7 5 6 4
2 2 3 1 3 7 3 9 7 3 6 7 7 1 9 6 8 8 3 0 2 0 1 7 3 9 4 8 4 6 9 5 0 4 3 3 3
4 8 8 1 2 8 7 8 1 2 0 1 6 4 7 7 3 9 7 8 0 0 0 6 5 9 1 3 9 5 7 6 3 1 4 8 9
8 4 8 1 2 2 5 3 4 2 9 5 5 8 3 2 3 4 6 6 8 6 3 2 3 6 1 2 4 8 3 8 7 8 6 6 5
0 6 7 1 8 2 4 1 1 9 4 3 4 6 0 0 6 7 8 2 3 4 4 3 0 4 9 8 7 5 4 9 7 6 7 1 9
0 6 2 0 6 0 4 8 7 0 1 8 4 6 9 9 9 3 0 0 9 4 6 6 5 0 6 4 8 3 8 0 8 2 5 4 5
0 9 5 5 4 4 9 9 9 8 9 7 4 9 1 1 3 1 9 1 1 5 0 7 1 6 1 4 3 6 2 2 6 3 2 8 1
0 2 8]
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 22.0 1 0 0 0 1
1 1 38.0 1 0 0 1 0
2 3 26.0 0 0 0 0 1
3 1 35.0 1 0 0 0 1
4 3 35.0 0 0 0 0 1
5 3 -1.0 0 0 1 0 0
6 1 54.0 0 0 0 0 1
7 3 2.0 3 1 0 0 1
8 3 27.0 0 2 0 0 1
9 2 14.0 1 0 0 1 0
10 3 4.0 1 1 0 0 1
11 1 58.0 0 0 0 0 1
13 3 39.0 1 5 0 0 1
14 3 14.0 0 0 0 0 1
15 2 55.0 0 0 0 0 1
16 3 2.0 4 1 1 0 0
17 2 -1.0 0 0 0 0 1
18 3 31.0 1 0 0 0 1
19 3 -1.0 0 0 0 1 0
20 2 35.0 0 0 0 0 1
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
12 3 20.0 0 0 0 0 1
26 3 -1.0 0 0 0 1 0
27 1 19.0 3 2 0 0 1
35 1 42.0 1 0 0 0 1
48 3 -1.0 2 0 0 1 0
50 3 7.0 4 1 0 0 1
71 3 16.0 5 2 0 0 1
88 1 23.0 3 2 0 0 1
94 3 59.0 0 0 0 0 1
96 1 71.0 0 0 0 1 0
99 2 34.0 1 0 0 0 1
107 3 -1.0 0 0 0 0 1
146 3 27.0 0 0 0 0 1
153 3 40.5 0 2 0 0 1
156 3 16.0 0 0 1 0 0
159 3 -1.0 8 2 0 0 1
170 1 61.0 0 0 0 0 1
181 2 -1.0 0 0 0 1 0
184 3 4.0 0 2 0 0 1
207 3 26.0 0 0 0 1 0
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
    print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[16 23 33 34 35 39 44 48 99]
[ 3 10 29 37 45 52 54 62 68 83 98]
[ 4 12 20 22 28 50 51 53 55 56 59 60 64 77 84 86 92 97]
[ 6 7 9 11 18 27 41 49 57 61 70 73 80]
[42 67 96]
[30 63 66 78 90 91]
[ 2 5 26 40 46 58 72 74 87 95]
[ 0 8 14 24 32 36 38 43 85 88 89]
[ 1 13 17 19 21 47 71 79 81 82 93 94]
[15 25 31 65 69 75 76]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked", "Sex"], axis=1)
X = X.fillna(-1)
###Output
_____no_output_____
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.linspace(0.01, 10, 11),
np.linspace(0.01, 10, 11))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
        X_train = X.iloc[folds != ii,:]
        y_train = y.iloc[folds != ii]
        X_test = X.iloc[folds == ii,:]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print "Maximum = %.3f" % (np.max(AUC_all))
print "Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax])
###Output
Maximum = 0.674
Tuning Parameters: (gamma = 0.01, C = 4.01)
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
K = 10  # number of folds (set here so this cell runs standalone; also defined in the next cell)
folds = np.random.randint(0, K, size=N)
folds
###Output
_____no_output_____
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[2 3 5 7 4 7 3 3 3 8 1 2 3 0 5 4 6 9 4 6 1 3 4 0 5 8 5 8 6 8 3 4 3 6 6 7 5
6 5 5 8 3 7 4 9 1 6 3 9 4 2 6 6 5 4 9 9 4 9 7 8 3 9 4 0 3 1 3 7 4 1 7 8 5
4 8 4 7 4 4 5 6 4 2 9 9 3 0 1 1 9 9 1 0 4 9 0 1 7 8]
Positions of 0 in fold array: [13 23 64 87 93 96]
Positions of 1 in fold array: [10 20 45 66 70 88 89 92 97]
Positions of 2 in fold array: [ 0 11 50 83]
Positions of 3 in fold array: [ 1 6 7 8 12 21 30 32 41 47 61 65 67 86]
Positions of 4 in fold array: [ 4 15 18 22 31 43 49 54 57 63 69 74 76 78 79 82 94]
Positions of 5 in fold array: [ 2 14 24 26 36 38 39 53 73 80]
Positions of 6 in fold array: [16 19 28 33 34 37 46 51 52 81]
Positions of 7 in fold array: [ 3 5 35 42 59 68 71 77 98]
Positions of 8 in fold array: [ 9 25 27 29 40 60 72 75 99]
Positions of 9 in fold array: [17 44 48 55 56 58 62 84 85 90 91 95]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked", "Sex"], axis=1)
X = X.fillna(-1)
###Output
_____no_output_____
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Train subset taking all rows except the ones with index == to the positions of ix in the folds array
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[4 3 4 5 3 9 4 0 8 4 6 0 3 7 5 3 2 0 5 3 7 9 2 0 8 4 8 2 0 6 8 9 8 4 1 2 6
3 8 3 1 0 3 3 9 7 6 8 0 6 5 0 3 1 2 2 9 6 1 0 2 3 5 6 5 7 1 1 7 0 4 0 6 8
7 9 6 4 7 4 0 1 2 9 5 8 7 5 2 5 6 1 5 3 6 0 0 8 2 0 8 8 6 2 1 3 2 0 2 1 5
1 6 9 9 0 0 5 9 1 9 5 2 0 2 9 5 8 1 5 2 3 6 4 0 9 9 6 6 4 5 5 4 9 5 4 3 2
7 2 6 7 9 1 7 7 1 8 8 3 2 5 3 4 3 0 5 8 6 5 9 1 5 3 1 6 1 7 0 1 7 7 9 3 7
8 9 6 1 8 8 3 1 9 0 9 1 8 7 8 9 4 4 3 4 5 6 2 4 8 7 5 6 2 7 8 5 4 8 2 1 3
1 6 7 5 1 2 0 0 2 6 3 3 5 0 5 5 5 5 1 5 4 2 1 9 3 6 1 2 0 4 4 5 4 2 6 5 4
1 4 6 3 2 5 8 3 9 0 6 6 3 3 0 9 6 2 3 1 4 1 9 1 6 9 2 2 0 5 4 8 8 4 7 4 5
1 0 7 4 7 0 8 0 4 7 2 6 1 0 3 4 2 3 0 3 6 9 8 3 6 5 2 7 3 6 4 4 2 3 0 5 9
7 0 8 6 2 5 9 3 3 8 5 2 7 4 0 6 6 0 1 8 7 8 9 1 9 1 2 8 5 4 6 7 9 9 0 8 6
8 9 6 0 8 1 7 8 2 5 0 3 1 7 4 7 6 4 7 2 2 3 5 0 0 5 2 1 4 5 1 5 3 7 2 3 4
9 0 1 0 0 0 2 1 8 1 9 2 5 8 5 8 1 9 7 2 2 5 2 4 6 3 0 1 0 1 0 3 9 1 0 4 4
2 7 9 8 2 4 9 4 3 6 7 1 3 5 8 1 8 3 2 4 8 6 0 5 4 4 2 8 6 7 6 3 5 3 9 3 2
7 0 6 2 3 3 1 2 9 8 9 8 5 3 6 9 9 3 3 2 5 1 3 9 5 7 5 0 0 8 6 4 4 4 4 8 9
4 6 3 9 5 5 8 8 2 4 2 7 4 0 0 4 1 5 9 0 7 9 6 0 2 4 8 9 8 8 1 7 4 7 6 2 5
9 7 7 4 7 9 7 4 3 3 6 0 9 6 7 1 9 7 2 1 7 7 0 7 3 5 4 4 0 8 0 5 1 4 0 3 7
5 7 0 7 2 3 5 2 6 5 4 7 6 6 8 8 0 1 2 9 0 3 9 3 1 3 7 1 3 7 5 9 4 0 5 7 1
8 4 8 9 5 0 2 0 9 9 4 8 2 2 6 7 4 4 2 6 7 1 7 6 9 7 7 1 9 5 7 1 8 4 4 9 9
2 9 2 7 3 9 9 2 2 2 1 0 8 4 0 4 5 1 4 2 2 6 6 3 2 0 0 1 9 6 3 9 8 2 6 7 5
5 8 8 8 2 9 8 9 7 7 7 4 5 6 4 0 6 7 5 5 9 0 1 4 0 0 7 5 1 5 9 0 6 2 4 9 4
9 7 7 4 1 0 1 5 7 3 7 6 2 3 6 5 1 1 6 5 7 4 2 5 0 6 0 1 5 9 3 7 7 3 4 2 4
1 7 0 2 8 3 0 9 4 9 9 4 4 1 9 6 8 0 3 5 5 1 8 6 5 0 9 3 5 0 7 7 1 1 0 5 7
3 8 2 7 2 3 1 3 4 5 4 9 5 1 0 2 7 3 3 7 4 7 9 9 9 7 0 8 9 3 3 5 7 0 3 7 0
9 6 0 5 4 2 8 4 3 8 2 4 6 2 2 2 6 3 1 3 6 5 1 8 2 4 9 4 6 0 4 2 7 1 5 5 0
0 5 0]
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 22.0 1 0 0 0 1
1 1 38.0 1 0 0 1 0
2 3 26.0 0 0 0 0 1
3 1 35.0 1 0 0 0 1
4 3 35.0 0 0 0 0 1
5 3 -1.0 0 0 1 0 0
6 1 54.0 0 0 0 0 1
7 3 2.0 3 1 0 0 1
8 3 27.0 0 2 0 0 1
9 2 14.0 1 0 0 1 0
10 3 4.0 1 1 0 0 1
11 1 58.0 0 0 0 0 1
12 3 20.0 0 0 0 0 1
13 3 39.0 1 5 0 0 1
14 3 14.0 0 0 0 0 1
15 2 55.0 0 0 0 0 1
17 2 -1.0 0 0 0 0 1
18 3 31.0 1 0 0 0 1
19 3 -1.0 0 0 0 1 0
20 2 35.0 0 0 0 0 1
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
16 3 2.0 4 1 1 0 0
22 3 15.0 0 0 1 0 0
27 1 19.0 3 2 0 0 1
35 1 42.0 1 0 0 0 1
54 1 65.0 0 1 0 1 0
55 1 -1.0 0 0 0 0 1
60 3 22.0 0 0 0 1 0
82 3 -1.0 0 0 1 0 0
88 1 23.0 3 2 0 0 1
98 2 34.0 0 1 0 0 1
103 3 33.0 0 0 0 0 1
106 3 21.0 0 0 0 0 1
108 3 38.0 0 0 0 0 1
122 2 32.5 1 0 0 1 0
124 1 54.0 0 1 0 0 1
130 3 33.0 0 0 0 1 0
147 3 9.0 2 2 0 0 1
149 2 42.0 0 0 0 0 1
160 3 44.0 0 1 0 0 1
207 3 26.0 0 0 0 1 0
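###Markdown
For comparison, the same search can be written much more compactly with scikit-learn's `GridSearchCV`. This is a sketch of an alternative, not the approach used above: it assumes the same `X` and `y` defined earlier, and because the `roc_auc` scorer uses the SVM's decision function rather than hard 0/1 predictions, its numbers will not match the manual loop exactly.
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
# Same (gamma, C) grid as above, expressed as a parameter dictionary
param_grid = {"gamma": np.logspace(0.01, 0.1, 11),
              "C": np.linspace(1, 5, 10)}
# 10-fold cross-validation, scored by area under the ROC curve
grid = GridSearchCV(SVC(), param_grid, scoring="roc_auc", cv=10)
grid.fit(X, y)
print("Maximum = %.3f" % grid.best_score_)
print("Tuning Parameters:", grid.best_params_)
###Output
_____no_output_____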
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
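###Markdown
The same 70/30 holdout split can also be produced with scikit-learn's `train_test_split`; a minimal sketch, assuming the `features` and `target` arrays generated above (shuffling is on by default, so the order-preserving case mentioned in the comments above would need `shuffle=False`).
###Code
from sklearn.model_selection import train_test_split
# 70/30 split with a randomized (shuffled) index, mirroring the manual split above
features_train, features_test, target_train, target_test = train_test_split(
    features, target, test_size=0.3)
print(features_train.shape, features_test.shape)
print(target_train.shape, target_test.shape)
###Output
_____no_output_____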
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[0 3 6 1 6 7 1 6 1 0 8 0 7 6 6 2 4 3 4 6 4 6 9 7 9 2 9 5 4 4 9 8 8 7 2 9 7
3 3 0 2 5 4 4 3 6 9 8 5 3 4 0 9 7 5 7 0 7 2 0 9 2 3 8 0 4 9 0 5 9 1 5 2 2
9 5 0 1 9 5 8 5 6 9 3 6 2 0 8 7 6 6 7 1 1 7 9 9 1 5]
Positions of 0 in fold array: [ 0 9 11 39 51 56 59 64 67 76 87]
Positions of 1 in fold array: [ 3 6 8 70 77 93 94 98]
Positions of 2 in fold array: [15 25 34 40 58 61 72 73 86]
Positions of 3 in fold array: [ 1 17 37 38 44 49 62 84]
Positions of 4 in fold array: [16 18 20 28 29 42 43 50 65]
Positions of 5 in fold array: [27 41 48 54 68 71 75 79 81 99]
Positions of 6 in fold array: [ 2 4 7 13 14 19 21 45 82 85 90 91]
Positions of 7 in fold array: [ 5 12 23 33 36 53 55 57 89 92 95]
Positions of 8 in fold array: [10 31 32 47 63 80 88]
Positions of 9 in fold array: [22 24 26 30 35 46 52 60 66 69 74 78 83 96 97]
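###Markdown
A sketch of the same procedure using scikit-learn's `KFold`, which, unlike the random integer assignment above, guarantees folds of (nearly) equal size; it assumes the same `features` and `target` arrays.
###Code
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, shuffle=True)
for fold_idx, (train_idx, test_idx) in enumerate(kf.split(features)):
    # For each fold, break the data into training and testing subsets
    features_train, target_train = features[train_idx], target[train_idx]
    features_test = features[test_idx]
    # Build and predict for the CV fold here, as in the loop above
    print("Fold %d: %d train / %d test instances" % (fold_idx, len(train_idx), len(test_idx)))
###Output
_____no_output_____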
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
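###Markdown
As a sanity check on the hand-rolled `roc_curve` and `auc` functions, scikit-learn's implementations should give essentially the same numbers (its curve is evaluated at every distinct score rather than on a fixed grid of 100 thresholds, so small differences are expected). A sketch, assuming the `target` and `preds` arrays from above:
###Code
from sklearn import metrics
fpr_sk, tpr_sk, thr_sk = metrics.roc_curve(target, preds)
print("sklearn trapezoidal AUC: %.4f" % metrics.auc(fpr_sk, tpr_sk))
print("sklearn roc_auc_score:   %.4f" % metrics.roc_auc_score(target, preds))
###Output
_____no_output_____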
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
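###Markdown
One way to read the confusion matrix numerically is to normalize each row by its total, giving the per-class recall (the fraction of each true digit that was predicted correctly); a short sketch assuming the `cm` array computed above:
###Code
# Diagonal = correct predictions per class; row sums = true instances per class
per_class_recall = cm.diagonal() / cm.sum(axis=1)
for label, recall in enumerate(per_class_recall):
    print("digit %d: recall = %.3f" % (label, recall))
###Output
_____no_output_____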
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
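###Markdown
The equivalent scikit-learn metrics can be used to cross-check the `rmse` and `r2` functions above (a sketch; `mean_squared_error` returns the MSE, so the square root is taken explicitly):
###Code
from sklearn.metrics import mean_squared_error, r2_score
y_true = arange(10) + rand()
y_pred = arange(10) + rand(10)
print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
print("R^2: ", r2_score(y_true, y_pred))
###Output
_____no_output_____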
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked"], axis=1)
X = X.fillna(-1)
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
print(gam_vec)
print(cost_vec)
print(gam_vec.ravel())
###Output
[[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541]]
[[1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. ]
[1.44444444 1.44444444 1.44444444 1.44444444 1.44444444 1.44444444
1.44444444 1.44444444 1.44444444 1.44444444 1.44444444]
[1.88888889 1.88888889 1.88888889 1.88888889 1.88888889 1.88888889
1.88888889 1.88888889 1.88888889 1.88888889 1.88888889]
[2.33333333 2.33333333 2.33333333 2.33333333 2.33333333 2.33333333
2.33333333 2.33333333 2.33333333 2.33333333 2.33333333]
[2.77777778 2.77777778 2.77777778 2.77777778 2.77777778 2.77777778
2.77777778 2.77777778 2.77777778 2.77777778 2.77777778]
[3.22222222 3.22222222 3.22222222 3.22222222 3.22222222 3.22222222
3.22222222 3.22222222 3.22222222 3.22222222 3.22222222]
[3.66666667 3.66666667 3.66666667 3.66666667 3.66666667 3.66666667
3.66666667 3.66666667 3.66666667 3.66666667 3.66666667]
[4.11111111 4.11111111 4.11111111 4.11111111 4.11111111 4.11111111
4.11111111 4.11111111 4.11111111 4.11111111 4.11111111]
[4.55555556 4.55555556 4.55555556 4.55555556 4.55555556 4.55555556
4.55555556 4.55555556 4.55555556 4.55555556 4.55555556]
[5. 5. 5. 5. 5. 5.
5. 5. 5. 5. 5. ]]
[1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541 1.02329299
1.04472022 1.06659612 1.08893009 1.11173173 1.13501082 1.15877736
1.18304156 1.20781384 1.23310483 1.25892541 1.02329299 1.04472022
1.06659612 1.08893009 1.11173173 1.13501082 1.15877736 1.18304156
1.20781384 1.23310483 1.25892541 1.02329299 1.04472022 1.06659612
1.08893009 1.11173173 1.13501082 1.15877736 1.18304156 1.20781384
1.23310483 1.25892541 1.02329299 1.04472022 1.06659612 1.08893009
1.11173173 1.13501082 1.15877736 1.18304156 1.20781384 1.23310483
1.25892541 1.02329299 1.04472022 1.06659612 1.08893009 1.11173173
1.13501082 1.15877736 1.18304156 1.20781384 1.23310483 1.25892541
1.02329299 1.04472022 1.06659612 1.08893009 1.11173173 1.13501082
1.15877736 1.18304156 1.20781384 1.23310483 1.25892541 1.02329299
1.04472022 1.06659612 1.08893009 1.11173173 1.13501082 1.15877736
1.18304156 1.20781384 1.23310483 1.25892541 1.02329299 1.04472022
1.06659612 1.08893009 1.11173173 1.13501082 1.15877736 1.18304156
1.20781384 1.23310483 1.25892541 1.02329299 1.04472022 1.06659612
1.08893009 1.11173173 1.13501082 1.15877736 1.18304156 1.20781384
1.23310483 1.25892541]
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
# Refit a final model on a simple 80/20 split using the tuned hyper-parameters
final_model = SVC(gamma = 1.02, C = 1.89)
X_train = X[:int(0.8*len(X))]
Y_train = y[:int(0.8*len(y))]   # labels, not features
X_test = X[int(0.8*len(X)):]
Y_test = y[int(0.8*len(y)):]
final_model.fit(X_train,Y_train)
y_pred = final_model.predict(X_test)
roc_auc_score(Y_test,y_pred)
ix=2
print(folds)
# Train subset taking all rows except the ones with index == to the positions of ix in the folds array
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[2 5 1 1 6 4 5 3 1 6 9 6 1 7 5 6 2 6 9 3 4 2 0 3 9 6 0 2 3 3 4 2 2 8 9 1 0
7 7 3 9 8 6 1 0 9 3 6 5 1 0 4 0 2 9 7 1 2 0 6 3 9 1 4 8 9 9 8 2 9 4 5 6 2
1 0 8 4 0 1 5 0 8 1 5 9 6 8 4 3 5 2 0 6 8 2 9 6 4 1 8 4 5 3 9 6 2 0 5 0 5
2 7 3 5 6 9 4 0 2 5 9 1 9 7 5 9 3 0 6 9 6 1 0 7 1 9 2 2 4 0 4 9 3 4 2 6 8
3 3 3 9 4 3 0 1 5 8 9 9 3 2 9 0 9 5 7 5 4 2 2 4 4 7 2 9 4 5 1 5 4 2 3 4 9
5 0 4 5 7 7 2 7 2 1 2 2 2 4 6 3 3 0 2 5 7 9 0 8 7 0 7 5 1 5 7 7 6 5 0 9 8
0 0 7 3 6 5 4 5 3 5 7 4 0 5 8 0 0 9 9 5 0 9 7 4 9 2 3 4 6 3 8 9 4 8 0 8 6
1 6 1 4 9 5 7 4 2 5 3 2 0 5 3 6 8 0 0 1 4 0 9 4 6 8 0 8 8 1 9 0 7 7 5 6 7
8 9 5 2 2 8 4 3 6 6 6 6 4 7 0 2 1 3 4 3 4 5 5 1 5 1 0 8 3 5 6 7 8 7 3 9 0
6 5 8 1 5 6 9 5 4 2 0 5 9 7 9 0 4 1 6 3 4 6 9 1 2 2 0 6 4 9 8 5 9 2 6 2 4
5 1 8 0 4 5 5 9 8 6 9 0 0 3 9 7 6 4 5 8 5 8 3 3 5 0 2 0 4 0 7 5 7 0 2 1 4
6 0 5 5 9 1 7 5 9 8 1 6 0 2 6 1 3 0 8 0 2 1 3 0 2 7 2 3 3 3 2 6 0 4 2 1 6
2 4 2 3 5 3 2 7 4 1 5 3 2 8 7 4 8 9 2 7 9 0 0 6 1 7 4 4 1 3 3 2 7 8 4 6 1
8 3 9 4 6 0 7 6 2 9 4 7 3 2 0 8 9 9 6 7 8 3 0 3 5 6 4 9 8 7 1 1 0 4 7 7 2
5 9 3 4 0 5 3 8 2 5 8 2 5 5 0 0 1 8 3 6 6 4 9 2 2 9 2 4 9 5 9 2 8 4 6 3 9
7 3 5 1 5 3 4 6 8 4 9 6 5 7 1 9 8 1 9 1 4 2 8 4 5 7 9 1 2 1 9 7 9 3 6 9 6
8 3 4 1 0 1 6 8 0 2 4 6 9 0 4 9 6 2 3 4 4 8 1 7 3 2 6 0 0 4 6 9 4 6 2 8 4
7 3 7 5 5 6 5 2 8 4 8 7 1 0 2 8 1 4 2 9 4 3 1 9 6 0 1 1 4 4 7 0 0 5 1 9 8
2 7 6 1 3 2 3 2 2 9 6 8 0 9 5 3 5 7 7 3 3 8 7 0 2 2 3 8 0 7 3 5 8 4 2 0 6
9 2 0 4 5 0 1 7 3 3 4 0 7 6 7 7 9 3 7 1 6 6 7 8 0 5 2 7 8 4 0 2 3 0 1 1 7
7 5 1 1 8 8 6 3 7 7 7 0 5 5 3 2 7 0 5 9 1 7 9 9 6 3 1 4 7 0 7 6 4 6 1 2 1
2 4 8 6 6 5 3 5 0 0 9 1 2 0 7 7 0 2 6 7 6 0 9 2 9 4 9 2 4 0 8 6 1 6 5 2 1
5 1 7 0 4 1 5 7 9 4 5 5 0 5 6 0 5 6 2 6 1 8 3 9 6 7 2 7 7 9 1 8 9 4 0 5 4
7 6 8 2 3 9 8 3 0 7 4 4 2 3 2 4 7 3 2 9 8 1 7 7 9 9 6 5 7 6 1 8 4 2 3 2 1
0 4 8]
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
1 1 0 38.0 1 0 0 1 0
2 3 0 26.0 0 0 0 0 1
3 1 0 35.0 1 0 0 0 1
4 3 1 35.0 0 0 0 0 1
5 3 1 -1.0 0 0 1 0 0
6 1 1 54.0 0 0 0 0 1
7 3 1 2.0 3 1 0 0 1
8 3 0 27.0 0 2 0 0 1
9 2 0 14.0 1 0 0 1 0
10 3 0 4.0 1 1 0 0 1
11 1 0 58.0 0 0 0 0 1
12 3 1 20.0 0 0 0 0 1
13 3 1 39.0 1 5 0 0 1
14 3 0 14.0 0 0 0 0 1
15 2 0 55.0 0 0 0 0 1
17 2 1 -1.0 0 0 0 0 1
18 3 0 31.0 1 0 0 0 1
19 3 0 -1.0 0 0 0 1 0
20 2 1 35.0 0 0 0 0 1
22 3 0 15.0 0 0 1 0 0
Pclass Sex Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 1 22.0 1 0 0 0 1
16 3 1 2.0 4 1 1 0 0
21 2 1 34.0 0 0 0 0 1
27 1 1 19.0 3 2 0 0 1
31 1 0 -1.0 1 0 0 1 0
32 3 0 -1.0 0 0 1 0 0
53 2 0 29.0 1 0 0 0 1
57 3 1 28.5 0 0 0 1 0
68 3 0 17.0 4 2 0 0 1
73 3 1 26.0 1 0 0 1 0
91 3 1 20.0 0 0 0 0 1
95 3 1 -1.0 0 0 0 0 1
106 3 0 21.0 0 0 0 0 1
111 3 0 14.5 1 0 0 1 0
119 3 0 2.0 4 2 0 0 1
137 1 1 37.0 1 0 0 0 1
138 3 1 16.0 0 0 0 0 1
145 2 1 19.0 1 1 0 0 1
161 2 0 40.0 0 0 0 0 1
169 3 1 28.0 0 0 0 0 1
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figure-4.25.eps", format='eps')
###Output
_____no_output_____
###Markdown
Chapter 4 - Evaluation and Optimization
###Code
%pylab inline
import pandas as pandas
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We generate two inputs:* features – a matrix of input features* target – an array of target variables corresponding to those features
###Code
features = rand(100,5)
target = rand(100) > 0.5
###Output
_____no_output_____
###Markdown
The holdout methodWe divide into a randomized training and test set:
###Code
int(floor(0.7*100))
N = features.shape[0]
N_train = int(floor(0.7 * N))
# Randomize index
# Note: sometimes you want to retain the order in the dataset and skip this step
# E.g. in the case of time-based datasets where you want to test on 'later' instances
idx = random.permutation(N)
# Split index
idx_train = idx[:N_train]
idx_test = idx[N_train:]
# Break your data into training and testing subsets
features_train = features[idx_train,:]
target_train = target[idx_train]
features_test = features[idx_test,:]
target_test = target[idx_test]
# Build, predict, evaluate (to be filled out)
# model = train(features_train, target_train)
# preds_test = predict(model, features_test)
# accuracy = evaluate_acc(preds_test, target_test)
print(features_train.shape)
print(features_test.shape)
print(target_train.shape)
print(target_test.shape)
###Output
(70, 5)
(30, 5)
(70,)
(30,)
###Markdown
K-fold cross-validation
###Code
N = features.shape[0]
K = 10 # number of folds
preds_kfold = np.empty(N)
folds = np.random.randint(0, K, size=N)
print(folds)
for idx in np.arange(K):
# For each fold, break your data into training and testing subsets
features_train = features[folds != idx,:]
target_train = target[folds != idx]
features_test = features[folds == idx,:]
# Print the indices in each fold, for inspection
print("Positions of "+str(idx)+" in fold array: ", end="")
print(nonzero(folds == idx)[0])
# Build and predict for CV fold (to be filled out)
# model = train(features_train, target_train)
# preds_kfold[folds == idx] = predict(model, features_test)
# accuracy = evaluate_acc(preds_kfold, target)
###Output
[2 2 8 6 8 8 9 5 7 1 9 2 3 0 9 2 7 6 4 1 7 7 6 5 4 5 2 4 7 4 8 5 8 1 2 0 4
6 8 5 1 0 9 8 4 0 7 1 0 7 4 2 0 3 2 9 5 5 4 8 9 4 5 5 6 2 2 0 9 0 7 2 9 3
4 1 1 2 1 3 6 8 1 7 5 4 9 0 5 2 9 9 9 6 8 2 3 7 2 9]
Positions of 0 in fold array: [13 35 41 45 48 52 67 69 87]
Positions of 1 in fold array: [ 9 19 33 40 47 75 76 78 82]
Positions of 2 in fold array: [ 0 1 11 15 26 34 51 54 65 66 71 77 89 95 98]
Positions of 3 in fold array: [12 53 73 79 96]
Positions of 4 in fold array: [18 24 27 29 36 44 50 58 61 74 85]
Positions of 5 in fold array: [ 7 23 25 31 39 56 57 62 63 84 88]
Positions of 6 in fold array: [ 3 17 22 37 64 80 93]
Positions of 7 in fold array: [ 8 16 20 21 28 46 49 70 83 97]
Positions of 8 in fold array: [ 2 4 5 30 32 38 43 59 81 94]
Positions of 9 in fold array: [ 6 10 14 42 55 60 68 72 86 90 91 92 99]
###Markdown
The ROC curve
###Code
def roc_curve(true_labels, predicted_probs, n_points=100, pos_class=1):
thr = linspace(0,1,n_points)
tpr = zeros(n_points)
fpr = zeros(n_points)
pos = true_labels == pos_class
neg = logical_not(pos)
n_pos = count_nonzero(pos)
n_neg = count_nonzero(neg)
for i,t in enumerate(thr):
tpr[i] = count_nonzero(logical_and(predicted_probs >= t, pos)) / n_pos
fpr[i] = count_nonzero(logical_and(predicted_probs >= t, neg)) / n_neg
return fpr, tpr, thr
# Randomly generated predictions should give us a diagonal ROC curve
preds = rand(len(target))
fpr, tpr, thr = roc_curve(target, preds, pos_class=True)
plot(fpr, tpr)
###Output
_____no_output_____
###Markdown
The area under the ROC curve
###Code
def auc(true_labels, predicted_labels, pos_class=1):
fpr, tpr, thr = roc_curve(true_labels, predicted_labels,
pos_class=pos_class)
area = -trapz(tpr, x=fpr)
return area
auc(target, preds, pos_class=True)
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
d = pandas.read_csv("data/mnist_small.csv")
d_train = d[:int(0.8*len(d))]
d_test = d[int(0.8*len(d)):]
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(d_train.drop('label', axis=1), d_train['label'])
from sklearn.metrics import confusion_matrix
preds = rf.predict(d_test.drop('label', axis=1))
cm = confusion_matrix(d_test['label'], preds)
matshow(cm, cmap='Greys')
colorbar()
savefig("figures/figure-4.19.eps", format='eps')
###Output
_____no_output_____
###Markdown
The root-mean-square error
###Code
def rmse(true_values, predicted_values):
n = len(true_values)
residuals = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
return np.sqrt(residuals/n)
rmse(rand(10), rand(10))
###Output
_____no_output_____
###Markdown
The R-squared error
###Code
def r2(true_values, predicted_values):
n = len(true_values)
mean = np.mean(true_values)
residuals = 0
total = 0
for i in range(n):
residuals += (true_values[i] - predicted_values[i])**2.
total += (true_values[i] - mean)**2.
return 1.0 - residuals/total
r2(arange(10)+rand(), arange(10)+rand(10))
###Output
_____no_output_____
###Markdown
Grid search with kernel-SVM modelImporting modules:
###Code
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading data and performing poor-man's feature engineering:
###Code
d = pandas.read_csv("data/titanic.csv")
# Target
y = d["Survived"]
# Features
X = d.drop(["Survived", "PassengerId", "Cabin","Ticket","Name", "Fare"], axis=1)
X['Sex'] = list(map(lambda x: 1 if x=="male" else 0, X['Sex']))
X['Embarked-Q'] = list(map(lambda x: 1 if x=="Q" else 0, X['Embarked']))
X['Embarked-C'] = list(map(lambda x: 1 if x=="C" else 0, X['Embarked']))
X['Embarked-S'] = list(map(lambda x: 1 if x=="S" else 0, X['Embarked']))
X = X.drop(["Embarked", "Sex"], axis=1)
X = X.fillna(-1)
###Output
_____no_output_____
###Markdown
Performing grid-search to find the optimal hyper-parameters:
###Code
# grid of (gamma, C) values to try
gam_vec, cost_vec = np.meshgrid(np.logspace(0.01, 0.1, 11),
np.linspace(1, 5, 10))
AUC_all = [] # initialize empty array to store AUC results
# set up cross-validation folds
N = len(y)
K = 10 # number of cross-validation folds
folds = np.random.randint(0, K, size=N)
# search over every value of the grid
for param_ind in np.arange(len(gam_vec.ravel())):
# initialize cross-validation predictions
y_cv_pred = np.empty(N)
# loop through the cross-validation folds
for ii in np.arange(K):
# break your data into training and testing subsets
# X_train = X.ix[folds != ii,:]
# y_train = y.ix[folds != ii]
# X_test = X.ix[folds == ii,:]
X_train = X.iloc[folds != ii,:]
y_train = y.iloc[folds != ii]
X_test = X.iloc[folds == ii,:]
#X_train = X.iloc[folds, :]
#X_train = X_train.drop(ii)
#y_train = y.iloc[folds]
#y_train = y.drop(ii)
#X_test = X.iloc[folds, :]
#X_test = X_test[folds == ii]
# build a model on the training set
model = SVC(gamma=gam_vec.ravel()[param_ind], C=cost_vec.ravel()[param_ind])
model.fit(X_train, y_train)
# generate and store model predictions on the testing set
y_cv_pred[folds == ii] = model.predict(X_test)
# evaluate the AUC of the predictions
AUC_all.append(roc_auc_score(y, y_cv_pred))
indmax = np.argmax(AUC_all)
print("Maximum = %.3f" % (np.max(AUC_all)))
print("Tuning Parameters: (gamma = %.2f, C = %.2f)" % (gam_vec.ravel()[indmax], cost_vec.ravel()[indmax]))
ix=2
print(folds)
# Train subset taking all rows except the ones with index == to the positions of ix in the folds array
X_train = X.iloc[folds!=ix,:]
print(X_train.head(20))
X_test = X.iloc[folds==ix,:]
print(X_test.head(20))
###Output
[2 2 8 6 8 8 9 5 7 1 9 2 3 0 9 2 7 6 4 1 7 7 6 5 4 5 2 4 7 4 8 5 8 1 2 0 4
6 8 5 1 0 9 8 4 0 7 1 0 7 4 2 0 3 2 9 5 5 4 8 9 4 5 5 6 2 2 0 9 0 7 2 9 3
4 1 1 2 1 3 6 8 1 7 5 4 9 0 5 2 9 9 9 6 8 2 3 7 2 9]
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
2 3 26.0 0 0 0 0 1
3 1 35.0 1 0 0 0 1
4 3 35.0 0 0 0 0 1
5 3 -1.0 0 0 1 0 0
6 1 54.0 0 0 0 0 1
7 3 2.0 3 1 0 0 1
8 3 27.0 0 2 0 0 1
9 2 14.0 1 0 0 1 0
10 3 4.0 1 1 0 0 1
12 3 20.0 0 0 0 0 1
13 3 39.0 1 5 0 0 1
14 3 14.0 0 0 0 0 1
16 3 2.0 4 1 1 0 0
17 2 -1.0 0 0 0 0 1
18 3 31.0 1 0 0 0 1
19 3 -1.0 0 0 0 1 0
20 2 35.0 0 0 0 0 1
21 2 34.0 0 0 0 0 1
22 3 15.0 0 0 1 0 0
23 1 28.0 0 0 0 0 1
Pclass Age SibSp Parch Embarked-Q Embarked-C Embarked-S
0 3 22.0 1 0 0 0 1
1 1 38.0 1 0 0 1 0
11 1 58.0 0 0 0 0 1
15 2 55.0 0 0 0 0 1
26 3 -1.0 0 0 0 1 0
34 1 28.0 1 0 0 1 0
51 3 21.0 0 0 0 0 1
54 1 65.0 0 1 0 1 0
65 3 -1.0 1 1 0 1 0
66 2 29.0 0 0 0 0 1
71 3 16.0 5 2 0 0 1
77 3 -1.0 0 0 0 0 1
89 3 24.0 0 0 0 0 1
95 3 -1.0 0 0 0 0 1
98 2 34.0 0 1 0 0 1
###Markdown
Plotting the contours of the parameter performance:
###Code
AUC_grid = np.array(AUC_all).reshape(gam_vec.shape)
contourf(gam_vec, cost_vec, AUC_grid, 20, cmap='Greys')
xlabel("kernel coefficient, gamma")
ylabel("penalty parameter, C")
colorbar()
savefig("figures/figure-4.25.eps", format='eps')
###Output
_____no_output_____ |
Sentiment Analysis Project - RNN PyTorch.ipynb | ###Markdown
End-to-End Sentiment Analysis Web App Using PyTorch and SageMaker Objectives:Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to a deployed model which will predict the sentiment of the entered review. General OutlineThe general outline for SageMaker projects using a notebook instance.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.For this project, you will be following the steps in the general outline with some modifications. Step 1: Downloading the dataWe will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-05-16 21:48:11-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 46.8MB/s in 1.7s
2020-05-16 21:48:12 (46.8 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing and Processing the dataTo begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
###Output
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
###Markdown
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
###Code
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
###Output
IMDb reviews (combined): train = 25000, test = 25000
###Markdown
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
###Code
print(train_X[100])
print(train_y[100])
###Output
Hahahaha!!!!!!Funny-that sums this movie up in one word.What the crap was this "thing",since It might kill me to use the word movie!?!?!I hope the director,writer,and producer didn't mean for this to turn out good,because it sure didn't!!!A scientist turning his son into a hammerhead shark,and the shark killing a bunch of people the scientist invited to the island!!!Oh my Gooooooodddd!!!!I hate this film so much that when I was watching it I laughed at all the serious parts,because they were so corny and unprofessional....and they couldn't have made the shark look more unrealistic,even though this "thing" had a bit larger budget than most low-budget movies.All I have to say is watch this movie expecting to laugh at all the bad acting,and stupid corny dialogue,because if you are expecting a good movie you'll be highly disappointed.
0
###Markdown
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
###Code
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
###Output
_____no_output_____
###Markdown
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, we will try applying `review_to_words` to one of the reviews in the training set.
###Code
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
print(review_to_words(train_X[100]))
###Output
['hahahaha', 'funni', 'sum', 'movi', 'one', 'word', 'crap', 'thing', 'sinc', 'might', 'kill', 'use', 'word', 'movi', 'hope', 'director', 'writer', 'produc', 'mean', 'turn', 'good', 'sure', 'scientist', 'turn', 'son', 'hammerhead', 'shark', 'shark', 'kill', 'bunch', 'peopl', 'scientist', 'invit', 'island', 'oh', 'gooooooodddd', 'hate', 'film', 'much', 'watch', 'laugh', 'seriou', 'part', 'corni', 'unprofession', 'made', 'shark', 'look', 'unrealist', 'even', 'though', 'thing', 'bit', 'larger', 'budget', 'low', 'budget', 'movi', 'say', 'watch', 'movi', 'expect', 'laugh', 'bad', 'act', 'stupid', 'corni', 'dialogu', 'expect', 'good', 'movi', 'highli', 'disappoint']
###Markdown
**`review_to_words()` function:** > - Removes HTML tags> - Converts all characters to lower case> - Removes stopwords using the list of stopwords imported from the nltk library> - Stems the resulting words The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way, if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
###Code
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Transform the dataWe will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. Creating a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
###Code
import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for review in data:
for word in review:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words =sorted(word_count.items(), key=lambda x: x[1], reverse=True)
sorted_words = [k for k,_ in sorted_words]
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
###Output
_____no_output_____
###Markdown
The five most frequently appearing words in the training set.
###Code
list(word_dict.keys())[:5]
###Output
_____no_output_____
###Markdown
Save `word_dict`Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
###Code
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
###Output
_____no_output_____
###Markdown
Transform the reviewsNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
###Code
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
###Output
_____no_output_____
###Markdown
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
###Code
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print("len of a review",len(train_X[333]))
print(train_X[333])
###Output
len of a review 500
[ 11 281 281 281 1205 293 22 637 11 451 487 2140 33 234
1107 99 79 3518 261 151 19 5 79 24 2929 3 852 54
324 80 955 518 9 235 4839 2664 140 223 1107 2 947 518
15 29 11 694 2 14 805 101 114 92 1725 8 1205 293
22 19 781 323 261 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
###Markdown
In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. I don't think there is a problem, as I don't see any data leakage happening by doing so. Step 3: Upload the data to S3We will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on. Save the processed training dataset locallyIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
###Code
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelA model comprises three objects: - Model Artifacts, - Training Code, and - Inference Code, each of which interact with one another. Here we will be using containers provided by Amazon with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file.
###Code
!pygmentize train/model.py
###Output
import torch.nn as nn
class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysis.
    """
    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        """
        Initialize the model by setting up the various layers.
        """
        super(LSTMClassifier, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.dense = nn.Linear(in_features=hidden_dim, out_features=1)
        self.sig = nn.Sigmoid()
        self.word_dict = None
    def forward(self, x):
        """
        Perform a forward pass of our model on some input.
        """
        x = x.t()
        lengths = x[0,:]
        reviews = x[1:,:]
        embeds = self.embedding(reviews)
        lstm_out, _ = self.lstm(embeds)
        out = self.dense(lstm_out)
        out = out[lengths - 1, range(len(lengths))]
        return self.sig(out.squeeze())
###Markdown
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
###Code
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
###Output
_____no_output_____
###Markdown
Writing the training methodNext we need to write the training code itself.
###Code
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
            # Zero accumulated gradients, run a forward pass, compute the loss,
            # backpropagate, and take an optimizer step
optimizer.zero_grad()
out = model.forward(batch_X)
loss = loss_fn(out, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
###Output
_____no_output_____
###Markdown
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
###Code
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
###Output
Epoch: 1, BCELoss: 0.6944038510322571
Epoch: 2, BCELoss: 0.6846871256828309
Epoch: 3, BCELoss: 0.6764788031578064
Epoch: 4, BCELoss: 0.667823314666748
Epoch: 5, BCELoss: 0.657815408706665
###Markdown
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained `train.py`.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train.py` file.
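The sketch below is an illustration only, not the provided `train.py`: it shows the typical `argparse` pattern for receiving SageMaker hyperparameters as command-line arguments. The argument names mirror the hyperparameters set in this notebook, and the `SM_MODEL_DIR`/`SM_CHANNEL_TRAINING` environment-variable defaults are assumptions about the standard SageMaker training container.
###Code
# Hypothetical excerpt illustrating how a training script might parse
# SageMaker-supplied hyperparameters; the real logic lives in train/train.py.
import argparse
import os
parser = argparse.ArgumentParser()
# Hyperparameters passed from the notebook (epochs, hidden_dim) plus model settings
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--embedding_dim', type=int, default=32)
parser.add_argument('--hidden_dim', type=int, default=100)
parser.add_argument('--vocab_size', type=int, default=5000)
# Directories supplied by the SageMaker container (assumed environment variables)
parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', '.'))
parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', '.'))
# parse_known_args() ignores unrelated arguments, so this sketch also runs in a notebook
args, _ = parser.parse_known_args()
print(args.epochs, args.hidden_dim, args.model_dir)
###Output
_____no_output_____
###Markdown
With the entry point in place, we construct the SageMaker PyTorch estimator, pass in our chosen hyperparameters, and launch the training job: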
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
###Output
2020-05-16 21:51:11 Starting - Starting the training job...
2020-05-16 21:51:13 Starting - Launching requested ML instances...
2020-05-16 21:52:10 Starting - Preparing the instances for training.........
2020-05-16 21:53:26 Downloading - Downloading input data......
2020-05-16 21:54:35 Training - Training image download completed. Training in progress..[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2020-05-16 21:54:36,055 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training[0m
[34m2020-05-16 21:54:36,081 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[34m2020-05-16 21:54:42,333 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[34m2020-05-16 21:54:42,584 sagemaker-containers INFO Module train does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2020-05-16 21:54:42,584 sagemaker-containers INFO Generating setup.cfg[0m
[34m2020-05-16 21:54:42,585 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2020-05-16 21:54:42,585 sagemaker-containers INFO Installing module with the following command:[0m
[34m/usr/bin/python -m pip install -U . -r requirements.txt[0m
[34mProcessing /opt/ml/code[0m
[34mCollecting pandas (from -r requirements.txt (line 1))[0m
[34m Downloading https://files.pythonhosted.org/packages/74/24/0cdbf8907e1e3bc5a8da03345c23cbed7044330bb8f73bb12e711a640a00/pandas-0.24.2-cp35-cp35m-manylinux1_x86_64.whl (10.0MB)[0m
[34mCollecting numpy (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/38/92/fa5295d9755c7876cb8490eab866e1780154033fa45978d9cf74ffbd4c68/numpy-1.18.4-cp35-cp35m-manylinux1_x86_64.whl (20.0MB)[0m
[34mCollecting nltk (from -r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/92/75/ce35194d8e3022203cca0d2f896dbb88689f9b3fce8e9f9cff942913519d/nltk-3.5.zip (1.4MB)[0m
[34mCollecting beautifulsoup4 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/e8/b5/7bb03a696f2c9b7af792a8f51b82974e51c268f15e925fc834876a4efa0b/beautifulsoup4-4.9.0-py3-none-any.whl (109kB)[0m
[34mCollecting html5lib (from -r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/a5/62/bbd2be0e7943ec8504b517e62bab011b4946e1258842bc159e5dfde15b96/html5lib-1.0.1-py2.py3-none-any.whl (117kB)[0m
[34mCollecting pytz>=2011k (from pandas->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/4f/a4/879454d49688e2fad93e59d7d4efda580b783c745fd2ec2a3adf87b0808d/pytz-2020.1-py2.py3-none-any.whl (510kB)[0m
[34mRequirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /usr/local/lib/python3.5/dist-packages (from pandas->-r requirements.txt (line 1)) (2.7.5)[0m
[34mRequirement already satisfied, skipping upgrade: click in /usr/local/lib/python3.5/dist-packages (from nltk->-r requirements.txt (line 3)) (7.0)[0m
[34mCollecting joblib (from nltk->-r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/28/5c/cf6a2b65a321c4a209efcdf64c2689efae2cb62661f8f6f4bb28547cf1bf/joblib-0.14.1-py2.py3-none-any.whl (294kB)[0m
[34mCollecting regex (from nltk->-r requirements.txt (line 3))[0m
[34m Downloading https://files.pythonhosted.org/packages/14/8d/d44863d358e9dba3bdfb06099bbbeddbac8fb360773ba73250a849af4b01/regex-2020.5.14.tar.gz (696kB)[0m
[34mCollecting tqdm (from nltk->-r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/c9/40/058b12e8ba10e35f89c9b1fdfc2d4c7f8c05947df2d5eb3c7b258019fda0/tqdm-4.46.0-py2.py3-none-any.whl (63kB)[0m
[34mCollecting soupsieve>1.2 (from beautifulsoup4->-r requirements.txt (line 4))[0m
[34m Downloading https://files.pythonhosted.org/packages/05/cf/ea245e52f55823f19992447b008bcbb7f78efc5960d77f6c34b5b45b36dd/soupsieve-2.0-py2.py3-none-any.whl[0m
[34mCollecting webencodings (from html5lib->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl[0m
[34mRequirement already satisfied, skipping upgrade: six>=1.9 in /usr/local/lib/python3.5/dist-packages (from html5lib->-r requirements.txt (line 5)) (1.11.0)[0m
[34mBuilding wheels for collected packages: nltk, train, regex
Running setup.py bdist_wheel for nltk: started
Running setup.py bdist_wheel for nltk: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/ae/8c/3f/b1fe0ba04555b08b57ab52ab7f86023639a526d8bc8d384306[0m
[34m Running setup.py bdist_wheel for train: started
Running setup.py bdist_wheel for train: finished with status 'done'
Stored in directory: /tmp/pip-ephem-wheel-cache-8a_4unow/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3
Running setup.py bdist_wheel for regex: started[0m
[34m Running setup.py bdist_wheel for regex: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/ee/3a/5c/1f0ce151d6ddeee56e03e933603e21b5b8dcc727989fde82f5[0m
[34mSuccessfully built nltk train regex[0m
[34mInstalling collected packages: numpy, pytz, pandas, joblib, regex, tqdm, nltk, soupsieve, beautifulsoup4, webencodings, html5lib, train
Found existing installation: numpy 1.15.4[0m
[34m Uninstalling numpy-1.15.4:
Successfully uninstalled numpy-1.15.4[0m
[34mSuccessfully installed beautifulsoup4-4.9.0 html5lib-1.0.1 joblib-0.14.1 nltk-3.5 numpy-1.18.4 pandas-0.24.2 pytz-2020.1 regex-2020.5.14 soupsieve-2.0 tqdm-4.46.0 train-1.0.0 webencodings-0.5.1[0m
[34mYou are using pip version 18.1, however version 20.1 is available.[0m
[34mYou should consider upgrading via the 'pip install --upgrade pip' command.[0m
[34m2020-05-16 21:55:06,347 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"job_name": "sagemaker-pytorch-2020-05-16-21-51-11-206",
"network_interface_name": "eth0",
"num_gpus": 1,
"input_dir": "/opt/ml/input",
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"additional_framework_parameters": {},
"log_level": 20,
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-west-2-318365624192/sagemaker-pytorch-2020-05-16-21-51-11-206/source/sourcedir.tar.gz",
"resource_config": {
"network_interface_name": "eth0",
"hosts": [
"algo-1"
],
"current_host": "algo-1"
},
"num_cpus": 4,
"module_name": "train",
"output_dir": "/opt/ml/output",
"input_data_config": {
"training": {
"RecordWrapperType": "None",
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated"
}
},
"output_intermediate_dir": "/opt/ml/output/intermediate",
"output_data_dir": "/opt/ml/output/data",
"input_config_dir": "/opt/ml/input/config",
"hyperparameters": {
"epochs": 10,
"hidden_dim": 200
},
"hosts": [
"algo-1"
],
"user_entry_point": "train.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_MODULE_NAME=train[0m
[34mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_USER_ENTRY_POINT=train.py[0m
[34mSM_CHANNELS=["training"][0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_HP_EPOCHS=10[0m
[34mSM_NUM_CPUS=4[0m
[34mPYTHONPATH=/usr/local/bin:/usr/lib/python35.zip:/usr/lib/python3.5:/usr/lib/python3.5/plat-x86_64-linux-gnu:/usr/lib/python3.5/lib-dynload:/usr/local/lib/python3.5/dist-packages:/usr/lib/python3/dist-packages[0m
[34mSM_NUM_GPUS=1[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-west-2-318365624192/sagemaker-pytorch-2020-05-16-21-51-11-206/source/sourcedir.tar.gz[0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"epochs":10,"hidden_dim":200},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","job_name":"sagemaker-pytorch-2020-05-16-21-51-11-206","log_level":20,"model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-west-2-318365624192/sagemaker-pytorch-2020-05-16-21-51-11-206/source/sourcedir.tar.gz","module_name":"train","network_interface_name":"eth0","num_cpus":4,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train.py"}[0m
[34mSM_HP_HIDDEN_DIM=200[0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_USER_ARGS=["--epochs","10","--hidden_dim","200"][0m
[34mSM_HPS={"epochs":10,"hidden_dim":200}[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_MODEL_DIR=/opt/ml/model
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python -m train --epochs 10 --hidden_dim 200
[0m
[34mUsing device cuda.[0m
[34mGet train data loader.[0m
###Markdown
Step 5: Testing the model. We will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testing. Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this. There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made. **NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`). Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is. **NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for. In other words: **If you are no longer using a deployed endpoint, shut it down!** **TODO:** Deploy the trained model.
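For orientation, here is a minimal sketch of the `train.py` layout this relies on — the project's actual script contains the full training loop; only the structure matters here, and the bodies below are placeholders:

```python
# Sketch only: model_fn sits at module level so the inference container can run
# `from train import model_fn`; the training code runs only behind the main guard.
import argparse

def model_fn(model_dir):
    """Load and return the trained model stored under model_dir."""
    ...

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # hyperparameters such as --epochs and --hidden_dim are parsed here,
    # after which the data is loaded and the training loop runs
    ...
```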
###Code
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
---------------!
###Markdown
Step 7 - Use the model for testing. Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
###Code
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
More testing. We now have a trained, deployed model to which we can send processed reviews and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
###Code
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
###Output
_____no_output_____
###Markdown
The question we now need to answer is: how do we send this review to our model? Recall that in the first section of this notebook we did a bunch of data processing on the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a sequence of integers using `word_dict` In order to process the review we will need to repeat these two steps. **TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
###Code
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = review_to_words(test_review)
test_data = [np.array(convert_and_pad(word_dict, test_data)[0])]
###Output
_____no_output_____
###Markdown
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
###Code
predictor.predict(test_data)
###Output
_____no_output_____
###Markdown
Since the return value of our model is close to `1`, we can be fairly certain that the review we submitted is positive. Delete the endpoint. Once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
###Code
estimator.delete_endpoint()
###Output
_____no_output_____
###Markdown
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference code Done ! Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
###Code
!pygmentize serve/predict.py
###Output
import argparse
import json
import os
import pickle
import sys
import sagemaker_containers
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data

from model import LSTMClassifier

from utils import review_to_words, convert_and_pad

def model_fn(model_dir):
    """Load the PyTorch model from the `model_dir` directory."""
    print("Loading model.")

    # First, load the parameters used to create the model.
    model_info = {}
    model_info_path = os.path.join(model_dir, 'model_info.pth')
    with open(model_info_path, 'rb') as f:
        model_info = torch.load(f)

    print("model_info: {}".format(model_info))

    # Determine the device and construct the model.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = LSTMClassifier(model_info['embedding_dim'], model_info['hidden_dim'], model_info['vocab_size'])

    # Load the store model parameters.
    model_path = os.path.join(model_dir, 'model.pth')
    with open(model_path, 'rb') as f:
        model.load_state_dict(torch.load(f))

    # Load the saved word_dict.
    word_dict_path = os.path.join(model_dir, 'word_dict.pkl')
    with open(word_dict_path, 'rb') as f:
        model.word_dict = pickle.load(f)

    model.to(device).eval()

    print("Done loading model.")
    return model

def input_fn(serialized_input_data, content_type):
    print('Deserializing the input data.')
    if content_type == 'text/plain':
        data = serialized_input_data.decode('utf-8')
        return data
    raise Exception('Requested unsupported ContentType in content_type: ' + content_type)

def output_fn(prediction_output, accept):
    print('Serializing the generated output.')
    return str(prediction_output)

def predict_fn(input_data, model):
    print('Inferring sentiment of input data.')

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    if model.word_dict is None:
        raise Exception('Model has not been loaded properly, no word_dict.')

    # TODO: Process input_data so that it is ready to be sent to our model.
    #       You should produce two variables:
    #       data_X   - A sequence of length 500 which represents the converted review
    #       data_len - The length of the review
    data_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))

    # Using data_X and data_len we construct an appropriate input tensor. Remember
    # that our model expects input data of the form 'len, review[500]'.
    data_pack = np.hstack((data_len, data_X))
    data_pack = data_pack.reshape(1, -1)

    data = torch.from_numpy(data_pack)
    data = data.to(device)

    # Make sure to put the model into evaluation mode
    model.eval()

    # TODO: Compute the result of applying the model to the input data. The variable `result` should
    #       be a numpy array which contains a single integer which is either 1 or 0
    with torch.no_grad():
        output = model.forward(data)

    result = np.round(output.numpy())

    return result
###Markdown
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory. **TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the model. Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container. **NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
###Code
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
-------------!
###Markdown
Testing the model. Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
###Code
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
###Output
_____no_output_____
###Markdown
As an additional test, we can try sending the `test_review` that we looked at earlier.
###Code
predictor.predict(test_review)
###Output
_____no_output_____
###Markdown
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app. > This entire section and the next contain tasks mostly done using the AWS console. So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services. In the middle of this setup is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint. Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda function. The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda function. Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function. Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**. In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**. Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda function. Now it is time to actually create the Lambda function. Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**. On the next page you will see some information about the Lambda function you've just created.
If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.

```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```

Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
###Code
predictor.endpoint
###Output
_____no_output_____
###Markdown
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**. Step 4: Deploying our web appNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill. Example review: Missed it at the cinema, but was always slightly compelled. 
Found it in the throw-out bin at my local video shop for a measly two bucks! Will I now give it away to anyone who wants it? Probably! No purposeful plot, one dimensional characters, plastic world ripped off from many far better films, no decent dialogue to speak of. You know that empty feeling when you come down off ecstasy? Its that feeling right here. Sad thing is, the Australia I know is heading in this direction, minus the melodrama and simple answers. Interesting only to see the older Aussie actors (who had to ACT back in their day to get by) vs the newer Aussie actors (who have to LOOK GOOD to get by). Like some horribly garish narrative introduction to a film clip that never actually starts... Poor Kylie, started her career as an actress as well...Result: Your review was NEGATIVE!
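Before tearing the endpoint down, it can also be handy to hit the public API directly from Python. The snippet below is only an illustrative smoke test — the `requests` call and the placeholder URL are not part of the project files; substitute the Invoke URL that API Gateway gave you:

```python
# Hypothetical check of the API Gateway -> Lambda -> SageMaker chain (not in the original notebook).
import requests

api_url = 'https://<your-api-id>.execute-api.<your-region>.amazonaws.com/prod'  # placeholder Invoke URL
review = 'The simplest pleasures in life are the best, and this film is one of them.'
response = requests.post(api_url, data=review.encode('utf-8'))
print(response.text)  # the Lambda returns the raw prediction string produced by the endpoint
```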
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
stable/_downloads/11f39f61bd7f4cfd5791b0d10da462f2/plot_eeg_erp.ipynb | ###Markdown
EEG processing and Event Related Potentials (ERPs) ================================================== For a generic introduction to the computation of ERPs and ERFs, see `tut_epoching_and_averaging`.
###Code
import mne
from mne.datasets import sample
###Output
_____no_output_____
###Markdown
Setup for reading the raw data
###Code
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# these data already have an EEG average reference
raw = mne.io.read_raw_fif(raw_fname, preload=True)
###Output
_____no_output_____
###Markdown
Let's restrict the data to the EEG channels
###Code
raw.pick_types(meg=False, eeg=True, eog=True)
###Output
_____no_output_____
###Markdown
By looking at the measurement info you will see that we now have 59 EEG channels and 1 EOG channel
###Code
print(raw.info)
###Output
_____no_output_____
###Markdown
In practice it's quite common to have some EEG channels that are actually EOG channels. To change a channel type you can use the :func:`mne.io.Raw.set_channel_types` method. For example, to treat an EOG channel as EEG you can change its type using
###Code
raw.set_channel_types(mapping={'EOG 061': 'eeg'})
print(raw.info)
###Output
_____no_output_____
###Markdown
And to change the name of the EOG channel
###Code
raw.rename_channels(mapping={'EOG 061': 'EOG'})
###Output
_____no_output_____
###Markdown
Let's reset the EOG channel back to EOG type.
###Code
raw.set_channel_types(mapping={'EOG': 'eog'})
###Output
_____no_output_____
###Markdown
The EEG channels in the sample dataset already have locations. These locations are available in the 'loc' field of each channel description. For the first channel we get
###Code
print(raw.info['chs'][0]['loc'])
###Output
_____no_output_____
###Markdown
And it's actually possible to plot the channel locations using :func:`mne.io.Raw.plot_sensors`.
###Code
raw.plot_sensors()
raw.plot_sensors('3d') # in 3D
###Output
_____no_output_____
###Markdown
Setting EEG Montage (using standard montages) --------------------------------------------- In the case where your data don't have locations, you can set them using a :class:`mne.channels.Montage`. MNE comes with a set of default montages. To read one of them do:
###Code
montage = mne.channels.read_montage('standard_1020')
print(montage)
###Output
_____no_output_____
###Markdown
To apply a montage to your data, use the ``set_montage`` method. Here we don't actually call this function, as our demo dataset already contains good EEG channel locations. Next we'll explore the definition of the reference. Setting EEG reference --------------------- Let's first remove the reference from our Raw object. This explicitly prevents MNE from adding a default EEG average reference required for source localization.
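As mentioned above, the montage is not applied here because the sample data already has sensor positions. For a recording without positions, the call would look roughly like this (illustration only, not executed in this tutorial):

```python
# Not executed here: attach standard 10-20 positions to a Raw object that lacks them.
montage = mne.channels.read_montage('standard_1020')
raw.set_montage(montage)
```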
###Code
raw_no_ref, _ = mne.set_eeg_reference(raw, [])
###Output
_____no_output_____
###Markdown
We next define Epochs and compute an ERP for the left auditory condition.
###Code
reject = dict(eeg=180e-6, eog=150e-6)
event_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5
events = mne.read_events(event_fname)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
reject=reject)
evoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average()
del raw_no_ref # save memory
title = 'EEG Original reference'
evoked_no_ref.plot(titles=dict(eeg=title), time_unit='s')
evoked_no_ref.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')
###Output
_____no_output_____
###Markdown
**Average reference**: This is normally added by default, but can also be added explicitly.
###Code
raw.del_proj()
raw_car, _ = mne.set_eeg_reference(raw, 'average', projection=True)
evoked_car = mne.Epochs(raw_car, **epochs_params).average()
del raw_car # save memory
title = 'EEG Average reference'
evoked_car.plot(titles=dict(eeg=title), time_unit='s')
evoked_car.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')
###Output
_____no_output_____
###Markdown
**Custom reference**: Use the mean of channels EEG 001 and EEG 002 as a reference
###Code
raw_custom, _ = mne.set_eeg_reference(raw, ['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw_custom, **epochs_params).average()
del raw_custom # save memory
title = 'EEG Custom reference'
evoked_custom.plot(titles=dict(eeg=title), time_unit='s')
evoked_custom.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')
###Output
_____no_output_____
###Markdown
Evoked arithmetic (e.g. differences) ------------------------------------ Trial subsets from Epochs can be selected using 'tags' separated by '/'. Evoked objects support basic arithmetic. First, we create an Epochs object containing 4 conditions.
###Code
event_id = {'left/auditory': 1, 'right/auditory': 2,
'left/visual': 3, 'right/visual': 4}
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
reject=reject)
epochs = mne.Epochs(raw, **epochs_params)
print(epochs)
###Output
_____no_output_____
###Markdown
Next, we create averages of stimulation-left vs stimulation-right trials. We can use basic arithmetic to, for example, construct and plot difference ERPs.
###Code
left, right = epochs["left"].average(), epochs["right"].average()
# create and plot difference ERP
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
mne.combine_evoked([left, -right], weights='equal').plot_joint(**joint_kwargs)
###Output
_____no_output_____
###Markdown
This is an equal-weighting difference. If you have imbalanced trial numbers, you could also consider equalizing the number of events per condition (using :meth:`epochs.equalize_event_counts`). As an example, first, we create individual ERPs for each condition.
###Code
aud_l = epochs["auditory", "left"].average()
aud_r = epochs["auditory", "right"].average()
vis_l = epochs["visual", "left"].average()
vis_r = epochs["visual", "right"].average()
all_evokeds = [aud_l, aud_r, vis_l, vis_r]
print(all_evokeds)
###Output
_____no_output_____
###Markdown
This can be simplified with a Python list comprehension:
###Code
all_evokeds = [epochs[cond].average() for cond in sorted(event_id.keys())]
print(all_evokeds)
# Then, we construct and plot an unweighted average of left vs. right trials
# this way, too:
mne.combine_evoked(
[aud_l, -aud_r, vis_l, -vis_r], weights='equal').plot_joint(**joint_kwargs)
###Output
_____no_output_____
###Markdown
Often, it makes sense to store Evoked objects in a dictionary or a list - either for different conditions, or for different subjects.
###Code
# If they are stored in a list, they can be easily averaged, for example,
# for a grand average across subjects (or conditions).
grand_average = mne.grand_average(all_evokeds)
mne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds)
# If Evokeds objects are stored in a dictionary, they can be retrieved by name.
all_evokeds = dict((cond, epochs[cond].average()) for cond in event_id)
print(all_evokeds['left/auditory'])
# Besides for explicit access, this can be used for example to set titles.
for cond in all_evokeds:
all_evokeds[cond].plot_joint(title=cond, **joint_kwargs)
###Output
_____no_output_____ |
Cap00_PresentacionCurso_202101.ipynb | ###Markdown
ST0256 - Numerical Analysis. Course Presentation, 2021/01. MEDELLÍN - COLOMBIA. Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Carlos Alberto Alvarez Henao *** ***Instructor:*** Carlos Alberto Álvarez Henao, I.C. D.Sc. ***e-mail:*** [email protected] ***skype:*** carlos.alberto.alvarez.henao ***Tool:*** [Jupyter](http://jupyter.org/) ***Kernel:*** Python 3.8 *** Table of Contents: 1 Motivation, 2 General aspects of the course, 2.1 Class-by-class schedule, 2.2 Assessment, 2.3 Bibliography, 2.4 Office hours and academic tutoring. Motivation: We want to carry out the following arithmetic operations: - $2+2$ - $4 \times 4$ - $\left(\sqrt{3} \right )^2$ From an analytical point of view, the exact solutions (by hand and on paper?) are - $2+2 = 4$ - $4 \times 4 = 16$ - $\left(\sqrt{3} \right )^2 = 3$ but let's see what happens when we perform the same operations using an electronic device (calculator, computer, etc.)
###Code
a = 2 + 2
b = 4 * 4
c = (3**(1/2))**2
###Output
_____no_output_____
###Markdown
Let's ask the computer whether the results obtained in the calculations are the ones we expect
###Code
a == 4
b == 16
c == 3
###Output
_____no_output_____
###Markdown
`False`? What happened? Why does comparing the value we understand to be true with the one obtained using an electronic device (calculator) come out false? Let's see, then, what result the calculation produced:
###Code
print(c)
###Output
2.9999999999999996
|
capstone/SystemDemonstration.ipynb | ###Markdown
Load and transform the portfolio
###Code
## Load the portfolio data and transform it to feed the networks
data_dir = './data'
portfolio_data_path = os.path.join(data_dir, 'portfolio.json')
portfolio_df = pd.read_json(portfolio_data_path, orient='records', lines=True)
# Set id as index
portfolio_df.set_index(keys='id', verify_integrity=True, inplace=True)
display(portfolio_df.style.set_caption('Portfolio (raw values)'))
# Make offer_type one hot encoded
portfolio_df = portfolio_df.join(
pd.get_dummies(portfolio_df.pop('offer_type')))
# Transform channels in distinct features
channels_df = pd.DataFrame(portfolio_df.pop('channels'))
channels_df = channels_df.explode('channels')
channels_df = channels_df.assign(value=lambda x: 1)
channels_df = channels_df.pivot(columns='channels', values='value')
channels_df.fillna(value=0, inplace=True)
portfolio_df = portfolio_df.join(channels_df)
channels_df = None
# Remove email column
portfolio_df.drop(columns='email', inplace=True)
# Scale values
data = portfolio_df[['reward','difficulty','duration']]
data = scale(data)
portfolio_df[['reward','difficulty','duration']] = data
# Include the case of not sending an offer
portfolio_df = portfolio_df.append(pd.DataFrame(
[[0, 0, 0, 0, 0, 0, 0, 0, 0]],
columns=portfolio_df.columns,
index=['no_offer_sending']))
display(portfolio_df.style.set_caption('Portfolio (transformed)'))
###Output
_____no_output_____
###Markdown
Demonstrate the system in use. Analyze one specific customer. In this case, we analyze one specific customer to understand how past events affect the choice of which offer to send next. Here, we pick one customer from the test dataset, so the features are already transformed. Otherwise, it would be necessary to transform the data in the same manner as in the Feature Engineering phase. Each offer from the portfolio dataset is concatenated to the customer data so that we have a feature vector. The customer history is given by a sequence of feature vectors, one per offer received in past moments. To obtain the probability of an offer being accepted, we first concatenate the customer history up to that moment with the new feature vector, and then pass the resulting sequence into the network. In this example, I split the customer history into timesteps to determine which offer would have been the most appropriate at each past moment.
###Code
with torch.no_grad():
# Get one batch of customer histories
cust_history_list, cust_y_true_list = next(iter(test_dataloader))
# Get the history of one customer
cust_history = cust_history_list[8],
cust_y_true = cust_y_true_list[8]
print("Offers sent by the current system")
print(cust_history[0].squeeze()[:,:9].numpy())
print("True labels for the offers sent")
print(cust_y_true.numpy())
# Get the customer demographic data
cust_data = cust_history[0][0][-7:]
# Get the data of each offer
offer_data = portfolio_df.reset_index().values
# Build features by concatenating offer data + customer data
# Create one feature vector for each offer
features = np.repeat(cust_data.view(1,-1), 11, axis=0)
features = np.concatenate((offer_data[:,1:], features), axis=1)
features = torch.tensor(features.tolist())
features = features.unsqueeze(1) # batch: 11 offers; seq_length: 1 history
# Calculate the probabilities after each interaction
y_pred_moments = []
for moment in range(7):
history = cust_history[0][:moment].unsqueeze(0)
history = history.repeat(11,1,1)
features_moment = torch.cat((history, features), dim=1)
y_pred = predictor(features_moment)
y_pred = torch.softmax(y_pred, dim=2)
y_pred_moments.append(y_pred[:,-1,1].numpy()*100)
# if moment == 6:
# print(cust_history)
# print(cust_y_true)
# print(y_pred[-1].topk(1,dim=1)[1].squeeze())
# # print(y_pred[-1])
y_pred_df = pd.DataFrame(y_pred_moments,
columns=portfolio_df.index,
index=['moment_1', 'moment_2', 'moment_3',
'moment_4', 'moment_5', 'moment_6',
'moment_7']).T
y_pred_df = y_pred_df.style.background_gradient(cmap='YlGn', axis=0)
y_pred_df.set_caption('Probabilites at each moment')
display(y_pred_df)
###Output
Offers sent by the current system
[[-1.2352941 -1.3917431 -1.1351916 0. 0. 1.
1. 0. 1. ]
[ 0. 0. 0. 0. 0. 0.
0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0.
0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0.
0. 0. 0. ]
[ 1.7058823 0.41571546 -0.6811149 1. 0. 0.
1. 1. 1. ]
[ 0.23529412 -0.4880138 0.22703831 1. 0. 0.
1. 0. 1. ]]
True labels for the offers sent
[1 1 0 1 0 0]
###Markdown
Make predictions for a batch of customers. In this case, one batch of customer histories is passed into the network to get predictions for the next offer-sending moment. In the resulting table, each column represents one customer, each row one offer, and each cell the probability of completion for that offer given that customer.
###Code
with torch.no_grad():
## Get one batch of customer history
cust_history_list, cust_y_true_list = next(iter(test_dataloader))
## For each customer, make predictions for each offer
cust_result = []
for cust_history, cust_y_true in zip(cust_history_list, cust_y_true_list):
# Get the customer demographic data
cust_data = cust_history[0][-7:].unsqueeze(0)
# Get the data of each offer
offer = portfolio_df.reset_index().values
# Build features by concatenating offer data + customer data
# Create one feature vector for each offer
cust_data = np.repeat(cust_data, 11, axis=0)
feat_offer = np.concatenate((offer[:,1:], cust_data), axis=1)
feat_offer = torch.tensor(feat_offer.tolist())
# Concatenate the offers after the user history
# Once for each offer
feat_offer = feat_offer.unsqueeze(1)
cust_history = cust_history.repeat(11,1,1)
features = torch.cat((cust_history,feat_offer),dim=1)
# Calculate the probabilities after each offer
y_linear = predictor(features)
y_linear = torch.softmax(y_linear, dim=2)
# Store the probabilities for this customer
cust_result.append(y_linear[:,-1,1].numpy()*100)
cust_df = pd.DataFrame(cust_result, columns=portfolio_df.index).T
cust_df = cust_df.style.background_gradient(cmap='YlGn')
display(cust_df)
###Output
_____no_output_____
###Markdown
Starbucks Capstone Challenge System Demonstration. This notebook aims to demonstrate how the neural network trained in this project would be used as part of a direct marketing system. Import the necessary libraries
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
import sklearn.metrics as metrics
from sklearn.preprocessing import scale
from models import RecurrentNN
pd.set_option('display.precision', 2)
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
###Output
_____no_output_____
###Markdown
Load the datasets
###Code
!unzip -o dataloaders.zip
train_dataloader, valid_dataloader, test_dataloader = \
torch.load('dataloaders.pt')
###Output
Archive: dataloaders.zip
inflating: dataloaders.pt
###Markdown
Load the trained model
###Code
predictor = RecurrentNN(input_size=16, output_size=2,
hidden_size=128, hidden_layers=2)
## Load the saved model
predictor.load_state_dict(
torch.load('recurrent_classifier.pt'))
predictor.eval()
print('Model loaded successfully')
###Output
Model loaded successfully
|
Python Time Series.ipynb | ###Markdown
DateTime Index Built-in Python libraries for dates and times
###Code
import pandas as pd # needed below for pd.DatetimeIndex if this cell is run first
from datetime import datetime # allows creation of timestamps/specific date objects
# variables
my_year = 2019
my_month = 5
my_day = 1
# datetime functionality, takes in year, month, day, time
my_date = datetime(my_year, my_month, my_day)
my_date
# convert a list of two datetime objects to an index:
my_list = [datetime(2019,1,1), datetime(2019,1,2)]
# convert a NumPy array or list to an index
dt_idx = pd.DatetimeIndex(my_list)
###Output
_____no_output_____
###Markdown
*** Time Resampling Financial datasets - data has DateTime index on smaller scale Better to aggregate data based on some frequency Eg: monthly, quarterly, etc. pandas has frequency sampling tools Download stock market data from Yahoo Finance
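The cell below reads a CSV exported by hand from Yahoo Finance. If you prefer to fetch the data programmatically, the third-party `yfinance` package can produce an equivalent file — this is only a suggestion and is not used anywhere in this notebook:

```python
# Optional alternative to the manual CSV export (requires: pip install yfinance)
import yfinance as yf

tsla = yf.download('TSLA', start='2018-05-01', end='2019-05-01')  # example date range
tsla.to_csv('TSLA.csv')  # columns: Open, High, Low, Close, Adj Close, Volume
```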
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# read csv file
df = pd.read_csv('TSLA.csv')
# want Date column to be index
# convert it to a datetime index with pd.to_datetime()
# pass in the Series
df['Date'] = pd.to_datetime(df['Date'])
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 251 entries, 0 to 250
Data columns (total 7 columns):
Date 251 non-null datetime64[ns]
Open 251 non-null float64
High 251 non-null float64
Low 251 non-null float64
Close 251 non-null float64
Adj Close 251 non-null float64
Volume 251 non-null int64
dtypes: datetime64[ns](1), float64(5), int64(1)
memory usage: 13.8 KB
###Markdown
Set Date column as index:
###Code
# Set the Date column as the index; alternatively pass index_col='Date', parse_dates=True to read_csv
df.set_index('Date', inplace=True)
###Output
_____no_output_____
###Markdown
check the index with `df.index`:
###Code
df = pd.read_csv('TSLA.csv', index_col='Date', parse_dates=True)
df.index
###Output
_____no_output_____
###Markdown
To do time resampling, need datetime index Resample DataFrame with `df.resample()` Pass in a `rule` `rule` is how we want to resample the data [Every type of time series offset strings](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html) `rule` is acting as a `GroupBy` method specifically for time series data Example of the A `rule` "Year-end frequency" Get mean value based off resampling:
###Code
# mean value based off the end of the year resampling
df.resample(rule='A').mean()
###Output
_____no_output_____
###Markdown
Mean Open value for everything before 2018-12-31 = \\$316.12 Mean Open value for everything between 2018-12-31 & 2019-12-31 = \\$292.08 Get mean value after quarterly resampling with Q `rule`:
###Code
# mean value based off quarterly resampling
df.resample(rule='Q').mean()
###Output
_____no_output_____
###Markdown
*** Time Shifts Often, time series forecasting models require forward and backward shifting of the data by a certain number of time steps. Use pandas' `.shift()` method. To shift the data forward by 1 step, use: `df.shift(periods = 1).head()`
###Code
df.shift(periods=1).head()
###Output
_____no_output_____
###Markdown
The first time period will no longer have any values. To shift the data backwards by 1 step, use: `df.shift(periods = -1).head()` *** Pandas Rolling & Expanding To create a rolling mean based off a given time period, use pandas' `.rolling()` method. Daily financial data can be noisy; to generate a signal about the general trend of the data, use a rolling mean / moving average.
###Code
# Plot daily data
df['Open'].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
To average out by the week, get a moving average on either: - a particular column or Series, or - the entire DataFrame, using the `.rolling()` method
###Code
# Pass in 7 as window
# Then add aggregate function .mean()
df.rolling(7).mean().head(14)
###Output
_____no_output_____
###Markdown
First 6 values are null. 7th value = mean of first 6 rows
###Code
# Plot the Open column VS 7-day moving average of Close column:
df['Open'].plot()
df.rolling(7).mean()['Close'].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
Blue line = Open price column Orange line = Rolling 7-day Close price Instead of taking 7-day rolling window, Account everything since the beginning of the time series to where we are at that point.
###Code
# Use .expanding() method
df['Close'].expanding().mean().plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
Each time step on x-axis, y-axis shows value of everything that came before it averaged out. *** Bollinger Bands Bollinger Bands are volatility bands placed above and below a moving average, where the volatility is based off the standard deviation which changes as volatility increases or decreases. Volatility increases -> Bands widen Volatility decreases -> Bands narrow Create 3 columns and plot them: 1. Closing 20-day Moving Average2. Upper band equal to 20-day MA + 2 times the Standard Deviation over 20 days3. Lower band equal to 20-day MA - 2 times the Standard Deviation over 20 days
###Code
# Close 20 MA
df['Close: 20 Day Mean'] = df['Close'].rolling(20).mean()
# Upper = 20MA + 2*std(20)
df['Upper'] = df['Close: 20 Day Mean'] + 2*(df['Close'].rolling(20).std())
# Lower = 20MA - 2*std(20)
df['Lower'] = df['Close: 20 Day Mean'] - 2*(df['Close'].rolling(20).std())
# Plot Close
df[['Close','Close: 20 Day Mean','Upper','Lower']].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
*** Time Series Analysis - Introduction to Statsmodel- ETS Models & Decomposition- EWMA Models- ARIMA Models Introduction to Statsmodel The most popular Python library for dealing with time series data is [StatsModels](https://www.statsmodels.org/stable/index.html): `statsmodels` is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration. StatsModels is heavily inspired by the statistical programming language R. Used to explore data, estimate statistical models, and perform statistical tests. Using [time series analysis](https://www.statsmodels.org/stable/tsa.html) `tsa module`: Import `statsmodels.api as sm` Then load a dataset that comes with the library Then load macrodata dataset:
###Code
import statsmodels.api as sm
# import dataset with load_pandas() method and .data attribute
df = sm.datasets.macrodata.load_pandas().data
df.head()
###Output
_____no_output_____
###Markdown
Set the year to be the time series index:
###Code
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1','2009Q3'))
df.index = index
# Plot realgdp column
df['realgdp'].plot()
###Output
_____no_output_____
###Markdown
Hodrick-Prescott filter: `sm.tsa.filters.hpfilter(df['realgdp'])` This returns a tuple of the estimated cycle and estimated trend in the data. Use tuple unpacking to get the trend. Plot
###Code
# Tuple unpacking to get trend
# Plot it on top of current trend
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(df['realgdp'])
# add a column for the trend
df['trend'] = gdp_trend
# plot the real gdp & the trend
df[['realgdp','trend']].plot()
###Output
_____no_output_____
###Markdown
Error-Trend-Seasonality (ETS) Models with StatsModels ETS models take each of the terms for smoothing purposes - and may add them, multiply them, or leave some of them out of the model. A model to fit data will be created based off these factors. Time Series Decomposition with ETS is a method to break down a time series into ETS components.
###Code
# ETS decomposition for TSLA csv
from statsmodels.tsa.seasonal import seasonal_decompose
df = pd.read_csv('TSLA.csv')
result = seasonal_decompose(df['Adj Close'], model='additive', freq=12)
# Plot out the components - trend
result.trend.plot()
# Plot all the results
result.plot()
###Output
_____no_output_____
###Markdown
Exponentially Weighted Moving Average (EWMA) Models With `.rolling()`, a simple model (SMA) that describes a trend of a time series can be created. Some weaknesses of SMA: - A smaller window leads to more noise rather than signal - It always lags by the size of the window - It will never reach the peak / valley of the data due to averaging - It doesn't inform about future behaviour - Extreme historical values can skew the SMA
###Code
# create 1 month SMA off of Adj Close
df['30 Day SMA'] = df['Adj Close'].rolling(window=30).mean()
# plot SMA & Adj Close
df[['Adj Close', '30 Day SMA']].plot(figsize=(10,8))
###Output
_____no_output_____
###Markdown
EWMA solves some issues: - Able to reduce lag time from SMA and put more weight on values that occur more recently - The amount of weight applied to the recent values depends on the parameters used in the EWMA and the number of periods in the window size
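For reference, the weighting behind `ewm(span=...)` as documented for pandas: a `span` of $s$ corresponds to a smoothing factor $\alpha = \frac{2}{s + 1}$, and with the default `adjust=True` the EWMA at step $t$ is the weighted average $$y_t = \frac{\sum_{i=0}^{t} (1-\alpha)^i x_{t-i}}{\sum_{i=0}^{t} (1-\alpha)^i},$$ so more recent observations always carry the largest weights.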
###Code
# create EWMA
df['EWMA-30'] = df['Adj Close'].ewm(span=30).mean()
# plot EWMA
df[['Adj Close', 'EWMA-30']].plot(figsize=(10,8))
###Output
_____no_output_____ |
Tutorial-GRHD_Equations-Cartesian.ipynb | ###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`**Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** IntroductionWe write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf):\begin{eqnarray}\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},\end{eqnarray}where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$the $s$ source term is given in terms of ADM quantities via$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$and \begin{align}v^j &= \frac{u^j}{u^0} \\\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\h &= 1 + \epsilon + \frac{P}{\rho_0}.\end{align}Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.Thus the full set of input variables include:* Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$* Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$For completeness, the rest of the conservative variables are given by\begin{align}\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align} A Note on NotationAs is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:```pythonT4EMUU = ixp.zerorank2(DIM=4)for mu in range(4): for nu in range(4): Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]```When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:```pythonbetaD = ixp.zerorank1(DIM=3)for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j] * betaU[j]```As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). 
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:```python \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2for i in range(3): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2``` Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](importmodules): Import needed NRPy+ & Python modules1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()**1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]$$\label{importmodules}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import nrpyAbs      # NRPy+: Core C code output module
import NRPy_param_funcs as par   # NRPy+: parameter interface
import sympy as sp               # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp         # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
###Output
_____no_output_____
###Markdown
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]$$\label{stressenergy}$$Recall from above that$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}$$
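Before building $T^{\mu\nu}$ with the full ADM 4-metric below, it is worth sanity-checking the perfect-fluid form in the simplest possible setting. The sketch below is plain, self-contained SymPy (it does not touch the NRPy+ machinery used in the next cell) and assumes flat spacetime with $\alpha=1$, $\beta^i=0$, $\gamma_{ij}=\delta_{ij}$ and a fluid at rest, $u^\mu=(1,0,0,0)$; in that limit we expect $T^{00}=\rho_0(1+\epsilon)$ (the total energy density) and $T^{ii}=P$:
```python
# Self-contained SymPy sanity check (flat space, fluid at rest); not part of the NRPy+ pipeline.
import sympy as sp
rho0, P, epsilon = sp.symbols('rho0 P epsilon', positive=True)
h    = 1 + epsilon + P/rho0                # enthalpy, as defined above
g4UU = sp.diag(-1, 1, 1, 1)                # flat-space g^{mu nu}
u4U  = [1, 0, 0, 0]                        # fluid at rest
T4UU = [[rho0*h*u4U[mu]*u4U[nu] + P*g4UU[mu, nu] for nu in range(4)] for mu in range(4)]
print(sp.simplify(T4UU[0][0] - rho0*(1 + epsilon)))        # 0: T^{00} = total energy density
print([sp.simplify(T4UU[i][i] - P) for i in range(1, 4)])  # [0, 0, 0]: T^{ii} = P
```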
###Code
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
###Output
_____no_output_____
###Markdown
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]$$\label{primtoconserv}$$Recall from above that the conservative variables may be written as\begin{align}\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align}$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`gammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
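The same flat-space, static-fluid limit provides a quick check on these definitions: with $\alpha=1$, $\sqrt{\gamma}=1$, and $u^\mu=(1,0,0,0)$ we expect $\rho_*\to\rho_0$, $\tilde{S}_i\to 0$, and $\tilde{\tau}\to\rho_0\epsilon$, i.e., $\tilde{\tau}$ reduces to the internal energy density. A self-contained SymPy sketch of that limit (independent of the functions defined in the next cell):
```python
# Self-contained SymPy check of the static, flat-space limit of the conservative variables.
import sympy as sp
rho0, P, epsilon = sp.symbols('rho0 P epsilon', positive=True)
h   = 1 + epsilon + P/rho0
T00 = rho0*h*1*1 + P*(-1)          # T^{00} in this limit: rho_0 h (u^0)^2 + P g^{00}, with g^{00} = -1
rho_star  = 1*1*rho0*1             # alpha*sqrt(gamma)*rho_0*u^0   -> rho_0
tau_tilde = 1*1*T00 - rho_star     # alpha^2*sqrt(gamma)*T^{00} - rho_*
print(sp.simplify(rho_star - rho0))           # 0
print(sp.simplify(tau_tilde - rho0*epsilon))  # 0: tau_tilde is the internal energy density
```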
###Code
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
###Output
_____no_output_____
###Markdown
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]$$\label{grhdfluxes}$$ Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]$$\label{rhostarfluxterm}$$Recall from above that\begin{array}\ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0.\end{array}Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
###Code
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]
###Output
_____no_output_____
###Markdown
Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{array}\ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.\end{array}Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):
###Code
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
###Output
_____no_output_____
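As a follow-up check on the cell above (again a self-contained SymPy sketch rather than a call into the notebook's functions), in the flat-space, static-fluid limit the momentum flux should reduce to isotropic pressure, $\alpha\sqrt{\gamma}\,T^j{}_i \to P\,\delta^j{}_i$, while the $\tilde{\tau}$ flux vanishes since $T^{0j}=0$ and $v^j=0$ there:
```python
# Self-contained SymPy sketch: flat space, fluid at rest => T^j_i = P delta^j_i and T^{0j} = 0.
import sympy as sp
rho0, P, epsilon = sp.symbols('rho0 P epsilon', positive=True)
h    = 1 + epsilon + P/rho0
g4UU = sp.diag(-1, 1, 1, 1); g4DD = sp.diag(-1, 1, 1, 1)
u4U  = [1, 0, 0, 0]
T4UU = [[rho0*h*u4U[m]*u4U[n] + P*g4UU[m, n] for n in range(4)] for m in range(4)]
T4UD = [[sum(T4UU[m][d]*g4DD[d, n] for d in range(4)) for n in range(4)] for m in range(4)]
print([[sp.simplify(T4UD[j+1][i+1]) for i in range(3)] for j in range(3)])  # P on the diagonal, 0 off-diagonal
print(T4UU[0][1], T4UU[0][2], T4UU[0][3])                                   # 0 0 0: the tau_tilde flux vanishes
```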
###Markdown
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
###Code
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET
###Output
_____no_output_____
###Markdown
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$ Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](toc)\]$$\label{fourmetricderivs}$$To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$Thus $$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\\beta_{j,k} & \gamma_{ij,k}\end{pmatrix},$$where $\beta_{i} = \gamma_{ij} \beta^j$, so$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}$$
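The only subtlety in the expression above is the product rule acting on $\beta_j=\gamma_{jk}\beta^k$ inside $g_{00}$. A one-dimensional toy check with plain SymPy (independent of the NRPy+ implementation in the next cell), treating $\alpha$, $\beta^x$, and $\gamma_{xx}$ as arbitrary functions of a single coordinate $x$, confirms the $g_{00,k}$ entry:
```python
# Self-contained 1D SymPy check of g_{00,k} = -2 alpha alpha_{,k} + beta^j_{,k} beta_j + beta^j beta_{j,k}.
import sympy as sp
x     = sp.symbols('x', real=True)
alpha = sp.Function('alpha')(x)
beta  = sp.Function('beta')(x)    # stands in for the single component beta^x
gamma = sp.Function('gamma')(x)   # stands in for the single component gamma_xx
betaD = gamma*beta                              # beta_x = gamma_xx beta^x
g00   = -alpha**2 + beta*betaD                  # g_{00} = -alpha^2 + beta^k beta_k
lhs   = sp.diff(g00, x)
rhs   = -2*alpha*sp.diff(alpha, x) + sp.diff(beta, x)*betaD + beta*sp.diff(betaD, x)
print(sp.simplify(lhs - rhs))  # 0
```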
###Code
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
###Code
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
###Output
_____no_output_____
###Markdown
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} = \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$and the remaining components $u^i$ via$$u^i = u^0 v^i.$$In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}= \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.1. 
Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}$$If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R$$Then we can rescale *all* $v^i_{(n)}$ via$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$Finally, $u^0$ can be immediately and safely computed, via:$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$and $u^i$ via $$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
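Before handing this algorithm to NRPy+, the branch-free $\min(R,R_{\rm max})$ trick and the rescaling are easy to see in action with ordinary floating-point numbers. The sketch below is illustration only (plain Python rather than the symbolic machinery), and assumes $\gamma_{ij}=\delta_{ij}$ together with $\Gamma_{\rm max}=10$ (the notebook's default): sub-limit values of $R$ pass through essentially untouched, while super-limit values are clipped so the Lorentz factor never exceeds $\Gamma_{\rm max}$.
```python
# Numerical illustration of the branch-free speed limiter (not part of the symbolic pipeline).
GAMMA_SPEED_LIMIT = 10.0            # assumed value for this illustration (notebook default)
TINYDOUBLE        = 1e-100
Rmax = 1 - 1/GAMMA_SPEED_LIMIT**2   # = 0.99

def Rstar_of(R):
    # Branch-free min(R, Rmax): 0.5*(Rmax + R - |Rmax - R|)
    return 0.5*(Rmax + R - abs(Rmax - R))

for R in (0.0, 0.5, 0.999999):      # two sub-limit values and one super-limit value
    Rstar   = Rstar_of(R)
    rescale = (Rstar/(R + TINYDOUBLE))**0.5   # factor applied to each v^i_{(n)}
    Gamma   = 1.0/(1.0 - Rstar)**0.5          # Lorentz factor alpha*u^0
    print(R, Rstar, rescale, Gamma)
# R <= Rmax: Rstar = R and rescale = 1 (or 0 when R = 0, which is harmless since v = 0 there).
# R > Rmax: Rstar = Rmax, the velocity is rescaled, and Gamma is capped at GAMMA_SPEED_LIMIT.
```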
###Code
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
# - Adapted from Connie Francis' "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
        # If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
    # u^0 = 1/(alpha*sqrt(1-R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]
###Output
_____no_output_____
###Markdown
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
###Code
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)
###Output
_____no_output_____
###Markdown
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in1. this tutorial versus2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
###Code
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
    if idx2 is None:
        return basename+"["+str(idx1)+"]"
    if idx3 is None:
        return basename+"["+str(idx1)+"]["+str(idx2)+"]"
    return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
namecheck_list.extend([gfnm("u4_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
###Output
ALL TESTS PASSED!
###Markdown
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-GRHD_Equations-Cartesian.ipynb
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`**Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** IntroductionWe write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf):\begin{eqnarray}\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},\end{eqnarray}where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$the $s$ source term is given in terms of ADM quantities via$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$and \begin{align}v^j &= \frac{u^j}{u^0} \\\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\h &= 1 + \epsilon + \frac{P}{\rho_0}.\end{align}Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.Thus the full set of input variables include:* Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$* Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$For completeness, the rest of the conservative variables are given by\begin{align}\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align} A Note on NotationAs is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:```pythonT4EMUU = ixp.zerorank2(DIM=4)for mu in range(4): for nu in range(4): Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]```When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:```pythonbetaD = ixp.zerorank1(DIM=3)for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j] * betaU[j]```As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). 
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:```python \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2for i in range(3): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2``` Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](importmodules): Import needed NRPy+ & Python modules1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()**1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]$$\label{importmodules}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import nrpyAbs # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
###Output
_____no_output_____
###Markdown
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]$$\label{stressenergy}$$Recall from above that$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}$$
###Code
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
###Output
_____no_output_____
###Markdown
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]$$\label{primtoconserv}$$Recall from above that the conservative variables may be written as\begin{align}\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align}$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`gammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
###Code
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
###Output
_____no_output_____
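As a quick, throwaway usage check of the cell above (harmless to run here, since Step 7 below recomputes `sqrtgammaDET` from the symbolic ADM variables), applying `compute_sqrtgammaDET()` to the arbitrary diagonal test metric $\gamma_{ij}={\rm diag}(4,9,25)$ should give $\sqrt{\det\gamma}=\sqrt{900}=30$:
```python
# Quick usage sketch of compute_sqrtgammaDET(); assumes the cells above have been run.
test_gammaDD = [[4, 0, 0],
                [0, 9, 0],
                [0, 0, 25]]
compute_sqrtgammaDET(test_gammaDD)
print(sqrtgammaDET)  # -> 30
```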
###Markdown
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]$$\label{grhdfluxes}$$ Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]$$\label{rhostarfluxterm}$$Recall from above that\begin{array}\ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0.\end{array}Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
###Code
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]
###Output
_____no_output_____
###Markdown
Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{array}\ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.\end{array}Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):
###Code
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
###Output
_____no_output_____
###Markdown
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
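Note that the factor of 2 in Term 1 relies on the symmetry of $K_{ij}$: contracted against a symmetric tensor, $2T^{0i}\beta^j$ is equivalent to the symmetrized combination $T^{0i}\beta^j + T^{0j}\beta^i$. A short self-contained SymPy check of that statement (independent of the implementation in the next cell, using placeholder symbols for $T^{0i}$, $\beta^i$, and a symmetric $K_{ij}$):
```python
# Self-contained SymPy check: for symmetric K_{ij}, sum_{ij} 2 T^{0i} beta^j K_{ij}
#                             equals sum_{ij} (T^{0i} beta^j + T^{0j} beta^i) K_{ij}.
import itertools
import sympy as sp
T0   = sp.symbols('T0_0:3')     # stands in for T^{0i}
beta = sp.symbols('beta_0:3')   # stands in for beta^i
K    = sp.Matrix(3, 3, lambda i, j: sp.Symbol('K_%d%d' % (min(i, j), max(i, j))))  # symmetric K_{ij}
lhs  = sum(2*T0[i]*beta[j]*K[i, j] for i, j in itertools.product(range(3), repeat=2))
rhs  = sum((T0[i]*beta[j] + T0[j]*beta[i])*K[i, j] for i, j in itertools.product(range(3), repeat=2))
print(sp.simplify(lhs - rhs))   # 0
```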
###Code
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET
###Output
_____no_output_____
###Markdown
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$ Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](toc)\]$$\label{fourmetricderivs}$$To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$Thus $$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\\beta_{j,k} & \gamma_{ij,k}\end{pmatrix},$$where $\beta_{i} = \gamma_{ij} \beta^j$, so$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}$$
###Code
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
###Code
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
###Output
_____no_output_____
###Markdown
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$and the remaining components $u^i$ via$$u^i = u^0 v^i.$$In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.1. 
Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}$$If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R$$Then we can rescale *all* $v^i_{(n)}$ via$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$Finally, $u^0$ can be immediately and safely computed, via:$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$and $u^i$ via $$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
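A useful consistency check on the final expressions (again a self-contained SymPy sketch, written in a one-spatial-dimension toy model rather than with the NRPy+ variables used below) is that the resulting 4-velocity is properly normalized: with $u^0 = 1/(\alpha\sqrt{1-R})$ and $u^x = u^0\left(\alpha v^x_{(n)}-\beta^x\right)$, one finds $g_{\mu\nu}u^\mu u^\nu = -1$ identically:
```python
# Self-contained 1D SymPy check that g_{mu nu} u^mu u^nu = -1 for the u^mu constructed above.
import sympy as sp
alpha, beta, gamma, v = sp.symbols('alpha beta gamma v', positive=True)  # one spatial dimension
R   = gamma*v**2                      # R = gamma_{xx} (v^x_{(n)})^2
u0  = 1/(alpha*sp.sqrt(1 - R))        # u^0
ux  = u0*(alpha*v - beta)             # u^x = u^0 (alpha v^x_{(n)} - beta^x)
g00, g0x, gxx = -alpha**2 + gamma*beta**2, gamma*beta, gamma
norm = g00*u0**2 + 2*g0x*u0*ux + gxx*ux**2
print(sp.simplify(norm))              # -> -1
```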
###Code
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
# - Adapted from Connie Francis' "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
# If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
    # u^0 = 1/(alpha*sqrt(1-R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]
###Output
_____no_output_____
###Markdown
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
###Code
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)
###Output
_____no_output_____
###Markdown
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in1. this tutorial versus2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
###Code
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
namecheck_list.extend([gfnm("u4_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
###Output
ALL TESTS PASSED!
###Markdown
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian")
###Output
Created Tutorial-GRHD_Equations-Cartesian.tex, and compiled LaTeX file to
PDF file Tutorial-GRHD_Equations-Cartesian.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`**Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** IntroductionWe write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf):\begin{eqnarray}\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},\end{eqnarray}where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$the $s$ source term is given in terms of ADM quantities via$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$and \begin{align}v^j &= \frac{u^j}{u^0} \\\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\h &= 1 + \epsilon + \frac{P}{\rho_0}.\end{align}Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.Thus the full set of input variables include:* Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$* Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$For completeness, the rest of the conservative variables are given by\begin{align}\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align} A Note on NotationAs is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:```pythonT4EMUU = ixp.zerorank2(DIM=4)for mu in range(4): for nu in range(4): Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]```When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:```pythonbetaD = ixp.zerorank1(DIM=3)for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j] * betaU[j]```As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). 
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:```python \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2for i in range(3): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2``` Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](importmodules): Import needed NRPy+ & Python modules1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()**1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]$$\label{importmodules}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import * # NRPy+: Core C code output module
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
###Output
_____no_output_____
###Markdown
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]$$\label{stressenergy}$$Recall from above that$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}$$
###Code
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
###Output
_____no_output_____
###Markdown
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]$$\label{primtoconserv}$$Recall from above that the conservative variables may be written as\begin{align}\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align}$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`gammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
###Code
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
###Output
_____no_output_____
###Markdown
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]$$\label{grhdfluxes}$$ Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]$$\label{rhostarfluxterm}$$Recall from above that\begin{array}\ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0.\end{array}Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
###Code
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]
###Output
_____no_output_____
###Markdown
Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{array}\ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.\end{array}Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):
###Code
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
###Output
_____no_output_____
###Markdown
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
###Code
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET
###Output
_____no_output_____
###Markdown
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$ Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](toc)\]$$\label{fourmetricderivs}$$To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$Thus $$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\\beta_{j,k} & \gamma_{ij,k}\end{pmatrix},$$where $\beta_{i} = \gamma_{ij} \beta^j$, so$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}$$
###Code
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
###Code
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
###Output
_____no_output_____
###Markdown
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$and the remaining components $u^i$ via$$u^i = u^0 v^i.$$In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.1. 
Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}$$If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R$$Then we can rescale *all* $v^i_{(n)}$ via$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$Finally, $u^0$ can be immediately and safely computed, via:$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$and $u^i$ via $$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
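As a quick sanity check of the $R^* = \frac{1}{2}\left(R_{\rm max} + R - |R_{\rm max} - R|\right)$ trick, the short sketch below (an illustration added here, not part of the NRPy+ module; it assumes only standard SymPy) confirms numerically that $R^*=\min(R,R_{\rm max})$ on both sides of the ceiling:
```python
import sympy as sp

Rmax = sp.Rational(99, 100)   # e.g. Gamma_max = 10  ->  Rmax = 1 - 1/100
for R in (sp.Rational(1, 2), sp.Rational(999, 1000)):   # one value below, one above the ceiling
    Rstar = sp.Rational(1, 2)*(Rmax + R - sp.Abs(Rmax - R))
    print(R, "->", Rstar, " min =", sp.Min(R, Rmax))    # Rstar matches Min(R, Rmax) in both cases
```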
###Code
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
    # - Adapted from Don Ho's "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
# If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
    # u^0 = 1/(alpha*sqrt(1-R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]
###Output
_____no_output_____
###Markdown
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
###Code
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)
###Output
_____no_output_____
###Markdown
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in1. this tutorial versus2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
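The check below simply subtracts each pair of SymPy expressions and tests whether the difference prints as the string `"0"`; this suffices because both sets of expressions are constructed in exactly the same way, so like terms cancel automatically. A minimal illustration of that idea (with made-up expressions, not part of the validation code):
```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
a = x**2 + 2*x*y + y**2          # "tutorial" expression (hypothetical stand-in)
b = x**2 + 2*x*y + y**2          # "module" expression, built the same way
print(str(a - b))                # -> 0, which is exactly what comp_func tests for
c = (x + y)**2                   # algebraically equal, but built differently:
print(str(c - a), "|", sp.simplify(c - a))   # nonzero string, yet it simplifies to 0
```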
###Code
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
    # Mark the overall test as failed if the two expressions do not cancel identically.
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
    if idx2 is None:
        return basename+"["+str(idx1)+"]"
    if idx3 is None:
        return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
    namecheck_list.extend([gfnm("u4U_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
###Output
ALL TESTS PASSED!
###Markdown
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-GRHD_Equations-Cartesian.ipynb
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`**Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** IntroductionWe write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf):\begin{eqnarray}\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},\end{eqnarray}where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$the $s$ source term is given in terms of ADM quantities via$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$and \begin{align}v^j &= \frac{u^j}{u^0} \\\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\h &= 1 + \epsilon + \frac{P}{\rho_0}.\end{align}Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.Thus the full set of input variables include:* Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$* Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$For completeness, the rest of the conservative variables are given by\begin{align}\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align} A Note on NotationAs is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:```pythonT4EMUU = ixp.zerorank2(DIM=4)for mu in range(4): for nu in range(4): Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]```When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:```pythonbetaD = ixp.zerorank1()for i in range(DIM): for j in range(DIM): betaD[i] += gammaDD[i][j] * betaU[j]```As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). 
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:```python \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2for i in range(DIM): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2``` Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](importmodules): Import needed NRPy+ & Python modules1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()**1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]$$\label{importmodules}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import * # NRPy+: Core C code output module
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
###Output
_____no_output_____
###Markdown
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]$$\label{stressenergy}$$Recall from above that$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}$$
###Code
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
###Output
_____no_output_____
###Markdown
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]$$\label{primtoconserv}$$Recall from above that the conservative variables may be written as\begin{align}\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align}$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`gammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
###Code
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
###Output
_____no_output_____
###Markdown
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]$$\label{grhdfluxes}$$ Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]$$\label{rhostarfluxterm}$$Recall from above that\begin{array}\ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0.\end{array}Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
###Code
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]
###Output
_____no_output_____
###Markdown
Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{array}\ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.\end{array}Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):
###Code
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
###Output
_____no_output_____
###Markdown
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
###Code
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET
###Output
_____no_output_____
###Markdown
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$ Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](toc)\]$$\label{fourmetricderivs}$$To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$Thus $$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\\beta_{j,k} & \gamma_{ij,k}\end{pmatrix},$$where $\beta_{i} = \gamma_{ij} \beta^j$, so$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}$$
###Code
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1()
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2()
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
###Code
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
###Output
_____no_output_____
###Markdown
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} = \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$and the remaining components $u^i$ via$$u^i = u^0 v^i.$$In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}= \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.1. 
Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}$$If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R$$Then we can rescale *all* $v^i_{(n)}$ via$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$Finally, $u^0$ can be immediately and safely computed, via:$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$and $u^i$ via $$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
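To see the net effect of the limiter, the short numerical sketch below (an added illustration, not part of the NRPy+ module; plain Python floats, with $\alpha=1$, $\beta^i=0$, $\gamma_{ij}=\delta_{ij}$ assumed) shows that once $R>R_{\rm max}$ the rescaling caps the Lorentz factor $\alpha u^0$ at $\Gamma_{\rm max}$:
```python
from math import sqrt

GAMMA_MAX = 10.0
Rmax = 1.0 - 1.0/GAMMA_MAX**2
for R in (0.5, 0.999999):                      # one value below, one above the ceiling
    Rstar = 0.5*(Rmax + R - abs(Rmax - R))     # = min(R, Rmax)
    u0 = 1.0/sqrt(1.0 - Rstar)                 # alpha = 1 here, so u^0 is the Lorentz factor
    print(R, "-> Lorentz factor", u0)          # second case is capped at (approximately) GAMMA_MAX
```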
###Code
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
    # - Adapted from Don Ho's "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1()
for i in range(3):
        # If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) ) = sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
    # u^0 = 1/(alpha*sqrt(1-R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1()
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]
###Output
_____no_output_____
###Markdown
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
###Code
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)
###Output
_____no_output_____
###Markdown
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in1. this tutorial versus2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
###Code
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
    global all_passed  # needed so a detected mismatch actually flips the module-level flag
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
    if idx2 is None:
        return basename+"["+str(idx1)+"]"
    if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
    namecheck_list.extend([gfnm("u4U_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-GRHD_Equations-Cartesian.ipynb
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!pdflatex -interaction=batchmode Tutorial-GRHD_Equations-Cartesian.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-GRHD_Equations-Cartesian.ipynb to latex
[NbConvertApp] Writing 93682 bytes to Tutorial-GRHD_Equations-Cartesian.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`**Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** IntroductionWe write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf):\begin{eqnarray}\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},\end{eqnarray}where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$the $s$ source term is given in terms of ADM quantities via$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$and \begin{align}v^j &= \frac{u^j}{u^0} \\\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\h &= 1 + \epsilon + \frac{P}{\rho_0}.\end{align}Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.Thus the full set of input variables include:* Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$* Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$For completeness, the rest of the conservative variables are given by\begin{align}\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align} A Note on NotationAs is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:```pythonT4EMUU = ixp.zerorank2(DIM=4)for mu in range(4): for nu in range(4): Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]```When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:```pythonbetaD = ixp.zerorank1(DIM=3)for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j] * betaU[j]```As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). 
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:```python \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2for i in range(3): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2``` Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](importmodules): Import needed NRPy+ & Python modules1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()**1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]$$\label{importmodules}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import nrpyAbs # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
###Output
_____no_output_____
###Markdown
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]$$\label{stressenergy}$$Recall from above that$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}$$
###Code
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
###Output
_____no_output_____
###Markdown
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]$$\label{primtoconserv}$$Recall from above that the conservative variables may be written as\begin{align}\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align}$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`sqrtgammaDET` (the square root of the 3-metric determinant `gammaDET`), and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
###Code
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
###Output
_____no_output_____
###Markdown
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]$$\label{grhdfluxes}$$ Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]$$\label{rhostarfluxterm}$$Recall from above that\begin{array}\ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0.\end{array}Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
###Code
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]
###Output
_____no_output_____
###Markdown
Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{array}\ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.\end{array}Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):
###Code
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
###Output
_____no_output_____
###Markdown
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
###Code
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET
###Output
_____no_output_____
###Markdown
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$ Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](toc)\]$$\label{fourmetricderivs}$$To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$Thus $$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\\beta_{j,k} & \gamma_{ij,k}\end{pmatrix},$$where $\beta_{i} = \gamma_{ij} \beta^j$, so$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}$$
###Code
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
###Code
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
###Output
_____no_output_____
###Markdown
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$and the remaining components $u^i$ via$$u^i = u^0 v^i.$$In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.1. 
Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}$$If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R$$Then we can rescale *all* $v^i_{(n)}$ via$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$Finally, $u^0$ can be immediately and safely computed, via:$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$and $u^i$ via $$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
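As a quick numerical illustration of the absolute-value trick above (an added aside, not part of the original derivation; `GAMMA_SPEED_LIMIT=10` is just a sample value), $R^* = \frac{1}{2}\left(R_{\rm max} + R - |R_{\rm max} - R|\right)$ does reproduce $\min(R,R_{\rm max})$:
```python
# Illustrative numeric check that Rstar = (Rmax + R - |Rmax - R|)/2 equals min(R, Rmax).
GAMMA_SPEED_LIMIT = 10.0                      # sample value, for illustration only
Rmax = 1 - 1/GAMMA_SPEED_LIMIT**2
for R in (0.0, 0.3*Rmax, Rmax, 1.0, 50.0):
    Rstar = 0.5*(Rmax + R - abs(Rmax - R))
    assert abs(Rstar - min(R, Rmax)) < 1e-12, (R, Rstar)
print("Rstar reproduces min(R, Rmax) for all sampled R.")
```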
###Code
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
# - Adapted from Connie Francis' "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
# If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
    # u^0 = 1/(alpha * sqrt(1 - R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]
###Output
_____no_output_____
###Markdown
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
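One more illustrative cross-check before assembling the equations (an addition for clarity, not part of the original notebook; the `_chk` symbols are placeholder names): for a fluid at rest in flat space, $u^\mu=(1,0,0,0)$, the conservative energy variable should reduce to the internal energy density, $\tilde{\tau}=\rho_0\epsilon$:
```python
# Illustrative at-rest, flat-space check: expect tau_tilde -> rho_b*epsilon.
import sympy as sp
import indexedexp as ixp
one, zero = sp.sympify(1), sp.sympify(0)
flat_gammaDD = ixp.zerorank2(DIM=3)
for i in range(3):
    flat_gammaDD[i][i] = one                       # gamma_ij = delta_ij
flat_betaU = ixp.zerorank1(DIM=3)                  # beta^i = 0
u4U_rest   = [one, zero, zero, zero]               # fluid at rest
rho_chk, P_chk, eps_chk = sp.symbols("rho_chk P_chk eps_chk", real=True)
compute_T4UU(flat_gammaDD, flat_betaU, one, rho_chk, P_chk, eps_chk, u4U_rest)
compute_sqrtgammaDET(flat_gammaDD)
compute_rho_star(one, sqrtgammaDET, rho_chk, u4U_rest)
compute_tau_tilde(one, sqrtgammaDET, T4UU, rho_star)
print(sp.simplify(tau_tilde - rho_chk*eps_chk))    # expect: 0
```
(The globals this snippet sets are recomputed from the symbolic inputs in the cell below.)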
###Code
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)
###Output
_____no_output_____
###Markdown
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in1. this tutorial versus2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
###Code
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
    global all_passed  # needed so a detected mismatch actually flips the module-level flag
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
    namecheck_list.extend([gfnm("u4U_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
###Output
ALL TESTS PASSED!
###Markdown
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian")
###Output
Created Tutorial-GRHD_Equations-Cartesian.tex, and compiled LaTeX file to
PDF file Tutorial-GRHD_Equations-Cartesian.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`**Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** IntroductionWe write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf):\begin{eqnarray}\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},\end{eqnarray}where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$the $s$ source term is given in terms of ADM quantities via$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$and \begin{align}v^j &= \frac{u^j}{u^0} \\\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\h &= 1 + \epsilon + \frac{P}{\rho_0}.\end{align}Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.Thus the full set of input variables include:* Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$* Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$For completeness, the rest of the conservative variables are given by\begin{align}\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align} A Note on NotationAs is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:```pythonT4EMUU = ixp.zerorank2(DIM=4)for mu in range(4): for nu in range(4): Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]```When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:```pythonbetaD = ixp.zerorank1(DIM=3)for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j] * betaU[j]```As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). 
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:```python \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2for i in range(3): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2``` Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](importmodules): Import needed NRPy+ & Python modules1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()**1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]$$\label{importmodules}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import nrpyAbs # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
###Output
_____no_output_____
###Markdown
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]$$\label{stressenergy}$$Recall from above that$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}$$
###Code
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
###Output
_____no_output_____
###Markdown
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]$$\label{primtoconserv}$$Recall from above that the conservative variables may be written as\begin{align}\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i\end{align}$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`sqrtgammaDET` (the square root of the 3-metric determinant `gammaDET`), and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
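As a small illustrative check of the determinant helper defined in the next cell (an addition, not in the original notebook; run it only after that cell): for a diagonal 3-metric, $\sqrt{\gamma}$ must reduce to $\sqrt{\gamma_{11}\gamma_{22}\gamma_{33}}$:
```python
# Illustrative check; assumes compute_sqrtgammaDET() from the cell below is defined.
import sympy as sp
import indexedexp as ixp
g11, g22, g33 = sp.symbols("g11 g22 g33", positive=True)
diag_gammaDD = ixp.zerorank2(DIM=3)
diag_gammaDD[0][0], diag_gammaDD[1][1], diag_gammaDD[2][2] = g11, g22, g33
compute_sqrtgammaDET(diag_gammaDD)
print(sp.simplify(sqrtgammaDET - sp.sqrt(g11*g22*g33)))  # expect: 0
```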
###Code
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
###Output
_____no_output_____
###Markdown
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]$$\label{grhdfluxes}$$ Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]$$\label{rhostarfluxterm}$$Recall from above that\begin{array}\ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0.\end{array}Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
###Code
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]
###Output
_____no_output_____
###Markdown
Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{array}\ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\\partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.\end{array}Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):
###Code
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]
###Output
_____no_output_____
###Markdown
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via$$s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],$$
###Code
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET
###Output
_____no_output_____
###Markdown
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$ Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](toc)\]$$\label{fourmetricderivs}$$To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via$$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$Thus $$g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\\beta_{j,k} & \gamma_{ij,k}\end{pmatrix},$$where $\beta_{i} = \gamma_{ij} \beta^j$, so$$\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}$$
###Code
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
###Code
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]
###Output
_____no_output_____
###Markdown
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}},$$and the remaining components $u^i$ via$$u^i = u^0 v^i.$$In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.1. 
Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as$$R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).$$If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}$$If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:$$R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R$$Then we can rescale *all* $v^i_{(n)}$ via$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},$$though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:$$v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.$$Finally, $u^0$ can be immediately and safely computed, via:$$u^0 = \frac{1}{\alpha \sqrt{1-R^*}},$$and $u^i$ via $$u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).$$
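To make the role of `TINYDOUBLE` concrete, here is a brief numerical illustration (an added aside, not part of the original algorithm; `Gamma_max=10` is just a sample value): at $R=0$ the rescaling factor evaluates to exactly zero rather than the indeterminate $0/0$, while for $R>0$ the added $10^{-100}$ is negligible:
```python
# Illustrative-only check of the TINYDOUBLE regularization of sqrt(Rstar/R).
TINYDOUBLE = 1e-100
Gamma_max  = 10.0                                 # sample speed limit, for illustration
Rmax = 1 - 1/Gamma_max**2
for R in (0.0, 0.25, 2.0):
    Rstar  = 0.5*(Rmax + R - abs(Rmax - R))       # = min(R, Rmax)
    factor = (Rstar/(R + TINYDOUBLE))**0.5        # rescaling factor applied to v^i_{(n)}
    print(R, factor)
# Expect approximately: 0.0 -> 0.0,  0.25 -> 1.0,  2.0 -> sqrt(0.99/2) ~ 0.704
```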
###Code
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
# - Adapted from Connie Francis' "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
# If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
# u^0 = 1/(alpha*sqrt(1-R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]
###Output
_____no_output_____
###Markdown
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
###Code
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)
###Output
_____no_output_____
###Markdown
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in1. this tutorial versus2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
###Code
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
# Declare all_passed as global so a failed comparison actually flips the flag
# (otherwise the assignment below would only create a local variable).
global all_passed
if str(expr1-expr2)!="0":
print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
namecheck_list.extend([gfnm("u4_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
###Output
ALL TESTS PASSED!
###Markdown
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian")
###Output
Created Tutorial-GRHD_Equations-Cartesian.tex, and compiled LaTeX file to
PDF file Tutorial-GRHD_Equations-Cartesian.pdf
|
sandbox/unroot-dist.ipynb | ###Markdown
Issue 43
###Code
# >>> import toytree
# >>> toytree.__version__
# '2.0.3'
# >>> rooted_newick = "(A:2,(B:1,C:1):1);"
# >>> t = toytree.tree(rooted_newick)
# >>> unrooted_t = t.unroot()
# >>> unrooted_t.write()
# yields
# '(A:2,B:1,C:1);'
# correct tree should be
# '(A:3,B:1,C:1);'
import toytree
toytree.__version__
###Output
_____no_output_____
###Markdown
Fixed
###Code
rooted_newick = "(A:2,(B:1,C:1):1);"
tre = toytree.tree(rooted_newick)
tre.draw(ts='p', layout='r', width=400);
# unroot and draw
utre = tre.unroot()
c, a, m = utre.draw(ts='p', layout='r', width=400);
# re-root (at midpoint) and draw
utre.root("A").draw(ts='p', layout='r', width=400);
# re-root (with known length) and draw
utre.root("A", resolve_root_dist=1.0).draw(ts='p', layout='r', width=400);
###Output
_____no_output_____ |
How To Break Into the Field.ipynb | ###Markdown
How To Break Into the FieldNow you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import HowToBreakIntoTheField as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
schema = pd.read_csv('./survey_results_schema.csv')
df.head()
print(df.info())
schema.info()
schema.head(10)
sth = schema.Column=='Country'
type(sth)
schema[sth]
schema.loc[sth,'Question']
wtry = schema.loc[sth,'Question']
sth = wtry.tolist()
sth[0]
###Output
_____no_output_____
###Markdown
Question 1**1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column.
###Code
def get_description(column_name, schema=schema):
'''
INPUT - schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT -
desc - string - the description of the column
'''
ww = schema.loc[schema.Column==column_name,'Question'].tolist()
desc = ww[0]
return desc
#test your code
#Check your function against solution - you shouldn't need to change any of the below code
get_description(df.columns[0]) # This should return a string of the first column description
#Check your function against solution - you shouldn't need to change any of the below code
descrips = set(get_description(col) for col in df.columns)
t.check_description(descrips)
###Output
Nice job it looks like your function works correctly!
###Markdown
The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column.
###Code
get_description('CousinEducation')
###Output
_____no_output_____
###Markdown
Question 2**2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up!
###Code
#cous_ed_vals = #Provide a pandas series of the counts for each CousinEducation status
cous_ed_vals = df.groupby(['CousinEducation']).size()
cous_ed_vals # assure this looks right
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(cous_ed_vals/df.shape[0]).plot(kind="bar");
plt.title("Formal Education");
###Output
_____no_output_____
###Markdown
We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned.
###Code
possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
def clean_and_plot(df, title='Method of Educating Suggested', plot=True):
'''
INPUT
df - a dataframe holding the CousinEducation column
title - string the title of your plot
axis - axis object
plot - bool providing whether or not you want a plot back
OUTPUT
study_df - a dataframe with the count of how many individuals
Displays a plot of pretty things related to the CousinEducation column.
'''
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df.set_index('method', inplace=True)
if plot:
(study_df/study_df.sum()).plot(kind='bar', legend=None);
plt.title(title);
plt.show()
props_study_df = study_df/study_df.sum()
return props_study_df
props_df = clean_and_plot(df)
###Output
_____no_output_____
###Markdown
Question 3**3.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**.
###Code
eduw = df.groupby(['FormalEducation']).size()
eduw
mylist = ["Master's degree", "Doctoral", "Professional degree"]
xx = "Master's degree" in mylist
xx
xx = xx*1
xx
def higher_ed(formal_ed_str):
'''
INPUT
formal_ed_str - a string of one of the values from the Formal Education column
OUTPUT
return 1 if the string is in ("Master's degree", "Doctoral", "Professional degree")
return 0 otherwise
'''
thelist = ["Master's degree", "Doctoral", "Professional degree"]
temp = formal_ed_str in thelist
temp = temp*1 # change bool to int
return temp
df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df
# Check your code here
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
higher_ed_perc = df['HigherEd'].mean()
t.higher_ed_test(higher_ed_perc)
###Output
Nice job! That's right. The percentage of individuals in these three groups is 0.2302376714480159.
###Markdown
Question 4**4.** Now we would like to find out if the proportion of individuals who completed one of these three programs feel differently than those that did not. Store a dataframe of only the individual's who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**.Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column.
###Code
df.HigherEd.head()
ss = df[df.HigherEd==0]
type(ss)
ss.shape
#ed_1 = # Subset df to only those with HigherEd of 1
#ed_0 = # Subset df to only those with HigherEd of 0
ed_1 = df[df.HigherEd==1]
ed_0 = df[df.HigherEd==0]
print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect
print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect
#Check your subset is correct - you should get a plot that was created using pandas styling
#which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html
ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False)
ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False)
comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True)
comp_df.columns = ['ed_1_perc', 'ed_0_perc']
comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc']
comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d'])
###Output
_____no_output_____
###Markdown
Question 5**5.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude.
###Code
sol = {'Everyone should get a higher level of formal education': False,
'Regardless of formal education, online courses are the top suggested form of education': True,
'There is less than a 1% difference between suggestions of the two groups for all forms of education': False,
'Those with higher formal education suggest it more than those who do not have it': True}
t.conclusions(sol)
###Output
Nice job that looks right!
###Markdown
How To Break Into the FieldNow you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import HowToBreakIntoTheField as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
schema = pd.read_csv('./survey_results_schema.csv')
df.head()
###Output
_____no_output_____
###Markdown
Question 1**1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column.
###Code
def get_description(column_name, schema=schema):
'''
INPUT - schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT -
desc - string - the description of the column
'''
desc = list(schema[schema['Column'] == column_name]['Question'])[0]
return desc
#test your code
#Check your function against solution - you shouldn't need to change any of the below code
get_description(df.columns[0]) # This should return a string of the first column description
#Check your function against solution - you shouldn't need to change any of the below code
descrips = set(get_description(col) for col in df.columns)
t.check_description(descrips)
###Output
Nice job it looks like your function works correctly!
###Markdown
The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column.
###Code
get_description('CousinEducation')
###Output
_____no_output_____
###Markdown
Question 2**2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up!
###Code
cous_ed_vals = df.CousinEducation.value_counts()#Provide a pandas series of the counts for each CousinEducation status
cous_ed_vals # assure this looks right
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(cous_ed_vals/df.shape[0]).plot(kind="bar");
plt.title("Formal Education");
###Output
_____no_output_____
###Markdown
We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned.
###Code
possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
def clean_and_plot(df, title='Method of Educating Suggested', plot=True):
'''
INPUT
df - a dataframe holding the CousinEducation column
title - string the title of your plot
axis - axis object
plot - bool providing whether or not you want a plot back
OUTPUT
study_df - a dataframe with the count of how many individuals
Displays a plot of pretty things related to the CousinEducation column.
'''
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df.set_index('method', inplace=True)
if plot:
(study_df/study_df.sum()).plot(kind='bar', legend=None);
plt.title(title);
plt.show()
props_study_df = study_df/study_df.sum()
return props_study_df
props_df = clean_and_plot(df)
###Output
_____no_output_____
###Markdown
Question 3**3.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**.
###Code
def higher_ed(formal_ed_str):
'''
INPUT
formal_ed_str - a string of one of the values from the Formal Education column
OUTPUT
return 1 if the string is in ("Master's degree", "Doctoral", "Professional degree")
return 0 otherwise
'''
return 1 if formal_ed_str in ("Master's degree", "Doctoral", "Professional degree") else 0
df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df
# Check your code here
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
higher_ed_perc = df['HigherEd'].mean()
t.higher_ed_test(higher_ed_perc)
###Output
That doesn't look quite like expected. You can get the percentage of 1's in a 1-0 column by using the .mean() method of a pandas series.
###Markdown
Question 4**4.** Now we would like to find out if the proportion of individuals who completed one of these three programs feel differently than those that did not. Store a dataframe of only the individual's who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**.Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column.
###Code
ed_1 =df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1
ed_0 = df[df['HigherEd'] == 0] # Subset df to only those with HigherEd of 0
print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect
print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect
#Check your subset is correct - you should get a plot that was created using pandas styling
#which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html
ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False)
ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False)
comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True)
comp_df.columns = ['ed_1_perc', 'ed_0_perc']
comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc']
comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d'])
###Output
_____no_output_____
###Markdown
Question 5**5.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude.
###Code
sol = {'Everyone should get a higher level of formal education': False,
'Regardless of formal education, online courses are the top suggested form of education': True,
'There is less than a 1% difference between suggestions of the two groups for all forms of education': False,
'Those with higher formal education suggest it more than those who do not have it': True}
t.conclusions(sol)
###Output
Nice job that looks right!
###Markdown
How To Break Into the FieldNow you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import HowToBreakIntoTheField as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
schema = pd.read_csv('./survey_results_schema.csv')
df.head()
schema.head()
###Output
_____no_output_____
###Markdown
Question 1**1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column.
###Code
def get_description(column_name, schema=schema):
'''
INPUT - schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT -
desc - string - the description of the column
'''
desc = list(schema[schema['Column'] == column_name]['Question'])[0]
return desc
#test your code
#Check your function against solution - you shouldn't need to change any of the below code
get_description(df.columns[0]) # This should return a string of the first column description
get_description('CousinEducation')
#Check your function against solution - you shouldn't need to change any of the below code
descrips = set(get_description(col) for col in df.columns)
t.check_description(descrips)
###Output
Nice job it looks like your function works correctly!
###Markdown
The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column.
###Code
get_description('CousinEducation')
###Output
_____no_output_____
###Markdown
Question 2**2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up!
###Code
cous_ed_vals = df.CousinEducation.value_counts()#Provide a pandas series of the counts for each CousinEducation status
cous_ed_vals # assure this looks right
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(cous_ed_vals/df.shape[0]).plot(kind="bar");
plt.title("Formal Education");
###Output
_____no_output_____
###Markdown
We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned.
###Code
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df
possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
def clean_and_plot(df, title='Method of Educating Suggested', plot=True):
'''
INPUT
df - a dataframe holding the CousinEducation column
title - string the title of your plot
axis - axis object
plot - bool providing whether or not you want a plot back
OUTPUT
study_df - a dataframe with the count of how many individuals
Displays a plot of pretty things related to the CousinEducation column.
'''
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df.set_index('method', inplace=True)
if plot:
(study_df/study_df.sum()).plot(kind='bar', legend=None);
plt.title(title);
plt.show()
props_study_df = study_df/study_df.sum()
return props_study_df
props_df = clean_and_plot(df)
###Output
_____no_output_____
###Markdown
Question 3**3.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**.
###Code
def higher_ed(formal_ed_str):
'''
INPUT
formal_ed_str - a string of one of the values from the Formal Education column
OUTPUT
return 1 if the string is in ("Master's degree", "Professional degree")
return 0 otherwise
'''
return 1 if formal_ed_str in ("Master's degree", "Professional degree") else 0
df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df
# Check your code here
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
higher_ed_perc = df['HigherEd'].mean()
t.higher_ed_test(higher_ed_perc)
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
df.head()
###Output
_____no_output_____
###Markdown
Question 4**4.** Now we would like to find out if the proportion of individuals who completed one of these three programs feel differently than those that did not. Store a dataframe of only the individual's who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**.Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column.
###Code
ed_1 = df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1
ed_0 = df[df['HigherEd'] == 0]# Subset df to only those with HigherEd of 0
print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect
print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect
#Check your subset is correct - you should get a plot that was created using pandas styling
#which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html
ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False)
ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False)
comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True)
comp_df.columns = ['ed_1_perc', 'ed_0_perc']
comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc']
comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d'])
###Output
_____no_output_____
###Markdown
Question 5**5.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude.
###Code
sol = {'Everyone should get a higher level of formal education': False,
'Regardless of formal education, online courses are the top suggested form of education': True,
'There is less than a 1% difference between suggestions of the two groups for all forms of education': False,
'Those with higher formal education suggest it more than those who do not have it': True}
t.conclusions(sol)
###Output
Nice job that looks right!
|
notebooks/bronnen/Pose_classification_(extended).ipynb | ###Markdown
OverviewThis Colab helps to create and validate a training set for the k-NN classifier described in the MediaPipe [Pose Classification](https://google.github.io/mediapipe/solutions/pose_classification.html) solution, test it on an arbitrary video, export to a CSV and then use it in the [ML Kit sample app](https://developers.google.com/ml-kit/vision/pose-detection/classifying-poses4_integrate_with_the_ml_kit_quickstart_app). Step 0: Start Colab Connect the Colab to hosted Python3 runtime (check top-right corner) and then install required dependencies.
###Code
!pip install pillow==8.1.0
!pip install matplotlib==3.3.4
!pip install numpy==1.19.3
!pip install opencv-python==4.5.1.48
!pip install tqdm==4.56.0
!pip install requests==2.25.1
!pip install mediapipe==0.8.3
###Output
_____no_output_____
###Markdown
Codebase Commons
###Code
from matplotlib import pyplot as plt
def show_image(img, figsize=(10, 10)):
"""Shows output PIL image."""
plt.figure(figsize=figsize)
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Pose embedding
###Code
class FullBodyPoseEmbedder(object):
"""Converts 3D pose landmarks into 3D embedding."""
def __init__(self, torso_size_multiplier=2.5):
# Multiplier to apply to the torso to get minimal body size.
self._torso_size_multiplier = torso_size_multiplier
# Names of the landmarks as they appear in the prediction.
self._landmark_names = [
'nose',
'left_eye_inner', 'left_eye', 'left_eye_outer',
'right_eye_inner', 'right_eye', 'right_eye_outer',
'left_ear', 'right_ear',
'mouth_left', 'mouth_right',
'left_shoulder', 'right_shoulder',
'left_elbow', 'right_elbow',
'left_wrist', 'right_wrist',
'left_pinky_1', 'right_pinky_1',
'left_index_1', 'right_index_1',
'left_thumb_2', 'right_thumb_2',
'left_hip', 'right_hip',
'left_knee', 'right_knee',
'left_ankle', 'right_ankle',
'left_heel', 'right_heel',
'left_foot_index', 'right_foot_index',
]
def __call__(self, landmarks):
"""Normalizes pose landmarks and converts to embedding
Args:
landmarks - NumPy array with 3D landmarks of shape (N, 3).
Result:
Numpy array with pose embedding of shape (M, 3) where `M` is the number of
pairwise distances defined in `_get_pose_distance_embedding`.
"""
assert landmarks.shape[0] == len(self._landmark_names), 'Unexpected number of landmarks: {}'.format(landmarks.shape[0])
# Get pose landmarks.
landmarks = np.copy(landmarks)
# Normalize landmarks.
landmarks = self._normalize_pose_landmarks(landmarks)
# Get embedding.
embedding = self._get_pose_distance_embedding(landmarks)
return embedding
def _normalize_pose_landmarks(self, landmarks):
"""Normalizes landmarks translation and scale."""
landmarks = np.copy(landmarks)
# Normalize translation.
pose_center = self._get_pose_center(landmarks)
landmarks -= pose_center
# Normalize scale.
pose_size = self._get_pose_size(landmarks, self._torso_size_multiplier)
landmarks /= pose_size
# Multiplication by 100 is not required, but makes it easier to debug.
landmarks *= 100
return landmarks
def _get_pose_center(self, landmarks):
"""Calculates pose center as point between hips."""
left_hip = landmarks[self._landmark_names.index('left_hip')]
right_hip = landmarks[self._landmark_names.index('right_hip')]
center = (left_hip + right_hip) * 0.5
return center
def _get_pose_size(self, landmarks, torso_size_multiplier):
"""Calculates pose size.
It is the maximum of two values:
* Torso size multiplied by `torso_size_multiplier`
* Maximum distance from pose center to any pose landmark
"""
# This approach uses only 2D landmarks to compute pose size.
landmarks = landmarks[:, :2]
# Hips center.
left_hip = landmarks[self._landmark_names.index('left_hip')]
right_hip = landmarks[self._landmark_names.index('right_hip')]
hips = (left_hip + right_hip) * 0.5
# Shoulders center.
left_shoulder = landmarks[self._landmark_names.index('left_shoulder')]
right_shoulder = landmarks[self._landmark_names.index('right_shoulder')]
shoulders = (left_shoulder + right_shoulder) * 0.5
# Torso size as the minimum body size.
torso_size = np.linalg.norm(shoulders - hips)
# Max dist to pose center.
pose_center = self._get_pose_center(landmarks)
max_dist = np.max(np.linalg.norm(landmarks - pose_center, axis=1))
return max(torso_size * torso_size_multiplier, max_dist)
def _get_pose_distance_embedding(self, landmarks):
"""Converts pose landmarks into 3D embedding.
We use several pairwise 3D distances to form pose embedding. All distances
include X and Y components with sign. We use different types of pairs to cover
different pose classes. Feel free to remove some or add new.
Args:
landmarks - NumPy array with 3D landmarks of shape (N, 3).
Result:
Numpy array with pose embedding of shape (M, 3) where `M` is the number of
pairwise distances.
"""
embedding = np.array([
# One joint.
self._get_distance(
self._get_average_by_names(landmarks, 'left_hip', 'right_hip'),
self._get_average_by_names(landmarks, 'left_shoulder', 'right_shoulder')),
self._get_distance_by_names(landmarks, 'left_shoulder', 'left_elbow'),
self._get_distance_by_names(landmarks, 'right_shoulder', 'right_elbow'),
self._get_distance_by_names(landmarks, 'left_elbow', 'left_wrist'),
self._get_distance_by_names(landmarks, 'right_elbow', 'right_wrist'),
self._get_distance_by_names(landmarks, 'left_hip', 'left_knee'),
self._get_distance_by_names(landmarks, 'right_hip', 'right_knee'),
self._get_distance_by_names(landmarks, 'left_knee', 'left_ankle'),
self._get_distance_by_names(landmarks, 'right_knee', 'right_ankle'),
# Two joints.
self._get_distance_by_names(landmarks, 'left_shoulder', 'left_wrist'),
self._get_distance_by_names(landmarks, 'right_shoulder', 'right_wrist'),
self._get_distance_by_names(landmarks, 'left_hip', 'left_ankle'),
self._get_distance_by_names(landmarks, 'right_hip', 'right_ankle'),
# Four joints.
self._get_distance_by_names(landmarks, 'left_hip', 'left_wrist'),
self._get_distance_by_names(landmarks, 'right_hip', 'right_wrist'),
# Five joints.
self._get_distance_by_names(landmarks, 'left_shoulder', 'left_ankle'),
self._get_distance_by_names(landmarks, 'right_shoulder', 'right_ankle'),
self._get_distance_by_names(landmarks, 'left_hip', 'left_wrist'),
self._get_distance_by_names(landmarks, 'right_hip', 'right_wrist'),
# Cross body.
self._get_distance_by_names(landmarks, 'left_elbow', 'right_elbow'),
self._get_distance_by_names(landmarks, 'left_knee', 'right_knee'),
self._get_distance_by_names(landmarks, 'left_wrist', 'right_wrist'),
self._get_distance_by_names(landmarks, 'left_ankle', 'right_ankle'),
# Body bent direction.
# self._get_distance(
# self._get_average_by_names(landmarks, 'left_wrist', 'left_ankle'),
# landmarks[self._landmark_names.index('left_hip')]),
# self._get_distance(
# self._get_average_by_names(landmarks, 'right_wrist', 'right_ankle'),
# landmarks[self._landmark_names.index('right_hip')]),
])
return embedding
def _get_average_by_names(self, landmarks, name_from, name_to):
lmk_from = landmarks[self._landmark_names.index(name_from)]
lmk_to = landmarks[self._landmark_names.index(name_to)]
return (lmk_from + lmk_to) * 0.5
def _get_distance_by_names(self, landmarks, name_from, name_to):
lmk_from = landmarks[self._landmark_names.index(name_from)]
lmk_to = landmarks[self._landmark_names.index(name_to)]
return self._get_distance(lmk_from, lmk_to)
def _get_distance(self, lmk_from, lmk_to):
return lmk_to - lmk_from
###Output
_____no_output_____
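###Markdown
To make the embedder's interface concrete, the short cell below is a minimal usage sketch (not part of the original pipeline): it runs randomly generated landmarks through `FullBodyPoseEmbedder` purely to illustrate the expected input and output shapes.
###Code
# Minimal usage sketch with synthetic data (illustration only).
import numpy as np
demo_embedder = FullBodyPoseEmbedder()
# 33 MediaPipe pose landmarks, each with (x, y, z) coordinates in pixel-like units.
demo_landmarks = (np.random.rand(33, 3) * 100).astype(np.float32)
demo_embedding = demo_embedder(demo_landmarks)
print('Landmarks shape:', demo_landmarks.shape)
print('Embedding shape:', demo_embedding.shape)  # one 3-vector per pairwise distance
###Output
_____no_output_____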
###Markdown
Pose classification
###Code
class PoseSample(object):
def __init__(self, name, landmarks, class_name, embedding):
self.name = name
self.landmarks = landmarks
self.class_name = class_name
self.embedding = embedding
class PoseSampleOutlier(object):
def __init__(self, sample, detected_class, all_classes):
self.sample = sample
self.detected_class = detected_class
self.all_classes = all_classes
import csv
import numpy as np
import os
class PoseClassifier(object):
"""Classifies pose landmarks."""
def __init__(self,
pose_samples_folder,
pose_embedder,
file_extension='csv',
file_separator=',',
n_landmarks=33,
n_dimensions=3,
top_n_by_max_distance=30,
top_n_by_mean_distance=10,
axes_weights=(1., 1., 0.2)):
self._pose_embedder = pose_embedder
self._n_landmarks = n_landmarks
self._n_dimensions = n_dimensions
self._top_n_by_max_distance = top_n_by_max_distance
self._top_n_by_mean_distance = top_n_by_mean_distance
self._axes_weights = axes_weights
self._pose_samples = self._load_pose_samples(pose_samples_folder,
file_extension,
file_separator,
n_landmarks,
n_dimensions,
pose_embedder)
def _load_pose_samples(self,
pose_samples_folder,
file_extension,
file_separator,
n_landmarks,
n_dimensions,
pose_embedder):
"""Loads pose samples from a given folder.
Required folder structure:
neutral_standing.csv
pushups_down.csv
pushups_up.csv
squats_down.csv
...
Required CSV structure:
sample_00001,x1,y1,z1,x2,y2,z2,....
sample_00002,x1,y1,z1,x2,y2,z2,....
...
"""
# Each file in the folder represents one pose class.
file_names = [name for name in os.listdir(pose_samples_folder) if name.endswith(file_extension)]
pose_samples = []
for file_name in file_names:
# Use file name as pose class name.
class_name = file_name[:-(len(file_extension) + 1)]
# Parse CSV.
with open(os.path.join(pose_samples_folder, file_name)) as csv_file:
csv_reader = csv.reader(csv_file, delimiter=file_separator)
for row in csv_reader:
assert len(row) == n_landmarks * n_dimensions + 1, 'Wrong number of values: {}'.format(len(row))
landmarks = np.array(row[1:], np.float32).reshape([n_landmarks, n_dimensions])
pose_samples.append(PoseSample(
name=row[0],
landmarks=landmarks,
class_name=class_name,
embedding=pose_embedder(landmarks),
))
return pose_samples
def find_pose_sample_outliers(self):
"""Classifies each sample against the entire database."""
# Find outliers in target poses
outliers = []
for sample in self._pose_samples:
# Find nearest poses for the target one.
pose_landmarks = sample.landmarks.copy()
pose_classification = self.__call__(pose_landmarks)
class_names = [class_name for class_name, count in pose_classification.items() if count == max(pose_classification.values())]
# Sample is an outlier if nearest poses have different class or more than
# one pose class is detected as nearest.
if sample.class_name not in class_names or len(class_names) != 1:
outliers.append(PoseSampleOutlier(sample, class_names, pose_classification))
return outliers
def __call__(self, pose_landmarks):
"""Classifies given pose.
Classification is done in two stages:
* First we pick top-N samples by MAX distance. It allows us to remove samples
that are almost the same as the given pose, but have a few joints bent in the
other direction.
* Then we pick top-N samples by MEAN distance. After outliers are removed
in the previous step, we can pick samples that are closest on average.
Args:
pose_landmarks: NumPy array with 3D landmarks of shape (N, 3).
Returns:
Dictionary with count of nearest pose samples from the database. Sample:
{
'pushups_down': 8,
'pushups_up': 2,
}
"""
# Check that provided and target poses have the same shape.
assert pose_landmarks.shape == (self._n_landmarks, self._n_dimensions), 'Unexpected shape: {}'.format(pose_landmarks.shape)
# Get given pose embedding.
pose_embedding = self._pose_embedder(pose_landmarks)
flipped_pose_embedding = self._pose_embedder(pose_landmarks * np.array([-1, 1, 1]))
# Filter by max distance.
#
# That helps to remove outliers - poses that are almost the same as the
# given one, but have one joint bent in another direction and actually
# represent a different pose class.
max_dist_heap = []
for sample_idx, sample in enumerate(self._pose_samples):
max_dist = min(
np.max(np.abs(sample.embedding - pose_embedding) * self._axes_weights),
np.max(np.abs(sample.embedding - flipped_pose_embedding) * self._axes_weights),
)
max_dist_heap.append([max_dist, sample_idx])
max_dist_heap = sorted(max_dist_heap, key=lambda x: x[0])
max_dist_heap = max_dist_heap[:self._top_n_by_max_distance]
# Filter by mean distance.
#
# After removing outliers we can find the nearest pose by mean distance.
mean_dist_heap = []
for _, sample_idx in max_dist_heap:
sample = self._pose_samples[sample_idx]
mean_dist = min(
np.mean(np.abs(sample.embedding - pose_embedding) * self._axes_weights),
np.mean(np.abs(sample.embedding - flipped_pose_embedding) * self._axes_weights),
)
mean_dist_heap.append([mean_dist, sample_idx])
mean_dist_heap = sorted(mean_dist_heap, key=lambda x: x[0])
mean_dist_heap = mean_dist_heap[:self._top_n_by_mean_distance]
# Collect results into map: (class_name -> n_samples)
class_names = [self._pose_samples[sample_idx].class_name for _, sample_idx in mean_dist_heap]
result = {class_name: class_names.count(class_name) for class_name in set(class_names)}
return result
###Output
_____no_output_____
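###Markdown
The cell below is a wiring sketch only, left commented out because it needs the per-class CSVs produced by the bootstrapping step; the folder name is an assumption used for illustration.
###Code
# Wiring sketch for the k-NN classifier (assumed folder name; requires bootstrapped CSVs).
# pose_embedder = FullBodyPoseEmbedder()
# pose_classifier = PoseClassifier(
#     pose_samples_folder='fitness_poses_csvs_out',  # hypothetical folder of per-class CSVs
#     pose_embedder=pose_embedder,
#     top_n_by_max_distance=30,
#     top_n_by_mean_distance=10)
# classification = pose_classifier(pose_landmarks)   # pose_landmarks: (33, 3) NumPy array
# print(classification)                              # e.g. {'pushups_down': 8, 'pushups_up': 2}
###Output
_____no_output_____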
###Markdown
Classification smoothing
###Code
class EMADictSmoothing(object):
"""Smoothes pose classification."""
def __init__(self, window_size=10, alpha=0.2):
self._window_size = window_size
self._alpha = alpha
self._data_in_window = []
def __call__(self, data):
"""Smoothes given pose classification.
Smoothing is done by computing Exponential Moving Average for every pose
class observed in the given time window. Missed pose classes are replaced
with 0.
Args:
data: Dictionary with pose classification. Sample:
{
'pushups_down': 8,
'pushups_up': 2,
}
Result:
Dictionary in the same format but with smoothed and float instead of
integer values. Sample:
{
'pushups_down': 8.3,
'pushups_up': 1.7,
}
"""
# Add new data to the beginning of the window for simpler code.
self._data_in_window.insert(0, data)
self._data_in_window = self._data_in_window[:self._window_size]
# Get all keys.
keys = set([key for data in self._data_in_window for key, _ in data.items()])
# Get smoothed values.
smoothed_data = dict()
for key in keys:
factor = 1.0
top_sum = 0.0
bottom_sum = 0.0
for data in self._data_in_window:
value = data[key] if key in data else 0.0
top_sum += factor * value
bottom_sum += factor
# Update factor.
factor *= (1.0 - self._alpha)
smoothed_data[key] = top_sum / bottom_sum
return smoothed_data
###Output
_____no_output_____
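###Markdown
A minimal sketch of the smoother in action (made-up classification values): feeding two consecutive frames shows that the smoothed result moves toward the newest frame but is damped by the previous one.
###Code
# Minimal usage sketch with made-up classification dictionaries (illustration only).
demo_smoother = EMADictSmoothing(window_size=10, alpha=0.2)
print(demo_smoother({'pushups_down': 8, 'pushups_up': 2}))  # first frame passes through unchanged
print(demo_smoother({'pushups_down': 2, 'pushups_up': 8}))  # second frame is damped by the first
###Output
_____no_output_____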
###Markdown
Repetition counter
###Code
class RepetitionCounter(object):
"""Counts number of repetitions of given target pose class."""
def __init__(self, class_name, enter_threshold=6, exit_threshold=4):
self._class_name = class_name
# If pose counter passes given threshold, then we enter the pose.
self._enter_threshold = enter_threshold
self._exit_threshold = exit_threshold
# Either we are in given pose or not.
self._pose_entered = False
# Number of times we exited the pose.
self._n_repeats = 0
@property
def n_repeats(self):
return self._n_repeats
def __call__(self, pose_classification):
"""Counts the number of repetitions that have happened up to the given frame.
We use two thresholds. First you need to go above the higher one to enter
the pose, and then you need to go below the lower one to exit it. Difference
between the thresholds makes it stable to prediction jittering (which will
cause wrong counts in case of having only one threshold).
Args:
pose_classification: Pose classification dictionary on current frame.
Sample:
{
'pushups_down': 8.3,
'pushups_up': 1.7,
}
Returns:
Integer counter of repetitions.
"""
# Get pose confidence.
pose_confidence = 0.0
if self._class_name in pose_classification:
pose_confidence = pose_classification[self._class_name]
# On the very first frame or if we were out of the pose, just check if we
# entered it on this frame and update the state.
if not self._pose_entered:
self._pose_entered = pose_confidence > self._enter_threshold
return self._n_repeats
# If we were in the pose and are exiting it, then increase the counter and
# update the state.
if pose_confidence < self._exit_threshold:
self._n_repeats += 1
self._pose_entered = False
return self._n_repeats
###Output
_____no_output_____
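###Markdown
A minimal sketch of the two-threshold counter (made-up confidence values): the count only increases after the confidence first rises above `enter_threshold` and then drops below `exit_threshold`.
###Code
# Minimal usage sketch with made-up confidence values (illustration only).
demo_counter = RepetitionCounter(class_name='pushups_down', enter_threshold=6, exit_threshold=4)
for confidence in [1, 7, 8, 5, 3, 2, 9, 1]:
    count = demo_counter({'pushups_down': confidence})
print('Repetitions counted:', count)  # two complete enter/exit cycles -> 2
###Output
_____no_output_____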
###Markdown
Classification visualizer
###Code
import io
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
import requests
class PoseClassificationVisualizer(object):
"""Keeps track of classifications for every frame and renders them."""
def __init__(self,
class_name,
plot_location_x=0.05,
plot_location_y=0.05,
plot_max_width=0.4,
plot_max_height=0.4,
plot_figsize=(9, 4),
plot_x_max=None,
plot_y_max=None,
counter_location_x=0.85,
counter_location_y=0.05,
counter_font_path='https://github.com/googlefonts/roboto/blob/main/src/hinted/Roboto-Regular.ttf?raw=true',
counter_font_color='red',
counter_font_size=0.15):
self._class_name = class_name
self._plot_location_x = plot_location_x
self._plot_location_y = plot_location_y
self._plot_max_width = plot_max_width
self._plot_max_height = plot_max_height
self._plot_figsize = plot_figsize
self._plot_x_max = plot_x_max
self._plot_y_max = plot_y_max
self._counter_location_x = counter_location_x
self._counter_location_y = counter_location_y
self._counter_font_path = counter_font_path
self._counter_font_color = counter_font_color
self._counter_font_size = counter_font_size
self._counter_font = None
self._pose_classification_history = []
self._pose_classification_filtered_history = []
def __call__(self,
frame,
pose_classification,
pose_classification_filtered,
repetitions_count):
"""Renders pose classification and counter up to the given frame."""
# Extend classification history.
self._pose_classification_history.append(pose_classification)
self._pose_classification_filtered_history.append(pose_classification_filtered)
# Output frame with classification plot and counter.
output_img = Image.fromarray(frame)
output_width = output_img.size[0]
output_height = output_img.size[1]
# Draw the plot.
img = self._plot_classification_history(output_width, output_height)
img.thumbnail((int(output_width * self._plot_max_width),
int(output_height * self._plot_max_height)),
Image.ANTIALIAS)
output_img.paste(img,
(int(output_width * self._plot_location_x),
int(output_height * self._plot_location_y)))
# Draw the count.
output_img_draw = ImageDraw.Draw(output_img)
if self._counter_font is None:
font_size = int(output_height * self._counter_font_size)
font_request = requests.get(self._counter_font_path, allow_redirects=True)
self._counter_font = ImageFont.truetype(io.BytesIO(font_request.content), size=font_size)
output_img_draw.text((output_width * self._counter_location_x,
output_height * self._counter_location_y),
str(repetitions_count),
font=self._counter_font,
fill=self._counter_font_color)
return output_img
def _plot_classification_history(self, output_width, output_height):
fig = plt.figure(figsize=self._plot_figsize)
for classification_history in [self._pose_classification_history,
self._pose_classification_filtered_history]:
y = []
for classification in classification_history:
if classification is None:
y.append(None)
elif self._class_name in classification:
y.append(classification[self._class_name])
else:
y.append(0)
plt.plot(y, linewidth=7)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Frame')
plt.ylabel('Confidence')
plt.title('Classification history for `{}`'.format(self._class_name))
plt.legend(loc='upper right')
if self._plot_y_max is not None:
plt.ylim(top=self._plot_y_max)
if self._plot_x_max is not None:
plt.xlim(right=self._plot_x_max)
# Convert plot to image.
buf = io.BytesIO()
dpi = min(
output_width * self._plot_max_width / float(self._plot_figsize[0]),
output_height * self._plot_max_height / float(self._plot_figsize[1]))
fig.savefig(buf, dpi=dpi)
buf.seek(0)
img = Image.open(buf)
plt.close()
return img
###Output
_____no_output_____
###Markdown
Bootstrap helper
###Code
import cv2
from matplotlib import pyplot as plt
import numpy as np
import os
from PIL import Image
import sys
import tqdm
from mediapipe.python.solutions import drawing_utils as mp_drawing
from mediapipe.python.solutions import pose as mp_pose
class BootstrapHelper(object):
"""Helps to bootstrap images and filter pose samples for classification."""
def __init__(self,
images_in_folder,
images_out_folder,
csvs_out_folder):
self._images_in_folder = images_in_folder
self._images_out_folder = images_out_folder
self._csvs_out_folder = csvs_out_folder
# Get list of pose classes and print image statistics.
self._pose_class_names = sorted([n for n in os.listdir(self._images_in_folder) if not n.startswith('.')])
def bootstrap(self, per_pose_class_limit=None):
"""Bootstraps images in a given folder.
Required structure of the images in folder (the images out folder uses the same structure):
pushups_up/
image_001.jpg
image_002.jpg
...
pushups_down/
image_001.jpg
image_002.jpg
...
...
Produced CSVs out folder:
pushups_up.csv
pushups_down.csv
Produced CSV structure with pose 3D landmarks:
sample_00001,x1,y1,z1,x2,y2,z2,....
sample_00002,x1,y1,z1,x2,y2,z2,....
"""
# Create output folder for CVSs.
if not os.path.exists(self._csvs_out_folder):
os.makedirs(self._csvs_out_folder)
for pose_class_name in self._pose_class_names:
print('Bootstrapping ', pose_class_name, file=sys.stderr)
# Paths for the pose class.
images_in_folder = os.path.join(self._images_in_folder, pose_class_name)
images_out_folder = os.path.join(self._images_out_folder, pose_class_name)
csv_out_path = os.path.join(self._csvs_out_folder, pose_class_name + '.csv')
if not os.path.exists(images_out_folder):
os.makedirs(images_out_folder)
with open(csv_out_path, 'w') as csv_out_file:
csv_out_writer = csv.writer(csv_out_file, delimiter=',', quoting=csv.QUOTE_MINIMAL)
# Get list of images.
image_names = sorted([n for n in os.listdir(images_in_folder) if not n.startswith('.')])
if per_pose_class_limit is not None:
image_names = image_names[:per_pose_class_limit]
# Bootstrap every image.
for image_name in tqdm.tqdm(image_names):
# Load image.
input_frame = cv2.imread(os.path.join(images_in_folder, image_name))
input_frame = cv2.cvtColor(input_frame, cv2.COLOR_BGR2RGB)
# Initialize fresh pose tracker and run it.
with mp_pose.Pose(upper_body_only=False) as pose_tracker:
result = pose_tracker.process(image=input_frame)
pose_landmarks = result.pose_landmarks
# Save image with pose prediction (if pose was detected).
output_frame = input_frame.copy()
if pose_landmarks is not None:
mp_drawing.draw_landmarks(
image=output_frame,
landmark_list=pose_landmarks,
connections=mp_pose.POSE_CONNECTIONS)
output_frame = cv2.cvtColor(output_frame, cv2.COLOR_RGB2BGR)
cv2.imwrite(os.path.join(images_out_folder, image_name), output_frame)
# Save landmarks if pose was detected.
if pose_landmarks is not None:
# Get landmarks.
frame_height, frame_width = output_frame.shape[0], output_frame.shape[1]
pose_landmarks = np.array(
[[lmk.x * frame_width, lmk.y * frame_height, lmk.z * frame_width]
for lmk in pose_landmarks.landmark],
dtype=np.float32)
assert pose_landmarks.shape == (33, 3), 'Unexpected landmarks shape: {}'.format(pose_landmarks.shape)
csv_out_writer.writerow([image_name] + pose_landmarks.flatten().astype(np.str).tolist())
# Draw XZ projection and concatenate with the image.
projection_xz = self._draw_xz_projection(
output_frame=output_frame, pose_landmarks=pose_landmarks)
output_frame = np.concatenate((output_frame, projection_xz), axis=1)
def _draw_xz_projection(self, output_frame, pose_landmarks, r=0.5, color='red'):
frame_height, frame_width = output_frame.shape[0], output_frame.shape[1]
img = Image.new('RGB', (frame_width, frame_height), color='white')
if pose_landmarks is None:
return np.asarray(img)
# Scale radius according to the image width.
r *= frame_width * 0.01
draw = ImageDraw.Draw(img)
for idx_1, idx_2 in mp_pose.POSE_CONNECTIONS:
# Flip Z and move hips center to the center of the image.
x1, y1, z1 = pose_landmarks[idx_1] * [1, 1, -1] + [0, 0, frame_height * 0.5]
x2, y2, z2 = pose_landmarks[idx_2] * [1, 1, -1] + [0, 0, frame_height * 0.5]
draw.ellipse([x1 - r, z1 - r, x1 + r, z1 + r], fill=color)
draw.ellipse([x2 - r, z2 - r, x2 + r, z2 + r], fill=color)
draw.line([x1, z1, x2, z2], width=int(r), fill=color)
return np.asarray(img)
def align_images_and_csvs(self, print_removed_items=False):
"""Makes sure that image folders and CSVs have the same samples.
Leaves only the intersection of samples in both image folders and CSVs.
"""
for pose_class_name in self._pose_class_names:
# Paths for the pose class.
images_out_folder = os.path.join(self._images_out_folder, pose_class_name)
csv_out_path = os.path.join(self._csvs_out_folder, pose_class_name + '.csv')
# Read CSV into memory.
rows = []
with open(csv_out_path) as csv_out_file:
csv_out_reader = csv.reader(csv_out_file, delimiter=',')
for row in csv_out_reader:
rows.append(row)
# Image names left in CSV.
image_names_in_csv = []
# Re-write the CSV removing lines without corresponding images.
with open(csv_out_path, 'w') as csv_out_file:
csv_out_writer = csv.writer(csv_out_file, delimiter=',', quoting=csv.QUOTE_MINIMAL)
for row in rows:
image_name = row[0]
image_path = os.path.join(images_out_folder, image_name)
if os.path.exists(image_path):
image_names_in_csv.append(image_name)
csv_out_writer.writerow(row)
elif print_removed_items:
print('Removed image from CSV: ', image_path)
# Remove images without corresponding line in CSV.
for image_name in os.listdir(images_out_folder):
if image_name not in image_names_in_csv:
image_path = os.path.join(images_out_folder, image_name)
os.remove(image_path)
if print_removed_items:
print('Removed image from folder: ', image_path)
def analyze_outliers(self, outliers):
"""Classifies each sample agains all other to find outliers.
If sample is classified differrrently than the original class - it sould
either be deleted or more similar samples should be aadded.
"""
for outlier in outliers:
image_path = os.path.join(self._images_out_folder, outlier.sample.class_name, outlier.sample.name)
print('Outlier')
print(' sample path = ', image_path)
print(' sample class = ', outlier.sample.class_name)
print(' detected class = ', outlier.detected_class)
print(' all classes = ', outlier.all_classes)
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
show_image(img, figsize=(20, 20))
def remove_outliers(self, outliers):
"""Removes outliers from the image folders."""
for outlier in outliers:
image_path = os.path.join(self._images_out_folder, outlier.sample.class_name, outlier.sample.name)
os.remove(image_path)
def print_images_in_statistics(self):
"""Prints statistics from the input image folder."""
self._print_images_statistics(self._images_in_folder, self._pose_class_names)
def print_images_out_statistics(self):
"""Prints statistics from the output image folder."""
self._print_images_statistics(self._images_out_folder, self._pose_class_names)
def _print_images_statistics(self, images_folder, pose_class_names):
print('Number of images per pose class:')
for pose_class_name in pose_class_names:
n_images = len([
n for n in os.listdir(os.path.join(images_folder, pose_class_name))
if not n.startswith('.')])
print(' {}: {}'.format(pose_class_name, n_images))
###Output
_____no_output_____
###Markdown
Step 1: Build classifier Upload image samplesLocally create a folder named `fitness_poses_images_in` with image samples. Images should represent the terminal states of the desired pose classes, i.e. if you want to classify push-ups, provide images for two classes: when the person is up, and when the person is down. There should be about a few hundred samples per class covering different camera angles, environment conditions, body shapes, and exercise variations to build a good classifier.Required structure of the images_in_folder:```fitness_poses_images_in/ pushups_up/ image_001.jpg image_002.jpg ... pushups_down/ image_001.jpg image_002.jpg ... ...```Zip the `fitness_poses_images_in` folder:```zip -r fitness_poses_images_in.zip fitness_poses_images_in```And run the code below to upload it to the Colab runtime.
###Code
from google.colab import files
import os
uploaded = files.upload()
os.listdir('.')
###Output
_____no_output_____
###Markdown
Unzip the archive:
###Code
import zipfile
import io
zf = zipfile.ZipFile(io.BytesIO(uploaded['fitness_poses_images_in.zip']), "r")
zf.extractall()
os.listdir('.')
###Output
_____no_output_____
###Markdown
Bootstrap images
###Code
# Required structure of the images_in_folder:
#
# fitness_poses_images_in/
# pushups_up/
# image_001.jpg
# image_002.jpg
# ...
# pushups_down/
# image_001.jpg
# image_002.jpg
# ...
# ...
bootstrap_images_in_folder = 'fitness_poses_images_in'
# Output folders for bootstrapped images and CSVs.
bootstrap_images_out_folder = 'fitness_poses_images_out'
bootstrap_csvs_out_folder = 'fitness_poses_csvs_out'
# Initialize helper.
bootstrap_helper = BootstrapHelper(
images_in_folder=bootstrap_images_in_folder,
images_out_folder=bootstrap_images_out_folder,
csvs_out_folder=bootstrap_csvs_out_folder,
)
# Check how many pose classes and images for them are available.
bootstrap_helper.print_images_in_statistics()
# Bootstrap all images.
# Set limit to some small number for debug.
bootstrap_helper.bootstrap(per_pose_class_limit=None)
# Check how many images were bootstrapped.
bootstrap_helper.print_images_out_statistics()
# After initial bootstrapping, images without detected poses were still saved in
# the folder (but not in the CSVs) for debug purposes. Let's remove them.
bootstrap_helper.align_images_and_csvs(print_removed_items=False)
bootstrap_helper.print_images_out_statistics()
###Output
_____no_output_____
###Markdown
Manual filtrationPlease manually verify predictions and remove samples (images) that have a wrong pose prediction. Check as if you were asked to classify the pose just from the predicted landmarks; if you can't, remove it.Align CSVs and image folders once you are done.
###Code
# Align CSVs with filtered images.
bootstrap_helper.align_images_and_csvs(print_removed_items=False)
bootstrap_helper.print_images_out_statistics()
###Output
_____no_output_____
###Markdown
Automatic filtrationClassify each sample against the database of all other samples and check whether it lands in the same class it was annotated with. There can be two reasons for outliers: * **Wrong pose prediction**: In this case remove such outliers. * **Wrong classification** (i.e. the pose is predicted correctly and you agree with the original pose class assigned to the sample): In this case the sample is from an underrepresented group (e.g. an unusual angle or just very few samples). Add more similar samples and run bootstrapping from the very beginning. Even if you only removed some samples, it makes sense to re-run automatic filtration one more time, as the database of poses has changed.**Important!!** Check that you are using the same parameters when classifying whole videos later.
###Code
# Find outliers.
# Transforms pose landmarks into embedding.
pose_embedder = FullBodyPoseEmbedder()
# Classifies a given pose against the database of poses.
pose_classifier = PoseClassifier(
pose_samples_folder=bootstrap_csvs_out_folder,
pose_embedder=pose_embedder,
top_n_by_max_distance=30,
top_n_by_mean_distance=10)
outliers = pose_classifier.find_pose_sample_outliers()
print('Number of outliers: ', len(outliers))
# Analyze outliers.
bootstrap_helper.analyze_outliers(outliers)
# Remove all outliers (if you don't want to manually pick).
bootstrap_helper.remove_outliers(outliers)
# Align CSVs with images after removing outliers.
bootstrap_helper.align_images_and_csvs(print_removed_items=False)
bootstrap_helper.print_images_out_statistics()
###Output
_____no_output_____
###Markdown
Dump for the AppDump filtered poses to CSV and download it.Please check this [guide](https://developers.google.com/ml-kit/vision/pose-detection/classifying-poses4_integrate_with_the_ml_kit_quickstart_app) on how to use this CSV in the ML Kit sample app.
###Code
import csv
import os
import numpy as np
def dump_for_the_app():
pose_samples_folder = 'fitness_poses_csvs_out'
pose_samples_csv_path = 'fitness_poses_csvs_out.csv'
file_extension = 'csv'
file_separator = ','
# Each file in the folder represents one pose class.
file_names = [name for name in os.listdir(pose_samples_folder) if name.endswith(file_extension)]
with open(pose_samples_csv_path, 'w') as csv_out:
csv_out_writer = csv.writer(csv_out, delimiter=file_separator, quoting=csv.QUOTE_MINIMAL)
for file_name in file_names:
# Use file name as pose class name.
class_name = file_name[:-(len(file_extension) + 1)]
# One file line: `sample_00001,x1,y1,x2,y2,....`.
with open(os.path.join(pose_samples_folder, file_name)) as csv_in:
csv_in_reader = csv.reader(csv_in, delimiter=file_separator)
for row in csv_in_reader:
row.insert(1, class_name)
csv_out_writer.writerow(row)
files.download(pose_samples_csv_path)
dump_for_the_app()
###Output
_____no_output_____
###Markdown
Step 2: Classification**Important!!** Check that you are using the same classification parameters as when building the classifier.
###Code
# Upload your video.
uploaded = files.upload()
os.listdir('.')
# Specify your video name and target pose class to count the repetitions.
video_path = 'pushups-sample.mov'
class_name='pushups_down'
out_video_path = 'pushups-sample-out.mov'
# Open the video.
import cv2
video_cap = cv2.VideoCapture(video_path)
# Get some video parameters to generate the output video with classification.
video_n_frames = video_cap.get(cv2.CAP_PROP_FRAME_COUNT)
video_fps = video_cap.get(cv2.CAP_PROP_FPS)
video_width = int(video_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
video_height = int(video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Initialize tracker, classifier and counter.
# Do that before every video, as all of them have state.
from mediapipe.python.solutions import pose as mp_pose
# Folder with pose class CSVs. It should be the same folder you used while
# building the classifier to output the CSVs.
pose_samples_folder = 'fitness_poses_csvs_out'
# Initialize tracker.
pose_tracker = mp_pose.Pose(upper_body_only=False)
# Initialize embedder.
pose_embedder = FullBodyPoseEmbedder()
# Initialize classifier.
# Check that you are using the same parameters as during bootstrapping.
pose_classifier = PoseClassifier(
pose_samples_folder=pose_samples_folder,
pose_embedder=pose_embedder,
top_n_by_max_distance=30,
top_n_by_mean_distance=10)
# # Uncomment to validate target poses used by classifier and find outliers.
# outliers = pose_classifier.find_pose_sample_outliers()
# print('Number of pose sample outliers (consider removing them): ', len(outliers))
# Initialize EMA smoothing.
pose_classification_filter = EMADictSmoothing(
window_size=10,
alpha=0.2)
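# (EMA = exponential moving average: each frame's classification scores are
# blended with the previous frames, so brief misclassifications get smoothed out.)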
# Initialize counter.
repetition_counter = RepetitionCounter(
class_name=class_name,
enter_threshold=6,
exit_threshold=4)
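# (enter_threshold and exit_threshold form a hysteresis band on the smoothed
# class confidence, so a single repetition is not double-counted when the
# score hovers near the boundary.)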
# Initialize renderer.
pose_classification_visualizer = PoseClassificationVisualizer(
class_name=class_name,
plot_x_max=video_n_frames,
# Graphic looks nicer if it's the same as `top_n_by_mean_distance`.
plot_y_max=10)
# Run classification on a video.
import os
import tqdm
from mediapipe.python.solutions import drawing_utils as mp_drawing
# Open output video.
out_video = cv2.VideoWriter(out_video_path, cv2.VideoWriter_fourcc(*'mp4v'), video_fps, (video_width, video_height))
frame_idx = 0
output_frame = None
with tqdm.tqdm(total=video_n_frames, position=0, leave=True) as pbar:
while True:
# Get next frame of the video.
success, input_frame = video_cap.read()
if not success:
break
# Run pose tracker.
input_frame = cv2.cvtColor(input_frame, cv2.COLOR_BGR2RGB)
result = pose_tracker.process(image=input_frame)
pose_landmarks = result.pose_landmarks
# Draw pose prediction.
output_frame = input_frame.copy()
if pose_landmarks is not None:
mp_drawing.draw_landmarks(
image=output_frame,
landmark_list=pose_landmarks,
connections=mp_pose.POSE_CONNECTIONS)
if pose_landmarks is not None:
# Get landmarks.
frame_height, frame_width = output_frame.shape[0], output_frame.shape[1]
pose_landmarks = np.array([[lmk.x * frame_width, lmk.y * frame_height, lmk.z * frame_width]
for lmk in pose_landmarks.landmark], dtype=np.float32)
assert pose_landmarks.shape == (33, 3), 'Unexpected landmarks shape: {}'.format(pose_landmarks.shape)
# Classify the pose on the current frame.
pose_classification = pose_classifier(pose_landmarks)
# Smooth classification using EMA.
pose_classification_filtered = pose_classification_filter(pose_classification)
# Count repetitions.
repetitions_count = repetition_counter(pose_classification_filtered)
else:
# No pose => no classification on current frame.
pose_classification = None
# Still add an empty classification to the filter to maintain correct
# smoothing for future frames.
pose_classification_filtered = pose_classification_filter(dict())
pose_classification_filtered = None
# Don't update the counter presuming that person is 'frozen'. Just
# take the latest repetitions count.
repetitions_count = repetition_counter.n_repeats
# Draw classification plot and repetition counter.
output_frame = pose_classification_visualizer(
frame=output_frame,
pose_classification=pose_classification,
pose_classification_filtered=pose_classification_filtered,
repetitions_count=repetitions_count)
# Save the output frame.
out_video.write(cv2.cvtColor(np.array(output_frame), cv2.COLOR_RGB2BGR))
# Show intermediate frames of the video to track progress.
if frame_idx % 50 == 0:
show_image(output_frame)
frame_idx += 1
pbar.update()
# Close output video.
out_video.release()
# Release MediaPipe resources.
pose_tracker.close()
# Show the last frame of the video.
if output_frame is not None:
show_image(output_frame)
# Download generated video
files.download(out_video_path)
###Output
_____no_output_____ |
Source/.ipynb_checkpoints/TK_Regresi-checkpoint.ipynb | ###Markdown
Regression with the Statistical Method
###Code
best = train(trainingSet, 0, 0.1, 1000, 0, 0.1, 100)
xGrad = [trainingSet[0].x, trainingSet[-1].x]
yGrad = [best.value(trainingSet[0].x), best.value(trainingSet[-1].x)]
xData = []
yData = []
for i in trainingSet:
xData.append(i.x)
yData.append(i.y)
fig, ax = plt.subplots()
ax.scatter(xData, yData)
ax.plot(xGrad, yGrad, color='Red')
ax.set_xlabel("X (Input)")
ax.set_ylabel("Y (Output)")
plt.show()
###Output
_____no_output_____
###Markdown
Regression with the Gradient Descent Method
###Code
m, b, cost = gradient_descent(trainingSet, 0, 2, 0.35, 100)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(m, b, cost)
ax.set_xlabel('m (Theta1)')
ax.set_ylabel('b (Theta0)')
ax.set_zlabel('Cost Value')
plt.show()
###Output
_____no_output_____ |
notebooks/Reg-Mulitple-Linear-Regression-Co2-py-v1.ipynb | ###Markdown
Multiple Linear RegressionAbout this NotebookIn this notebook, we learn how to use scikit-learn to implement Multiple linear regression. We download a dataset that is related to fuel consumption and Carbon dioxide emission of cars. Then, we split our data into training and test sets, create a model using training set, Evaluate your model using test set, and finally use model to predict unknown value Table of contents Understanding the Data Reading the Data in Multiple Regression Model Prediction Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
###Output
--2019-05-15 18:15:25-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.04s
2019-05-15 18:15:25 (1.65 MB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64)- **MODELYEAR** e.g. 2014- **MAKE** e.g. Acura- **MODEL** e.g. ILX- **VEHICLE CLASS** e.g. SUV- **ENGINE SIZE** e.g. 4.7- **CYLINDERS** e.g 6- **TRANSMISSION** e.g. A6- **FUELTYPE** e.g. z- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Lets select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Lets plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets, which are mutually exclusive. You then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the data that has been used to train the model. It is more realistic for real-world problems.This means that we know the outcome of each data point in this dataset, making it great to test with! And since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it’s truly out-of-sample testing.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Train data distribution
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Regression Model In reality, there are multiple variables that predict CO2 emission. When more than one independent variable is present, the process is called multiple linear regression. For example, predicting CO2 emission using FUELCONSUMPTION_COMB, EngineSize and Cylinders of cars. The good thing here is that multiple linear regression is an extension of the simple linear regression model.
###Code
from sklearn import linear_model
regr = linear_model.LinearRegression()
x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (x, y)
# The coefficients
print ('Coefficients: ', regr.coef_)
###Output
Coefficients: [[10.60837481 7.53275836 9.60643436]]
###Markdown
As mentioned before, __Coefficient__ and __Intercept__ are the parameters of the fitted line. Given that it is a multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and the coefficients of the hyperplane, sklearn can estimate them from our data. Scikit-learn uses the plain Ordinary Least Squares method to solve this problem. Ordinary Least Squares (OLS)OLS is a method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by minimizing the sum of the squares of the differences between the target dependent variable and those predicted by the linear function. In other words, it tries to minimize the sum of squared errors (SSE) or mean squared error (MSE) between the target variable (y) and our predicted output ($\hat{y}$) over all samples in the dataset.OLS can find the best parameters using one of the following methods: - Solving the model parameters analytically using closed-form equations - Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton’s Method, etc.)
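For intuition, here is a minimal sketch of the closed-form (normal equation) route, using the same feature columns as this notebook. It is an illustration only and not necessarily the exact solver scikit-learn uses internally.

```python
import numpy as np

# Sketch: solve the normal equation theta = (X^T X)^(-1) X^T y directly.
# Assumes the `train` dataframe defined earlier in this notebook.
X = np.asanyarray(train[['ENGINESIZE', 'CYLINDERS', 'FUELCONSUMPTION_COMB']])
y = np.asanyarray(train[['CO2EMISSIONS']])
X_b = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend a bias column for the intercept
theta = np.linalg.solve(X_b.T @ X_b, X_b.T @ y)  # closed-form least-squares solution
print('Intercept:', theta[0])
print('Coefficients:', theta[1:].ravel())
```

The coefficients printed here should closely match the `regr.coef_` values shown in the fitting cell above.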
###Code
y_hat= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(test[['CO2EMISSIONS']])
print("Residual sum of squares: %.2f"
% np.mean((y_hat - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(x, y))
###Output
_____no_output_____
###Markdown
__explained variance regression score:__ If $\hat{y}$ is the estimated target output, y the corresponding (correct) target output, and Var is Variance, the square of the standard deviation, then the explained variance is estimated as follows:$\texttt{explainedVariance}(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}$ The best possible score is 1.0, lower values are worse. PracticeTry to use a multiple linear regression with the same dataset but this time use __FUEL CONSUMPTION in CITY__ and __FUEL CONSUMPTION in HWY__ instead of FUELCONSUMPTION_COMB. Does it result in better accuracy?
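As a quick check of this metric (a small sketch, assuming the `y` and `y_hat` arrays from the prediction cell above), the same score can also be computed with scikit-learn's helper; it is equally handy for comparing the practice variant below.

```python
from sklearn.metrics import explained_variance_score

# Sketch: compute the explained variance score directly from the test
# targets `y` and the predictions `y_hat` produced in the prediction cell.
print('Explained variance: %.2f' % explained_variance_score(y, y_hat))
```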
###Code
# write your code here
regr = linear_model.LinearRegression()
x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']])
y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (x, y)
print ('Coefficients: ', regr.coef_)
y_= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']])
x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']])
y = np.asanyarray(test[['CO2EMISSIONS']])
print("Residual sum of squares: %.2f"% np.mean((y_ - y) ** 2))
print('Variance score: %.2f' % regr.score(x, y))
###Output
Coefficients: [[10.66616626 7.18502946 6.25273632 3.04500661]]
Residual sum of squares: 556.14
Variance score: 0.88
|
04_bst/Subtree_of_Another_Tree.ipynb | ###Markdown
Subtree of Another Tree**Status:** Not Passed Given two non-empty binary trees s and t, check whether tree t has exactly the same structure and node values with a subtree of s. A subtree of s is a tree that consists of a node in s and all of this node's descendants. The tree s could also be considered as a subtree of itself.Example 1:```bashGiven tree s: 3 / \ 4 5 / \ 1 2Given tree t: 4 / \ 1 2```Return true, because t has the same structure and node values with a subtree of s.Example 2:```bashGiven tree s: 3 / \ 4 5 / \ 1 2 / 0Given tree t: 4 / \ 1 2```Return false. $$ isSubtree(s, t) = \begin{cases}tree(s) == tree(t), & \text{s and t are equal as trees} \\isSubtree(s.left, t), & \text{t is a subtree of s.left} \\isSubtree(s.right, t), & \text{t is a subtree of s.right}\end{cases}$$
###Code
# Definition for a binary tree node.
class TreeNode:
def __init__(self, x):
self.val = x
self.left = None
self.right = None
class Solution:
def isSubtree(self, s, t):
"""
:type s: TreeNode
:type t: TreeNode
:rtype: bool
"""
def equals(s, t):
if not s and not t:
return True
elif not s or not t:
return False
return s.val == t.val and equals(s.left, t.left) and equals(s.right, t.right)
return s is not None and (equals(s,t) or self.isSubtree(s.left, t) or self.isSubtree(s.right, t))
def isSubtreeIncorrect(self, s, t):
"""
:type s: TreeNode
:type t: TreeNode
:rtype: bool
"""
if not s and not t:
return True
elif not s or not t:
return False
# both s and t aren't null
# issue : 1->1 and 1
# Function incorrect: s.val == t.val and self.isSubtree(s.left, t.left) and self.isSubtree(s.right, t.right)
return (s.val == t.val and self.isSubtree(s.left, t.left) and self.isSubtree(s.right, t.right)) or self.isSubtree(s.left, t) or self.isSubtree(s.right, t)
def stringToTreeNode(input):
input = input.strip()
input = input[1:-1]
if not input:
return None
inputValues = [s.strip() for s in input.split(',')]
root = TreeNode(int(inputValues[0]))
nodeQueue = [root]
front = 0
index = 1
while index < len(inputValues):
node = nodeQueue[front]
front = front + 1
item = inputValues[index]
index = index + 1
if item != "null":
leftNumber = int(item)
node.left = TreeNode(leftNumber)
nodeQueue.append(node.left)
if index >= len(inputValues):
break
item = inputValues[index]
index = index + 1
if item != "null":
rightNumber = int(item)
node.right = TreeNode(rightNumber)
nodeQueue.append(node.right)
return root
def main():
import sys
import io
def readlines():
for line in io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8'):
yield line.strip('\n')
lines = readlines()
while True:
try:
line = next(lines)
s = stringToTreeNode(line);
line = next(lines)
t = stringToTreeNode(line);
ret = Solution().isSubtree(s, t)
out = (ret);
print(out)
except StopIteration:
break
if __name__ == '__main__':
main()
###Output
_____no_output_____ |
Chapter4_Container/Strings/files.ipynb | ###Markdown
Filepaths
###Code
project_path = os.path.abspath("C:/Users/Jan/Dropbox/_Coding/UdemyPythonPro/")
file_path = os.path.join(project_path, "Chapter4_Container", "Strings", "user_data.txt")
print(type(file_path))
print(file_path)
p = Path(file_path)
print([d for d in dir(p) if '_' not in d])
print(type(p))
print(p)
print(p.parent)
print(p.absolute())
print(p.is_dir())
print(p.is_file())
###Output
C:\Users\Jan\Dropbox\_Coding\UdemyPythonPro\Chapter4_Container\Strings
C:\Users\Jan\Dropbox\_Coding\UdemyPythonPro\Chapter4_Container\Strings\user_data.txt
False
True
###Markdown
Files
Context managers provide __enter__() and __exit__() methods that are invoked on entry to and exit from the body of the with statement.
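Below is a minimal sketch of a class implementing this protocol (illustrative only; the notebook itself just uses the built-in `open()`):

```python
class ManagedFile:
    """Minimal illustration of the context manager protocol."""

    def __init__(self, path):
        self.path = path
        self.file = None

    def __enter__(self):
        # Called on entry to the with-block; the return value is bound by `as`.
        self.file = open(self.path, "r")
        return self.file

    def __exit__(self, exc_type, exc_value, traceback):
        # Called on exit, even if an exception was raised inside the block.
        if self.file is not None:
            self.file.close()
        return False  # do not suppress exceptions


# Usage (equivalent to the built-in open() used below):
# with ManagedFile(file_path) as f:
#     print(f.read())
```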
###Code
with open(file_path, "r") as f:
content = f.readlines()
print(f.closed)
print(f.closed)
print(f, type(f))
print(content)
for idx, line in enumerate(content):
content[idx] = line.replace("\n", "")
print(content)
###Output
True
<_io.TextIOWrapper name='C:\\Users\\Jan\\Dropbox\\_Coding\\UdemyPythonPro\\Chapter4_Container\\Strings\\user_data.txt' mode='r' encoding='cp1252'> <class '_io.TextIOWrapper'>
['Hello my name is Jan.\n', 'Nice to meet you.\n', 'I am currently 26 years old.\n']
['Hello my name is Jan.', 'Nice to meet you.', 'I am currently 26 years old.']
###Markdown
Filepaths
###Code
project_path = os.path.abspath("C:/Users/Jan/Dropbox/_Coding/UdemyPythonPro/")
file_path = os.path.join(project_path, "Chapter4_Container", "Strings", "user_data.txt")
print(type(file_path))
print(file_path)
p = Path(file_path)
print([d for d in dir(p) if '_' not in d])
print(type(p))
print(p)
print(p.parent)
print(p.absolute())
print(p.is_dir())
print(p.is_file())
###Output
C:\Users\Jan\Dropbox\_Coding\UdemyPythonPro\Chapter4_Container\Strings
C:\Users\Jan\Dropbox\_Coding\UdemyPythonPro\Chapter4_Container\Strings\user_data.txt
False
False
###Markdown
Files Context managers provide __enter__() and __exit__() methods that are invoked on entry to and exit from the body of the with statement.
###Code
with open(file_path, "r") as f:
content = f.readlines()
print(f.closed)
print(f.closed)
print(f, type(f))
print(content)
for idx, line in enumerate(content):
content[idx] = line.replace("\n", "")
print(content)
###Output
_____no_output_____ |
notebooks/SS_image_scraper_exploratory_notebook_RJProctor.ipynb | ###Markdown
Exploratory notebook Web scraping drawings created by Upper Elementary age kiddos with literary context for the Story Squad databaseUsing [Beautiful Soup](https://www.digitalocean.com/community/tutorials/how-to-scrape-web-pages-with-beautiful-soup-and-python-3) to access labeled images on the Artsonia website * need to scrape images * need to scrape labels identifying grade levels
###Code
!pip install beautifulsoup4
# parser(s)
!pip install lxml
# prefered for use with bs4
!pip install html5lib
# popular parser that will also work with bs4
!pip install requests
from bs4 import BeautifulSoup as bs
import requests
import csv
###Output
_____no_output_____
###Markdown
We are sourcing upper elementary school drawings from Artsonia.com because they are already in COPPA compliance: any student-created visible images have [waivers on file](https://www.artsonia.com/teachers/members/parents/reports/setup.asp) from parents/guardians and are reviewed by teachers before posting. The images hosted on and scraped from the Artsonia website represent the demographics and broad spectrum of technical ability of our target user, as well as meet the COPPA compliance requirements for child work products. As long as these images are used only to train the model and we do not scrape too many at one time, overburdening the Artsonia website with requests, we are operating within their documented, public-facing [Artsonia Acceptable Use and Conduct Clause 12.7, 10, & 11](https://www.artsonia.com/terms/) and [Robots.txt](https://www.artsonia.com/robots.txt) use case for crawlers. None of the illustrations will be published, displayed, or sold for profit. The illustrations will be used strictly to train/retrain a machine learning model for an educational tool and to provide a base for synthetic data generation.
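To honor that request-volume constraint, a throttled download loop is one option. This is a sketch only; the delay value and the `urls` argument are placeholders, not endpoints taken from Artsonia.

```python
import time
import requests

# Sketch of a polite, throttled request loop. The 2-second delay and the
# `urls` argument are placeholder assumptions, not values from Artsonia.
def fetch_politely(urls, delay_seconds=2):
    pages = []
    for url in urls:
        response = requests.get(url)
        if response.ok:
            pages.append(response.content)
        time.sleep(delay_seconds)  # space out requests so the site is not overburdened
    return pages
```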
###Code
# load webpage
#r = requests.get('https://www.artsonia.com/search/?q=grade+3%2C+grade+4%2C&gb=128')
r = requests.get('https://www.artsonia.com/museum/gallery.asp?project=1876360')
# may need to address login
# convert to a beautiful soup object
webpage = bs(r.content, 'html5lib')
# print out HTML
contents = webpage.prettify()
print(contents)
###Output
<html>
<head>
<title>
Artsonia Art Gallery - Outer Space Blacklight 3-6th
</title>
<meta content="student, kid, art, museum, gallery, portfolio, artist, children, teacher, education, school, project, idea, art contest, lesson plan, artwork, child, painting, elementary, middle, high" name="keywords"/>
<meta content="Artsonia is a gallery of student art portfolios where young artists (grades PK-12) display their art worldwide. We have a vibrant community of art teachers who also share their ideas and lesson plans." name="description"/>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
<link href="https://www.artsonia.com/shared/styles/main.css?v=20181003" rel="stylesheet" type="text/css"/>
<link href="https://www.artsonia.com/shared/lib/load-awesome-1.1.0/css/ball-clip-rotate.min.css" rel="stylesheet" type="text/css"/>
<link href="https://www.artsonia.com/shared/styles/loaders.css" rel="stylesheet" type="text/css"/>
<link href="https://www.artsonia.com/shared/jquery/css/smoothness/jquery-ui-1.12.1.min.css" rel="Stylesheet" type="text/css"/>
<script src="/shared/jquery/jquery-3.4.1.min.js" type="text/javascript">
</script>
<!--<script type="text/javascript" src="/shared/jquery/jquery-migrate-3.0.1.min.js"></script>-->
<script src="/shared/jquery/jquery-ui-1.12.1.min.js" type="text/javascript">
</script>
<script src="/shared/jquery/jquery.timers-1.2.js" type="text/javascript">
</script>
<script src="/shared/jquery/jquery.ui.touch-punch.min.js" type="text/javascript">
</script>
<script src="/shared/jquery/jquery.blockUI.js" type="text/javascript">
</script>
<script src="/shared/jquery/jquery-cookie.js" type="text/javascript">
</script>
<script src="/shared/lib/sweetalert/sweetalert.min.js">
</script>
<link href="https://www.artsonia.com/shared/lib/sweetalert/sweetalert.css" rel="stylesheet" type="text/css"/>
<script src="/shared/lib/masonry/masonry.pkgd.min.js">
</script>
<meta content="Outer Space Blacklight 3-6th from Richardsville Elementary School" property="og:title"/>
<meta content="article" property="og:type"/>
<meta content="https://images.artsonia.com/art/large/85608764.jpg" property="og:image"/>
<meta content="https://www.artsonia.com/museum/gallery.asp?project=1876360" property="og:url"/>
<meta content="Artsonia" property="og:site_name"/>
<meta content="Check out student artwork posted to Artsonia from the Outer Space Blacklight 3-6th project gallery at Richardsville Elementary School." property="og:description"/>
<meta content="589912646,100000839961478" property="fb:admins"/>
<meta content="197474573630529" property="fb:app_id"/>
<style>
#page { background: #eee; }
.grid-sizer,
.grid-item {
width: 296px;
}
.gutter-sizer { width: 30px; }
</style>
<script language="javascript">
$(function() {
var isiPad = navigator.userAgent.match(/iPad/i) != null;
var _unitsPerLoad = 20;
var _more = false;
var $grid = $('.grid').masonry({
itemSelector: '.grid-item',
columnWidth: '.grid-sizer',
gutter: '.gutter-sizer',
percentPosition: true
});
$("#FilterSchoolYear").change(function() {
$("#FilterProjectID").val("");
onUpdateFilters();
});
$("#FilterProjectID").change(function() {
onUpdateFilters();
});
$("#BtnResetFilters").click(function() {
$("#FilterSchoolYear").val("");
$("#FilterProjectID").val("");
onUpdateFilters();
});
function onUpdateFilters() {
var year = $("#FilterSchoolYear").val();
var project = $("#FilterProjectID").val();
var url = "statements.asp?id=";
if ( year != "" ) url += "&year=" + year;
if ( project != "" ) url += "&project=" + project;
window.location = url;
};
/*
$(document).on("mouseenter", ".grid-item", function() {
var artID = $(this).attr("arts-id");
$("#Overlay_"+artID).show();
$("#Actions_"+artID).show();
if ( $("#ArtImage_"+artID).height() < 250 ) {
$("#BtnMore_"+artID).hide();
}
});
$(document).on("mouseleave", ".grid-item", function() {
var artID = $(this).attr("arts-id");
$("#Overlay_"+artID).hide();
$("#OverlayFull_"+artID).hide();
$("#Actions_"+artID).hide();
$("#BtnComment_"+artID).css("opacity",1.0).css("cursor","pointer");
$("#BtnZoom_"+artID).css("opacity",1.0).css("cursor","pointer");
$("#MoreOptions_"+artID).hide();
_more = false;
});
*/
/*
$(document).on("click", ".grid-item-image", function() {
var artID = $(this).parent().attr("arts-id");
location.href = "/museum/art.asp?id="+artID;
});
$(document).on("click", ".grid-item-actions", function() {
if ( _more ) return;
var artID = $(this).parent().attr("arts-id");
location.href = "/museum/art.asp?id="+artID;
});
$(document).on("click", "div[id^='BtnComment_']", function(event) {
event.stopPropagation();
if ( _more ) return;
var artID = $(this).attr("id").split("_")[1];
location.href = '/museum/comments/comment.asp?art=' + artID;
});
$(document).on("click", "div[id^='BtnZoom_']", function(event) {
event.stopPropagation();
if ( _more ) return;
var artID = $(this).attr("id").split("_")[1];
location.href = '/museum/art.asp?id=' + artID;
});
*/
$(document).on("click", "div[id^='BtnMore_']", function(event) {
event.stopPropagation();
var artID = $(this).attr("id").split("_")[1];
if ( _more ) {
$("#Overlay_"+artID).show();
$("#BtnComment_"+artID).css("opacity",1.0).css("cursor","pointer");
$("#BtnZoom_"+artID).css("opacity",1.0).css("cursor","pointer");
$("#OverlayFull_"+artID).hide();
$("#MoreOptions_"+artID).hide();
_more = false;
return;
}
_more = true;
$("#Overlay_"+artID).hide();
$("#OverlayFull_"+artID).show();
$("#MoreOptions_"+artID).show();
$("#BtnComment_"+artID).css("opacity",0.25).css("cursor","default");
$("#BtnZoom_"+artID).css("opacity",0.25).css("cursor","default");
});
var $win = $(window);
var checkScroll = false;
$win.scroll(function() {
if ( !checkScroll ) return;
var $elem = $("#LoadBar");
var docViewTop = $win.scrollTop();
var docViewBottom = docViewTop + $win.height();
var elemTop = $elem.offset().top;
var elemBottom = elemTop + $elem.height();
if ( (elemTop >= docViewTop) && (elemTop <= docViewBottom) ) {
checkScroll = false;
loadMore(_unitsPerLoad);
}
});
$("#BtnLoadMore").click(function() {
loadMore(_unitsPerLoad);
});
var _countOnScreen = 0;
function loadMore(count) {
$("#BtnLoadMore").hide();
$("#LoadSpinner").show();
var data = "Action=GetArtwork" +
"&CountryID=" +
"&GroupID=" +
"&SchoolID=" +
"&ProjectID=1876360" +
"&Grade=" +
"&ArtistID=" +
"&StartDate=" +
"&EndDate=" +
"&StartAt=" + _countOnScreen.toString() +
"&Count=" + count.toString();
$.ajax({
type: "POST",
url: "/museum/gallery.ajax.asp",
data: data,
success: function(xml, status, xhr) {
var resultType = $(xml).find("result").attr("type");
var result = $(xml).find("result").text();
if ( resultType != "success" ) {
$("#LoadSpinner").hide();
$("#LoadError").html("Unable to load images" + (result != "" ? " ("+result+")" : ""));
$("#LoadError").show();
}
else {
$("#TheGrid").show();
var total = $(xml).find("artworks").attr("total");
var totalViewable = $(xml).find("artworks").attr("total_viewable");
var totalPrivate = $(xml).find("artworks").attr("total_private");
var totalHidden = total - totalViewable;
if ( totalPrivate > 0 && !$("#PrivateAccessNote").is(":visible")) {
$("#PrivateAccessNoteSome").toggle(totalHidden > 0);
$("#PrivateAccessNote").show();
}
$(xml).find("art").each(function() {
_countOnScreen++;
var artid = $(this).attr("id");
var isPublic = ($(this).attr("public") == "Y");
var w = parseInt($(this).find("imagewidth").text());
var h = parseInt($(this).find("imageheight").text());
var artistid = $(this).find("artist").attr("id");
var screenname = $(this).find("artist").find("screenname").text();
var grade = $(this).find("artist").find("grade").text();
var projectid = $(this).find("project").attr("id");
var projectname = $(this).find("project").find("projectname").text();
var schoolid = $(this).find("school").attr("id");
var schoolname = $(this).find("school").find("schoolname").text();
var comments = parseInt($(this).find("comments").attr("count"));
var statement = ($(this).find("statement").attr("present"))=="y";
var isVideo = ($(this).attr("video")=="Y");
var isTranscoding = ($(this).attr("transcoding")=="Y");
var isTranscodingError = ($(this).attr("error")=="Y");
var preview = "";
var cw, ch, cdx=0, offset=0;
if ( isVideo && w==0 && h==0 ) {
w=296;
h=296;
cw=296;
ch=296;
var iconWidth=Math.round(cw*0.6);
var iconHeight=Math.round(ch*0.6);
preview = "<div class='genthumb-nopreview grid-item-art' style='background:#f8f8f8;display:absolute;z-index:1;border-radius:15px;width:" +cw.toString() + "px;height:" +ch.toString() + "px;text-align:center;'>";
preview += "<img src='/images/video-nopreview.png' width='" + iconWidth.toString() + "' height='" + iconHeight.toString() + "' style='opacity:0.1;margin-top:" + Math.round((ch-iconHeight)*0.5) + "px;'>";
preview += "</div>";
}
else {
var scalew = 296 / w;
var scaleh = 400 / h;
var scale = scalew < scaleh ? scalew : scaleh;
w = Math.round(w * scale);
h = Math.round(h * scale);
cw = w;
ch = h;
cdx = (296-w)*0.5;
preview = "<img src='https://images.artsonia.com/art/"+artid+".jpg?maxwidth="+w.toString() + "&maxheight=" + h.toString() + "' width='"+w.toString()+"' height='"+h.toString()+"' border='0' class='grid-item-art'>";
}
var htm = "<a href='/museum/art.asp?id="+artid+"'>";
htm += "<div style='display:inline-block;width:" + w.toString() + "px;height:" + h.toString() + "px;' class='genthumb' data-artid='" + artid + "'>";
if ( isVideo ) {
var wIcon = Math.round(296*0.2);
var fontSize = Math.round(wIcon*0.3);
if ( isTranscodingError ) {
htm += "<div style='position:absolute;z-index:2;margin-left:"+Math.round(0.5*(w-wIcon))+"px;margin-top:"+Math.round(0.5*(h-wIcon))+"px;text-align:center;color:#f00;font-weight:bold;font-size:"+fontSize+"px;filter: drop-shadow(0px 0px 5px #fff);'><img src='/images/video-warning.png' width='"+wIcon+"' height='"+wIcon+"'><br>error</div>"
}
else if ( isTranscoding ) {
htm += "<div style='position:absolute;z-index:2;margin-left:"+Math.round(0.5*(w-wIcon))+"px;margin-top:"+Math.round(0.5*(h-wIcon))+"px;width:"+wIcon+"px;height:"+wIcon+"px;'><div class='loader' style='width:"+wIcon+"px;height:"+wIcon+"px;'></div></div>";
//htm += "<div class='la-ball-clip-rotate la-dark' style='position:absolute;z-index:2;margin-left:" + Math.round(0.5*(w-wIcon)) + "px;margin-top:" + Math.round(0.5*(h-wIcon)) + "px;'><div style='width:"+wIcon+"px;height:"+wIcon+"px;'></div></div>";
}
else {
htm += "<div style='position:absolute;z-index:2;margin-left:" + Math.round(0.5*(w-wIcon)) + "px;margin-top:" + Math.round(0.5*(h-wIcon)) + "px;'><img src='/images/video-play-btn-lg.png' width='" + wIcon + "' height='" + wIcon + "'></div>";
}
}
htm += preview + "</div>";
htm += "</a>";
var item = "";
item += "<div class='grid-item' arts-id='" + artid + "' style='background:none;box-shadow:none;'>";
item += "<div id='ArtImage_"+artid+"' class='grid-item-image' style='z-index:1;width:"+cw+"px;height:"+ch+"px;border-radius:10px;overflow:hidden;box-shadow:0px 2px 2px #666;margin-left:"+cdx+"px;'>";
item += "<div style='margin-top:"+offset+"px;'>";
//item += "<img src='https://images.artsonia.com/art/medium/"+artid+".jpg' width='"+w.toString()+"' height='"+h.toString()+"' border='0' class='grid-item-art'>";
item += htm;
item += "</div>";
item += "</div>";
if ( artistid != "" ) {
item += "<div style='padding:10px 10px 0px;'>";
item += "<table border='0' cellspacing='0' cellpadding='0' width='100%'><tr valign='middle'>";
item += "<td class='textLabel' style='color:#666;' width='100%'>";
item += "by <a href='/artists/portfolio.asp?id="+artistid+"'>" + screenname + "</a>";
if ( grade != "" ) {
item += " (grade " + grade + ")";
}
item += "</td>";
if ( statement ) {
item += "<td nowrap style='padding-left:10px;'>"
item += "<a href='/museum/art.asp?id="+artid+"'>";
item += "<img src='/museum/images/statement.png' width='16' height='16' border='0' title='artist statement'>";
item += "</a>";
item += "</td>";
}
item += "<td nowrap style='padding-left:10px;padding-top:2px;'>";
if ( comments != 0 ) item += "<a href='/museum/art.asp?id="+artid+"'>";
item += "<img src='/museum/images/comment2.png' width='16' height='16' border='0' class='"+(comments==0?"ghost":"")+"' title='comments'>";
if ( comments != 0 ) item += "</a>";
item += "</td>";
item += "<td nowrap style='padding-left:5px;color:"+(comments==0?"#aaa":"#666")+";' class='textLabel'>"+comments.toString()+"</td>";
item += "</tr></table></div>";
if ( !isPublic ) {
item += "<div style='text-align:center;'><div class='textLabel private-art' style='padding:5px 10px 0;color:#d00;display:inline-block;cursor:pointer;'>";
item += "<img src='/images/icons/lock-tiny-red.png' width='16' height='16' align='absmiddle'>";
item += " private artwork";
item += "</div></div>";
}
}
item += "</div>";
var $item = $(item);
$grid.append($item).masonry("appended", $item);
});
var more = $(xml).find("artworks").attr("more") == "y";
if ( more ) {
$("#LoadSpinner").hide();
$("#BtnLoadMore").show();
checkScroll = true;
}
else {
$("#LoadBar").hide();
if ( totalHidden > 0 ) {
item = "";
item += "<div class='grid-item' style='background:#ffe;width:100%'>";
item += "<div class='textNormal' style='color:#666;font-family:arial;font-size:12pt;padding:15px;text-align:left;'><b>";
item += totalHidden + (totalHidden < total ? " additional" : "") + " artwork" + (totalHidden > 1 ? "s" : "") + " not shown</b>";
item += " - marked as private or waiting on parent permission";
item += "</div></div>";
$item = $(item);
$grid.append($item).masonry("appended", $item);
}
}
if (_countOnScreen==0 && total == 0) {
$("#LoadBar").hide();
$("#PanelEmptySet").show();
$("#TheGrid").hide();
}
}
},
error: function(xhr, status, error) {
$("#LoadSpinner").hide();
$("#LoadError").html("Unable to load images");
$("#LoadError").show();
}
});
}
$(document).on("click", ".private-art", function() {
swal("", "Private artwork is only visible to this artist's teachers, registered parents and approved fans while they are logged into their accounts.", "info");
});
// initialize the first set
loadMore(_unitsPerLoad);
});
</script>
</head>
<body>
<div id="HeaderWrapper">
<div class="headerPromoMarch" id="SuperHeaderPromo" style="margin:0px;">
<div style="padding:15px 10px;text-align:center;font-family:arial;font-size:12pt;">
<div>
<b>
<span style="color:#fff;">
<span style="color:#ff0;">
☘☘☘
</span>
YOUTH ART MONTH Special ☘☘☘ Custom
<a href="/gifts/canvas/">
<font color="#FFFF00">
Canvas Prints
</font>
</a>
,
<a href="/gifts/masks/">
<font color="#FFFF00">
Fabric Masks
</font>
</a>
and
<a href="/gifts/magnets/">
<font color="#FFFF00">
Magnets
</font>
</a>
—
<span style="color:#ff0;">
SALE thru March 7
</span>
<span style="color:#ff0;">
☘☘☘
</span>
</span>
</b>
</div>
<div style="position:absolute;margin-top:6px;margin-left:-42px;left:50%;display:none;">
<img height="22" src="/images/nav/header_down_arrow_crimson.png" width="24"/>
</div>
</div>
</div>
<a name="top">
</a>
<div id="header" style="z-index:100;height:68px;width:100%;overflow:hidden;position:relative;background:#fff;box-shadow:0 5px 5px rgba(0,0,0,0.075);">
<div style="position:absolute;z-index:101;margin-top:67px;width:100%;background:#ccc;height:1px;overflow:hidden;">
</div>
<div class="show-for-large" style="position:absolute;z-index:103;margin-top:11px;margin-left:15px;width:160px;height:42px;">
<a href="https://www.artsonia.com/">
<img border="0" height="42" src="/images/nav/logo-2019-color.png" width="160"/>
</a>
</div>
<div class="show-for-large" style="position:absolute;z-index:102;height:68px;">
<table border="0" cellpadding="0" cellspacing="0" height="68" width="100%">
<tbody>
<tr valign="middle">
<td style="padding-left:200px;">
<div class="headerTab">
<a href="https://www.artsonia.com/teachers/">
Teachers
</a>
</div>
</td>
<td>
<div class="headerTab">
<a href="https://www.artsonia.com/parents/">
Parents
</a>
</div>
</td>
<td>
<div class="headerTab">
<a href="https://www.artsonia.com/artists/">
Artists
</a>
</div>
</td>
<td>
<div class="headerTab">
<a href="https://www.artsonia.com/gifts/">
Giftshop
</a>
</div>
</td>
<td style="padding-left:20px;">
<input autocapitalize="off" autocomplete="off" autocorrect="off" class="inputField" id="HeaderSearchText" maxlength="100" onkeydown="if (event.keyCode == 13) document.getElementById('HeaderSearchBtn').click()" placeholder="Search for anything..." size="40" spellcheck="false" style="width:250px;height:2.25em;" type="text" value=""/>
</td>
<td style="padding-left:3px;">
<input id="HeaderSearchBtn" onclick="headerSearch('')" style="margin-left:5px;width:40px;font-size:13pt;height:2.25em;border:1px #ccc solid;border-radius:5px;color:#777;cursor:pointer;" type="button" value="Go"/>
</td>
<td align="right" class="textNormal" nowrap="" style="padding:0px 15px;" width="100%">
</td>
<td class="textNormal" nowrap="" style="padding-right:15px;">
<div class="headerTab" style="background:#fff;margin-right:0px;padding:6px 12px;font-weight:normal;border:1px #ccc solid;border-radius:8px;">
<a href="https://www.artsonia.com/login.asp">
Log in
</a>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<div class="show-for-small" style="position:absolute;z-index:102;height:60px;width:100%;">
<div style="position:absolute;z-index:103;width:100%;height:60px;">
<div style="position:relative;margin:10px auto 0px;width:160px;height:42px;">
<a href="https://www.artsonia.com/">
<img border="0" height="42" src="/images/nav/logo-2019-color.png" width="160"/>
</a>
</div>
</div>
<div style="position:absolute;z-index:104;margin-top:6px;margin-left:0;height:60px;width:48px;">
<img border="0" height="48" id="NavSmallMenuIcon" onclick="headerSmallMenu()" src="/images/nav/menu_slideover.png" style="cursor:pointer;" width="64"/>
</div>
<div style="position:absolute;z-index:104;margin-top:14px;margin-left:60;height:32px;width:32px;">
<img border="0" height="32" id="NavSmallSearchIcon" onclick="headerShowSmallSearch()" src="/images/nav/search_magnify_tool2.png" style="cursor:pointer;" width="32"/>
</div>
</div>
</div>
<div id="NavSmallMenuContainer" style="position:absolute;z-index:999;display:none;width:100%;">
<ul>
<li>
<a href="https://www.artsonia.com/">
Homepage
</a>
</li>
<li>
<a href="https://www.artsonia.com/gifts/">
Giftshop
</a>
</li>
<li>
<a href="https://www.artsonia.com/teachers/">
Teachers
</a>
</li>
<li>
<a href="https://www.artsonia.com/parents/">
Parents
</a>
</li>
<li>
<a href="https://www.artsonia.com/artists/">
Artists
</a>
</li>
<li style="border-bottom:0px;">
<a href="https://www.artsonia.com/login.asp">
Account Login
</a>
</li>
</ul>
</div>
<div id="HeaderSearchContainerTiny" style="background:#eee;width:100%;border-bottom:0px #eee solid;display:none;">
<div style="padding:15px;">
<div style="padding:10px;border-radius:5px;background:#fff;">
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody>
<tr valign="middle">
<td width="100%">
<input autocapitalize="off" autocomplete="off" autocorrect="off" class="noFocusHighlight" id="HeaderSearchTextTiny" maxlength="100" onkeydown="if (event.keyCode == 13) document.getElementById('HeaderSearchBtnTiny').click()" placeholder="Search for anything..." size="40" spellcheck="false" style="width:100%;border:0px;font-size:12pt;" type="text" value=""/>
</td>
<td style="padding-left:3px;">
<span id="HeaderSearchBtnTiny" onclick="headerSearch('Tiny')" style="cursor:pointer;">
<img border="0" height="24" src="/images/nav/search_magnify_tool.png" width="24"/>
</span>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
<script language="javascript">
function headerSearch(size) {
var txt = $("#HeaderSearchText"+size).val();
//var txt = document.getElementById("HeaderSearchText").value;
txt = txt.replace(/^\s+|\s+$/g, "");
if ( txt == "" ) {
swal("You did not type anything into the search box.");
}
else {
location.href = "https://www.artsonia.com/search/_default.asp?q=" + escape(txt);
}
}
function headerSmallMenu() {
$("#NavSmallMenuContainer").toggle();
//$("#HeaderSearchContainerTiny").hide(); // toggle($("#NavSmallMenuContainer").is(":visible"));
$("#NavSmallMenuIcon").css("transform","rotate(" + ($("#NavSmallMenuContainer").is(":visible") ? "90" : "0") + "deg)");
}
function headerShowSmallSearch() {
$("#NavSmallSearchIcon").hide();
$("#HeaderSearchContainerTiny").slideDown();
}
</script>
</div>
<div id="page">
<table align="center" border="0" cellpadding="0" cellspacing="0" style="border-left:0px #bbb dotted;border-right:0px #bbb dotted;" width="990">
<tbody>
<tr valign="top">
<td width="20">
<img height="500" src="/images/spacer.gif" width="20"/>
</td>
<td align="left" width="950">
<div class="textNormal" id="PrivateAccessNote" style="display:none;text-align:center;margin-top:-20px;margin-bottom:20px;background:#ff9;border-radius:0 0 10px 10px;box-shadow:0px 2px 2px #ccc;">
<div style="padding:18px 18px;">
<b>
You are logged in as
</b>
and can view
<span id="PrivateAccessNoteSome">
some
</span>
hidden artworks in this gallery.
<span style="padding-left:15px;">
<a href="/logout.asp">
Logout
</a>
</span>
</div>
</div>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody>
<tr valign="middle">
<td width="100%">
<h1 class="textPageTitle" style="margin-bottom:0px;">
Outer Space Blacklight 3-6th
<span style="font-weight:normal;color:#39c;">
Gallery
</span>
</h1>
</td>
<td nowrap="">
<a href="/slideshow.asp?project=1876360">
<img border="0" height="24" src="/images/icons/play-button.png" width="24"/>
</a>
</td>
<td class="textNormal" nowrap="" style="padding-left:7px;font-family:arial;font-size:12pt;">
<a href="/slideshow.asp?project=1876360">
<font color="#000000">
slideshow
</font>
</a>
</td>
</tr>
</tbody>
</table>
<div style="margin-top:20px;background:#fff;border-radius:10px;box-shadow:0px 2px 2px #ccc;">
<div class="textNormal" style="padding:18px 18px;">
<div class="textNormal" style="margin-bottom:8px;font-size:11pt;font-family:arial;color:#939;">
ABOUT THIS PROJECT
</div>
<div class="textNormal" style="margin-bottom:20px;">
3rd grade - 6th grade
<br/>
Students are learning about the principle of design of emphasis using our various elements of art. The students are also learning about craftsmanship, overlapping, and creating a sense of depth by placement on the page. All of these projects are using various media of the students choice that work in the blacklight. These projects are being done to help celebrate the 50th anniversary of walking on the moon which is our schoolwide theme.
</div>
<div class="textNormal" style="margin-bottom:20px;">
taught by Shelly Clark
</div>
<div class="textNormal" style="margin-top:0px;">
view more projects at
<a href="/schools/school.asp?id=81878">
Richardsville Elementary School
</a>
</div>
</div>
</div>
<div class="textNormal" id="PanelEmptySet" style="margin-top:85px;color:#c00;display:none;text-align:center;">
No artwork has been published to this gallery.
</div>
<div class="grid" id="TheGrid" style="margin-top:30px;display:;">
<div class="grid-sizer">
</div>
<div class="gutter-sizer">
</div>
</div>
<div id="LoadBar" style="margin-top:35px;clear:both;">
<table border="0" cellpadding="0" cellspacing="0" height="50">
<tbody>
<tr valign="middle">
<td width="50%">
<div class="hline" style="border-top:1px #ccc solid;">
</div>
</td>
<td nowrap="" style="padding:0px 10px;">
<div id="BtnLoadMore">
<span class="button buttonBlue">
more
</span>
</div>
<div id="LoadSpinner" style="display:none;">
<img border="0" height="32" src="/images/icons/anim_refresh1.gif" width="32"/>
</div>
<div class="textNormal" id="LoadError" style="display:none;color:#999;font-family:arial;font-size:14pt;">
{ERROR}
</div>
</td>
<td width="50%">
<div class="hline" style="border-top:1px #ccc solid;">
</div>
</td>
</tr>
</tbody>
</table>
</div>
</td>
<td width="20">
<img height="500" src="/images/spacer.gif" width="20"/>
</td>
</tr>
</tbody>
</table>
<div style="height:50px;">
</div>
</div>
<div id="FooterWrapper">
<div id="footer">
<div style="height:8px;overflow:hidden;margin:0 0 25px;background-image:linear-gradient(rgba(0,0,0,0.1), rgba(0,0,0,0));">
</div>
<div align="center" class="textNormal" style="color:#999;">
<a class="hoverLink" href="https://www.artsonia.com/about/" style="color:#ddd;">
<font color="#666666">
about us
</font>
</a>
|
<a class="hoverLink" href="https://www.artsonia.com/terms/" style="color:#ddd;">
<font color="#666666">
terms
</font>
</a>
|
<a class="hoverLink" href="https://www.artsonia.com/privacy/" style="color:#ddd;font-weight:bold;">
<font color="#333333">
privacy
</font>
</a>
|
<a class="hoverLink" href="https://www.artsonia.com/contact.asp" style="color:#ddd;">
<font color="#666666">
contact us
</font>
</a>
|
<a class="hoverLink" href="https://help.artsonia.com/hc/en-us/" style="color:#ddd;">
<font color="#666666">
helpdesk
</font>
</a>
</div>
<div align="center" style="margin-top:15px;">
<table border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr valign="middle">
<td class="textNormal" nowrap="">
<font color="#666666">
follow us
</font>
</td>
<td style="padding-right:5px;">
<a href="https://www.facebook.com/artsonia" target="_blank">
<img border="0" height="20" src="/images/icons/facebook_tiny.png" style="border:1px #ccc solid;" width="20"/>
</a>
</td>
<td style="padding-right:5px;">
<a href="https://twitter.com/artsonia" target="_blank">
<img border="0" height="20" hspace="0" src="/images/icons/twitter_tiny.png" style="border:1px #ccc solid;" width="20"/>
</a>
</td>
<td style="padding-right:5px;">
<a href="https://pinterest.com/artsonia" target="_blank">
<img border="0" height="20" hspace="0" src="/images/icons/pinterest_tiny.png" style="border:1px #ccc solid;" width="20"/>
</a>
</td>
<td>
<a href="https://instagram.com/artsonia" target="_blank">
<img border="0" height="20" hspace="0" src="/images/icons/instagram-tiny.gif" style="border:1px #ccc solid;" width="20"/>
</a>
</td>
<td class="textNormal" style="color:#666;padding-left:30px;">
©2000-2021 Artsonia LLC. All rights reserved.
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</body>
</html>
###Markdown
Next steps: grab an image using its url, and grab grade tags for grade 3, grade 4, grade 5. The grade-level tags are found in the div with class grid-item, nested as table > tbody > tr > td; use a.get_text() to grab the text. Data cleaning --> strip whitespace with get_text(' ', strip=True) and replace the surrounding parentheses with spaces, e.g. a.get_text(' ', strip=True).replace('(', ' ').replace(')', ' ')
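A small sketch of that extraction and cleaning step is below. The selectors are assumptions based on the HTML inspected above, and note that this gallery's grid items are injected by JavaScript (via gallery.ajax.asp), so they may not appear in the static HTML that requests fetches.

```python
# Sketch: pull and clean grade labels from the gallery grid items.
# Assumes `webpage` is the BeautifulSoup object created above; the
# 'grid-item' / 'textLabel' classes and the "(grade N)" text pattern are
# assumptions based on the page source printed earlier.
grade_labels = []
for item in webpage.find_all('div', class_='grid-item'):
    for cell in item.find_all('td', class_='textLabel'):
        text = cell.get_text(' ', strip=True)
        if 'grade' in text:
            # str.replace handles one substring at a time, so chain the calls.
            grade_labels.append(text.replace('(', ' ').replace(')', ' ').strip())
print(grade_labels)
```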
###Code
# select information contained in the page table
table = webpage.findAll(lambda tag: tag.name=='table')
# get a list of table rows from page table
#rows = table.find(lambda tag: tag.name=='tr')
# rows = []
# for
#images = webpage.findAll(name='img')
#images = webpage.find(name="img", attrs={'class':'grid-item-art'})
# select all elements with an a attribute inside div class grid item
#artworks = [img for img in images if img['src']]
#artworks = [img['src'] for img in images]
# grab text label for grid item
#grid_item_label = grid_item_info.get_text()
#print(grid_item_info)
#print(len(rows))
print(table)
###Output
[<table border="0" cellpadding="0" cellspacing="0" height="68" width="100%">
<tbody><tr valign="middle">
<td style="padding-left:200px;">
<div class="headerTab"><a href="https://www.artsonia.com/teachers/">Teachers</a></div>
</td>
<td>
<div class="headerTab"><a href="https://www.artsonia.com/parents/">Parents</a></div>
</td>
<td>
<div class="headerTab"><a href="https://www.artsonia.com/artists/">Artists</a></div>
</td>
<td>
<div class="headerTab"><a href="https://www.artsonia.com/gifts/">Giftshop</a></div>
</td>
<td style="padding-left:20px;"><input autocapitalize="off" autocomplete="off" autocorrect="off" class="inputField" id="HeaderSearchText" maxlength="100" onkeydown="if (event.keyCode == 13) document.getElementById('HeaderSearchBtn').click()" placeholder="Search for anything..." size="40" spellcheck="false" style="width:250px;height:2.25em;" type="text" value=""/></td>
<td style="padding-left:3px;"><input id="HeaderSearchBtn" onclick="headerSearch('')" style="margin-left:5px;width:40px;font-size:13pt;height:2.25em;border:1px #ccc solid;border-radius:5px;color:#777;cursor:pointer;" type="button" value="Go"/></td>
<td align="right" class="textNormal" nowrap="" style="padding:0px 15px;" width="100%">
</td>
<td class="textNormal" nowrap="" style="padding-right:15px;">
<div class="headerTab" style="background:#fff;margin-right:0px;padding:6px 12px;font-weight:normal;border:1px #ccc solid;border-radius:8px;"><a href="https://www.artsonia.com/login.asp">Log in</a></div>
</td>
</tr>
</tbody></table>, <table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody><tr valign="middle">
<td width="100%"><input autocapitalize="off" autocomplete="off" autocorrect="off" class="noFocusHighlight" id="HeaderSearchTextTiny" maxlength="100" onkeydown="if (event.keyCode == 13) document.getElementById('HeaderSearchBtnTiny').click()" placeholder="Search for anything..." size="40" spellcheck="false" style="width:100%;border:0px;font-size:12pt;" type="text" value=""/></td>
<td style="padding-left:3px;"><span id="HeaderSearchBtnTiny" onclick="headerSearch('Tiny')" style="cursor:pointer;"><img border="0" height="24" src="/images/nav/search_magnify_tool.png" width="24"/></span></td>
</tr>
</tbody></table>, <table align="center" border="0" cellpadding="0" cellspacing="0" style="border-left:0px #bbb dotted;border-right:0px #bbb dotted;" width="990">
<tbody><tr valign="top">
<td width="20"><img height="500" src="/images/spacer.gif" width="20"/></td>
<td align="left" width="950">
<div class="textNormal" id="PrivateAccessNote" style="display:none;text-align:center;margin-top:-20px;margin-bottom:20px;background:#ff9;border-radius:0 0 10px 10px;box-shadow:0px 2px 2px #ccc;"><div style="padding:18px 18px;">
<b>You are logged in as </b>
and can view <span id="PrivateAccessNoteSome">some</span> hidden artworks in this gallery.
<span style="padding-left:15px;"><a href="/logout.asp">Logout</a></span>
</div></div>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody><tr valign="middle">
<td width="100%">
<h1 class="textPageTitle" style="margin-bottom:0px;">
Outer Space Blacklight 3-6th
<span style="font-weight:normal;color:#39c;">
Gallery
</span>
</h1>
</td>
<td nowrap="">
<a href="/slideshow.asp?project=1876360"><img border="0" height="24" src="/images/icons/play-button.png" width="24"/></a>
</td>
<td class="textNormal" nowrap="" style="padding-left:7px;font-family:arial;font-size:12pt;">
<a href="/slideshow.asp?project=1876360"><font color="#000000">slideshow</font></a>
</td>
</tr>
</tbody></table>
<div style="margin-top:20px;background:#fff;border-radius:10px;box-shadow:0px 2px 2px #ccc;"><div class="textNormal" style="padding:18px 18px;">
<div class="textNormal" style="margin-bottom:8px;font-size:11pt;font-family:arial;color:#939;">ABOUT THIS PROJECT</div>
<div class="textNormal" style="margin-bottom:20px;">
3rd grade - 6th grade<br/>Students are learning about the principle of design of emphasis using our various elements of art. The students are also learning about craftsmanship, overlapping, and creating a sense of depth by placement on the page. All of these projects are using various media of the students choice that work in the blacklight. These projects are being done to help celebrate the 50th anniversary of walking on the moon which is our schoolwide theme.
</div>
<div class="textNormal" style="margin-bottom:20px;">
taught by Shelly Clark
</div>
<div class="textNormal" style="margin-top:0px;">
view more projects at <a href="/schools/school.asp?id=81878">Richardsville Elementary School</a>
</div>
</div></div>
<div class="textNormal" id="PanelEmptySet" style="margin-top:85px;color:#c00;display:none;text-align:center;">
No artwork has been published to this gallery.
</div>
<div class="grid" id="TheGrid" style="margin-top:30px;display:;">
<div class="grid-sizer"></div>
<div class="gutter-sizer"></div>
</div>
<div id="LoadBar" style="margin-top:35px;clear:both;">
<table border="0" cellpadding="0" cellspacing="0" height="50">
<tbody><tr valign="middle">
<td width="50%"><div class="hline" style="border-top:1px #ccc solid;"> </div></td>
<td nowrap="" style="padding:0px 10px;">
<div id="BtnLoadMore"><span class="button buttonBlue">more</span></div>
<div id="LoadSpinner" style="display:none;"><img border="0" height="32" src="/images/icons/anim_refresh1.gif" width="32"/></div>
<div class="textNormal" id="LoadError" style="display:none;color:#999;font-family:arial;font-size:14pt;">{ERROR}</div>
</td>
<td width="50%"><div class="hline" style="border-top:1px #ccc solid;"> </div></td>
</tr>
</tbody></table>
</div>
</td>
<td width="20"><img height="500" src="/images/spacer.gif" width="20"/></td>
</tr>
</tbody></table>, <table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody><tr valign="middle">
<td width="100%">
<h1 class="textPageTitle" style="margin-bottom:0px;">
Outer Space Blacklight 3-6th
<span style="font-weight:normal;color:#39c;">
Gallery
</span>
</h1>
</td>
<td nowrap="">
<a href="/slideshow.asp?project=1876360"><img border="0" height="24" src="/images/icons/play-button.png" width="24"/></a>
</td>
<td class="textNormal" nowrap="" style="padding-left:7px;font-family:arial;font-size:12pt;">
<a href="/slideshow.asp?project=1876360"><font color="#000000">slideshow</font></a>
</td>
</tr>
</tbody></table>, <table border="0" cellpadding="0" cellspacing="0" height="50">
<tbody><tr valign="middle">
<td width="50%"><div class="hline" style="border-top:1px #ccc solid;"> </div></td>
<td nowrap="" style="padding:0px 10px;">
<div id="BtnLoadMore"><span class="button buttonBlue">more</span></div>
<div id="LoadSpinner" style="display:none;"><img border="0" height="32" src="/images/icons/anim_refresh1.gif" width="32"/></div>
<div class="textNormal" id="LoadError" style="display:none;color:#999;font-family:arial;font-size:14pt;">{ERROR}</div>
</td>
<td width="50%"><div class="hline" style="border-top:1px #ccc solid;"> </div></td>
</tr>
</tbody></table>, <table border="0" cellpadding="0" cellspacing="0">
<tbody><tr valign="middle">
<td class="textNormal" nowrap=""><font color="#666666"> follow us</font> </td>
<td style="padding-right:5px;"><a href="https://www.facebook.com/artsonia" target="_blank"><img border="0" height="20" src="/images/icons/facebook_tiny.png" style="border:1px #ccc solid;" width="20"/></a></td>
<td style="padding-right:5px;"><a href="https://twitter.com/artsonia" target="_blank"><img border="0" height="20" hspace="0" src="/images/icons/twitter_tiny.png" style="border:1px #ccc solid;" width="20"/></a></td>
<td style="padding-right:5px;"><a href="https://pinterest.com/artsonia" target="_blank"><img border="0" height="20" hspace="0" src="/images/icons/pinterest_tiny.png" style="border:1px #ccc solid;" width="20"/></a></td>
<td><a href="https://instagram.com/artsonia" target="_blank"><img border="0" height="20" hspace="0" src="/images/icons/instagram-tiny.gif" style="border:1px #ccc solid;" width="20"/></a></td>
<td class="textNormal" style="color:#666;padding-left:30px;">©2000-2021 Artsonia LLC. All rights reserved.</td>
</tr>
</tbody></table>]
###Markdown
To do: 1. may need to grab the grade level first to create a dictionary; 2. work on grabbing a single image; 3. create a function/loop to work with the label grab.
There are images hidden inside a table that can be seen, but not accessed using Beautiful Soup alone. To overcome this obstacle, I will use Selenium, a Python package that automates a web browser (Chrome).
I have a Linux subsystem, Ubuntu, that runs as WSL. I am combining the following resources to install the Selenium Server on my machine. The chromedriver.exe must be available in the PATH. Because I have a Windows machine, I need to place this executable in the same directory as my Python script instead of my cwd/pwd.
[Setup for Selenium ChromeDriver on Ubuntu](https://tecadmin.net/setup-selenium-chromedriver-on-ubuntu/), [Selenium and Jupyter lab](https://shanyitan.medium.com/how-to-install-selenium-and-run-it-successfully-via-jupyter-lab-c3f50d22a0d4), [How to Setup Jupyter Notebook on Ubuntu](https://www.digitalocean.com/community/tutorials/how-to-set-up-jupyter-notebook-with-python-3-on-ubuntu-18-04), [Windows ChromeDriver Path Issue Solutions](https://stackoverflow.com/questions/48581090/python-selenium-path-to-driver)
Run command for the Selenium server: `xvfb-run java -Dwebdriver.chrome.driver=/usr/bin/chromedriver -jar selenium-server-standalone-3.13.0.jar`
###Code
!pip install selenium #--user
import os
import sys
from time import sleep
from selenium import webdriver
#from selenium.webdriver.chrome.service import Service
#--initial non-viable solution--
#part 1
# join the absolute and relative path regardles of where they may be into one path
#service = Service(os.path.abspath(os.getcwd(), "notebooks/chromedriver.exe")
# part 2
# service.start()
# driver = webdriver.Remote(service.service_url)
# part 3
# driver.get('http://www.google.com/');
# part 4
# time.sleep(5) # Let the user actually see something!
# part 5
# driver.quit()
#--second attempt--
# part 1
# get path
# WSL bash file PATH has been updated to include
# export BROWSER=“/mnt/c/Program Files (x86)/Google/Chrome/Application/chrome.exe”
# note: webdriver.Chrome() expects the path to the chromedriver executable, not to chrome.exe itself
CHROME_DRIVER_PATH = '~/mnt/c/Program Files (x86)/Google/Chrome/Application/chrome.exe'
# --non-viable solutions--
#CHROME_DRIVER_PATH = os.path.dirname(sys.executable)
#path = '../chromedriver.exe'
#path = '../chrome.exe'
#path = 'chromedriver.exe'
#path = os.environ.get('CHROME_DRIVER_PATH')
# part 2
# test path
driver = webdriver.Chrome(CHROME_DRIVER_PATH)
# --non-viable solutions--
#driver = webdriver.Chrome(path)
# part 3
driver.get('https://www.facebook.com/')
# part 4
sleep(5)
# part 5
driver.quit()
###Output
_____no_output_____
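###Markdown
Once the standalone Selenium server from the run command above is started, a browser session can also be opened through webdriver.Remote instead of a local chromedriver. A minimal sketch follows, assuming the server is listening on its default port 4444; the URL and capabilities below are assumptions, not taken from this setup.
###Code
# Hedged sketch: talk to the standalone Selenium server (default port 4444)
# instead of a local chromedriver. Assumes the server is already running.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
remote_url = 'http://127.0.0.1:4444/wd/hub' # default endpoint of the standalone server
# driver = webdriver.Remote(command_executor=remote_url,
#                           desired_capabilities=DesiredCapabilities.CHROME)
# driver.get('https://www.artsonia.com/')
# driver.quit()
###Output
_____no_output_____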
###Markdown
The major difference between PATH and CLASSPATH: PATH is an environment variable used by the operating system to find executables, while CLASSPATH is an environment variable used by the Java compiler and runtime to find classes (e.g. in J2EE we give the path of the jar files). PATH is the system variable that your operating system uses to locate needed executables from the command line or Terminal window. It can be set using the System utility in the Control Panel on Windows, or in your shell's startup file on Linux and Solaris. [Add Directories to PATH Variables](https://helpdeskgeek.com/windows-10/add-windows-path-environment-variable/)
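A minimal sketch of the session-level alternative: append the folder that holds chromedriver.exe to PATH from inside the notebook, so webdriver.Chrome() can find it. The folder name below is a placeholder, not the actual location on this machine.
###Code
# Hedged sketch: extend PATH for this Python session only.
# 'notebooks' is a placeholder for whatever folder holds chromedriver.exe.
import os
driver_dir = os.path.abspath('notebooks')
os.environ['PATH'] = os.environ['PATH'] + os.pathsep + driver_dir
###Output
_____no_output_____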
###Code
grade_lvl = []
# grade-level label divs -- 'textLable' is the class name used in the notes above
grade_lvl = [d for d in webpage.find_all('div', class_='textLable') if 'grade 3' in d.get_text()]
#grade_lvl += [d for d in webpage.find_all('div', class_='textLable') if 'grade 4' in d.get_text()]
#grade_lvl += [d for d in webpage.find_all('div', class_='textLable') if 'grade 5' in d.get_text()]
from bs4 import BeautifulSoup
import requests
import csv
source = requests.get('http://artsonia.com').text
# identifying preferred parser for bs4
webpage = BeautifulSoup(source, 'lxml')
csv_file = open('rjp_scrape.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['image','grade_level']) #'image_src'])
# 'grid-item' is assumed to be the class of each artwork tile in the gallery grid
for image in webpage.find_all('div', class_='grid-item'):
#     art_image_title = image.h2.a.text
#     print(art_image_title)
    grade_tag = image.find('div', class_='textLable')
    grade_level = grade_tag.get_text(' ', strip=True).replace('(', ' ').replace(')', ' ').strip() if grade_tag else None
    img_tag = image.find('img', class_='grid-item-art')
    if img_tag is None:
        continue
    image_source = img_tag['src']
    print(image_source)
    try:
        image_id = image_source.split('/')[4]
        image_id = image_id.split('?')[0]
        image_link = f'http://images.artsonia.com/art/{image_id}.jpg'
    except Exception as e:
        image_link = None
    print(image_link)
    print()
    csv_writer.writerow([image_link, grade_level]) #image_src])
csv_file.close()
###Output
_____no_output_____ |
unidad1/Week1.ipynb | ###Markdown
Week 1
###Code
import random as r
def generar(n):
    # build a list of n random integers between 1 and 100
    a = [0]*n
    return list(map(lambda x : r.randint(1,100), a))
def mostar(array):
    # print the array ("mostrar" = show)
    print(array)
def reversa(array):
    # print the array in reverse order
    print(array[::-1])
def minArreglo(array):
    # smallest element of the array
    return min(array)
def mediaArreglo(array):
    # element at the middle position of the array
    pos = int(len(array)/2)
    return array[pos]
def ocurrencias(array):
    # count how many times each element appears in the array
    return list(map(lambda x : array.count(x), array))
def main():
    n = 10
    a = generar(n)
    print(a)
    reversa(a)
    print(minArreglo(a))
    print(mediaArreglo(a))
    print(ocurrencias(a))
if __name__ == "__main__":
    main()
def main():
a = [0] * 100
for i in range(0,100):
a[i] = i + 1
print(a)
if __name__ == "__main__":
main()
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
|
Fiedler_revised/n110_hydrostatic_vs_nonhydrostatic2d.ipynb | ###Markdown
For two-dimensional flows in a vertical plane. A stratified fluid, a hydrostatic pressure solver, and periodic boundary conditions. v3.62, 12 June 2018, by Brian Fiedler $\newcommand{\V}[1]{\vec{\boldsymbol{#1}}}$$\newcommand{\I}[1]{\widehat{\boldsymbol{\mathrm{#1}}}}$$\newcommand{\B}[1]{\overline{#1}}$ $\newcommand{\pd}[2]{\frac{\partial#1}{\partial#2}}$$\newcommand{\dd}[2]{\frac{\D#1}{\D#2}}$$\newcommand{\pdt}[1]{\frac{\partial#1}{\partial t}}$$\newcommand{\ddt}[1]{\frac{\D#1}{\D t}}$$\newcommand{\D}{\mathrm{d}}$$\newcommand{\Ii}{\I{\imath}}$$\newcommand{\Ij}{\I{\jmath}}$$\newcommand{\Ik}{\I{k}}$$\newcommand{\VU}{\V{U}}$$\newcommand{\del}{\boldsymbol{\nabla}}$$\newcommand{\dt}{\cdot}$$\newcommand{\x}{\times}$$\newcommand{\dv}{\del\cdot}$$\newcommand{\curl}{\del\times}$$\newcommand{\lapl}{\nabla^2}$$\newcommand{\VI}[1]{\left\langle#1\right\rangle}$$\require{color}$ Introducing the hydrostatic approximation, with application to internal gravity waves.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display,clear_output
import time as Time
import math, os
import numpy as np
import scipy.fftpack
import matplotlib
matplotlib.rcParams.update({'font.size': 22})
from IPython.core.display import HTML
import urllib.request
HTML(urllib.request.urlopen('http://metrprof.xyz/metr4323.css').read().decode())
#HTML( open('metr4323.css').read() ) #or use this, if you have downloaded metr4233.css to your computer
###Output
_____no_output_____
###Markdown
The Boussinesq approximationWe consider these dimensional incompressible, inviscid equations:$$\rho\ddt{\VU} = -\del p +\rho f\VU \x \Ik + \rho\V{g}$$$$\ddt{\rho} = 0$$A corollary of the above equation is:$$\del \dt \VU = 0$$The vertical component of the momentum equation is:$$\rho\ddt{w} = -\pd{p}{z} + \rho g$$For our convenience (plotting, understanding, etc.) we choose to construct the equations around the quantities associated with motion. We write pressure as:$$p(x,y,z,t) = \B{p}(z) + p'(x,y,z,t)$$Likewise, we decompose density as:$$\rho(x,y,z,t) = \B{\rho}(z) + \rho'(x,y,z,t)$$We require the overbar quantities by themselves to satisfy a steady, motionless, hydrostatic state:$$\pd{\overline{p}}{z} = -\B{\rho} g$$So in the momentum we see only the fluctuations $p'$ and $\rho'$ as causes of motion: $$(\B{\rho}+\rho')\ddt{\V{U}} = -\del p' + (\B{\rho}+\rho') f\V{U}\x\I{k} + \rho'\V{g}$$We furthermore assume all density variations are slight, all within a few percent of constant value $\rho_0$.So except, when taking derivatives of density, we replace $\rho+\rho'$ by $\rho_0$:$$\ddt{\V{U}} = -\frac{1}{\rho_0} \del p' + f\vec{U}\x\I{k} + \frac{\rho'}{\rho_0}\V{g}$$This is the [Boussinesq approximation]( http://tinyurl.com/BoussinesqApproximation ).For notational convenience, we define a "pressure":$$P \equiv \frac{p'}{\rho_0} $$and buoyancy$$b \equiv - g \frac{\rho'}{\rho_0}$$The momentum equation is now:$$\ddt{\V{U}} = - \del P + f\V{U}\x\I{k} + b \I{k}$$ With $\rho_0$ defined as constant, $\ddt{\rho}=0$ is:$$\ddt{}\left( \overline{\rho} + \rho' \right)=0$$We can define a *total buoyancy*:$$B \equiv \B{B} + b$$with$$\B{B} \equiv -\frac{g}{\rho_0} \B{\rho} \qquad b \equiv -\frac{g}{\rho_0} \rho'$$With $\B{B}$ only a function of $z$:$$\ddt{b} = - w \dd{\B{B}}{z}$$Very commonly we use [buoyancy frequency](http://en.wikipedia.org/wiki/Brunt%E2%80%93V%C3%A4is%C3%A4l%C3%A4_frequency) $$N^2 \equiv \dd{\B{B}}{z}$$You can see the fundamental role $N^2$ in oscillations:$$\ddt{b} = - w N^2$$combined with an approximate form of the vertical momentum equation:$$\ddt{w} = b$$leads to simple harmonic oscillation with frequency $N$. Restrictions to the (x,z) planeWe model flows in the $(x,z)$ plane and assume no gradient in the $y$ direction, and thus no advection in the $y$ direction.$$\pdt{u} + u \pd{u}{x} + w \pd{w}{z} = - \pd{P}{x} + f v$$$$\pdt{v} + u \pd{v}{x} + w\pd{v}{z}= - f u$$$$\pdt{w} + u \pd{w}{x} + w \pd{w}{z} = -\pd{P}{z} +b $$$$\pd{u}{x} + \pd{w}{z} = 0$$$$\pd{b}{t} + u \pd{b}{x} + w \pd{b}{z} = - w N^2$$ NondimensionalizationOne further simplification: we assume $N^2$ is a *constant*. When positive, the layer is thus stably stratified.When negative (which makes $N$ an imaginary number) the layer is unstable.The layer has depth $H$. We use the depth of the layer $H$ as the length scale, $1/|N|$ as the time scale, and $H|N|$ as the velocity scale.We use a * to denote a dimensionless quantity. Therefore:$$(x,z) = (x^*,z^*)H \qquadt = t^* \frac{1}{|N|} \qquad(u,v,w) = (u^*,v^*,w^*) |N|H \qquadP = P^* |N^2|H^2 \qquadb = b^* H|N^2|$$Upon substitution, the dimensionless buoyancy equation is (notice the absence of $N^2$):$$\pdt{b^*} + u^* \pd{b^*}{x^*} + w^* \pd{b^*}{z^*}= \pm w^* $$The dimensionless vertical momentum equation looks the same as the dimensional equation, but it is sprinkled with $^*$ .The two horizontal momentum equations are similar, but $f$ is replaced by $f/|N|$. Dropping the *Dragging along the $^*$ notation would be cumbersome. 
So we drop the $^*$, and the equations will be understood to be dimensionless, even though symbols appear to be same as in the dimensional counterparts.$$\pdt{u} + u \pd{u}{x} + w \pd{u}{z} = - \pd{P}{x} + \frac{f}{|N|} v$$$$\pdt{v} + u \pd{v}{x} + w \pd{v}{z}= - \frac{f}{|N|} u$$$$\pdt{w} + u \pd{w}{x} + w \pd{w}{z}= - \pd{P}{z} +b $$$$\pd{u}{x} + \pd{w}{z} = 0$$$$\pdt{b} + u \pd{b}{x} + w \pd{b}{z} = \pm w $$The rigid boundaries at the top and bottom are at $z=0$ and $z=1$.The dimensionless length of the domain could be denoted $L$. This number $L$ could also be called the aspect ratio of the domain. More about buoyancy The model conserves total buoyancy equation, without a source term:$$\ddt{B}=0$$The above is so simple, that it begs the question: *why work with $b$*?In our *dimensionless model*, we have$$\ddt{b} = \pdt{b} + u \pd{b}{x} + w \pd{b}{z} = \pm w = \pm \ddt{z}$$So our model equation, which has $b$ generated by $w$, is equivalent to$$\ddt{}\left(b \mp z\right)=0$$So the dimensionless total buoyancy of our model is just $B \equiv b\mp z$.We will often plot and analyze $B$. But to answer the above question: * It is computational advantageous to work with the advection and generation of $b$: it tends to be concentrated away from the boundaries where the model equations may not be ideally developed.* Our model should make connections with linear theory, which requires a buoyancy variable that is associated only with motion, and that is zero in the steady state.* Working with $b$ insures the $P$ that we solve for can cause motion, and is not merely holding fluid in place. The linear theoryThis is not a course about pencil and paper dynamics, but some pencil and paper workmay be needed here to demonstrate under what circumstances the hydrostatic approximationis justified. Let's examine an analytical solution for linear (meaning very small amplitude) waves.For linear wave analysis we can work with the *dimensional* equations. In our first exploration, wetake $f=0$:$$\pdt{u} = - \pd{P}{x} $$$$\pdt{w} = - \pd{P}{z} +b $$$$\pd{u}{x} + \pd{w}{z} = 0$$$$\pdt{b} = - w N^2$$The above are 4 first order equations for 4 unknown fields: $u$, $v$, $P$ and $b$. $$\pd{^2}{t^2} \left(\pd{^2}{x^2} +\pd{^2}{z^2} \right) w = - N^2\pd{^2}{x^2} w$$Let's guess one possible solution for $w(x,z,t)$ is$$w = A \cos(kx-\omega t) \sin(m z)$$Note $m=\pi/H$ allows for $w=0$ at $z=0$ and $z=H$.We find our guess is correct only if$$\frac{\omega^2}{k^2} = N^2\frac{1}{k^2 +m^2}$$For small $k$, $$\frac{\omega^2}{k^2} = N^2\frac{1}{m^2}$$and the waves have a phase speed $c\equiv \omega/k$ that is independent of $k$. Hydrostatic approximationThe "small $k$" result above is identical to the result beginning with the assumption that$\frac{\partial w}{\partial t}$ is negligible in the vertical momentum equation:$$0 = - \frac{\partial P}{\partial z} +b \qquad \leftarrow~ \mathrm{the~ hydrostatic~ approximation}$$STUDENTS: combine the remaining three equations with the above equation, to derive $\frac{\omega^2}{k^2}$ for hydrostatic waves.We conclude that, at least for waves, a hydrostatic approximation is valid for waves of long wavelength,or equivalently small wavenumber: $|k| << |m|$.It is important to understand that the hydrostatic approximation does NOT imply that $\frac{\partial w}{\partial t}=0$, or$w=0$, it only means the vertical intertia does not have much impact on the dynamics. 
The hydrostatic pressure solverSuppose the vertical acceleration is relatively small, meaning the twoterms on the right-hand-side nearly balance:$$\pdt{w} = - \pd{P}{z} +b $$So that, to a good approximation the pressure fluctuation $P$ is also hydrostatic, just as the base state $\B{p}$ is:$$\pd{P}{z} = b $$But we need to find $P(x,z,t)$ that not only is in hydrostatic balance with $b$, but also forces the solution to obey $$\pd{u}{x} + \pd{w}{z} = 0$$So we seek$$P(x,z,t) = P_h(x,z,t) + P_x(x,t)$$Where $P_h$ is *any* solution that satisfies $\pd{P_h}{z} = b$ and $P_x$ is the solution that we find for that enforces non-divergence.When using the hydrostatic solver for pressure, the solution for $w$ is not forecasted but instead is diagnosed:$$w(x,z,t) = - \int_0^z \pd{u}{x}(x,z',t) ~\D z'$$You might think non-divergence is thus easily satisfied, but there is a catch.At the top boundary ($z=H$ in the dimensional equations, or $z=1$ in the dimensionless equations), we must have$w=0$. So we seek $P_x$ that allows that to happen. Another way to describe what we are doing is that we seek $P_x$ that keeps the net convergence into any column to be zero.Let $\pd{U}{t}$ be all the accumulated accelerations, except for that from $P_x$:$$\pdt{u} = \pdt{U} - \pd{P_x}{x} $$We need$$\int_0^1 \pdt{u} \D z = 0 $$So we must have:$$\pd{P_x}{x} = \int_0^1 \pdt{U} \D z$$$P_x(x,t)$ is then easily found by "integrating" the above equation with $P_x=0$ arbitrarily at a point. Functions Some familiar functions
###Code
# Expands the margins of a matplotlib axis,
# and so prevents arrows on boundaries from being clipped.
def stop_clipping(ax,marg=.02): # default is 2% increase
l,r,b,t = ax.axis()
dx,dy = r-l, t-b
ax.axis([l-marg*dx, r+marg*dx, b-marg*dy, t+marg*dy])
# dqdt requires a list of the time derivatives for q, stored
# in order from present to the past
def ab_blend(dqdt,order):
if order==1:
return dqdt[0]
elif order==2:
return 1.5*dqdt[0]-.5*dqdt[1]
elif order==3:
return (23*dqdt[0]-16*dqdt[1]+5*dqdt[2])/12.
else:
print("order", order ," not supported ")
def advect_box(q,u,v,dx,dy):
    # this function was previously called "advect"
# third-order upwind advection of q
# all fields are on the U-grid
dqdt = np.zeros(q.shape)
dqmx = np.zeros(q.shape)
dqpx = np.zeros(q.shape)
dqmy = np.zeros(q.shape)
dqpy = np.zeros(q.shape)
dqmx[:,1] = -q[:,0] + q[:,1] # 1st order, plus side at left wall
dqmx[:,2:-1] = (q[:,:-3] - 6*q[:,1:-2] + 3*q[:,2:-1] + 2*q[:,3:])/6. # 3rd order, minus side
dqpx[:,-2] = -q[:,-2] + q[:,-1] # 1st order, plus side at right wall
dqpx[:,1:-2] = (-2*q[:,0:-3] - 3*q[:,1:-2] + 6*q[:,2:-1] -1*q[:,3:])/6. #3rd order, plus side
dqmy[1,:] = -q[0,:] + q[1,:] # 1st order, minus side at bottom wall
dqmy[2:-1,:] = (q[:-3,:] - 6*q[1:-2,:] + 3*q[2:-1,:] + 2*q[3:,:])/6. # 3rd order, minus side
dqpy[-2,:] = -q[-2,:] + q[-1,:] # 1st order, plus side at top wall
dqpy[1:-2,:] = ( - 2*q[0:-3,:] - 3*q[1:-2,:] + 6*q[2:-1,:] - q[3:,:] )/6. # 3rd order, plus side
dqdx = np.where(u>0.,dqmx,dqpx)/dx # upwind, emphasize side from where fluid is coming from
dqdy = np.where(v>0.,dqmy,dqpy)/dy # ditto
dqdt += -u*dqdx
dqdt += -v*dqdy
return dqdt
#############################################################
def divergence(u,v,dx,dy):
ush = u.shape
vsh = v.shape
if ush == vsh: # must be B-grid
div = .5*( u[:-1,1:] + u[1:,1:] - u[:-1,:-1] - u[1:,:-1])/dx + \
.5*( v[1:,:-1] + v[1:,1:] - v[:-1,:-1] - v[:-1,1:])/dy
elif ush[1]-vsh[1] == 1 and vsh[0]-ush[0] == 1: #must be C-grid
div = (u[:,1:]-u[:,:-1])/dx + (v[1:,:]-v[:-1,:])/dy
else:
print("Fail divergence. Array shape implies neither B-grid or C-grid")
return div
#############################################
def laplacian(p,dx,dy, il=None, ir=None, jb=None, jt=None):
# for diffusion terms
# p is not pressure here, but any gridded variable
# Returns Laplacian of p, d^2p/dx^2 + d^2p/dy^2 + d^2p/dz^2 .
# On the boundaries, specify how to grab a point the would otherwise be outside the domain.
# Periodic boundary conditions can thus be accommodated.
# If not specified, the terms in the boundary normal direction are ignored.
rdx2 = 1./(dx*dx)
rdy2 = 1./(dy*dy)
lapl = np.zeros(p.shape)
lapl[:,1:-1] = rdx2*( p[:,:-2] -2*p[:,1:-1] + p[:,2:] )
lapl[1:-1,:] += rdy2*( p[:-2,:] -2*p[1:-1,:] + p[2:,:] )
if il in [-2,-1,0,1]:
lapl[:,0] += rdx2*( p[:,il] -2*p[:,0] + p[:,1] )
if ir in [-2,-1,0,1]:
lapl[:,-1] += rdx2*( p[:,-2] -2*p[:,-1] + p[:,ir] )
if jb in [-2,-1,0,1]:
lapl[0,:] += rdy2*( p[jb,: ] -2*p[0,:] + p[1,:] )
if jt in [-2,-1,0,1]:
lapl[-1,:] += rdy2*( p[-2,: ] -2*p[-1,:] + p[jt,:] )
return lapl
#############################################################
def vortU(u,v,dx,dy):
# dv/dx - du/dy at U-grid
ush = u.shape
vsh = v.shape
if ush == vsh: # must be B-grid
vort = np.zeros(ush)
vort[1:-1,1:-1] = (v[1:-1,2:] - v[1:-1,:-2])/(2*dx) - (u[2:,1:-1] - u[:-2,1:-1])/(2*dy)
elif ush[1]-vsh[1] == 1 and vsh[0]-ush[0] == 1: #must be C-grid
vort = np.zeros( (vsh[0], ush[1]) ) # U-grid is largest
vort[1:-1,1:-1] = (v[1:-1,1:]-v[1:-1,:-1])/dx - (u[1:,1:-1]-u[:-1,1:-1])/dy
else:
print("Fail vortU. Array shape implies neither B-grid or C-grid")
return vort
###Output
_____no_output_____
###Markdown
some familiar functions, with expanded capabilities for periodic BCs
###Code
#############################################################
# interpolates U-grid variable to the p-grid:
def U_to_p(U):
return .25*( U[:-1,1:] + U[1:,1:] + U[:-1,:-1] + U[1:,:-1])
####
def w_to_u(w,bn='rigid'):
iz,ix = w.shape
atu=np.zeros((iz-1,ix+1))
atu[:,1:-1] = .25*( w[:-1,:-1] + w[:-1,1:] + w[1:,:-1] + w[1:,1:] )
if bn == 'rigid':
atu[:,0]=atu[:,1]
atu[:,-1]=atu[:,-2]
elif bn == 'periodic':
atu[:,0 ] = .25*( w[:-1,-1] + w[:-1,0] + w[1:,-1] + w[1:,0] )
atu[:,-1] = atu[:,0]
return atu
####
def u_to_w(u):
iz,ix = u.shape
atw=np.zeros((iz+1,ix-1))
atw[1:-1,:] = .25*( u[:-1,:-1] + u[:-1,1:] + u[1:,:-1] + u[1:,1:] )
atw[0,:]=atw[1,:]
atw[-1,:]=atw[-2,:]
return atw
###
def v_to_u(v,bn='rigid'):
iz,ix = v.shape
atu=np.zeros((iz,ix+1))
atu[:,1:-1] = .5*( v[:,:-1] + v[:,1:] )
if bn == 'rigid':
atu[:,0] = atu[:,1]
atu[:,-1] = atu[:,-2]
elif bn == 'periodic':
atu[:,0] = .5*( v[:,-1] + v[:,0] )
atu[:,-1] = atu[:,0]
return atu
def u_to_p(u):
return (u[:,:-1] + u[:,1:] )*.5
def w_to_p(w):
return (w[:-1,:] + w[1:,:] )*.5
def Btotal(b,strat,zb):
if strat==0:
return b
elif strat>0.:
return b + strat*zb
else:
return b + strat*(zb-1)
def advect(q,u,w,dx,dz,periodic=False):
# 3rd-order upwind advection
# q,u,v are co-located
if not periodic: # not periodic, use standard advection with rigid boundaries:
return advect_box(q,u,w,dx,dz)
sh = q.shape
Q = np.zeros( (sh[0],sh[1]+4) )
Q[:, 2:-2 ] = q
if periodic=='U' or periodic=='u':
Q[ : , :2] = q[ : , -3:-1 ]
Q[ : , -2:] = q[ :, 1:3 ]
elif periodic=='v' or periodic=='w' or periodic=='b':
Q[ : , :2 ] = q[ : , -2: ]
Q[ : , -2: ] = q[ : , :2 ]
dqdt=np.zeros(sh)
dqmx=np.zeros(sh)
dqpx=np.zeros(sh)
dqmz=np.zeros(sh)
dqpz=np.zeros(sh)
# "m" is difference biased to the minus side, "p" to the plus side
# must use first order "#1" if too close to wall
dqmx[:,:] = (2*Q[:,3:-1] + 3*Q[:,2:-2] - 6*Q[:,1:-3] + Q[:,:-4])/6.
dqpx[:,:] = -(2*Q[:,1:-3] + 3*Q[:,2:-2] - 6*Q[:,3:-1] + Q[:,4:] )/6.
dqmz[1,:] = q[1,:]-q[0,:] #1
dqmz[2:-1,:] = (2*q[3:,:]+3*q[2:-1,:]-6*q[1:-2,:]+q[:-3,:])/6. #3
dqpz[-2,:] = q[-1,:]-q[-2,:] #1
dqpz[1:-2,:] = -(2*q[0:-3,:]+3*q[1:-2,:]-6*q[2:-1,:]+q[3:,:])/6. #3
# use derivatives biased to the upwind side:
dqdx=np.where(u>0.,dqmx,dqpx)/dx
dqdz=np.where(w>0.,dqmz,dqpz)/dz
# advective terms:
dqdt+=-u*dqdx
dqdt+=-w*dqdz
return dqdt
def poisson_p_fft_prep(Nxp,Nyp,dx,dy,lapl='discrete',periodic=False):
# returns the coefficients to multiply the vorticity Fourier amplitudes
L = dx*Nxp
W = dy*Nyp
Ka = np.arange(Nxp) # the wavenumbers of the cos functions in the x-direction
Ma = np.arange(Nyp)
if periodic: Ka[1::2] += 1 # because both cos and sin
ka = Ka*np.pi/L
ma = Ma*np.pi/W
lapl_op =np.zeros((Nyp,Nxp))
if lapl == 'discrete':
lapl_op[:] = (2*np.cos(ka*dx)-2)/dx**2
else: # the calculus Laplacian
lapl_op[:] += -ka**2
lapl_opT=lapl_op.T # reverse columns and rows
if lapl == 'discrete':
lapl_opT[:] += (2*np.cos(ma*dy)-2)/dy**2 # add to every row
else: # the calculus Laplacian
lapl_opT[:] += -ma**2
lapl_op=lapl_opT.T # reverse columns and rows
lapl_op[0,0] = 1.
invlapl = 1./lapl_op
return invlapl
def poisson_p_fft(div,invlapl,periodic=False):
sh = div.shape
if periodic:
divt = scipy.fftpack.rfft( div , axis=1) # discrete sin and cos transform of rows of div
else:
divt = scipy.fftpack.dct( div , axis=1, type=2) # discrete cos transform of rows of div
divt = scipy.fftpack.dct( divt , axis=0, type=2) # discrete cos transform of rows of div
pt = divt*invlapl
pt = scipy.fftpack.idct(pt,axis=0,type=2) # inverse transform of pt to p
if periodic:
p = scipy.fftpack.irfft(pt,axis=1) # inverse transform of pt to p
p = p/(2*sh[0]) #The need for this division is convention of fft
else:
p = scipy.fftpack.idct(pt,axis=1,type=2) # inverse transform of pt to p
p = p/(4*sh[0]*sh[1]) #The need for division is convention of fft
return p
###Output
_____no_output_____
###Markdown
C-grid hydrostatic solver
###Code
def hydrostatic_pressure_solver(dudt,b,dx,dz,divha_target=None,periodic=False):
    # for the C-grid
    # dudt is horizontal acceleration, without PGF added yet
    # b is buoyancy
    ph=np.zeros( (b.shape[0]-1, b.shape[1]) )
    for k in range(1,ph.shape[0]): # note ph[0]=0.
        ph[k] = ph[k-1] + dz*b[k]
    dudth = (ph[:,:-1] - ph[:,1:])/dx
    dudt[:,1:-1] += dudth
    if periodic:
        dudt[:,0] += (ph[:,-1]-ph[:,0])/dx
        dudt[:,-1] = dudt[:,0]
    dudtz = dudt.sum(0)/ph.shape[0] #vertical integral of dudt (ph is on the p-grid)
    dudtp = np.zeros(dudt.shape[1])
    px = np.zeros(ph.shape[1])
    if divha_target is not None: dudtz[1:] += - dx*divha_target.cumsum()
    px[1:] = dx*dudtz[1:-1].cumsum()
    dudtp[1:-1] = (px[:-1] - px[1:])/dx
    if periodic:
        dudtp[0] += (px[-1]-px[0])/dx
        dudtp[-1] = dudtp[0]
    dudtm = dudt + dudtp # total modified x acceleration, note clever Python adds dudtp to each level of dudt
    pm = ph + px # total modified pressure, will be returned and used as p.
    pm = pm - pm.mean()
    return dudtm, pm # return dudt with the -dp/dx included, and the final modified pressure field
###############################################
def wfromu(u,dx,dz):
iz,ix = u.shape
w = np.zeros( (iz+1,ix-1) )
dudx = (u[:,1:]-u[:,:-1])/dx
w[1:] = -dz*dudx.cumsum(0)
return w
###Output
_____no_output_____
###Markdown
Choose an experiment:
###Code
nexp = 2
periodic = True
plot_x_to_y = 1.0 # default square plots
if nexp == 1 : # long wave, off-centered init
xmax, zmax = 40 ,1.
strat = 1
centered = .6
if nexp == 2 : # long wave, centered init
xmax, zmax = 40 ,1.
strat = 1
centered = 1.
if nexp == 3 : # short wave, off-centered init
xmax, zmax = 4 ,1.
strat = 1
centered = .6
if nexp == 4 : # short wave, centered init
xmax, zmax = 4 ,1.
strat = 1
centered = 1.
if nexp < 20:
Nx,Nz = 65,33
if nexp==21: # the familiar rising blob
xmax=1.
zmax=1.
Nx,Nz = 129,129
strat = 0.
centered = 1
if nexp==22: # the familiar rising blob, off-centered init
xmax=1.
zmax=1.
Nx,Nz = 129,129
strat = 0.
centered = .42
if nexp == 31: # convection
strat = -1
xmax,zmax = 2.828,1.
Nx,Nz = 65,33
plot_x_to_y = xmax/zmax
# make the U-grid
dx = xmax/(Nx-1.)
dz = zmax/(Nz-1.)
x1U = np.linspace(0,xmax,Nx)
z1U = np.linspace(0,zmax,Nz)
xU,zU = np.meshgrid(x1U,z1U)
#make the other grids
print("dx=",dx,"dz=",dz)
print("range=",xU.max(),zU.max())
xp = U_to_p(xU)
zp = U_to_p(zU)
xw = 0.5*( xU[:,:-1] + xU[:,1:] )
zw = 0.5*( zU[:,:-1] + zU[:,1:] )
xb = xw
zb = zw
xu = 0.5*( xU[:-1,:] + xU[1:,:] )
zu = 0.5*( zU[:-1,:] + zU[1:,:] )
xv = xp
zv = zp
###Output
dx= 0.625 dz= 0.03125
range= 40.0 1.0
###Markdown
Test the Poisson solver
###Code
invlapl = poisson_p_fft_prep(Nx-1,Nz-1,dx,dz,lapl='discrete',periodic=periodic) # lapl='calculus' or lapl='discrete'
np.random.seed(2) # define seed, so that random produces same values each time
p_test = np.random.random(xp.shape) # a random field of p, for testing
p_test -= p_test.mean()
if periodic:
lapl_of_p = laplacian(p_test, dx, dz, il=-1,ir=0,jb=0,jt=-1)
else:
lapl_of_p = laplacian(p_test, dx, dz, il=0,ir=-1,jb=0,jt=-1)
print(p_test.shape)
print(lapl_of_p.shape)
#%%timeit
#p_solved= poisson_p_fft(lapl_of_p, invlapl)
###Output
_____no_output_____
###Markdown
Results from the `%%timeit` study. Here are the results of my *periodic* tests, so that you don't need to do it.
| Nx-1 × Nz-1 | ms per loop |
|---|---|---|
| 63 x 31 | .372 |
| 64 x 32 | .157 |
| 65 x 33 | .237 |
| 127 x 63 | 3.67 |
| 128 x 64 | .406 |
| 129 x 65 | 1.05 |
###Code
# Does the solution of the Poisson equation give us
# the pressure we started with?
p_solved = poisson_p_fft(lapl_of_p, invlapl,periodic=periodic)
p_solved -= p_solved.mean()
diff = p_test - p_solved
diff2 = diff**2
print( "\nr.m.s. error should be very small:", diff2.mean() )
###Output
r.m.s. error should be very small: 1.14821483726e-25
###Markdown
Initialize the fields:
###Code
ui = np.zeros((Nz-1,Nx))
vi = np.zeros((Nz-1,Nx-1))
wi = np.zeros((Nz,Nx-1))
pi = np.zeros((Nz-1,Nx-1))
######
xc = xmax/2.
gwidth = .125*xmax
if nexp < 10: # wave
bi = -0.2*np.sin(np.pi*zb)*np.exp(-((xb-centered*xc)/gwidth)**2)
elif 20<nexp<25:
xcen = .5*centered
zcen=.2
r2 = (xb-xcen)**2 + (zb-zcen)**2
rbmax=.2
rb = np.sqrt(r2)/rbmax
bi = np.where(rb<1.,.5*(1.+np.cos(np.pi*rb)),0.)
elif nexp == 31: # unstable convection
bi = -0.2*np.sin(np.pi*zb)*(np.cos(2*np.pi*xb/xmax)+np.sin(2*np.pi*xb/xmax)) # periodic
else:
print(nexp," not valid")
quick,simple = plt.subplots(figsize=(10,10))
simple.contourf(xb,zb,Btotal(bi,strat,zb),20) # total buoyancy B
simple.contour(xb,zb,bi,10,colors=['k',]) # buoyancy b
simple.set_title("total buoyancy B (colors), buoyancy b (contours)")
stop_clipping(simple)
###Output
_____no_output_____
###Markdown
Set up the animation plot:
###Code
myfig = plt.figure(figsize=(10,10/plot_x_to_y),facecolor='lightgrey')
ax2 = myfig.add_axes([0.1, 0.1, 0.8, .8], frameon=False) # contour axes
ax3 = myfig.add_axes([0.0, 0.1, 0.08, .8]) # for colorbar
ax3.axis('off')
ax2.axis('off')
plt.setp( ax2.get_xticklabels(), visible=False);
plt.close()
cbar_exists = False
def doplot():
global cbar_exists
ax2.clear()
CF=ax2.contourf(xb, zb,Btotal(b,strat,zb), buoylevs, zorder=1)
ax2.contour(xp, zp, p, preslevs, colors='white', zorder=2)
ax2.contour(xp,zp,v,vlevs,colors='grey',zorder=3)
u_at_p = (u[:,:-1]+u[:,1:])*.5
w_at_p = (w[:-1,:]+w[1:,:])*.5
stretch = zmax/xmax*plot_x_to_y # for making arrows aligned with streamlines
Q = ax2.quiver(xp[::vd,::vd],zp[::vd,::vd],u_at_p[::vd,::vd]*stretch,w_at_p[::vd,::vd],
scale=anticipatedmaxw*Nz/vd, units='height', zorder=3)
ax2.axis('off')
stop_clipping(ax2)
if not cbar_exists: #bad things happen if cbar is called more than once
cbar_exists = True
mycbar = myfig.colorbar(CF,ax=ax3,fraction=0.4)
mycbar.ax.yaxis.set_ticks_position('left')
sooner = mycbar.ax.yaxis.get_ticklabels()
for boomer in sooner:
boomer.set_fontsize(12)
#annotations
wmax=w.max()
umax=u.max()
ax2.text(-.1*xmax,1.01,
't={0:5.3f} xmax={1:4.0f} umax={2:6.4f} wmax={3:6.4f}'.format(t,xmax,umax,wmax),fontsize=16)
if hydrostatic:
ax2.text(.99*xmax,1.01,'H',fontsize=16)
if periodic:
ax2.text(.96*xmax,1.01,'P',fontsize=16)
ax2.text(.5*xmax,-.05,expt,fontsize=12)
ax2.text(0.*xmax,-.05,"f={0:5.2f}".format(fcoriolis),fontsize=16)
clear_output(wait=True)
display(myfig)
if outdir != None:
timestamp = 100*round(t,2)
pngname = outdir+'/%06d.png' % round(timestamp)
myfig.savefig(pngname, dpi=72, facecolor='w', edgecolor='w', orientation='portrait')
###Output
_____no_output_____
###Markdown
Set parameters for the numerical solution:
###Code
hydrostatic = True
# periodic = True # normally set at top of notebook
invlapl = poisson_p_fft_prep(Nx-1,Nz-1, dx, dz, periodic=periodic)
thermal_wind = False # with fcoriolis>0, initialize with thermal wind balance in waves
fcoriolis = 0.0 # 0,0.05,.1,.2 are interesting choices
cfl = 0.3 # what fraction of a grid space is a wave allowed to travel in one time unit?
speedmax = 1/3.1415 # estimated maximum wave speed, or fluid speed, that will occur
if 20<nexp<30: speedmax=1
if hydrostatic:
dt = cfl*dx/speedmax
else:
dt = cfl*min(dx,dz)/speedmax # this may be too restrictive
if nexp == 31: #convection
diffusion_coef = epsicrit*.5
dt = .05*dt # allow for diffusion limit
else:
diffusion_coef = 0.
aborder = 3 # Adams-Bashforth order: 1, 2 or 3
expt = '%d,%d,%d,%5.1f,%3.2f,%5.4f,%9.4e' % (Nx,Nz, nexp, xmax,cfl,dt,diffusion_coef)
outdir = "wave" # set = to a directory name, if you want to save png, or None
if outdir != None and not os.path.exists(outdir): os.mkdir(outdir)
print(expt)
vd = 2 # arrow density (vd=1 plot all, vd=2 skip)
if Nz> 64: vd=4
buoylevs = np.linspace(-.05,1.05,12)
preslevs = np.linspace(-.5,.5,101)
vlevs = np.arange(-1.0125,1.0126,.025)
if nexp == 31:
anticipatedmaxw = .5 # for quiver scaling of arrows
else:
anticipatedmaxw = .7*zmax/xmax
###Output
_____no_output_____
###Markdown
Start from t=0:
###Code
# copy initial fields, except pressure, to model:
u = ui.copy()
v = vi.copy()
w = wi.copy()
b = bi.copy()
p = 0.*xp
dwdt = 0.*w
dudt = 0.*u
dwdt[1:-1] += b[1:-1]
# Now compute initial pressure
if hydrostatic:
dudt,p = hydrostatic_pressure_solver(dudt,b,dx,dz,divha_target=None,periodic=periodic)
else:
lapl_of_p = divergence(dudt,dwdt,dx,dz) # the needed lapl_of_p
p = poisson_p_fft(lapl_of_p, invlapl, periodic=periodic)
dudt[:,1:-1] += (p[:,:-1]-p[:,1:])/dx
dwdt[1:-1,:] += (p[:-1,:]-p[1:,:])/dz
if periodic:
dudt[:,0] += (p[:,-1]-p[:,0])/dx
dudt[:,-1] = dudt[:,0]
####
dbdta = [None]*3
dudta = [None]*3
dvdta = [None]*3
dwdta = [None]*3
nstep = 0
monitor = []
monitor_title=''
times = []
kinEn = []
potEn = []
totEn = []
t=0. # initial value for t
quick,simple = plt.subplots(figsize=(10,10))
simple.contourf(xb,zb,Btotal(bi,strat,zb),20) # total buoyancy B
if thermal_wind and fcoriolis>0.:
v = u_to_p(-dudt/fcoriolis)
simple.contour(xv,zv,v,20,colors="grey")
simple.set_title("v and p, in thermal wind balance")
else:
simple.contour(xu,zu,dudt,20,colors="black")
simple.set_title("dudt and p ")
simple.contour(xp,zp,p,20,colors="white")
print("hydrostatic=",hydrostatic," periodic=",periodic, " thermal_wind=",thermal_wind)
###Output
hydrostatic= True periodic= True thermal_wind= False
###Markdown
Run the model:
###Code
print(expt)
print(fcoriolis)
print("t=",t)
if nexp >= 30: # convection
tstop = 15
dplot = .5 # time between plots
elif nexp >=20:
tstop = 4.
dplot =.2
else: # wave
tstop = xmax*3.1415
dplot = tstop/20
tplot = t # time for next plot
print(t,tstop)
if t==0: doplot()
while t < tstop + dt/2.:
nstep += 1
abnow = min(nstep,aborder)
if periodic:
bnd="periodic"
else:
bnd="rigid"
uw = u_to_w(u)
wu = w_to_u(w,bnd)
wv = w_to_p(w)
uv = u_to_p(u)
vu = v_to_u(v,bnd)
if periodic: # for now, diffusion is only implemented for periodic conditions
dbdt = advect(b,uw,w,dx,dz,'b') - strat*w + diffusion_coef*laplacian(b,dx,dz,il=-1,ir=0)
dudt = advect(u,u,wu,dx,dz,'u') + fcoriolis*vu + diffusion_coef*laplacian(u,dx,dz,il=-2,ir=1,jb=0,jt=-1)
dvdt = advect(v,uv,wv,dx,dz,'v') - fcoriolis*uv + diffusion_coef*laplacian(v,dx,dz,il=-1,ir=0,jb=0,jt=-1)
else:
dbdt = advect(b,uw,w,dx,dz) - strat*w
dudt = advect(u,u,wu,dx,dz) + fcoriolis*vu
dvdt = advect(v,uv,wv,dx,dz) - fcoriolis*uv
if hydrostatic:
dudt,p = hydrostatic_pressure_solver(dudt,b,dx,dz,periodic=periodic)
else:
dwdt = advect(w,uw,w,dx,dz,'w')
dwdt[1:-1] += b[1:-1] # is this really necessary to not include boundary points?
if periodic: dwdt += diffusion_coef*laplacian(w,dx,dz,il=-1,ir=0)
lapl_of_p = divergence(dudt,dwdt,dx,dz)
if periodic:
p = poisson_p_fft(lapl_of_p, invlapl, periodic=periodic)
else:
p = poisson_p_fft(lapl_of_p, invlapl)
dudt[:,1:-1] += (p[:,:-1]-p[:,1:])/dx
dwdt[1:-1,:] += (p[:-1,:]-p[1:,:])/dz
if periodic:
dudt[:,0] += (p[:,-1]-p[:,0])/dx
dudt[:,-1] = dudt[:,0]
dbdta = [dbdt.copy()] + dbdta[:-1]
dudta = [dudt.copy()] + dudta[:-1]
dvdta = [dvdt.copy()] + dvdta[:-1]
if not hydrostatic:
dwdta = [dwdt.copy()] + dwdta[:-1]
b += dt*ab_blend(dbdta,abnow)
u += dt*ab_blend(dudta,abnow)
v += dt*ab_blend(dvdta,abnow)
if not hydrostatic:
w += dt*ab_blend(dwdta,abnow)
else:
w = wfromu(u,dx,dz)
wtop = w[-1].copy() # should be very close to zero
t = t + dt
times.append(t)
KE = .5*dx*dz * (u_to_p(u)**2 + w_to_p(w)**2 + v**2).sum()
kinEn.append(KE)
# potEn.append(PE)
# totEn.append(PE+KE)
assert u.max()<2.e10, 'kaboom!'
if t > tplot - dt/2. :
doplot()
tplot = min(tstop,tplot + dplot)
plt.close() # prevents mysterious second plot from popping up
###Output
_____no_output_____
###Markdown
energy history:
###Code
quick,simple = plt.subplots(figsize=(12,6))
simple.plot(times,kinEn,'b')
simple.set_title('total kinetic energy');
###Output
_____no_output_____
###Markdown
Student tasks:
1. Internal waves with the hydrostatic approximation
See reference to "STUDENTS" in the section [Hydrostatic approximation](#Hydrostatic-approximation)
2. In which regime is the hydrostatic approximation valid?
Run the centered wave with four different sets of parameters:
| `xmax` | `hydrostatic`|
|----|----|
|4 | False|
|4 | True |
|40 | False |
|40 | True |
Note that `hydrostatic=True` uses the hydrostatic *approximation* to increase the computational speed of the model, but the equations have approximate physics. Is the hydrostatic approximation accurate in the long-wavelength regime, or short-wavelength regime?
3. Forecast for Mars and Earth
A Martian colony of dolphins lives in a long trough of saline water, with the density increasing linearly by 1% from top to bottom. The trough is 4 km wide and 0.1 km deep. Lo and behold, on one fine day the initial deflection of the isopycnal surfaces looks exactly like `nexp=2`. The dolphin colony wants to know how many minutes elapse before the crests of the waves impact the end boundaries. What is your forecast? Assume the trough is on the equator.
By strange coincidence, the following day an Earth colony of dolphins asks for a forecast. The dolphins live in a long trough of saline water, with the density increasing linearly by 1% from top to bottom. The trough is 40 km wide and 1.0 km deep. Coincidentally, the initial deflection of the isopycnal surfaces looks exactly like `nexp=2`. The dolphin colony wants to know how many minutes elapse before the crests of the waves impact the end boundaries. What is your forecast? Assume the trough is on the equator.
4. Potential energy in a stratified fluid
In a previous notebook we derived the energy equations for $\ddt{b}=0$. Here we derive the energy equation for $$\ddt{b} = -N^2 w$$ So the production of KE is: $$\VI{wb} = \VI{\frac{-1}{N^2} \ddt{b} b}= \frac{-1}{N^2} \VI{ \ddt{}\frac{b^2}{2} } = \frac{-1}{N^2} \pdt{}\VI{ \frac{b^2}{2} }$$ where we have used our theorem that for incompressible flow with impenetrable boundaries: $$\VI{\ddt{q}} = \pdt{}\VI{q} $$ Thus $$\pdt{}\VI{\frac{u^2+w^2}{2} + \frac{1}{N^2}\frac{b^2}{2} } = 0$$ The integral is total energy `E` in the model. The integral of the first term is kinetic energy `KE` and of the second term is potential energy `PE`. In our Python model, it would be best to interpolate $u$ and $w$ to the p-grid to evaluate these integrals. The volume integral of an array `q` in the p-grid would be simply `q.sum()*dx*dz`. Show, with a plot, that in the model, though `KE` and `PE` vary with time, the sum `E` is nearly invariant with time.
**Note: In our dimensionless model $N^2=1$.**
Finish the plot: now finish the monitoring of `potEn` and `totEn`. In `nexp==2`, study the simulations with `fcoriolis` set to `0`,`0.05`,`0.1`,`0.2`. Based on the appearance of the contour plots, and your revised "energy history" plot, what do you conclude about the effect of `fcoriolis` on the behavior of the wave?
Appendix: unstable convection. This will be explained later.
###Code
# from my pencil and paper analysis:
pi = math.pi
epsicrit2 = 4/(3**3*pi**4)
print(math.sqrt(epsicrit2))
rayc = 27/4 * pi**4
print(rayc)
kcrit = pi/math.sqrt(2)
print(kcrit)
lambdacrit = 2*pi/kcrit
print(lambdacrit)
epsicrit = 1/math.sqrt(rayc)
print(epsicrit)
###Output
0.038998541767010154
657.5113644795163
2.221441469079183
2.8284271247461903
0.038998541767010154
|
notes/notebooks/looking_at_the_abeysuriya_model.ipynb | ###Markdown
Looking At the Abeysuriya / Robinson Model Setup
###Code
# Generic stuff
import os,sys,numpy as np,pandas as pd
from scipy.io import loadmat
# Vizualization stuff
# Choose which setting; for widget, need to use %matplotlib notebook
%matplotlib notebook
#%matplotlib inline
from matplotlib import pyplot as plt
# Spectral models stuff
sys.path.append('../../code/')
from robinson import Abeysuriya2015Model
#from numpy import pi
###Output
_____no_output_____
###Markdown
Interactive exploration with the widget
###Code
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Default parameters
###Code
mod = Abeysuriya2015Model()
mod.plot_widget(logx=True,normalize=True)#,xrange=[0.,300])#,xrange=[0,60])#,logx=False)#,xrange=[0,60])
from ipywidgets import *
widgets.FloatSlider
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.)#,A_EMG=0.)#200.,beta=900,t=84.)
mod = Abeysuriya2015Model()#freqs=freqs)
for k,v in newparams.items(): setattr(mod,k,v)
mod.plot_widget(logx=True,normalize=True)#,xrange=[0.,300])#,xrange=[0,60])#,logx=False)#,xrange=[0,60])
#mod.widg_ax.set_xlim([10E-3,10E-0])
###Output
_____no_output_____
###Markdown
Params giving an alpha peak
###Code
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.)#200.,beta=900,t=84.)
mod = Abeysuriya2015Model()
for k,v in newparams.items(): setattr(mod,k,v)
mod.plot_widget(logx=True,normalize=True)#,xrange=[0.,300])#,xrange=[0,60])#,logx=False)#,xrange=[0,60])
%matplotlib inline
fig, ax = plt.subplots()
mod.compute_P(mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,xlim=[1,80],ax=ax)
fit_df.loc[:60].data.plot(ax=ax,logy=True,logx=True)
fit_df.shape
mod.freqs
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=200.,beta=700,t=84.)
mod = Abeysuriya2015Model()
for k,v in newparams.items(): setattr(mod,k,v)
fit_res,fit_df = mod.fit(data,freqs,param_list,5.,normalize=True,fit_log=True)# alse)#True)
print(fit_res)
fig, ax = plt.subplots()
fit_df[['P_EEG_EMG', 'data']].plot(logx=True,logy=True,ax=ax)
fig, ax = plt.subplots()
mod.compute_P(mod.freqs,return_df=True)['P_EEG_EMG'].plot(logx=True,logy=True,xlim=[5,80],ax=ax)
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=200.,beta=700,t=84.)
mod = Abeysuriya2015Model()
for k,v in newparams.items(): setattr(mod,k,v)
fit_res,fit_df = mod.fit(data,freqs,param_list,0.1,normalize=True,fit_log=True)# alse)#True)
print(fit_res)
fig, ax = plt.subplots()
fit_df[['P_EEG_EMG', 'data']].plot(logx=True,logy=True,ax=ax)
fig, ax = plt.subplots()
mod.compute_P(mod.freqs,return_df=True)['P_EEG_EMG'].plot(logx=True,logy=True,xlim=[5,80],ax=ax)
###Output
fun: 290.9702248306202
hess_inv: <9x9 LbfgsInvHessProduct with dtype=float64>
jac: array([-2.93455855e+00, -3.50732989e+00, -1.37578127e+00, -1.34405695e+00,
-3.98138695e+01, -6.52335075e-02, -2.79101187e-03, 5.81278755e+01,
-1.07334358e+06])
message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
nfev: 70
nit: 1
status: 0
success: True
x: array([ 7.00061949e+00, -2.69992298e+01, 7.80003033e+01, -2.19997090e+01,
-9.94965951e-01, 2.00000000e+02, 7.00000001e+02, 8.39918496e+01,
5.00119858e-13])
CPU times: user 1min 4s, sys: 719 ms, total: 1min 5s
Wall time: 1min 8s
###Markdown
Compute + plot power spectrum with specified parameters
###Code
plt.figure()
mod = Abeysuriya2015Model()
mod.G_ee = 11.
mod.G_ei = -9.
mod.G_ese = 8.
mod.compute_P(mod.freqs,return_df=True)['P_EEG_EMG'].plot(logx=True,logy=True,xlim=[5,120])
###Output
_____no_output_____
###Markdown
Observations on model parameter effects
###Code
# (to do...)
###Output
_____no_output_____
###Markdown
Fitting
###Code
mat = loadmat('../../scratch/100307_MEG_3-Restin_powavg.mat',struct_as_record=False,squeeze_me=True)['freq']
hcp_ps = pd.DataFrame(mat.powspctrm,columns=mat.freq,index=mat.label).T
data = hcp_ps.mean(axis=1).values
freqs = hcp_ps.index.values
data_mul = (hcp_ps.mean(axis=1).values*10**24).astype(float)
param_list = ['G_ee','G_ei','G_ese','G_esre','G_srs',
'alpha','beta','t0','A_EMG']
%matplotlib inline
###Output
_____no_output_____
###Markdown
Non-normalized power spectrum
###Code
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.)#200.,beta=900,t=84.)
fit_mod = Abeysuriya2015Model()#freqs=freqs)
for k,v in newparams.items(): setattr(fit_mod,k,v)
fit_mod.freqs = freqs
fit_mod.data = data_mul
fig, ax = plt.subplots()#ncols=2, figsize=(12,3))
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=False)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='b')
fit_res,fit_df = fit_mod.fit(data_mul,freqs,param_list,0.1,normalize=False)# alse)#True)
print(fit_res)
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=False)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='r')
#fit_df.loc[:60].data.plot(ax=ax,logy=True,logx=True)
fit_df['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='g')
fit_df['data'].loc[:60].plot(logx=True,logy=True,ax=ax,c='orange')
fit_df['P_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='k')
fit_df['P_EEG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='y')
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.,A_EMG=0.)#200.,beta=900,t=84.)
fit_mod = Abeysuriya2015Model()#freqs=freqs)
for k,v in newparams.items(): setattr(fit_mod,k,v)
fit_mod.freqs = freqs
fit_mod.data = data_mul
fig, ax = plt.subplots()#ncols=2, figsize=(12,3))
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=False)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='b')
fit_res,fit_df = fit_mod.fit(data_mul,freqs,param_list,0.1,normalize=False)# alse)#True)
print(fit_res)
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=False)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='r')
#fit_df.loc[:60].data.plot(ax=ax,logy=True,logx=True)
fit_df['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='g')
fit_df['data'].loc[:60].plot(logx=True,logy=True,ax=ax,c='orange')
fit_df['P_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='k')
fit_df['P_EEG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='y')
###Output
fun: 5357.946285073562
hess_inv: <9x9 LbfgsInvHessProduct with dtype=float64>
jac: array([ 4.27462510e-03, 3.21051630e-02, -1.81898940e-04, -3.63797881e-04,
3.45607987e-03, -9.09494702e-05, 0.00000000e+00, -3.81987775e-03,
1.02987452e+09])
message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 20
nit: 1
status: 0
success: True
x: array([2.00000000e+01, 0.00000000e+00, 2.00000000e+02, 1.00000000e+02,
2.00000000e+01, 1.00000000e+01, 5.99982537e+02, 5.00000000e+01,
1.00000000e-05])
CPU times: user 20.4 s, sys: 250 ms, total: 20.7 s
Wall time: 21.4 s
###Markdown
Normalized power spectrum
###Code
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.)#200.,beta=900,t=84.)
fit_mod = Abeysuriya2015Model()#freqs=freqs)
for k,v in newparams.items(): setattr(fit_mod,k,v)
fit_mod.freqs = freqs
fit_mod.data = data
fig, ax = plt.subplots()#ncols=2, figsize=(12,3))
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='b')
fit_res,fit_df = fit_mod.fit(data,freqs,param_list,0.1,normalize=True)# alse)#True)
print(fit_res)
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='r')
#fit_df.loc[:60].data.plot(ax=ax,logy=True,logx=True)
fit_df['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='g')
fit_df['data'].loc[:60].plot(logx=True,logy=True,ax=ax,c='orange')
fit_df['P_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='k')
fit_df['P_EEG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='y')
###Output
fun: 320.5065132598005
hess_inv: <9x9 LbfgsInvHessProduct with dtype=float64>
jac: array([-1.99918304e-01, -2.04005346e-01, -7.82222287e-02, -8.07915512e-02,
-1.69234227e+00, -3.01270120e-04, 0.00000000e+00, 3.23268523e-02,
-2.85747440e+03])
message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
nfev: 40
nit: 1
status: 0
success: True
x: array([ 6.79866251e+00, -2.73099860e+01, 7.78975077e+01, -2.21696853e+01,
-1.55391057e+00, 2.50045380e+01, 6.00000006e+02, 8.52816789e+01,
1.00000000e-05])
CPU times: user 39 s, sys: 828 ms, total: 39.8 s
Wall time: 43.1 s
###Markdown
Fit less data
###Code
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.)#200.,beta=900,t=84.)
fit_mod = Abeysuriya2015Model()#freqs=freqs)
for k,v in newparams.items(): setattr(fit_mod,k,v)
fit_mod.freqs = freqs[:60]
fit_mod.data = data[:60]
fig, ax = plt.subplots()#ncols=2, figsize=(12,3))
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='b')
fit_res,fit_df = fit_mod.fit(data[:60],freqs[:60],param_list,0.1,normalize=True)# alse)#True)
print(fit_res)
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='r')
#fit_df.loc[:60].data.plot(ax=ax,logy=True,logx=True)
fit_df['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='g')
fit_df['data'].loc[:60].plot(logx=True,logy=True,ax=ax,c='orange')
fit_df['P_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='k')
fit_df['P_EEG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='y')
%%time
newparams = dict(G_ee=7.,G_ei=-27.,G_ese=78.,G_esre=-22.,G_srs=-1.,
alpha=25.,beta=600.)#200.,beta=900,t=84.)
fit_mod = Abeysuriya2015Model()#freqs=freqs)
for k,v in newparams.items(): setattr(fit_mod,k,v)
fit_mod.freqs = freqs[:60]
fit_mod.data = data[:60]
fig, ax = plt.subplots()#ncols=2, figsize=(12,3))
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='b')
fit_res,fit_df = fit_mod.fit(data[:60],freqs[:60],param_list,1E-6,normalize=True)# alse)#True)
print(fit_res)
fit_mod.compute_P(fit_mod.freqs,return_df=True,normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='r')
#fit_df.loc[:60].data.plot(ax=ax,logy=True,logx=True)
fit_df['P_EEG_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='g')
fit_df['data'].loc[:60].plot(logx=True,logy=True,ax=ax,c='orange')
fit_df['P_EMG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='k')
fit_df['P_EEG'].loc[:60].plot(logx=True,logy=True,ax=ax,c='y')
%%time
# Same restricted fit, with the fourth argument of fit() tightened further to 1E-10.
newparams = dict(G_ee=7., G_ei=-27., G_ese=78., G_esre=-22., G_srs=-1.,
                 alpha=25., beta=600.)  # 200., beta=900, t=84.
fit_mod = Abeysuriya2015Model()
for k, v in newparams.items(): setattr(fit_mod, k, v)
fit_mod.freqs = freqs[:60]
fit_mod.data = data[:60]
fig, ax = plt.subplots()
fit_mod.compute_P(fit_mod.freqs, return_df=True, normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True, logy=True, ax=ax, c='b')
fit_res, fit_df = fit_mod.fit(data[:60], freqs[:60], param_list, 1E-10, normalize=True)
print(fit_res)
fit_mod.compute_P(fit_mod.freqs, return_df=True, normalize=True)['P_EEG_EMG'].loc[:60].plot(logx=True, logy=True, ax=ax, c='r')
fit_df['P_EEG_EMG'].loc[:60].plot(logx=True, logy=True, ax=ax, c='g')
fit_df['data'].loc[:60].plot(logx=True, logy=True, ax=ax, c='orange')
fit_df['P_EMG'].loc[:60].plot(logx=True, logy=True, ax=ax, c='k')
fit_df['P_EEG'].loc[:60].plot(logx=True, logy=True, ax=ax, c='y')
###Output
fun: 47.426315272477645
hess_inv: <9x9 LbfgsInvHessProduct with dtype=float64>
jac: array([-1.93428917e-01, -3.00092040e-01, -1.06754072e-01, -6.11386497e-02,
8.05332689e-01, -1.49891832e-01, -3.05533376e-05, -4.60603644e-01,
-3.96308053e-01])
message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
nfev: 970
nit: 66
status: 0
success: True
x: array([ 7.00641510e+00, -2.70831587e+01, 7.79732723e+01, -2.19813776e+01,
-3.34936925e-02, 2.48932678e+01, 6.00000180e+02, 8.44431830e+01,
9.93747235e-02])
CPU times: user 6min 5s, sys: 3.83 s, total: 6min 9s
Wall time: 6min 27s
|
research/object_detection/colab_tutorials/object_detection_tutorial.ipynb | ###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
model = model.signatures['serving_default']
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
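###Markdown
To make the last point concrete: the utility is not required, and any dictionary with the shape `{id: {'id': ..., 'name': ...}}` can stand in for `category_index`. A minimal sketch with two hypothetical labels (not the COCO map):
###Code
# Sketch: a hand-built category index with the same structure as the utility's output.
manual_category_index = {
    1: {'id': 1, 'name': 'cat'},
    2: {'id': 2, 'name': 'dog'},
}
###Output
_____no_output_____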
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your own images, just add their paths to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
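###Markdown
As the comment above notes, testing on your own pictures only requires appending their paths; a minimal sketch (the file name is a hypothetical placeholder):
###Code
# Sketch: append your own image to the list of test images (hypothetical path).
my_image = pathlib.Path('models/research/object_detection/test_images/my_photo.jpg')
if my_image.exists():
    TEST_IMAGE_PATHS.append(my_image)
###Output
_____no_output_____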
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
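###Markdown
Any other archive name from the model zoo page can be swapped in the same way. A sketch, where the Faster R-CNN archive name is an assumption taken from the zoo rather than something used elsewhere in this notebook:
###Code
# Sketch: load a slower but typically more accurate zoo model.
# The archive name below is assumed to exist under the same base_url as in load_model().
alt_model_name = 'faster_rcnn_inception_v2_coco_2018_01_28'
# alt_detection_model = load_model(alt_model_name)  # uncomment to download and load it
###Output
_____no_output_____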
###Markdown
Check the model's input signature; it expects a batch of 3-channel images of type uint8:
###Code
print(detection_model.inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.output_dtypes
detection_model.output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
output_dict = model(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
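###Markdown
A quick sanity check of the wrapper before running it on real photos; this sketch feeds a random uint8 image (assuming `detection_model` from above) and prints what comes back:
###Code
# Sketch: call the wrapper on a dummy image and inspect the returned dictionary.
dummy_image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
dummy_output = run_inference_for_single_image(detection_model, dummy_image)
print(dummy_output['num_detections'], sorted(dummy_output.keys()))
###Output
_____no_output_____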
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
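###Markdown
As a quick check that the mask head really is exposed (a sketch, assuming `masking_model` from the cells above):
###Code
# Sketch: the segmentation model should list 'detection_masks' among its outputs.
print('detection_masks' in masking_model.output_dtypes)
###Output
_____no_output_____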
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
Check the model's input signature, it expects a batch of 3-color images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/lorynebissuel/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
Check the model's input signature, it expects a batch of 3-color images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
cp object_detection/packages/tf2/setup.py .
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
Check the model's input signature, it expects a batch of 3-color images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
Check the model's input signature, it expects a batch of 3-color images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = pathlib.Path('models/research/object_detection/data/mscoco_label_map.pbtxt')
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
Check the model's input signature, it expects a batch of 3-color images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
###Output
_____no_output_____
###Markdown
Loading label map Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
###Markdown
Detection Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
Check the model's input signature, it expects a batch of 3-color images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
Add a wrapper function to call the model, and cleanup the outputs:
###Code
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
###Output
_____no_output_____
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
###Output
_____no_output_____
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____
###Markdown
Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
###Code
def load_model(model_name):
  base_url = 'http://download.tensorflow.org/models/object_detection/'
  model_file = model_name + '.tar.gz'
  model_dir = tf.keras.utils.get_file(
    fname=model_name,
    origin=base_url + model_file,
    untar=True)
  model_dir = pathlib.Path(model_dir)/"saved_model"
  model = tf.saved_model.load(str(model_dir))
  return model
###Output
_____no_output_____
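###Markdown
 As noted above, other models from the detection model zoo can be loaded the same way; only the name changes. A sketch (the commented-out name is taken from the zoo listing and may change, so verify it on the zoo page before using it):
###Code
# Pass any zoo model name without the .tar.gz suffix, e.g.:
# other_model = load_model('faster_rcnn_resnet50_coco_2018_01_28')  # example name; check the zoo page
###Output
_____no_output_____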
###Markdown
 Loading label map: Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
###Code
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
###Output
_____no_output_____
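###Markdown
 To see the mapping described above, the index can be queried directly; with the COCO label map, id `5` should come back as `airplane`.
###Code
# category_index maps integer class ids to dicts such as {'id': 5, 'name': 'airplane'}.
print(category_index[5])
###Output
_____no_output_____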
###Markdown
For the sake of simplicity we will test on 2 images:
###Code
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
###Output
_____no_output_____
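###Markdown
 As the comment in the cell above says, extra images can be appended to `TEST_IMAGE_PATHS`; a sketch with a hypothetical path:
###Code
import pathlib

# Hypothetical extra image; it is only appended if the file actually exists.
extra_image = pathlib.Path('/content/my_photo.jpg')
if extra_image.exists():
    TEST_IMAGE_PATHS.append(extra_image)
###Output
_____no_output_____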
###Markdown
 Detection. Load an object detection model:
###Code
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
 Check the model's input signature; it expects a batch of 3-channel images of type uint8:
###Code
print(detection_model.signatures['serving_default'].inputs)
###Output
_____no_output_____
###Markdown
And returns several outputs:
###Code
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
###Output
_____no_output_____
###Markdown
 Add a wrapper function to call the model and clean up the outputs:
###Code
def run_inference_for_single_image(model, image):
  image = np.asarray(image)
  # The input needs to be a tensor; convert it using `tf.convert_to_tensor`.
  input_tensor = tf.convert_to_tensor(image)
  # The model expects a batch of images, so add an axis with `tf.newaxis`.
  input_tensor = input_tensor[tf.newaxis, ...]
  # Run inference
  model_fn = model.signatures['serving_default']
  output_dict = model_fn(input_tensor)
  # All outputs are batched tensors.
  # Convert to numpy arrays, and take index [0] to remove the batch dimension.
  # We're only interested in the first num_detections.
  num_detections = int(output_dict.pop('num_detections'))
  output_dict = {key: value[0, :num_detections].numpy()
                 for key, value in output_dict.items()}
  output_dict['num_detections'] = num_detections
  # detection_classes should be ints.
  output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
  # Handle models with masks:
  if 'detection_masks' in output_dict:
    # Reframe the bbox mask to the image size.
    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
      output_dict['detection_masks'], output_dict['detection_boxes'],
      image.shape[0], image.shape[1])
    detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
                                       tf.uint8)
    output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
  return output_dict
###Output
_____no_output_____
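###Markdown
 Before wiring the wrapper into the visualization below, it can help to look at the raw dictionary it returns; a minimal sketch that reuses the model, images, and imports from the cells above and assumes `TEST_IMAGE_PATHS` is non-empty:
###Code
# Run the wrapper on the first test image and peek at the raw outputs.
sample_image = np.array(Image.open(TEST_IMAGE_PATHS[0]))
sample_output = run_inference_for_single_image(detection_model, sample_image)
print('detections:', sample_output['num_detections'])
print('first classes:', sample_output['detection_classes'][:5])
print('first scores:', sample_output['detection_scores'][:5])
###Output
_____no_output_____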
###Markdown
Run it on each test image and show the results:
###Code
def show_inference(model, image_path):
  # The array-based representation of the image will be used later in order to prepare
  # the result image with boxes and labels on it.
  image_np = np.array(Image.open(image_path))
  # Actual detection.
  output_dict = run_inference_for_single_image(model, image_np)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)
  display(Image.fromarray(image_np))

for image_path in TEST_IMAGE_PATHS:
  show_inference(detection_model, image_path)
###Output
_____no_output_____
###Markdown
Instance Segmentation
###Code
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
###Output
_____no_output_____
###Markdown
The instance segmentation model includes a `detection_masks` output:
###Code
masking_model.signatures['serving_default'].output_shapes
for image_path in TEST_IMAGE_PATHS:
  show_inference(masking_model, image_path)
###Output
_____no_output_____
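###Markdown
 Because the masking model also emits `detection_masks`, the wrapper above attaches a reframed, image-sized mask tensor to its output; a small sketch of how that shows up, reusing the first test image:
###Code
# With a mask_rcnn_* model, run_inference_for_single_image adds 'detection_masks_reframed'
# (uint8 masks resized to the input image's height and width).
mask_image = np.array(Image.open(TEST_IMAGE_PATHS[0]))
mask_output = run_inference_for_single_image(masking_model, mask_image)
if 'detection_masks_reframed' in mask_output:
    print('reframed mask shape:', mask_output['detection_masks_reframed'].shape)
###Output
_____no_output_____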
###Markdown
Object Detection API Demo Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install
###Code
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
###Output
_____no_output_____
###Markdown
Make sure you have `pycocotools` installed
###Code
!pip install pycocotools
###Output
_____no_output_____
###Markdown
Get `tensorflow/models` or `cd` to parent directory of the repository.
###Code
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Compile protobufs and install the object_detection package
###Code
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
###Output
_____no_output_____
###Markdown
Import the object detection module.
###Code
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Patches:
###Code
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
###Output
_____no_output_____ |
01_Getting_&_Knowing_Your_Data/Chipotle/Solutions.ipynb | ###Markdown
Ex2 - Getting and Knowing your Data. This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries. Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo. Step 4. See the first 10 entries. Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
# Solution 2
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
order_id 4622 non-null int64
quantity 4622 non-null int64
item_name 4622 non-null object
choice_description 3376 non-null object
item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.6+ KB
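###Markdown
A minimal sketch of Steps 1-5 (one possible approach, assuming pandas is available):
###Code
import pandas as pd
# Steps 2-3: the file is tab-separated, so pass sep='\t'
chipo = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv', sep='\t')
# Step 4: first 10 entries
chipo.head(10)
# Step 5: number of observations (rows)
chipo.shape[0]
###Output
_____no_output_____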
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
chipo.shape[1] # number of columns (assumes chipo was loaded in Step 3 above)
###Output
_____no_output_____
###Markdown
Step 7. Print the name of all the columns. Step 8. How is the dataset indexed? Step 9. Which was the most-ordered item? Step 10. For the most-ordered item, how many items were ordered? Step 11. What was the most ordered item in the choice_description column? Step 12. How many items were ordered in total? Step 13. Turn the item price into a float. Step 13.a. Check the item price type. Step 13.b. Create a lambda function and change the type of item price. Step 13.c. Check the item price type. Step 14. How much was the revenue for the period in the dataset? Step 15. How many orders were made in the period? Step 16. What is the average revenue amount per order?
###Code
# Solution 1
# Solution 2
###Output
_____no_output_____
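###Markdown
A short sketch for a few of Steps 6-16 (one possible approach; assumes `chipo` is loaded as above and that the raw `item_price` strings look like "$2.39 "):
###Code
# Steps 6-7: number of columns and their names
chipo.shape[1]
chipo.columns
# Steps 9-10: most-ordered item and its total quantity
chipo.groupby('item_name')['quantity'].sum().sort_values(ascending=False).head(1)
# Steps 13-14: convert the price to float, then compute total revenue
prices = chipo.item_price.apply(lambda x: float(x[1:-1]))
revenue = (chipo.quantity * prices).sum()
round(revenue, 2)
###Output
_____no_output_____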
###Markdown
Exercise 2 - Getting a Feel for the Data. This time, let's pull the data directly from the internet. Special thanks to https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries. Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo. Step 4. Look at the first 10 rows. Step 5. How many observations are in the dataset?
###Code
# Solution 1
# Solution 2
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
order_id 4622 non-null int64
quantity 4622 non-null int64
item_name 4622 non-null object
choice_description 3376 non-null object
item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.6+ KB
###Markdown
Step 6. How many columns does the data have? Step 7. Print the names of all the columns. Step 8. How is the data indexed? Step 9. Which item is purchased most often? Step 10. For the most-purchased item, how many were bought in total? Step 11. What was the most-purchased item in the choice_description column? Step 12. How many items were purchased in total? Step 13. Convert the item price (item_price) to a float. Step 13.a. Check the data type of the item price. Step 13.b. Write a lambda function and change the item price. Step 13.c. Check the data type of the item price again. Step 14. How much revenue was earned over the period covered by the data? Step 15. How many orders were made during this period? Step 16. What is the average revenue per order?
###Code
# Solution 1
# Solution 2
###Output
_____no_output_____
###Markdown
Ex2 - Getting and Knowing your Data. Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises. This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv).
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'
df = pd.read_csv(url, sep="\t")  # the file is tab-separated; pass sep as a keyword argument
df
###Output
_____no_output_____
###Markdown
Step 3. Assign it to a variable called chipo.
###Code
chipo = df
###Output
_____no_output_____
###Markdown
Step 4. See the first 10 entries
###Code
chipo.head(10)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
chipo.info()
# Solution 2
chipo.shape # will tell us the size of the matrix `chipo`, which is 4622x5
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset? Step 7. Print the name of all the columns. Step 8. How is the dataset indexed? Step 9. Which was the most-ordered item? Step 10. For the most-ordered item, how many items were ordered? Step 11. What was the most ordered item in the choice_description column? Step 12. How many items were ordered in total? Step 13. Turn the item price into a float. Step 13.a. Check the item price type. Step 13.b. Create a lambda function and change the type of item price. Step 13.c. Check the item price type. Step 14. How much was the revenue for the period in the dataset? Step 15. How many orders were made in the period? Step 16. What is the average revenue amount per order?
###Code
# Solution 1
# Solution 2
###Output
_____no_output_____
###Markdown
Ex2 - Getting and Knowing your Data. Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises. This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries.
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo.
###Code
chipo = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
Step 4. See the first 10 entries
###Code
chipo.head(10)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
chipo.shape
# Solution 2
chipo.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 order_id 4622 non-null int64
1 quantity 4622 non-null int64
2 item_name 4622 non-null object
3 choice_description 3376 non-null object
4 item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.7+ KB
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
chipo.shape[1]
###Output
_____no_output_____
###Markdown
Step 7. Print the name of all the columns.
###Code
chipo.columns
###Output
_____no_output_____
###Markdown
Step 8. How is the dataset indexed?
###Code
chipo.index
###Output
_____no_output_____
###Markdown
Step 9. Which was the most-ordered item?
###Code
item = chipo.groupby(['item_name']).agg({'quantity': 'sum'})
item.sort_values(['quantity'],ascending =False)[:1]
###Output
_____no_output_____
###Markdown
Step 10. For the most-ordered item, how many items were ordered?
###Code
c = chipo.groupby('item_name').sum()
#c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
###Output
_____no_output_____
###Markdown
Step 11. What was the most ordered item in the choice_description column?
###Code
c= chipo.groupby('choice_description').sum()
c= c.sort_values(['quantity'],ascending = False)
c.head(1)
###Output
_____no_output_____
###Markdown
Step 12. How many items were ordered in total?
###Code
chipo.quantity.sum()
###Output
_____no_output_____
###Markdown
Step 13. Turn the item price into a float. Step 13.a. Check the item price type.
###Code
chipo.item_price.dtype
###Output
_____no_output_____
###Markdown
Step 13.b. Create a lambda function and change the type of item price
###Code
lam_item = lambda x: float(x[1:-1]) # strip the leading '$' and trailing space, then cast to float
chipo.item_price = chipo.item_price.apply(lam_item) # write the floats back so the revenue steps below work
chipo_item_price = chipo.item_price
###Output
_____no_output_____
###Markdown
Step 13.c. Check the item price type
###Code
chipo_item_price.dtype
###Output
_____no_output_____
###Markdown
Step 14. How much was the revenue for the period in the dataset?
###Code
#revenue = (chipo['quantity']*chipo['item_price']).sum()
#print('Revenu was : $' + str(np.round(revenue,2)))
import numpy as np
revenue = (chipo['quantity'] * chipo['item_price']).sum()
print('Revenue was: $'+ str(round(revenue,2)))
###Output
_____no_output_____
###Markdown
Step 15. How many orders were made in the period?
###Code
orders = chipo.order_id.value_counts().count()
orders
###Output
_____no_output_____
###Markdown
Step 16. What is the average revenue amount per order?
###Code
# Solution 1
chipo['revenue'] = chipo['quantity'] * chipo['item_price']
order_grouped = chipo.groupby(by=['order_id']).sum()
order_grouped['revenue'].mean()
#chipo.item_price.mean()
# Solution 2
chipo.groupby(by=['order_id']).sum().mean()
###Output
_____no_output_____
###Markdown
Step 17. How many different items are sold?
###Code
chipo.item_name.value_counts().count()
###Output
_____no_output_____
###Markdown
Ex2 - Getting and Knowing your Data. Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises. This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries. Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo. Step 4. See the first 10 entries. Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
# Solution 2
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
order_id 4622 non-null int64
quantity 4622 non-null int64
item_name 4622 non-null object
choice_description 3376 non-null object
item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.6+ KB
###Markdown
Step 6. What is the number of columns in the dataset? Step 7. Print the name of all the columns. Step 8. How is the dataset indexed? Step 9. Which was the most-ordered item? Step 10. For the most-ordered item, how many items were ordered? Step 11. What was the most ordered item in the choice_description column? Step 12. How many items were ordered in total? Step 13. Turn the item price into a float. Step 13.a. Check the item price type. Step 13.b. Create a lambda function and change the type of item price. Step 13.c. Check the item price type. Step 14. How much was the revenue for the period in the dataset?
###Code
print("Revenue was: $34500.16")
###Output
Revenue was: $34500.16
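###Markdown
The cell above prints a hard-coded figure; a sketch of computing it from the data instead (assumes `chipo` has been loaded from the TSV and that the raw `item_price` strings look like "$2.39 "):
###Code
# Derive revenue from the data rather than hard-coding the number
prices = chipo.item_price.apply(lambda x: float(x[1:-1]))
revenue = (chipo.quantity * prices).sum()
print('Revenue was: $' + str(round(revenue, 2)))
###Output
_____no_output_____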
###Markdown
Step 15. How many orders were made in the period? Step 16. What is the average revenue amount per order?
###Code
# Solution 1
# Solution 2
###Output
18.81142857142869
###Markdown
Ex2 - Getting and Knowing your Data Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo. Step 4. See the first 10 entries Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
# Solution 2
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
order_id 4622 non-null int64
quantity 4622 non-null int64
item_name 4622 non-null object
choice_description 3376 non-null object
item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.6+ KB
###Markdown
Step 6. What is the number of columns in the dataset? Step 7. Print the name of all the columns. Step 8. How is the dataset indexed? Step 9. Which was the most-ordered item? Step 10. For the most-ordered item, how many items were ordered? Step 11. What was the most ordered item in the choice_description column? Step 12. How many items were ordered in total? Step 13. Turn the item price into a float Step 13.a. Check the item price type Step 13.b. Create a lambda function and change the type of item price
###Code
x = lambda price: float(str(price)[1:])  # strip the leading '$' and convert the price string to a float
###Output
_____no_output_____
###Markdown
Step 13.c. Check the item price type Step 14. How much was the revenue for the period in the dataset? Step 15. How many orders were made in the period? Step 16. What is the average revenue amount per order?
###Code
# Solution 1
# Solution 2
###Output
_____no_output_____
###Markdown
Ex2 - Getting and Knowing your Data This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo.
###Code
chipo = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv", sep="\t")
###Output
_____no_output_____
###Markdown
Step 4. See the first 10 entries
###Code
chipo.head(10)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
chipo.info()
# Solution 2
chipo.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
chipo.shape[1]
###Output
_____no_output_____
###Markdown
Step 7. Print the name of all the columns.
###Code
print(chipo.columns)
###Output
_____no_output_____
###Markdown
Step 8. How is the dataset indexed?
###Code
chipo.index
###Output
_____no_output_____
###Markdown
Step 9. Which was the most-ordered item?
###Code
most_ordered = chipo.groupby("item_name")[["quantity"]] \
    .sum() \
    .sort_values("quantity", ascending=False) \
    .head(1)
###Output
_____no_output_____
###Markdown
Step 10. For the most-ordered item, how many items were ordered?
###Code
most_ordered["quantity"].iloc[0]
###Output
_____no_output_____
###Markdown
Step 11. What was the most ordered item in the choice_description column?
###Code
(chipo.groupby("choice_description")[["quantity"]] \
    .sum() \
    .sort_values("quantity", ascending=False) \
    .head(1))["quantity"]
###Output
_____no_output_____
###Markdown
Step 12. How many items were ordered in total?
###Code
chipo["quantity"].sum()
###Output
_____no_output_____
###Markdown
Step 13. Turn the item price into a float Step 13.a. Check the item price type
###Code
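# steps 13.a-13.c are handled together in this cell: grab the price strings, strip the leading '$' and cast to float, then confirm the new dtype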
price = chipo["item_price"]
chipo["item_price"] = price.apply(lambda p: float(str(p)[1:]))
chipo["item_price"].dtype
###Output
_____no_output_____
###Markdown
Step 13.b. Create a lambda function and change the type of item price Step 13.c. Check the item price type Step 14. How much was the revenue for the period in the dataset?
###Code
import numpy as np
revenue = (chipo.item_price * chipo.quantity).sum()
np.round(revenue, 2)
###Output
_____no_output_____
###Markdown
Step 15. How many orders were made in the period?
###Code
len(chipo.groupby("order_id").groups) # not idomatic
order_count = chipo.order_id.value_counts().count()
###Output
_____no_output_____
###Markdown
Step 16. What is the average revenue amount per order?
###Code
# Solution 1
chipo["item_price"].sum() / chipo["order_id"].count() # wrong
avg_order_rev = revenue / order_count
chipo["revenue"] = chipo.item_price * chipo.quantity
chipo.groupby("order_id")["revenue"].sum().mean()
# Solution 2
chipo["item_price"].mean() # wrong
chipo.groupby("order_id").sum().mean()["revenue"]
###Output
_____no_output_____
###Markdown
Step 17. How many different items are sold?
###Code
len(chipo.groupby(['item_name']).groups) # not idiomatic
chipo.item_name.value_counts().count()
###Output
_____no_output_____
###Markdown
Ex2 - Getting and Knowing your Data This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo. Step 4. See the first 10 entries Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
# Solution 2
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
order_id 4622 non-null int64
quantity 4622 non-null int64
item_name 4622 non-null object
choice_description 3376 non-null object
item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.6+ KB
###Markdown
Step 6. What is the number of columns in the dataset? Step 7. Print the name of all the columns. Step 8. How is the dataset indexed? Step 9. Which was the most-ordered item? Step 10. For the most-ordered item, how many items were ordered? Step 11. What was the most ordered item in the choice_description column? Step 12. How many items were ordered in total? Step 13. Turn the item price into a float Step 13.a. Check the item price type Step 13.b. Create a lambda function and change the type of item price Step 13.c. Check the item price type Step 14. How much was the revenue for the period in the dataset? Step 15. How many orders were made in the period? Step 16. What is the average revenue amount per order?
###Code
# Solution 1
# Solution 2
###Output
_____no_output_____ |
docs/_downloads/20c6c97a66c99b6ee98d9d133ba58e4d/Intro_to_TorchScript_tutorial.ipynb | ###Markdown
Introduction to TorchScript =========================== **Author**: James Reed ([email protected]), Michael Suo ([email protected]), rev2 **Translation**: `강준혁 `_ This tutorial is an introduction to TorchScript, an intermediate representation of a PyTorch model (a subclass of ``nn.Module``) that can then be run in a high-performance environment such as C++. In this tutorial we will cover: 1. The basics of model authoring in PyTorch, including: - Modules - Defining ``forward`` functions - Composing modules into a hierarchy of modules 2. Specific methods for converting PyTorch modules to TorchScript, our high-performance deployment runtime - Tracing an existing module - Using scripting to directly compile a module - How to compose both approaches - Saving and loading TorchScript modules Once you have completed this tutorial, the `follow-on tutorial `_ will walk you through an example of actually calling a TorchScript model from C++.
###Code
import torch # This is all you need to use both PyTorch and TorchScript!
print(torch.__version__)
###Output
_____no_output_____
###Markdown
Basics of PyTorch Model Authoring --------------------------------- Let's start by defining a simple ``Module``. A ``Module`` is the basic unit of composition in PyTorch. It contains: 1. A constructor, which prepares the module for invocation 2. A set of ``Parameters`` and sub-``Module``\s. These are initialized by the constructor and can be used by the module during invocation. 3. A ``forward`` function. This is the code that is run when the module is invoked. Let's start with a small example:
###Code
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
def forward(self, x, h):
new_h = torch.tanh(x + h)
return new_h, new_h
my_cell = MyCell()
x = torch.rand(3, 4)
h = torch.rand(3, 4)
print(my_cell(x, h))
###Output
_____no_output_____
###Markdown
Here is what we did: 1. Created a class that subclasses ``torch.nn.Module``. 2. Defined a constructor. The constructor doesn't do much, just calls the constructor via ``super``. 3. Defined a ``forward`` function, which takes two inputs and returns two outputs. The actual contents of the ``forward`` function are not really important, but it's sort of a fake `RNN cell `__ -- that is, it's a function that is applied in a loop. We instantiated the module, and made ``x`` and ``h``, which are just 3x4 matrices of random values. Then we invoked the cell with ``my_cell(x, h)``. This in turn calls our ``forward`` function. Let's do something a little more interesting:
###Code
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
print(my_cell)
print(my_cell(x, h))
###Output
_____no_output_____
###Markdown
We've redefined our ``MyCell`` module, but this time we've added a ``self.linear`` attribute, and we invoke ``self.linear`` in the forward function. What exactly is happening here? ``torch.nn.Linear`` is a ``Module`` from the PyTorch standard library, just like ``MyCell``. It can be invoked using the call syntax. We are building a hierarchy of ``Module``\s. ``print``-ing a ``Module`` gives a visual representation of the ``Module``'s subclass hierarchy. In this example, we can see our ``Linear`` subclass and its parameters. By composing ``Module``\s in this way, we can concisely and readably author models with reusable components. You may have noticed ``grad_fn`` in the output. This is a detail of PyTorch's method of automatic differentiation, called `autograd `__. In short, this system allows us to compute derivatives through potentially complex programs. The design gives us a huge amount of flexibility in model authoring. Now let's examine that flexibility.
###Code
class MyDecisionGate(torch.nn.Module):
def forward(self, x):
if x.sum() > 0:
return x
else:
return -x
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.dg = MyDecisionGate()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.dg(self.linear(x)) + h)
return new_h, new_h
my_cell = MyCell()
print(my_cell)
print(my_cell(x, h))
###Output
_____no_output_____
###Markdown
We've redefined our MyCell class, but here we've also defined ``MyDecisionGate``. This module utilizes **control flow**. Control flow consists of things like loops and ``if``-statements. Many frameworks take the approach of computing symbolic derivatives given a representation of the full program. However, in PyTorch, we use a gradient tape. We record operations as they occur, and replay them backwards when computing derivatives. In this way, the framework does not have to explicitly define derivatives for every construct in the language. .. figure:: https://github.com/pytorch/pytorch/raw/master/docs/source/_static/img/dynamic_graph.gif :alt: How autograd works How autograd works Basics of TorchScript --------------------- Now let's take our running example and see how we can apply TorchScript. In short, TorchScript provides tools to capture the definition of your model, even in light of the flexible and dynamic nature of PyTorch. Let's begin by examining what we call **tracing**. Tracing ``Modules`` ~~~~~~~~~~~~~~~~~~~
###Code
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.trace(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
###Output
_____no_output_____
###Markdown
We've rewound a bit and taken the second version of our ``MyCell`` class. As before, we've instantiated it, but this time, we've called ``torch.jit.trace``, passed in the ``Module``, and passed in *example inputs* the network might see. What exactly has this done? It has invoked the ``Module``, recorded the operations that occurred when the ``Module`` was run, and created an instance of ``torch.jit.ScriptModule`` (of which ``TracedModule`` is an instance). TorchScript records its definitions in an Intermediate Representation (or IR), commonly referred to in deep learning as a *graph*. We can examine the graph with the ``.graph`` property:
###Code
print(traced_cell.graph)
###Output
_____no_output_____
###Markdown
However, this is a very low-level representation and most of the information contained in the graph is not useful for end users. Instead, we can use the ``.code`` property to give a Python-syntax interpretation of the code:
###Code
print(traced_cell.code)
###Output
_____no_output_____
###Markdown
So **why** did we do all this? There are several reasons: 1. TorchScript code can be invoked in its own interpreter, which is basically a restricted Python interpreter. This interpreter does not acquire the Global Interpreter Lock (GIL), so many requests can be processed on the same instance simultaneously. 2. This format allows us to save the whole model to disk and load it into another environment, such as a server written in a language other than Python. 3. TorchScript gives us a representation in which we can do compiler optimizations on the code to provide more efficient execution. 4. TorchScript allows us to interface with many backend/device runtimes that require a broader view of the program than individual operators. We can see that invoking ``traced_cell`` produces the same results as the Python module:
###Code
print(my_cell(x, h))
print(traced_cell(x, h))
###Output
_____no_output_____
###Markdown
Using Scripting to Convert Modules ---------------------------------- There's a reason we used version two of our module, and not the version with the control-flow-laden submodule. Let's examine that now:
###Code
class MyDecisionGate(torch.nn.Module):
def forward(self, x):
if x.sum() > 0:
return x
else:
return -x
class MyCell(torch.nn.Module):
def __init__(self, dg):
super(MyCell, self).__init__()
self.dg = dg
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.dg(self.linear(x)) + h)
return new_h, new_h
my_cell = MyCell(MyDecisionGate())
traced_cell = torch.jit.trace(my_cell, (x, h))
print(traced_cell.dg.code)
print(traced_cell.code)
###Output
_____no_output_____
###Markdown
Looking at the ``.code`` output, we can see that the ``if-else`` branch is nowhere to be found! Why? Tracing does exactly what we said it would: run the code, record the operations *that happen*, and construct a ScriptModule that does exactly that. Unfortunately, things like control flow are erased. How can we faithfully represent this module in TorchScript? We provide a **script compiler**, which does direct analysis of your Python source code to transform it into TorchScript. Let's convert ``MyDecisionGate`` using the script compiler:
###Code
scripted_gate = torch.jit.script(MyDecisionGate())
my_cell = MyCell(scripted_gate)
scripted_cell = torch.jit.script(my_cell)
print(scripted_gate.code)
print(scripted_cell.code)
###Output
_____no_output_____
###Markdown
Hooray! We've now faithfully captured the behavior of our program in TorchScript. Let's now try running the program:
###Code
# New inputs
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell(x, h)
###Output
_____no_output_____
###Markdown
Mixing Scripting and Tracing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some situations call for using tracing rather than scripting (e.g. a module has many architectural decisions that are made based on constant Python values that we would like to not appear in TorchScript). In this case, scripting can be composed with tracing: ``torch.jit.script`` will inline the code of a traced module, and tracing will inline the code of a scripted module. An example of the first case:
###Code
class MyRNNLoop(torch.nn.Module):
def __init__(self):
super(MyRNNLoop, self).__init__()
self.cell = torch.jit.trace(MyCell(scripted_gate), (x, h))
def forward(self, xs):
h, y = torch.zeros(3, 4), torch.zeros(3, 4)
for i in range(xs.size(0)):
y, h = self.cell(xs[i], h)
return y, h
rnn_loop = torch.jit.script(MyRNNLoop())
print(rnn_loop.code)
###Output
_____no_output_____
###Markdown
And an example of the second case:
###Code
class WrapRNN(torch.nn.Module):
def __init__(self):
super(WrapRNN, self).__init__()
self.loop = torch.jit.script(MyRNNLoop())
def forward(self, xs):
y, h = self.loop(xs)
return torch.relu(y)
traced = torch.jit.trace(WrapRNN(), (torch.rand(10, 3, 4)))
print(traced.code)
###Output
_____no_output_____
###Markdown
In this way, scripting and tracing can be used separately or together, as the situation calls for each of them. Saving and Loading Models ------------------------- We provide APIs to save and load TorchScript modules to/from disk in an archive format. This format includes code, parameters, attributes, and debug information, which means the archive is a freestanding representation of the model that can be loaded in an entirely separate process. Let's save and load our wrapped RNN module:
###Code
traced.save('wrapped_rnn.pt')
loaded = torch.jit.load('wrapped_rnn.pt')
print(loaded)
print(loaded.code)
###Output
_____no_output_____ |
DSND_Final_Analysis.ipynb | ###Markdown
Patient Access to and Use of Electronic Medical Records -- Data Analysis In this notebook, we'll take the cleaned and condensed datasets and run a machine learning analysis to predict access to and use of electronic medical records (EMRs). Table of Contents Machine Learning Analysis Final Data Manipulations Model for `offeredaccesseither` Outcome Preliminary Model Model Tuning Binarizing the Outcome Tuning the Binary Model Manual Feature Reduction Summary of Final Variables Model for `accessonlinerecord` Outcome Preliminary Model Model Tuning Binarizing the Outcome Tuning the Binary Model Manual Feature Reduction Summary of Final Variables Overall Summary Conclusions Limitations Areas for Improvement and Future Study
###Code
# import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sb
import statsmodels.api as sm
from sklearn.feature_selection import RFE, RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score, precision_score
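# helper assumed here: get_scores() is called throughout this notebook but is not defined anywhere in it.
# this minimal sketch reproduces the printed output (confusion matrix, weighted precision, weighted recall,
# and accuracy) and may differ in detail from the original helper.
def get_scores(y_true, y_pred):
    # print the confusion matrix, then compute and print summary metrics
    print('confusion matrix: \n', confusion_matrix(y_true, y_pred))
    prec = precision_score(y_true, y_pred, average = 'weighted')
    rec = recall_score(y_true, y_pred, average = 'weighted')
    acc = accuracy_score(y_true, y_pred)
    print('precision: {:.5f}; recall: {:.5f}, accuracy: {:.5f}'.format(prec, rec, acc))
    return prec, rec, acc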
###Output
_____no_output_____
###Markdown
Machine Learning Analysis Final Data Manipulations
###Code
# read in the data; these variables were pre-screened in the exploratory notebook
df_train = pd.read_csv('data/HINTS_train.csv.gz', compression='gzip')
df_test = pd.read_csv('data/HINTS_test.csv.gz', compression='gzip')
# check that data loaded
display(df_train.head())
display(df_test.head())
# remove the index columns
df_train.drop(columns = ['Unnamed: 0'], inplace = True)
df_test.drop(columns = ['Unnamed: 0'], inplace = True)
df_train.columns
###Output
_____no_output_____
###Markdown
We will use logistic regression as our model, since it gives interpretable coefficients that will allow us to identify groups who are more and less likely to be offered use of an EMR and who are more and less likely to use the EMR. Since the outcome variables have > 2 categories, we'll use a multinomial model. To set up the model we'll need to create dummy variables for any categorical variable with > 2 categories. It will also be helpful to recode the predictor and outcome variables as N=0, Y=1 (where applicable) to simplify model interpretation and to make "No" the reference category. The predictor variables currently are coded Y=1, N=2. To refresh our memory, let's look at the values for each column again.
###Code
for col in df_train.columns:
print(col, '\n', df_train[col].value_counts())
###Output
stratum
HM 3185
LM 2305
Name: stratum, dtype: int64
highspanli
2 5031
1 459
Name: highspanli, dtype: int64
nchsurcode2013
1 1952
2 1291
3 1134
4 456
5 389
6 268
Name: nchsurcode2013, dtype: int64
censdiv
5 1309
9 883
7 770
3 708
2 579
8 446
4 297
6 281
1 217
Name: censdiv, dtype: int64
useinternet
1 4615
2 875
Name: useinternet, dtype: int64
internet_broadbnd
2 2320
1 2295
-1 875
Name: internet_broadbnd, dtype: int64
electronic_selfhealthinfo
1 3999
2 1491
Name: electronic_selfhealthinfo, dtype: int64
havedevice_cat
5 2995
2 1527
3 497
1 258
4 213
Name: havedevice_cat, dtype: int64
tablethealthwellnessapps
1 2561
2 2044
-1 710
3 175
Name: tablethealthwellnessapps, dtype: int64
tablet_achievegoal
2 2614
1 2166
-1 710
Name: tablet_achievegoal, dtype: int64
tablet_makedecision
2 2856
1 1924
-1 710
Name: tablet_makedecision, dtype: int64
tablet_discussionshcp
2 2865
1 1915
-1 710
Name: tablet_discussionshcp, dtype: int64
intrsn_visitedsocnet
1 3799
2 1691
Name: intrsn_visitedsocnet, dtype: int64
intrsn_sharedsocnet
2 4725
1 765
Name: intrsn_sharedsocnet, dtype: int64
intrsn_supportgroup
2 5077
1 413
Name: intrsn_supportgroup, dtype: int64
intrsn_youtube
2 3558
1 1932
Name: intrsn_youtube, dtype: int64
regularprovider
1 3963
2 1527
Name: regularprovider, dtype: int64
freqgoprovider
2 1088
5 900
3 818
1 804
4 750
0 618
6 512
Name: freqgoprovider, dtype: int64
qualitycare
2 1992
1 1704
3 905
-1 618
4 231
5 40
Name: qualitycare, dtype: int64
healthinsurance
1 5233
2 257
Name: healthinsurance, dtype: int64
accessonlinerecord
0 3196
1 965
2 763
3 296
4 270
Name: accessonlinerecord, dtype: int64
generalhealth
2 2077
3 1936
1 687
4 668
5 122
Name: generalhealth, dtype: int64
ownabilitytakecarehealth
2 2652
1 1338
3 1287
4 165
5 48
Name: ownabilitytakecarehealth, dtype: int64
medconditions_diabetes
2 4377
1 1113
Name: medconditions_diabetes, dtype: int64
medconditions_highbp
2 3086
1 2404
Name: medconditions_highbp, dtype: int64
medconditions_heartcondition
2 4986
1 504
Name: medconditions_heartcondition, dtype: int64
medconditions_depression
2 4220
1 1270
Name: medconditions_depression, dtype: int64
everhadcancer
2 4619
1 871
Name: everhadcancer, dtype: int64
maritalstatus
1 2835
6 928
3 849
4 524
2 245
5 109
Name: maritalstatus, dtype: int64
selfgender
2 3014
1 2233
3 243
Name: selfgender, dtype: int64
agegrpb
3 1787
4 1163
2 1151
1 748
5 641
Name: agegrpb, dtype: int64
educa
4 2700
3 1641
2 853
1 296
Name: educa, dtype: int64
raceethn5
1 3479
3 742
2 624
-9 241
4 219
5 185
Name: raceethn5, dtype: int64
hhinc
5 2121
4 983
1 767
2 631
3 619
-9 369
Name: hhinc, dtype: int64
smokestat
3 3430
2 1402
1 658
Name: smokestat, dtype: int64
survey_cycle
3 2032
2 1640
5 1112
4 706
Name: survey_cycle, dtype: int64
offeredaccesseither
1 3418
2 1512
3 560
Name: offeredaccesseither, dtype: int64
phq4_cat
1 4037
2 896
3 321
4 236
Name: phq4_cat, dtype: int64
bmi_cat
3 1405
1 1385
4 1357
2 1343
Name: bmi_cat, dtype: int64
wkminmodex_cat
1 1495
5 1236
3 1094
2 918
4 747
Name: wkminmodex_cat, dtype: int64
avgdrinks_cat
1 2841
2 1429
3 649
4 308
5 263
Name: avgdrinks_cat, dtype: int64
whruseinet_pubvother
2 3781
-1 875
1 834
Name: whruseinet_pubvother, dtype: int64
healthins_pubpriv
1 2961
2 1761
3 768
Name: healthins_pubpriv, dtype: int64
###Markdown
The following are of type Y/N. They'll be recoded from Y=1, N=2, to Y=1, N=0.
###Code
# make list of Y/N vars to recode
vars_yn = ['highspanli', 'useinternet', 'electronic_selfhealthinfo', 'intrsn_visitedsocnet', 'intrsn_sharedsocnet',\
'intrsn_supportgroup', 'intrsn_youtube', 'regularprovider', 'healthinsurance', 'medconditions_diabetes', \
'medconditions_highbp', 'medconditions_heartcondition', 'medconditions_depression', 'everhadcancer']
# make df to hold recoded vars
df_train_recode = df_train.copy()
df_test_recode = df_test.copy()
# recode the Y/N vars; all we have to do is change "N" from 2 to 0
for var in vars_yn:
df_train_recode.loc[df_train[var] == 2, var] = 0
# check that it worked
for col in df_train_recode.columns:
print(col, '\n', df_train_recode[col].value_counts())
# repeat for test set
for var in vars_yn:
df_test_recode.loc[df_test[var] == 2, var] = 0
###Output
_____no_output_____
###Markdown
All of the rest of the variables except `stratum` (which is coded as a categorical) have >= 3 categories, so we'll need to make dummy variables for them.
###Code
# list of variables that don't need to be dummied
vars_nodum = vars_yn
vars_nodum.append('stratum')
vars_nodum.append('accessonlinerecord')
vars_nodum.append('offeredaccesseither')
# list of variables that need dummies made
vars_dum = list(set(df_train.columns) - set(vars_nodum))
vars_dum
###Output
_____no_output_____
###Markdown
We'll generate dummy variables, dropping the first category (it will be the reference group) of each.
###Code
# create dummy variables -- training set
df_train_dums = pd.get_dummies(df_train_recode, prefix = vars_dum, columns = vars_dum, drop_first = True)
# check to see if it worked
df_train_dums.head()
df_train_dums.info(verbose=True)
# OK, it worked. Repeat for test set
df_test_dums = pd.get_dummies(df_test_recode, prefix = vars_dum, columns = vars_dum, drop_first = True)
###Output
_____no_output_____
###Markdown
`stratum` is supposed to be categorical but it seems to have been converted to an object again. For consistency, and to avoid having to worry about what type it is, let's just code it as 0 (LM) or 1 (HM).
###Code
# make an integer code for stratum, since doesn't seem to stay as categorical
df_train_dums['stratum_cd'] = 0
df_train_dums.loc[df_train_recode['stratum'] == 'HM', 'stratum_cd'] = 1
# check it
display(df_train_dums['stratum'].value_counts())
display(df_train_dums['stratum_cd'].value_counts())
# OK, can drop stratum
df_train_dums.drop(columns = ['stratum'], inplace = True)
# repeat for test set
df_test_dums['stratum_cd'] = 0
df_test_dums.loc[df_test_recode['stratum'] == 'HM', 'stratum_cd'] = 1
df_test_dums.drop(columns = ['stratum'], inplace = True)
# how many predictor variables? (-2 for the outcome variables)
df_train_dums.shape[1] - 2
###Output
_____no_output_____
###Markdown
Expanding the categoricals has more than doubled the number of predictors. We will want to winnow this down. Let's also look at the coding of the outcome variables.
###Code
df_train['offeredaccesseither'].value_counts()
###Output
_____no_output_____
###Markdown
`offeredaccesseither` has the same Y=1/N=2 coding with "don't know" as 3. We previously decided to keep "don't know" as a separate category. This ordering makes sense, so we'll leave it alone.
###Code
df_train['accessonlinerecord'].value_counts()
###Output
_____no_output_____
###Markdown
This variable is ordered by increasing frequency of use with none/never = 0. We left it as multicategorical to see if we can predict who uses the EMR less and more. This coding will be fine. Model for `offeredaccesseither` Outcome This outcome variable denotes whether a respondent has been offered EMR access by either their healthcare provider (HCP) or their insurer. The categories, as above, are "yes", "no", and "don't know". Our preliminary decision was to maintain these three categories, since "don't know" may delineate a different group, perhaps less engaged. However, this category has few entries relative to the other two, and may be hard to accurately predict, so we may need to reassess this decision. Make the X and Y matrices for this outcome variable. In the univariate analysis, `medconditions_diabetes`, `medconditions_highBP`, and `medconditions_heartcondition` were non-significant predictors of this outcome, so they need to be removed from the X matrices.
###Code
# make the X matrices for offeredaccesseither
X_train_ofracceith = df_train_dums.drop(columns = ['offeredaccesseither', 'accessonlinerecord', 'medconditions_diabetes',\
'medconditions_highbp','medconditions_heartcondition'])
X_test_ofracceith = df_test_dums.drop(columns = ['offeredaccesseither', 'accessonlinerecord', 'medconditions_diabetes',\
'medconditions_highbp','medconditions_heartcondition'])
# check the columns
X_train_ofracceith.info(verbose=True)
# make the Y matrices
Y_train_ofracceith = df_train_dums['offeredaccesseither']
Y_test_ofracceith = df_test_dums['offeredaccesseither']
###Output
_____no_output_____
###Markdown
Preliminary Model Since we have so many predictor variables (features), we are in danger of creating an overfit model. We'll use scikit-learn's Recursive Feature Elimination with Cross-Validation ([RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html)) to search for an optimal set of variables. Since the "don't know" outcome is somewhat rare (~10% of outcomes in the training set), we'll use balanced class weights to deal with the imbalance. To minimize both false negatives & false positives, we'll use the F1 score as the scoring metric for RFECV.
###Code
# fit the model using recursive feature elimination and logistic regression
logreg1 = LogisticRegression(class_weight = 'balanced', multi_class = 'multinomial', C = 1, n_jobs = -1)
model1 = RFECV(logreg1, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
model1.fit(X_train_ofracceith, Y_train_ofracceith)
Y_pred1 = model1.predict(X_train_ofracceith)
# comparing training and test scores to assess for possible overfitting
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_ofracceith, Y_pred1)
Y_pred2 = model1.predict(X_test_ofracceith)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_ofracceith, Y_pred2)
###Output
Fitting estimator with 106 features.
Fitting estimator with 105 features.
Fitting estimator with 104 features.
Fitting estimator with 103 features.
Fitting estimator with 102 features.
Fitting estimator with 101 features.
Fitting estimator with 100 features.
Fitting estimator with 99 features.
Fitting estimator with 98 features.
Fitting estimator with 97 features.
Fitting estimator with 96 features.
Fitting estimator with 95 features.
Fitting estimator with 94 features.
Fitting estimator with 93 features.
Fitting estimator with 92 features.
Fitting estimator with 91 features.
Fitting estimator with 90 features.
Fitting estimator with 89 features.
Fitting estimator with 88 features.
Fitting estimator with 87 features.
Fitting estimator with 86 features.
Fitting estimator with 85 features.
Fitting estimator with 84 features.
Fitting estimator with 83 features.
Fitting estimator with 82 features.
Fitting estimator with 81 features.
Fitting estimator with 80 features.
Fitting estimator with 79 features.
Fitting estimator with 78 features.
Fitting estimator with 77 features.
Fitting estimator with 76 features.
Fitting estimator with 75 features.
Fitting estimator with 74 features.
Fitting estimator with 73 features.
Fitting estimator with 72 features.
Fitting estimator with 71 features.
Fitting estimator with 70 features.
Fitting estimator with 69 features.
Fitting estimator with 68 features.
Fitting estimator with 67 features.
Fitting estimator with 66 features.
Fitting estimator with 65 features.
training set scores:
confusion matrix:
[[2316 554 548]
[ 396 670 446]
[ 132 133 295]]
precision: 0.66633; recall: 0.59763, accuracy: 0.59763
test set scores:
confusion matrix:
[[919 249 251]
[198 261 203]
[ 69 65 113]]
precision: 0.62254; recall: 0.55541, accuracy: 0.55541
###Markdown
RFECV reduces the model from 106 to 64 features. However, precision and accuracy are relatively poor. There is not a big difference between the training and test set results that would obviously point to overfitting. Model Tuning Next, we'll attempt to tune this model to see if the fit improves. We can check the effect of using balanced class weights, explore different values for the logistic regression regularization parameter `C` and try different step sizes to remove more than one parameter at each iteration.
###Code
# tune the multinomial LR model
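# note: the 'estimator__' prefix routes these settings through RFECV to the wrapped LogisticRegression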
parameters = [
{
'estimator__class_weight': ['balanced', None],
'estimator__C': [0.01, 0.1, 1, 10, 100],
'step': [1, 3, 5],
}]
logreg2 = LogisticRegression(multi_class = 'multinomial', n_jobs = -1)
rfecv = RFECV(logreg2, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
clf = GridSearchCV(rfecv, parameters, n_jobs = -1, verbose = 2)
clf.fit(X_train_ofracceith, Y_train_ofracceith)
Y_pred3 = clf.predict(X_train_ofracceith)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_ofracceith, Y_pred3)
Y_pred4 = clf.predict(X_test_ofracceith)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_ofracceith, Y_pred4)
print("\nBest Parameters:", clf.best_params_)
###Output
Fitting 5 folds for each of 30 candidates, totalling 150 fits
###Markdown
The tuned model selected more features (81 vs 64) using a lower `C` value (0.01 vs 1). Again, results for the training and test sets are similar, indicating overfitting is likely not an issue. However, precision and recall barely changed and are still poor. Binarizing the Outcome It is possible that in trying to increase correct classification of the rare "don't know" category, the model is doing worse at classifying the "yes" and "no" categories. A workaround for this would be to merge "don't know" with "no", since respondents who answered either way don't have access to the EMR, and both would similarly be candidates for better education/outreach. Let's take this approach. We'll recode "no" and "don't know" as 0, and leave "yes" as 1, keeping with logistic regression convention.
###Code
# try binary outcome; combine "no" and "don't know"
Y_train_ofracceith_bin = Y_train_ofracceith.copy()
Y_train_ofracceith_bin.loc[(Y_train_ofracceith == 2) | (Y_train_ofracceith == 3) ] = 0
# check this came out right
Y_train_ofracceith_bin.value_counts()
# repeat for test set
Y_test_ofracceith_bin = Y_test_ofracceith.copy()
Y_test_ofracceith_bin.loc[(Y_test_ofracceith == 2) | (Y_test_ofracceith == 3) ] = 0
# fit a LR model to the binary outcome variable
logreg3 = LogisticRegression(class_weight = 'balanced', C = 1, n_jobs = -1)
model2 = RFECV(logreg3, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
model2.fit(X_train_ofracceith, Y_train_ofracceith_bin)
Y_pred5 = model2.predict(X_train_ofracceith)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_ofracceith_bin, Y_pred5)
Y_pred6 = model2.predict(X_test_ofracceith)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_ofracceith_bin, Y_pred6)
###Output
Fitting estimator with 106 features.
Fitting estimator with 105 features.
Fitting estimator with 104 features.
Fitting estimator with 103 features.
Fitting estimator with 102 features.
Fitting estimator with 101 features.
Fitting estimator with 100 features.
Fitting estimator with 99 features.
Fitting estimator with 98 features.
Fitting estimator with 97 features.
Fitting estimator with 96 features.
Fitting estimator with 95 features.
Fitting estimator with 94 features.
Fitting estimator with 93 features.
Fitting estimator with 92 features.
Fitting estimator with 91 features.
Fitting estimator with 90 features.
Fitting estimator with 89 features.
Fitting estimator with 88 features.
Fitting estimator with 87 features.
Fitting estimator with 86 features.
Fitting estimator with 85 features.
Fitting estimator with 84 features.
Fitting estimator with 83 features.
Fitting estimator with 82 features.
Fitting estimator with 81 features.
Fitting estimator with 80 features.
Fitting estimator with 79 features.
Fitting estimator with 78 features.
Fitting estimator with 77 features.
Fitting estimator with 76 features.
Fitting estimator with 75 features.
Fitting estimator with 74 features.
Fitting estimator with 73 features.
Fitting estimator with 72 features.
Fitting estimator with 71 features.
Fitting estimator with 70 features.
Fitting estimator with 69 features.
Fitting estimator with 68 features.
Fitting estimator with 67 features.
Fitting estimator with 66 features.
Fitting estimator with 65 features.
Fitting estimator with 64 features.
Fitting estimator with 63 features.
Fitting estimator with 62 features.
Fitting estimator with 61 features.
Fitting estimator with 60 features.
Fitting estimator with 59 features.
Fitting estimator with 58 features.
Fitting estimator with 57 features.
Fitting estimator with 56 features.
Fitting estimator with 55 features.
Fitting estimator with 54 features.
Fitting estimator with 53 features.
training set scores:
confusion matrix:
[[1426 646]
[ 895 2523]]
precision: 0.72755; recall: 0.71931, accuracy: 0.71931
test set scores:
confusion matrix:
[[ 599 310]
[ 397 1022]]
precision: 0.70250; recall: 0.69631, accuracy: 0.69631
###Markdown
Tuning the Binary Model The preliminary binary model results in 52 features and ~10% improved precision and ~25% improved recall. Now we'll try tuning this model. We'll use the same parameter grid as before.
###Code
# tune the binary LR model.
# use same parameter grid as above
logreg4 = LogisticRegression(n_jobs = -1)
rfecv2 = RFECV(logreg4, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
clf2 = GridSearchCV(rfecv2, parameters, n_jobs = -1, verbose = 2)
clf2.fit(X_train_ofracceith, Y_train_ofracceith_bin)
Y_pred7 = clf2.predict(X_train_ofracceith)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_ofracceith_bin, Y_pred7)
Y_pred8 = clf2.predict(X_test_ofracceith)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_ofracceith_bin, Y_pred8)
print("\nBest Parameters:", clf2.best_params_)
###Output
Fitting 5 folds for each of 30 candidates, totalling 150 fits
###Markdown
This model yields a very small improvement in precision and recall, but has 41 more features. Since the first model is more parsimonious, we'll keep it.
###Code
# make a dataframe containing only the columns identified by the initial binary model (52 features)
X_train_ofracceith_rfe = X_train_ofracceith.loc[:, model2.support_]
# see which columns are left
X_train_ofracceith_rfe.columns
# repeat for test set
X_test_ofracceith_rfe = X_test_ofracceith.loc[:, model2.support_]
###Output
_____no_output_____
###Markdown
Manual Feature Reduction This model is reasonable, but despite using RFECV to reduce the features, it is still large. Let's see what effect manually limiting the number of parameters with RFE has on the accuracy and precision.
###Code
# make a datframe to hold the results.
col_names = ['Features', 'Precision', 'Recall']
df_ofracceith_rfe = pd.DataFrame(columns = col_names)
# add the baseline model
df_ofracceith_rfe.loc[0, 'Features'] = 52
df_ofracceith_rfe.loc[0, 'Precision'] = 0.70250
df_ofracceith_rfe.loc[0, 'Recall'] = 0.69631
df_ofracceith_rfe.head()
# do RFE with manual thresholds for number of parameters to keep
# C = 1 and balanced class weights worked well before, so keep them
RFE_cuts = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
for i in range(len(RFE_cuts)):
# use RFE to fit the model
logregrfe = LogisticRegression(class_weight = 'balanced', C = 1, n_jobs = -1)
rfe = RFE(logregrfe, n_features_to_select = RFE_cuts[i], verbose = 2)
# this X matrix contains only the 52 parameters identified by RFECV above
rfe.fit(X_train_ofracceith_rfe, Y_train_ofracceith_bin)
Y_predrfe1 = rfe.predict(X_train_ofracceith_rfe)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_ofracceith_bin, Y_predrfe1)
Y_predrfe2 = rfe.predict(X_test_ofracceith_rfe)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_ofracceith_bin, Y_predrfe2)
# save the parameters to plot
df_ofracceith_rfe.loc[i+1, 'Features'] = RFE_cuts[i]
df_ofracceith_rfe.loc[i+1, 'Precision'] = prec
df_ofracceith_rfe.loc[i+1, 'Recall'] = rec
###Output
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
Fitting estimator with 20 features.
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
training set scores:
confusion matrix:
[[1207 865]
[ 839 2579]]
precision: 0.68886; recall: 0.68962, accuracy: 0.68962
test set scores:
confusion matrix:
[[ 520 389]
[ 372 1047]]
precision: 0.67204; recall: 0.67311, accuracy: 0.67311
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
Fitting estimator with 20 features.
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
training set scores:
confusion matrix:
[[1321 751]
[ 866 2552]]
precision: 0.70900; recall: 0.70546, accuracy: 0.70546
test set scores:
confusion matrix:
[[ 554 355]
[ 369 1050]]
precision: 0.68989; recall: 0.68900, accuracy: 0.68900
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
Fitting estimator with 20 features.
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
training set scores:
confusion matrix:
[[1388 684]
[ 918 2500]]
precision: 0.71601; recall: 0.70820, accuracy: 0.70820
test set scores:
confusion matrix:
[[ 595 314]
[ 409 1010]]
precision: 0.69638; recall: 0.68943, accuracy: 0.68943
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
training set scores:
confusion matrix:
[[1409 663]
[ 940 2478]]
precision: 0.71756; recall: 0.70801, accuracy: 0.70801
test set scores:
confusion matrix:
[[598 311]
[422 997]]
precision: 0.69353; recall: 0.68514, accuracy: 0.68514
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
training set scores:
confusion matrix:
[[1388 684]
[ 912 2506]]
precision: 0.71685; recall: 0.70929, accuracy: 0.70929
test set scores:
confusion matrix:
[[ 598 311]
[ 391 1028]]
precision: 0.70406; recall: 0.69845, accuracy: 0.69845
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
###Markdown
Plot the data to see where the optimal precision & recall cut-offs are
###Code
plt.scatter(data=df_ofracceith_rfe, x = 'Features', y = 'Precision', color = 'red', label = 'Precision')
plt.scatter(data=df_ofracceith_rfe, x = 'Features', y = 'Recall', color = 'green', label = 'Recall')
plt.legend()
plt.ylabel('Precision or Recall')
plt.xlabel('Number of Features')
plt.title('Precision and Recall by Number of Features via RFE\n Model for EMR Access');
###Output
_____no_output_____
###Markdown
It looks like with manual RFE we actually get better precision and recall at 30 features. This is also a more parsimonious model, so we'll use it. Make new dataframes for the reduced set of variables, and save the coefficients and predicted probabilities. First, we need to run that model again.
###Code
# run RFE with 30 features so we can get the model coefficients etc.
logregrfe = LogisticRegression(class_weight = 'balanced', C = 1, n_jobs = -1)
rfe = RFE(logregrfe, n_features_to_select = 30, verbose = 2)
# this X matrix contains only the 53 parameters identified by RFECV above
rfe.fit(X_train_ofracceith_rfe, Y_train_ofracceith_bin)
Y_predrfe1 = rfe.predict(X_train_ofracceith_rfe)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_ofracceith_bin, Y_predrfe1)
Y_predrfe2 = rfe.predict(X_test_ofracceith_rfe)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_ofracceith_bin, Y_predrfe2)
# make a dataframe containing only the columns identified by the RFE model (30 features)
X_train_ofracceith_rfe_final = X_train_ofracceith_rfe.loc[:, rfe.support_]
# see which columns are left
X_train_ofracceith_rfe_final.columns
# repeat for test set
X_test_ofracceith_rfe_final = X_test_ofracceith_rfe.loc[:, rfe.support_]
# save the model coefficients
offeraccesseither_coefs = rfe.estimator_.coef_
# check the size; should be 30
offeraccesseither_coefs.shape
# and the intercept
offeraccesseither_inters = rfe.estimator_.intercept_
# check the class order
rfe.estimator_.classes_
# get the predicted probabilities for the test set (for class 1/Yes)
offeraccesseither_probs = rfe.estimator_.predict_proba(X_test_ofracceith_rfe_final)[:, 1]
# check this; should be 1d array with 2328 entries
display(offeraccesseither_probs)
display(offeraccesseither_probs.shape)
# check the spread of the predicted probabilities
print('max pred. probability:', offeraccesseither_probs.max())
print('min pred. probability:', offeraccesseither_probs.min())
###Output
min pred. probability: 0.013339408148235022
###Markdown
There is a wide range of predicted probabilities in the test set. This will be useful in differentiating which respondents are more and less likely to have been offered access to an EMR. Summary of Final Variables: Let's summarize the 30 variables left in the final model and look at their coefficients. Since the outcome is coded as N = 0, Y = 1 and all the features are positive integers, a positive coefficient implies that a respondent with the characteristic is more likely to be offered EMR access, and a negative coefficient, less likely.
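As a quick sanity check (a sketch added here, not part of the original analysis), the saved coefficients and intercept should reproduce the class-1 probabilities from `predict_proba`, since a binary scikit-learn logistic regression applies the logistic (sigmoid) function to the linear predictor:
```
import numpy as np
from scipy.special import expit  # logistic sigmoid

# sketch only: recompute the class-1 probabilities from the saved coefficients and intercept
logits = X_test_ofracceith_rfe_final.values @ offeraccesseither_coefs[0] + offeraccesseither_inters[0]
manual_probs = expit(logits)
print(np.allclose(manual_probs, offeraccesseither_probs))  # expected: True
```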
###Code
# look at the list of variables with their logistic regression coefficients
col_list = list(X_train_ofracceith_rfe_final.columns)
for i in range(len(col_list)):
print('Column: ', col_list[i], ' Coefficient: ', offeraccesseither_coefs[0, i])
###Output
Column: highspanli Coefficient: -0.35927242348285127
Column: useinternet Coefficient: 0.5932368186550663
Column: electronic_selfhealthinfo Coefficient: 0.28944742573848237
Column: regularprovider Coefficient: 0.5084818065447673
Column: healthinsurance Coefficient: 0.773727823991863
Column: everhadcancer Coefficient: 0.29061587533810196
Column: freqgoprovider_2 Coefficient: 0.3343205903054712
Column: freqgoprovider_3 Coefficient: 0.4326797909830113
Column: freqgoprovider_4 Coefficient: 0.4144735850885778
Column: freqgoprovider_5 Coefficient: 0.6279983139224948
Column: freqgoprovider_6 Coefficient: 0.6462323099624266
Column: agegrpb_4 Coefficient: 0.27937738922704636
Column: avgdrinks_cat_5 Coefficient: -0.26304463169100517
Column: whruseinet_pubvother_1 Coefficient: 0.2922562813807814
Column: whruseinet_pubvother_2 Coefficient: 0.30098053727427854
Column: educa_4 Coefficient: 0.3315758158819357
Column: healthins_pubpriv_2 Coefficient: -0.35457786408371345
Column: survey_cycle_3 Coefficient: 0.32581404013240317
Column: survey_cycle_4 Coefficient: 0.5348607034860742
Column: survey_cycle_5 Coefficient: 0.4198055512910455
Column: selfgender_2 Coefficient: 0.3896667095945533
Column: qualitycare_1 Coefficient: 0.6728554164704014
Column: qualitycare_2 Coefficient: 0.4070981220745362
Column: maritalstatus_6 Coefficient: -0.3072028829012911
Column: raceethn5_4 Coefficient: -0.36144032927422887
Column: tablethealthwellnessapps_1 Coefficient: 0.44957715125605924
Column: hhinc_1 Coefficient: -0.29062345977083914
Column: censdiv_6 Coefficient: -0.23327631657195635
Column: ownabilitytakecarehealth_5 Coefficient: -0.4239799479213866
Column: tablet_discussionshcp_1 Coefficient: 0.4111554624228518
###Markdown
**Variables associated with higher likelihood of being offered EMR access:** _Demographic & Temporal:_- `educa_4` : College or higher education (vs all other levels)- `selfgender_2` : Female (vs male or no answer)- `survey_cycle_3` : 2019 (vs 2018, 2020 pre- & post-pandemic)- `survey_cycle_4` : 2020 pre-pandemic (vs 2018, 2019, 2020 post-pandemic)- `survey_cycle_5` : 2020 post-pandemic (vs 2018, 2019, 2020 pre-pandemic)- `agegrpb_4` : Age 65-74 (vs. all other age strata; highest is \>= 75)_Health-Related:_- `regularprovider` : Have regular HCP (vs do not)- `healthinsurance` : Have some form of health insurance (vs do not)- `everhadcancer` : Ever diagnosed with cancer (vs never)- `qualitycare_1` : Rate quality of HCP's care "excellent" (vs. don't go, very good, good, fair, poor)- `qualitycare_2` : Rate quality of HCP's care "very good" (vs. don't go, excellent, good, fair, poor)- `freqgoprovider_2` : See HCP 2 times yearly (vs. 0, 1, 3, 4, 5-9, and >=10)- `freqgoprovider_3` : See HCP 3 times yearly (vs. 0, 1, 2, 4, 5-9, and >=10)- `freqgoprovider_4` : See HCP 4 times yearly (vs. 0, 1, 2, 3, 5-9, and >=10)- `freqgoprovider_5` : See HCP 5-9 times yearly (vs. 0, 1, 2, 3, 4, and >=10)- `freqgoprovider_6` : See HCP >= 10 times yearly (vs. 0, 1, 2, 3, 4, and 5-9)_Electronic Device & Internet-Related:_- `useinternet` : Use internet for web browsing/email (vs do not)- `electronic_selfhealthinfo` : Have used electronic means to search for health-related info in last 12 mos (vs haven't)- `whruseinet_pubvother_1` : Use internet in public place (eg library) "often" or "sometimes"(vs never or don't use internet)- `whruseinet_pubvother_2` : Do not use internet in public place (eg library) (vs often/sometimes or don't use internet)- `tablethealthwellnessapps_1` : Have health/wellness apps on a tablet (vs no or don't own tablet)- `tablet_discussionshcp_1` : Use tablet as aid for discussion with HCP (vs no or don't own tablet) **Variables associated with lower likelihood of being offered EMR access:** _Demographic:_- `highspanli` : Linguistically isolated (high prevalence less proficient English speakers)- `raceethn5_4` : Non-Hispanic Asian (vs all other racial groupings)- `censdiv_6` : East South Central census division (KY, TN, MS, AL; vs all other divisions)- `hhinc_1` : Household income in lowest category (\< \$20k/yr; vs all higher categories & not reported)- `maritalstatus_6` : Single (vs all other categories)_Health-Related:_- `healthins_pubpriv_2` : Public insurance (Medicare/Medicaid) without employer-provided insurance (vs private/employer-provided or non)- `avgdrinks_cat_5` : \>= 150\% of number drinks CDC classifies as heavy drinking (M \>= 23, F \>= 13; other: \>=18; this is highest category; vs all lower categories)- `ownabilitytakecarehealth_5` : "Not at all" confident in own ability to take care of health (vs completely, very, somewhat, or a little confident) We can also plot the odds ratios for each variable to get a graphical representation of the relative strength of each feature.
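One clarifying note before the plot (added here for reference): each odds ratio in the next cell is simply the exponentiated coefficient, $$\mathrm{OR}_j = e^{\beta_j},$$ so features with positive coefficients have odds ratios above 1 (associated with being offered access) and features with negative coefficients have odds ratios below 1.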
###Code
# calculate odds ratios
offeraccesseither_ORs = np.exp(offeraccesseither_coefs).reshape(30)
# check range
print('min OR: ', np.min(offeraccesseither_ORs))
print('max OR: ', np.max(offeraccesseither_ORs))
# put the ORs and column labels in a df so it can be sorted
df_oae_OR = pd.DataFrame(columns = ['col', 'OR'])
df_oae_OR['col'] = col_list
df_oae_OR['OR'] = offeraccesseither_ORs
#sort
df_oae_OR.sort_values(by='OR', inplace = True)
df_oae_OR.head(10)
fig, bar1 = plt.subplots(figsize = (12,12))
y_labs = ['Self-care ability rated "Not at all"', 'Non-Hispanic Asian', 'Linguistic isolation', 'Medicare/Medicaid only',\
'Single','Household income: < $20k/yr', '>= 150% CDC heavy drinker', 'East S Ctrl Census Div',\
'Age: 65-74', 'Searches web for health info', 'History of cancer', 'Uses public internet',\
"Doesn't use public internet", 'Cycle: 2019', 'Education: >= college degree', 'Sees HCP 2x/yr', 'Female',\
'Care rated very good', 'Uses tablet for HCP discussion', 'Sees HCP 4x/yr', 'Cycle: 2020 post-pandemic',\
'Sees HCP 3x/yr', 'Tablet has health apps', 'Has regular HCP', 'Cycle: 2020 pre-pandemic','Uses internet',\
'Sees HCP 5-9x/yr', 'Sees HCP >= 10x/yr', 'Care rated excellent', 'Has health ins.']
df_oae_OR['ORlt1'] = df_oae_OR['OR'] < 1.0
bar1 = plt.barh(data=df_oae_OR, y=np.arange(30), width='OR', tick_label=y_labs,\
color=df_oae_OR['ORlt1'].map({True:'r', False:'g'}))
plt.title('Odds Ratios for Features Associated with EMR Access')
plt.xlabel('Odds Ratio')
plt.ylabel('Feature')
plt.xticks(np.arange(0.1, 2.3, 0.1))
plt.axvline(x=1.0, color='black', linestyle='--');
###Output
_____no_output_____
###Markdown
In the figure, the dividing line at 1.0 demarcates features associated with EMR access (in green; odds ratio \> 1.0) and those not associated with EMR access (in red; odds ratio \< 1.0). Having insurance of any type is most strongly associated with being offered EMR access, followed by care rated "excellent". Female gender, higher educational attainment, and older age are also significant. The only chronic condition with a significant effect is a history of cancer, although more frequent visits to an HCP are associated with higher likelihood of being offered access. Using the internet, as well as using it and e-devices for health-related purposes, is also a predictor. Finally, the 2019-2020 (versus 2018) survey cycles are associated with increased EMR access, with the highest weight for 2020 pre-pandemic, followed by 2020 post-pandemic, then 2019, indicating there is a time effect although perhaps not a linear one. By contrast, being "not at all confident" in one's ability to take care of one's health is most associated with _not_ being offered EMR access, followed by Non-Hispanic Asian racial identity. Being in the lowest income stratum, being single, having only Medicare and/or Medicaid, and residing in the East South Central census division or in a linguistically isolated area are also associated with reduced access. No chronic condition appears, but very heavy drinking also predicts reduced access. Another way to look at this is to assess differences in characteristics of those predicted to be highly likely and highly unlikely to be offered EMR access. Let's see what the probability distribution looks like.
###Code
# plot the probability of being offered EMR access
plt.hist(x=offeraccesseither_probs, bins=20)
plt.xticks(np.arange(0.0, 1.1, step = 0.1))
plt.xlabel('Probability')
plt.ylabel('Count')
plt.title('Predicted Probability of Being Offered EMR access');
###Output
_____no_output_____
###Markdown
The distribution tends to increase slightly between 0.2 and about 0.75, then tails off at high and low values.
###Code
print('N >= 90% probability:', offeraccesseither_probs[offeraccesseither_probs >= 0.90].shape[0])
print('N <= 10% probability:', offeraccesseither_probs[offeraccesseither_probs <= 0.10].shape[0])
###Output
N >= 90% probability: 18
N <= 10% probability: 92
###Markdown
Relatively few predictions are at the high end (\>= 90%). Let's widen the range.
###Code
print('N >= 80% probability:', offeraccesseither_probs[offeraccesseither_probs >= 0.80].shape[0])
print('N <= 20% probability:', offeraccesseither_probs[offeraccesseither_probs <= 0.20].shape[0])
###Output
N >= 80% probability: 288
N <= 20% probability: 302
###Markdown
This range gives better balance between those with higher and lower predicted probabilities. We want to look at characteristics of respondents in the high and low probability groups. For this, it will be useful to have a dataframe with the probabilities added on to the X matrix.
###Code
# append probabilities to the final X matrix
X_test_ofracceith_finprobs = pd.concat([X_test_ofracceith_rfe_final, pd.Series(offeraccesseither_probs)], axis = 1)
# check it
display(X_test_ofracceith_finprobs.head())
display(X_test_ofracceith_finprobs.shape)
# rename the probability column
X_test_ofracceith_finprobs.rename(columns = {0:"pred_proba"}, inplace = True)
display(X_test_ofracceith_finprobs.columns)
# Get the top 10 categories by %age for predict probability >= 80 and <= 20
# denominators for %age
N_oae_80 = offeraccesseither_probs[offeraccesseither_probs >= 0.80].shape[0]
N_oae_20 = offeraccesseither_probs[offeraccesseither_probs <= 0.20].shape[0]
# list of columns
oae_cols = list(X_test_ofracceith_finprobs.columns)
oae_cols.remove('pred_proba')
# make a df for the results
df_oae_pcts = pd.DataFrame(index = oae_cols, columns = ['pct_ge80', 'pct_le20'])
# check structure of df
df_oae_pcts.head()
# get the %age of each variable in the prob >= 80 and prob <= 20 groups
for col in oae_cols:
df_oae_pcts.loc[col, 'pct_ge80'] = X_test_ofracceith_finprobs[col][X_test_ofracceith_finprobs['pred_proba'] >= 0.80].sum()/(0.01*N_oae_80)
df_oae_pcts.loc[col, 'pct_le20'] = X_test_ofracceith_finprobs[col][X_test_ofracceith_finprobs['pred_proba'] <= 0.20].sum()/(0.01*N_oae_20)
# check it
df_oae_pcts.head()
# get top 10 for those with >= 80% probability
df_top80 = df_oae_pcts.sort_values(by='pct_ge80', ascending=False).head(10)
df_top80
# get top 10 for those with <= 20% probability
df_top20 = df_oae_pcts.sort_values(by='pct_le20', ascending=False).head(10)
df_top20
###Output
_____no_output_____
###Markdown
Based on those results, it looks like finding the variables with the largest difference between the two groups might be more useful.
###Code
# find difference in percentages
df_oae_pcts['diff'] = df_oae_pcts['pct_ge80'] - df_oae_pcts['pct_le20']
# but we'll want to sort by the absolute difference
df_oae_pcts['absdiff'] = np.abs(df_oae_pcts['diff'])
df_oae_pcts.head()
df_topabs = df_oae_pcts.sort_values(by='absdiff', ascending=False).head(10)
df_topabs
fig, bar2 = plt.subplots(figsize = (12,8))
y_labs = ['Has tablet with health apps', 'Uses tablet for HCP discussion', 'Education: >= college degree', \
'Searches web for health info', 'Uses internet', 'Has regular HCP', 'Care rated excellent', \
"Doesn't use public internet", 'Female', 'Household income: < $20k/yr']
df_topabs['negpos'] = df_topabs['diff'] < 0
bar2 = plt.barh(data=df_topabs, y=np.arange(10), width='diff', tick_label=y_labs,\
color=df_topabs['negpos'].map({True:'r', False:'g'}))
plt.title('Features with Largest Difference in % Prevalence between Patients with \n Probability of Being Offered EMR Access >= 80% and <= 20%')
plt.xlabel('Difference in Prevalence (%)')
plt.ylabel('Feature')
plt.xticks(np.arange(-50, 90, 10));
###Output
_____no_output_____
###Markdown
In the above figure, the 10 features with the biggest difference in prevalence between predicted probabilities of \>= 80% and \<= 20% of being offered access to an EMR are shown. Red bars indicate features more prevalent in patients with predicted probability \<= 20% of being offered EMR access. The only variable in the top 10 more prevalent in this group is being in the lowest household income stratum (\< \$20,000/yr). Green bars indicate features more prevalent in patients with predicted probability \>= 80% of being offered EMR access. Demographically, these patients are more likely to be female and have at least a college degree. Medically, they are more likely to have a regular HCP and to give the highest rating (excellent) to the quality of the HCP's care. The rest of the variables relate to internet access and use: they are more likely to use the internet, but less likely to do so via public access (e.g. a library). They are more likely to use the internet and a device like a tablet to look for health information, monitor their health, and have discussions with their HCP. Model for `accessonlinerecord` Outcome: This outcome variable denotes how frequently a respondent has used an EMR in the past 12 months. The categories are "None" (which includes those who don't have access to an EMR), 1-2 times, 3-5 times, 6-9 times, and \>= 10 times. Our preliminary decision was to maintain all the categories, to see if we can gain insight into differences between less and more frequent users. However, this many categories may give us trouble with accurate classification, as it did with the previous outcome variable. Make the X and Y matrices for this outcome variable.
###Code
X_train_acconlrec = df_train_dums.drop(columns = ['offeredaccesseither', 'accessonlinerecord'])
X_test_acconlrec = df_test_dums.drop(columns = ['offeredaccesseither', 'accessonlinerecord'])
# check the columns
X_train_acconlrec.info(verbose=True)
# make the Y matrices
Y_train_acconlrec = df_train_dums['accessonlinerecord']
Y_test_acconlrec = df_test_dums['accessonlinerecord']
###Output
_____no_output_____
###Markdown
Preliminary model: The approach will be the same as with the other outcome variable: we'll use logistic regression, with RFECV to reduce the size of the feature set. Categories 3 and 4 are rare, so balanced class weights will again be used, as will F1 scoring.
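For reference (a sketch, not part of the original notebook), the class imbalance can be inspected directly, and scikit-learn's "balanced" option weights each class by n_samples / (n_classes * bincount(y)):
```
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# sketch only: check class imbalance and the weights implied by class_weight='balanced'
print(Y_train_acconlrec.value_counts(normalize=True))  # categories 3 and 4 should be rare
classes = np.unique(Y_train_acconlrec)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=Y_train_acconlrec)
print(dict(zip(classes, weights)))  # rare classes get proportionally larger weights
```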
###Code
# fit the model using recursive feature elimination and logistic regression
logreg5 = LogisticRegression(class_weight = 'balanced', multi_class = 'multinomial', C = 1, n_jobs = -1)
model3 = RFECV(logreg5, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
model3.fit(X_train_acconlrec, Y_train_acconlrec)
Y_pred9 = model3.predict(X_train_acconlrec)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_acconlrec, Y_pred9)
Y_pred10 = model3.predict(X_test_acconlrec)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_acconlrec, Y_pred10)
###Output
Fitting estimator with 109 features.
Fitting estimator with 108 features.
Fitting estimator with 107 features.
Fitting estimator with 106 features.
Fitting estimator with 105 features.
Fitting estimator with 104 features.
Fitting estimator with 103 features.
Fitting estimator with 102 features.
Fitting estimator with 101 features.
Fitting estimator with 100 features.
Fitting estimator with 99 features.
Fitting estimator with 98 features.
Fitting estimator with 97 features.
Fitting estimator with 96 features.
Fitting estimator with 95 features.
Fitting estimator with 94 features.
Fitting estimator with 93 features.
Fitting estimator with 92 features.
Fitting estimator with 91 features.
Fitting estimator with 90 features.
Fitting estimator with 89 features.
Fitting estimator with 88 features.
Fitting estimator with 87 features.
Fitting estimator with 86 features.
Fitting estimator with 85 features.
Fitting estimator with 84 features.
Fitting estimator with 83 features.
training set scores:
confusion matrix:
[[1896 577 279 235 209]
[ 185 394 165 129 92]
[ 77 158 239 150 139]
[ 22 37 49 129 59]
[ 14 24 30 46 156]]
precision: 0.62668; recall: 0.51257, accuracy: 0.51257
test set scores:
confusion matrix:
[[786 281 146 86 99]
[ 81 150 68 55 55]
[ 28 62 70 64 58]
[ 12 20 20 40 23]
[ 11 8 24 30 51]]
precision: 0.60728; recall: 0.47122, accuracy: 0.47122
###Markdown
RFECV reduces the size from 109 to 82 features, but the fit is very poor (recall < 0.5). There is not a major difference between training and test sets to indicate overfitting. Model Tuning: We'll also try to tune this, though if the previous results are any indication, it is unlikely to significantly improve. Because the classes are more imbalanced here, we won't bother trying "None" for weighting. Instead, we'll expand the range of `C`.
###Code
# tune the multinomial LR model
# will leave class weight as balanced here since latter categories are rare
# expand range of C to see if that makes any difference
parameters2 = [
{
'estimator__C': [1e-5, 1e-4, 1e-3, 0.01, 0.1, 1, 10],
'step': [1, 3, 5],
}]
logreg6 = LogisticRegression(class_weight = 'balanced', multi_class = 'multinomial', n_jobs = -1)
rfecv3 = RFECV(logreg6, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
clf3 = GridSearchCV(rfecv3, parameters2, n_jobs = -1, verbose = 2)
clf3.fit(X_train_acconlrec, Y_train_acconlrec)
Y_pred11 = clf3.predict(X_train_acconlrec)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_acconlrec, Y_pred11)
Y_pred12 = clf3.predict(X_test_acconlrec)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_acconlrec, Y_pred12)
print("\nBest Parameters:", clf3.best_params_)
###Output
Fitting 5 folds for each of 21 candidates, totalling 105 fits
###Markdown
Recall improves slightly with parameter tuning, but precision is worse, and both are still very low. There are also 17 more features. As with the previous analysis, this is likely due to trying to fit the rare classes. Binarizing the Outcome As we did with the previous variable, we can combine categories. Overall, the most important division is between those who never used the EMR and those who did use it. So, let's make a binary outcome variable with "never used" vs "ever used".
###Code
# reclassify as "never used" (0) vs "ever used" (1)
Y_train_acconlrec_bin = Y_train_acconlrec.copy()  # copy so the original multiclass labels are not modified in place
Y_train_acconlrec_bin.loc[Y_train_acconlrec_bin > 1] = 1
# check it reclassified correctly
Y_train_acconlrec_bin.value_counts()
# repeat for test set
Y_test_acconlrec_bin = Y_test_acconlrec.copy()  # copy for the same reason as above
Y_test_acconlrec_bin.loc[Y_test_acconlrec_bin > 1] = 1
# fit a LR model to the binary outcome variable
logreg7 = LogisticRegression(class_weight = 'balanced', C = 1, n_jobs = -1)
model4 = RFECV(logreg7, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
model4.fit(X_train_acconlrec, Y_train_acconlrec_bin)
Y_pred13 = model4.predict(X_train_acconlrec)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_acconlrec_bin, Y_pred13)
Y_pred14 = model4.predict(X_test_acconlrec)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_acconlrec_bin, Y_pred14)
###Output
Fitting estimator with 109 features.
Fitting estimator with 108 features.
Fitting estimator with 107 features.
Fitting estimator with 106 features.
Fitting estimator with 105 features.
Fitting estimator with 104 features.
Fitting estimator with 103 features.
Fitting estimator with 102 features.
Fitting estimator with 101 features.
Fitting estimator with 100 features.
Fitting estimator with 99 features.
Fitting estimator with 98 features.
Fitting estimator with 97 features.
Fitting estimator with 96 features.
Fitting estimator with 95 features.
Fitting estimator with 94 features.
Fitting estimator with 93 features.
Fitting estimator with 92 features.
Fitting estimator with 91 features.
Fitting estimator with 90 features.
Fitting estimator with 89 features.
Fitting estimator with 88 features.
Fitting estimator with 87 features.
Fitting estimator with 86 features.
Fitting estimator with 85 features.
Fitting estimator with 84 features.
Fitting estimator with 83 features.
Fitting estimator with 82 features.
Fitting estimator with 81 features.
Fitting estimator with 80 features.
Fitting estimator with 79 features.
Fitting estimator with 78 features.
Fitting estimator with 77 features.
Fitting estimator with 76 features.
Fitting estimator with 75 features.
Fitting estimator with 74 features.
Fitting estimator with 73 features.
Fitting estimator with 72 features.
Fitting estimator with 71 features.
Fitting estimator with 70 features.
Fitting estimator with 69 features.
Fitting estimator with 68 features.
Fitting estimator with 67 features.
Fitting estimator with 66 features.
Fitting estimator with 65 features.
Fitting estimator with 64 features.
Fitting estimator with 63 features.
training set scores:
confusion matrix:
[[2272 924]
[ 502 1792]]
precision: 0.75250; recall: 0.74026, accuracy: 0.74026
test set scores:
confusion matrix:
[[971 427]
[216 714]]
precision: 0.74122; recall: 0.72380, accuracy: 0.72380
###Markdown
This model has 62 features. Consolidating the outcome variable to binary results in a large improvement in precision and recall, with the former increasing by ~0.15 and the latter by > 0.2 from the better of the two multiclass models. Tuning the Binary Model: Let's see if it improves further with tuning. We'll use the same parameter grid as we used for the multinomial model.
###Code
# tune the binary LR model
# will use same parameter grid as for multinomial.
logreg8 = LogisticRegression(class_weight = 'balanced', n_jobs = -1)
rfecv4 = RFECV(logreg8, scoring = 'f1_weighted', verbose = 2, n_jobs = -1)
clf4 = GridSearchCV(rfecv4, parameters2, n_jobs = -1, verbose = 2)
clf4.fit(X_train_acconlrec, Y_train_acconlrec_bin)
Y_pred15 = clf4.predict(X_train_acconlrec)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_acconlrec_bin, Y_pred15)
Y_pred16 = clf4.predict(X_test_acconlrec)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_acconlrec_bin, Y_pred16)
print("\nBest Parameters:", clf4.best_params_)
###Output
Fitting 5 folds for each of 21 candidates, totalling 105 fits
###Markdown
This tuned model has fewer features (54 vs 62), but the precision and accuracy are noticeably worse, both by ~0.04. We'll stick with the original model.
###Code
# make a dataframe containing only the columns identified by the initial binary model (62 features)
X_train_acconlrec_rfe = X_train_acconlrec.loc[:, model4.support_]
# see which columns are left
X_train_acconlrec_rfe.columns
# repeat for test set
X_test_acconlrec_rfe = X_test_acconlrec.loc[:, model4.support_]
###Output
_____no_output_____
###Markdown
Manual Feature Reduction: As with the other outcome variable, we still have a large model despite pruning with RFECV. Let's try manual RFE pruning again.
###Code
# make a dataframe to hold the results; we already have the column names defined
df_acconlrec_rfe = pd.DataFrame(columns = col_names)
# add the baseline model
df_acconlrec_rfe.loc[0, 'Features'] = 62
df_acconlrec_rfe.loc[0, 'Precision'] = 0.74122
df_acconlrec_rfe.loc[0, 'Recall'] = 0.72380
# do RFE with manual thresholds for number of parameters to keep
# C = 1 and balanced class weights worked well before, so keep them
RFE_cuts = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
for i in range(len(RFE_cuts)):
# use RFE to fit the model
logregrfe = LogisticRegression(class_weight = 'balanced', C = 1, n_jobs = -1)
rfe = RFE(logregrfe, n_features_to_select = RFE_cuts[i], verbose = 2)
# this X matrix contains only the 62 parameters identified by RFECV above
rfe.fit(X_train_acconlrec_rfe, Y_train_acconlrec_bin)
Y_predrfe1 = rfe.predict(X_train_acconlrec_rfe)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_acconlrec_bin, Y_predrfe1)
Y_predrfe2 = rfe.predict(X_test_acconlrec_rfe)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_acconlrec_bin, Y_predrfe2)
# save the parameters to plot
df_acconlrec_rfe.loc[i+1, 'Features'] = RFE_cuts[i]
df_acconlrec_rfe.loc[i+1, 'Precision'] = prec
df_acconlrec_rfe.loc[i+1, 'Recall'] = rec
###Output
Fitting estimator with 62 features.
Fitting estimator with 61 features.
Fitting estimator with 60 features.
Fitting estimator with 59 features.
Fitting estimator with 58 features.
Fitting estimator with 57 features.
Fitting estimator with 56 features.
Fitting estimator with 55 features.
Fitting estimator with 54 features.
Fitting estimator with 53 features.
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
Fitting estimator with 20 features.
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
training set scores:
confusion matrix:
[[2338 858]
[ 808 1486]]
precision: 0.69753; recall: 0.69654, accuracy: 0.69654
test set scores:
confusion matrix:
[[1002 396]
[ 337 593]]
precision: 0.68891; recall: 0.68514, accuracy: 0.68514
Fitting estimator with 62 features.
Fitting estimator with 61 features.
Fitting estimator with 60 features.
Fitting estimator with 59 features.
Fitting estimator with 58 features.
Fitting estimator with 57 features.
Fitting estimator with 56 features.
Fitting estimator with 55 features.
Fitting estimator with 54 features.
Fitting estimator with 53 features.
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
Fitting estimator with 20 features.
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
training set scores:
confusion matrix:
[[2209 987]
[ 580 1714]]
precision: 0.72625; recall: 0.71457, accuracy: 0.71457
test set scores:
confusion matrix:
[[952 446]
[245 685]]
precision: 0.71955; recall: 0.70318, accuracy: 0.70318
Fitting estimator with 62 features.
Fitting estimator with 61 features.
Fitting estimator with 60 features.
Fitting estimator with 59 features.
Fitting estimator with 58 features.
Fitting estimator with 57 features.
Fitting estimator with 56 features.
Fitting estimator with 55 features.
Fitting estimator with 54 features.
Fitting estimator with 53 features.
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
Fitting estimator with 20 features.
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
training set scores:
confusion matrix:
[[2207 989]
[ 527 1767]]
precision: 0.73784; recall: 0.72386, accuracy: 0.72386
test set scores:
confusion matrix:
[[947 451]
[219 711]]
precision: 0.73216; recall: 0.71220, accuracy: 0.71220
Fitting estimator with 62 features.
Fitting estimator with 61 features.
Fitting estimator with 60 features.
Fitting estimator with 59 features.
Fitting estimator with 58 features.
Fitting estimator with 57 features.
Fitting estimator with 56 features.
Fitting estimator with 55 features.
Fitting estimator with 54 features.
Fitting estimator with 53 features.
Fitting estimator with 52 features.
Fitting estimator with 51 features.
Fitting estimator with 50 features.
Fitting estimator with 49 features.
Fitting estimator with 48 features.
Fitting estimator with 47 features.
Fitting estimator with 46 features.
Fitting estimator with 45 features.
Fitting estimator with 44 features.
Fitting estimator with 43 features.
Fitting estimator with 42 features.
Fitting estimator with 41 features.
Fitting estimator with 40 features.
Fitting estimator with 39 features.
Fitting estimator with 38 features.
Fitting estimator with 37 features.
Fitting estimator with 36 features.
Fitting estimator with 35 features.
Fitting estimator with 34 features.
Fitting estimator with 33 features.
Fitting estimator with 32 features.
Fitting estimator with 31 features.
Fitting estimator with 30 features.
Fitting estimator with 29 features.
Fitting estimator with 28 features.
Fitting estimator with 27 features.
Fitting estimator with 26 features.
Fitting estimator with 25 features.
Fitting estimator with 24 features.
Fitting estimator with 23 features.
Fitting estimator with 22 features.
Fitting estimator with 21 features.
training set scores:
confusion matrix:
[[2161 1035]
[ 482 1812]]
precision: 0.74193; recall: 0.72368, accuracy: 0.72368
test set scores:
confusion matrix:
[[911 487]
[212 718]]
precision: 0.72518; recall: 0.69974, accuracy: 0.69974
Fitting estimator with 62 features.
Fitting estimator with 61 features.
Fitting estimator with 60 features.
Fitting estimator with 59 features.
###Markdown
Plot the precision & recall results
###Code
plt.scatter(data=df_acconlrec_rfe, x = 'Features', y = 'Precision', color = 'red', label = 'Precision')
plt.scatter(data=df_acconlrec_rfe, x = 'Features', y = 'Recall', color = 'green', label = 'Recall')
plt.legend()
plt.ylabel('Precision or Recall')
plt.xlabel('Number of Features')
plt.title('Precision and Recall by Number of Features via RFE\n Model for EMR Use');
###Output
_____no_output_____
###Markdown
For this model, the best results are at 45 and 50 features (slightly better than 60 and the RFECV-defined 62). We'll keep the most parsimonious model with 45 features. Re-run the model with 45 features, then get the reduced datasets, coefficients, and probabilities.
###Code
# run RFE with 45 features so we can get the model coefficients etc.
logregrfe = LogisticRegression(class_weight = 'balanced', C = 1, n_jobs = -1)
rfe = RFE(logregrfe, n_features_to_select = 45, verbose = 2)
# this X matrix contains the 62 parameters identified by RFECV above; RFE keeps 45 of them
rfe.fit(X_train_acconlrec_rfe, Y_train_acconlrec_bin)
Y_predrfe1 = rfe.predict(X_train_acconlrec_rfe)
print('training set scores: ')
prec, rec, acc = get_scores(Y_train_acconlrec_bin, Y_predrfe1)
Y_predrfe2 = rfe.predict(X_test_acconlrec_rfe)
print('test set scores: ')
prec, rec, acc = get_scores(Y_test_acconlrec_bin, Y_predrfe2)
# make a dataframe containing only the columns identified by the RFE model (45 features)
X_train_acconlrec_rfe_final = X_train_acconlrec_rfe.loc[:, rfe.support_]
# see which columns are left
X_train_acconlrec_rfe_final.columns
# repeat for test set
X_test_acconlrec_rfe_final = X_test_acconlrec_rfe.loc[:, rfe.support_]
# save the model coefficients
accessonlinerec_coefs = rfe.estimator_.coef_
accessonlinerec_coefs.shape
# and the intercept
accessonlinerec_inters = rfe.estimator_.intercept_
# check the order of classes
rfe.estimator_.classes_
# get the predicted probabilities for the test set (for class 1/Yes)
accessonlinerec_probs = rfe.estimator_.predict_proba(X_test_acconlrec_rfe_final)[:, 1]
# check this; should be 1d array with 2328 entries
display(accessonlinerec_probs)
display(accessonlinerec_probs.shape)
# check the spread of the predicted probabilities
print('max pred. probability:', accessonlinerec_probs.max())
print('min pred. probability:', accessonlinerec_probs.min())
###Output
min pred. probability: 0.0020900136452592873
###Markdown
There is again a wide range of predicted probabilities in the test set. This will be useful in differentiating which respondents are more and less likely to have used an EMR. Summary of Final Variables: We'll summarize the 45 variables left in the final model, along with their influence, as we did above. Again, the outcome is coded as N = 0, Y = 1 and all the features are positive integers, so a positive coefficient implies that a respondent with the characteristic is more likely to have used an EMR, and a negative coefficient, less likely.
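One optional way to gauge relative influence (a sketch added here, not in the original analysis) is to rank the saved coefficients by absolute value before reading through the full list:
```
import pandas as pd

# sketch only: rank the 45 retained features by absolute coefficient size
df_coef = pd.DataFrame({'col': list(X_train_acconlrec_rfe_final.columns),
                        'coef': accessonlinerec_coefs[0]})
df_coef.assign(abs_coef=df_coef['coef'].abs()).sort_values('abs_coef', ascending=False).head(10)
```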
###Code
# look at the list of variables with their logistic regression coefficients
col_list = list(X_train_acconlrec_rfe_final.columns)
for i in range(len(col_list)):
print('Column: ', col_list[i], ' Coefficient: ', accessonlinerec_coefs[0, i])
###Output
Column: highspanli Coefficient: -0.531423205818427
Column: useinternet Coefficient: 0.5959222073509762
Column: electronic_selfhealthinfo Coefficient: 0.6964149012204862
Column: intrsn_visitedsocnet Coefficient: 0.2942769985853987
Column: regularprovider Coefficient: 0.5968813880249698
Column: healthinsurance Coefficient: 0.659759207516666
Column: medconditions_diabetes Coefficient: 0.3192491976133308
Column: everhadcancer Coefficient: 0.22935447833466927
Column: freqgoprovider_3 Coefficient: 0.28388684702828315
Column: freqgoprovider_4 Coefficient: 0.2795044098695676
Column: freqgoprovider_5 Coefficient: 0.44126198734763367
Column: freqgoprovider_6 Coefficient: 0.5621935741822116
Column: avgdrinks_cat_4 Coefficient: -0.23956024219829986
Column: whruseinet_pubvother_1 Coefficient: 0.31748937633133517
Column: whruseinet_pubvother_2 Coefficient: 0.27843283101963956
Column: educa_2 Coefficient: 0.4739405968408064
Column: educa_3 Coefficient: 0.5735795931317962
Column: educa_4 Coefficient: 0.8025204123621499
Column: internet_broadbnd_1 Coefficient: 0.2678436165087341
Column: nchsurcode2013_4 Coefficient: -0.25515176031683884
Column: nchsurcode2013_5 Coefficient: -0.2582844814769422
Column: survey_cycle_3 Coefficient: 0.3055187314357205
Column: survey_cycle_4 Coefficient: 0.6426305100420422
Column: survey_cycle_5 Coefficient: 0.4658477595945543
Column: selfgender_2 Coefficient: 0.2242925194730612
Column: qualitycare_1 Coefficient: 0.9733329912098911
Column: qualitycare_2 Coefficient: 0.7765776218057594
Column: qualitycare_3 Coefficient: 0.6539653473699402
Column: qualitycare_4 Coefficient: 0.7314879067403012
Column: qualitycare_5 Coefficient: 0.4713966513608936
Column: havedevice_cat_5 Coefficient: 0.33287159356339696
Column: maritalstatus_5 Coefficient: -0.45775046040484063
Column: smokestat_2 Coefficient: 0.3530058023591039
Column: smokestat_3 Coefficient: 0.38357127202121283
Column: raceethn5_3 Coefficient: -0.248344352071759
Column: tablethealthwellnessapps_1 Coefficient: 0.591695081975543
Column: hhinc_1 Coefficient: -0.3904715048263131
Column: hhinc_2 Coefficient: -0.2840487919323571
Column: censdiv_2 Coefficient: -0.2903835209964346
Column: censdiv_6 Coefficient: -0.557688096492949
Column: censdiv_8 Coefficient: -0.3621199291429272
Column: censdiv_9 Coefficient: 0.18514387799598012
Column: ownabilitytakecarehealth_5 Coefficient: -0.43855027131099455
Column: phq4_cat_4 Coefficient: -0.3429753265464434
Column: tablet_discussionshcp_1 Coefficient: 0.7791963959652493
###Markdown
**Variables associated with higher likelihood of having used an EMR:** _Demographic & Temporal:_ - `educa_2` : High school education (vs all other levels; lowest/reference is \< high school)- `educa_3` : Some college education (vs all other levels)- `educa_4` : College or higher education (vs all other levels)- `selfgender_2` : Female (vs male or no answer)- `censdiv_9` : Pacific census division (CA, OR, WA, AK, HI; vs all other divisions)- `survey_cycle_3` : 2019 (vs 2018, 2020 pre- & post-pandemic)- `survey_cycle_4` : 2020 pre-pandemic (vs 2018, 2019, 2020 post-pandemic)- `survey_cycle_5` : 2020 post-pandemic (vs 2018, 2019, 2020 pre-pandemic)_Health-Related:_ - `regularprovider` : Have regular HCP (vs do not)- `healthinsurance` : Have some form of health insurance (vs do not)- `medconditions_diabetes` : Ever diagnosed with diabetes (vs never)- `everhadcancer` : Ever diagnosed with cancer (vs never)- `qualitycare_1` : Rate quality of HCP's care "excellent" (vs. don't go, very good, good, fair, poor)- `qualitycare_2` : Rate quality of HCP's care "very good" (vs. don't go, excellent, good, fair, poor)- `qualitycare_3` : Rate quality of HCP's care "good" (vs. don't go, excellent, very good, fair, poor)- `qualitycare_4` : Rate quality of HCP's care "fair" (vs. don't go, excellent, very good, good, poor)- `qualitycare_5` : Rate quality of HCP's care "poor" (vs. don't go, excellent, very good, good, fair)- `freqgoprovider_3` : See HCP 3 times yearly (vs. 0, 1, 2, 4, 5-9, and >=10)- `freqgoprovider_4` : See HCP 4 times yearly (vs. 0, 1, 2, 3, 5-9, and >=10)- `freqgoprovider_5` : See HCP 5-9 times yearly (vs. 0, 1, 2, 3, 4, and >=10)- `freqgoprovider_6` : See HCP >= 10 times yearly (vs. 0, 1, 2, 3, 4, and 5-9)- `smokestat_2` : Former smoker (vs current, never)- `smokestat_3` : Never smoker (vs current, former)_Electronic Device & Internet-Related:_- `useinternet` : Use internet for web browsing/email (vs do not)- `electronic_selfhealthinfo` : Have used electronic means to search for health-related info in last 12 mos (vs haven't)- `intrsn_visitedsocnet` : Used internet to visit social network (vs no or don't browse)- `whruseinet_pubvother_1` : Use internet in public place (eg library) "often" or "sometimes" (vs never or don't use internet)- `whruseinet_pubvother_2` : Do not use internet in public place (eg library) (vs often/sometimes or don't use internet)- `tablethealthwellnessapps_1` : Have health/wellness apps on a tablet (vs no or don't own tablet)- `tablet_discussionshcp_1` : Use tablet as aid for discussion with HCP (vs no or don't own tablet)- `havedevice_cat_5` : Have multiple electronic devices (cell phone, regular phone, tablet; vs none or one of these)- `internet_broadbnd_1` : Access the internet through a broadband connection (vs. 
don't or no internet) **Variables associated with lower likelihood of having used an EMR:** _Demographic:_- `highspanli` : Linguistically isolated (high prevalence less proficient English speakers)- `raceethn5_3` : Hispanic (vs all other racial groupings)- `censdiv_2`: Middle Atlantic census division (NJ, NY, PA; vs all other divisions)- `censdiv_6` : East South Central census division (KY, TN, MS, AL; vs all other divisions)- `censdiv_8` : Mountain census division (AZ, CO, ID, NM, MT, UT, NV, WY; vs all other divisions)- `nchsurcode2013_4` : Metropolitan: small metro urban vs rural classification (4th smallest of 6; vs all other classifications)- `nchsurcode2013_5` : Non-metropolitan: micropolitan urban vs rural classification (5th smallest of 6; vs all other classifications)- `hhinc_1` : Household income in lowest category (\< \$20k/yr; vs all higher categories & not reported)- `hhinc_2` : Household income in second-lowest category (\$20-34.99k/yr; vs all other categories & not reported)- `maritalstatus_5` : Separated (vs all other categories)_Health-Related:_- `phq4_cat_4` : Severe psychological distress based on PHQ-4 score (vs none, mild, or moderate)- `avgdrinks_cat_4` : \>= 100% to \< 150% of number drinks CDC classifies as heavy drinking (M: 15-22, F: 8-12; missing: 12-17; this is second-highest category; vs other categories)- `ownabilitytakecarehealth_5` : "Not at all" confident in own ability to take care of health (vs completely, very, somewhat, or a little confident) Plot the odds ratios.
###Code
# calculate odds ratios
accessonlinerec_ORs = np.exp(accessonlinerec_coefs).reshape(45)
# check range
print('min OR: ', np.min(accessonlinerec_ORs))
print('max OR: ', np.max(accessonlinerec_ORs))
# put the ORs and column labels in a df so it can be sorted
df_aor_OR = pd.DataFrame(columns = ['col', 'OR'])
df_aor_OR['col'] = col_list
df_aor_OR['OR'] = accessonlinerec_ORs
#sort
df_aor_OR.sort_values(by='OR', inplace = True)
df_aor_OR.head(10)
# plot the ORs
fig, bar3 = plt.subplots(figsize = (12,14))
y_labs = ['East S Ctrl Census Div', 'Linguistic isolation', 'Separated', 'Self-care ability rated "Not at all"',\
'Household income: < $20k/yr', 'Mountain Census Div', 'Severe psych. distress', 'Mid Atlantic Census Div',\
'Household income: $20-34.99k/yr', 'Municipality: Non-metro/micropolitan', 'Municipality: Small metropolitan', \
'Hispanic', '>= 100% to < 150% CDC heavy drinker', 'Pacific Census Div', 'Female', 'History of cancer',\
'Has broadband access', "Doesn't use public internet", 'Sees HCP 4x/yr', 'Sees HCP 3x/yr',\
'Visits social network sites', 'Cycle: 2019', 'Uses public internet', 'Diabetes', 'Owns multiple e-devices',\
'Former smoker', 'Never smoker', 'Sees HCP 5-9x/yr', 'Cycle: 2020 post-pandemic', 'Care rated poor',\
'Education: HS graduate', 'Sees HCP >= 10x/yr', 'Education: some college', 'Tablet has health apps',\
'Uses internet', 'Has regular HCP', 'Cycle: 2020 pre-pandemic', 'Care rated good', 'Has health ins.',\
'Searches web for health info', 'Care rated fair', 'Care rated very good', 'Uses tablet for HCP discussion',\
'Education: >= college degree', 'Care rated excellent']
df_aor_OR['ORlt1'] = df_aor_OR['OR'] < 1.0
bar3 = plt.barh(data=df_aor_OR, y=np.arange(45), width= 'OR', tick_label=y_labs,\
color=df_aor_OR['ORlt1'].map({True:'r', False:'g'}))
plt.title('Odds Ratios for Features Associated with EMR Use')
plt.xlabel('Odds Ratio')
plt.ylabel('Feature')
plt.xticks(np.arange(0.1, 2.9, 0.1))
plt.axvline(x=1.0, color='black', linestyle='--');
###Output
_____no_output_____
###Markdown
As previously, the dividing line at 1.0 demarcates features associated with EMR use (in green) and those not associated with EMR use (in red). Here, rating care "excellent" is most associated with having used an EMR, followed by having attained a college degree or higher level of education. By contrast, residing in the East South Central census division is most associated with _not_ having used an EMR, followed by living in a linguistically isolated area. We want to compare characteristics of those more and less likely to use an EMR. Let's see what the probability distribution looks like.
###Code
# plot the probability of having used an EMR
plt.hist(x=accessonlinerec_probs, bins=20)
plt.xticks(np.arange(0.0, 1.1, step = 0.1))
plt.xlabel('Probability')
plt.ylabel('Count')
plt.title('Predicted Probability of Having Used an EMR');
###Output
_____no_output_____
###Markdown
This distribution has a large peak at the low end (predicted probabilities below about 0.10), in contrast to the previous model. Let's see how many predictions fall at the extremes, i.e. \<= 0.10 and \>= 0.90.
###Code
print('N >= 90% probability:', accessonlinerec_probs[accessonlinerec_probs >= 0.90].shape[0])
print('N <= 10% probability:', accessonlinerec_probs[accessonlinerec_probs <= 0.10].shape[0])
###Output
N >= 90% probability: 67
N <= 10% probability: 348
###Markdown
As the graph shows, this is very skewed toward lower probabilities. Let's try a wider range.
###Code
print('N >= 80% probability:', accessonlinerec_probs[accessonlinerec_probs >= 0.80].shape[0])
print('N <= 20% probability:', accessonlinerec_probs[accessonlinerec_probs <= 0.20].shape[0])
###Output
N >= 80% probability: 367
N <= 20% probability: 580
###Markdown
This is still skewed toward the lower end, but less so. Again, we want to look at characteristics of respondents in the high and low probability groups. As we did previously, we'll look at the 10 variables with the largest difference between predicted probability >= 0.80 and <= 0.20.
###Code
# append probabilities to the final X matrix
X_test_acconlrec_finprobs = pd.concat([X_test_acconlrec_rfe_final, pd.Series(accessonlinerec_probs)], axis = 1)
# check it
display(X_test_acconlrec_finprobs.head())
display(X_test_acconlrec_finprobs.shape)
# rename the probability column
X_test_acconlrec_finprobs.rename(columns = {0:"pred_proba"}, inplace = True)
display(X_test_acconlrec_finprobs.columns)
# Get the top 10 categories by %age for predict probability >= 80 and <= 20
# denominators for %age
N_aor_80 = accessonlinerec_probs[accessonlinerec_probs >= 0.80].shape[0]
N_aor_20 = accessonlinerec_probs[accessonlinerec_probs <= 0.20].shape[0]
# list of columns
aor_cols = list(X_test_acconlrec_finprobs.columns)
aor_cols.remove('pred_proba')
# make a df for the results
df_aor_pcts = pd.DataFrame(index = aor_cols, columns = ['pct_ge80', 'pct_le20'])
# get the %age of each variable in the prob >= 80 and prob <= 20 groups
for col in aor_cols:
df_aor_pcts.loc[col, 'pct_ge80'] = X_test_acconlrec_finprobs[col][X_test_acconlrec_finprobs['pred_proba'] >= 0.80].sum()/(0.01*N_aor_80)
df_aor_pcts.loc[col, 'pct_le20'] = X_test_acconlrec_finprobs[col][X_test_acconlrec_finprobs['pred_proba'] <= 0.20].sum()/(0.01*N_aor_20)
# get the difference
df_aor_pcts['diff'] = df_aor_pcts['pct_ge80'] - df_aor_pcts['pct_le20']
# and absolute difference for sorting
df_aor_pcts['absdiff'] = np.abs(df_aor_pcts['diff'])
df_topabs2 = df_aor_pcts.sort_values(by='absdiff', ascending=False).head(10)
df_topabs2
fig, bar4 = plt.subplots(figsize = (12,8))
y_labs2 = ['Uses tablet for HCP discussion', 'Has tablet with health apps', 'Searches web for health info',\
'Owns multiple portable e-devices', 'Education: >= college degree', 'Uses internet', \
'Visits social network sites', 'Has broadband access', 'Has regular HCP', "Doesn't use public internet"]
df_topabs2['negpos'] = df_topabs2['diff'] < 0
bar4 = plt.barh(data=df_topabs2, y=np.arange(10), width='diff', tick_label=y_labs2,\
color=df_topabs2['negpos'].map({True:'r', False:'g'}))
plt.title('Features with Largest Difference in % Prevalence between Patients with \n Probability of Using EMR >= 80% and <= 20%')
plt.xlabel('Difference in Prevalence (%)')
plt.ylabel('Feature')
plt.xticks(np.arange(0, 90, 10));
###Output
_____no_output_____
###Markdown
The 10 features with the biggest difference in prevalence between predicted probabilities of \>= 0.80 and \<= 0.20 of having used an EMR in the past 12 months are shown above. Red bars would indicate features more prevalent in patients with predicted probability \<= 0.20 of having used an EMR. None of the features fall into this category. Green bars indicate features more prevalent in patients with predicted probability \>= 0.80 of having used an EMR. Unlike those offered access, there is no gender difference. Similarly, these patients are more likely to have at least a college degree. Medically, they are again more likely to have a regular HCP. Again similar to those offered access, the rest of the variables relate to internet access and use: they are more likely to use the internet and to have broadband internet access. They are less likely to access the internet via public resources (e.g. a library). They are more likely to use the internet to access social networking sites. They tend to have multiple portable electronic devices, to look for health information on the web, monitor their health with tablet apps, and use a tablet in discussions with their HCP.
###Code
# scoring metrics for the ML models
def get_scores(y_test, y_pred):
'''
get scores for the fitted models
inputs:
y_test (series) = true labels
y_preds (series) = model predictions
returns:
prec (float) : precision score
rec (float): recall score
    acc (float) : accuracy score
'''
cm = confusion_matrix(y_test, y_pred)
prec = precision_score(y_test, y_pred, average = 'weighted')
rec = recall_score(y_test, y_pred, average = 'weighted')
acc = accuracy_score(y_test, y_pred)
print('confusion matrix:\n', cm)
print('precision: {:.5f}; recall: {:.5f}, accuracy: {:.5f}'.format(prec, rec, acc))
return prec, rec, acc
###Output
_____no_output_____ |
notebooks/network_mapping/CLX_Supervised_Asset_Classification.ipynb | ###Markdown
CLX Asset Classification (Supervised) Authors- Eli Fajardo (NVIDIA)- Görkem Batmaz (NVIDIA)- Bhargav Suryadevara (NVIDIA) Table of Contents * Introduction* Dataset* Reading in the datasets* Training and inference* References Introduction In this notebook, we will show how to predict the function of a server from Windows Event Logs using cudf, cuml and pytorch. The machines are labeled as DC, SQL, WEB, DHCP, MAIL and SAP. The dependent variable will be the type of the machine. The features are selected from Windows Event Logs, which are in a tabular format. This is a first step toward learning the behaviours of certain types of machines in data-centres by classifying them probabilistically. It could help to detect unusual behaviour in a data-centre; for example, some compromised computers might be acting as web/database servers while still carrying their original tag. This work could be expanded by using different log types or different events from the machines as features to improve accuracy. Various labels can be selected to cover different types of machines or data-centres. Library imports
###Code
from clx.analytics.asset_classification import AssetClassification
import cudf
from cuml.preprocessing import train_test_split
from cuml.preprocessing import LabelEncoder
import torch
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
import pandas as pd
###Output
_____no_output_____
###Markdown
Initialize variables: 10000 is chosen as the batch size to optimise the performance for this dataset. It can be changed depending on the data loading mechanism or the setup used. The number of epochs should also be adjusted depending on convergence for a specific dataset. label_col indicates the total number of features used plus the dependent variable. Feature names are listed below.
###Code
batch_size = 10000
label_col = '19'
epochs = 15
ac = AssetClassification()
###Output
_____no_output_____
###Markdown
Read the dataset into a GPU dataframe with `cudf.read_csv()`. The original data had many other fields; many of them were either static or mostly blank. After filtering those, there were 18 meaningful columns left. In this notebook we use a fake continuous feature to show the inclusion of continuous features too. When you are using raw data, the cell below needs to be uncommented.
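The original filtering code is not included in this notebook; as a minimal sketch (with an illustrative null-fraction threshold and a hypothetical helper name), static or mostly-blank columns could be dropped from the raw file like this:
```
import cudf

def drop_uninformative(gdf, max_null_frac=0.9):
    # keep columns that have more than one distinct value and are not mostly null
    keep = []
    for col in gdf.columns:
        null_frac = gdf[col].isnull().sum() / len(gdf)
        if gdf[col].nunique() > 1 and null_frac <= max_null_frac:
            keep.append(col)
    return gdf[keep]

# raw_gdf = cudf.read_csv("raw_features_and_labels.csv")
# win_events_gdf = drop_uninformative(raw_gdf)
```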
###Code
# win_events_gdf = cudf.read_csv("raw_features_and_labels.csv")
###Output
_____no_output_____
###Markdown
```
win_events_gdf.dtypes
eventcode                                                        int64
keywords                                                        object
privileges                                                      object
message                                                         object
sourcename                                                      object
taskcategory                                                    object
account_for_which_logon_failed_account_domain                   object
detailed_authentication_information_authentication_package      object
detailed_authentication_information_key_length                 float64
detailed_authentication_information_logon_process               object
detailed_authentication_information_package_name_ntlm_only      object
logon_type                                                      float64
network_information_workstation_name                            object
new_logon_security_id                                           object
impersonation_level                                             object
network_information_protocol                                   float64
network_information_direction                                   object
filter_information_layer_name                                   object
cont1                                                            int64
label                                                           object
dtype: object
```
Define categorical and continuous feature columns.
###Code
cat_cols = [
"eventcode",
"keywords",
"privileges",
"message",
"sourcename",
"taskcategory",
"account_for_which_logon_failed_account_domain",
"detailed_authentication_information_authentication_package",
"detailed_authentication_information_key_length",
"detailed_authentication_information_logon_process",
"detailed_authentication_information_package_name_ntlm_only",
"logon_type",
"network_information_workstation_name",
"new_logon_security_id",
"impersonation_level",
"network_information_protocol",
"network_information_direction",
"filter_information_layer_name",
"label"
]
cont_cols = [
"cont1"
]
###Output
_____no_output_____
###Markdown
The following are functions used to preprocess categorical and continuous feature columns. This can vary depending on what best fits your application and data.
###Code
def categorize_columns(cat_gdf):
for col in cat_gdf.columns:
cat_gdf[col] = cat_gdf[col].astype('str')
cat_gdf[col] = cat_gdf[col].fillna("NA")
cat_gdf[col] = LabelEncoder().fit_transform(cat_gdf[col])
cat_gdf[col] = cat_gdf[col].astype('int16')
return cat_gdf
def normalize_conts(cont_gdf):
means, stds = (cont_gdf.mean(0), cont_gdf.std(ddof=0))
cont_gdf = (cont_gdf - means) / stds
return cont_gdf
###Output
_____no_output_____
###Markdown
Preprocessing steps below are not executed in this notebook, because we release already preprocessed data.
###Code
#win_events_gdf[cat_cols] = categorize_columns(win_events_gdf[cat_cols])
#win_events_gdf[cont_cols] = normalize_conts(win_events_gdf[cont_cols])
###Output
_____no_output_____
###Markdown
Read Windows Event data already preprocessed by above steps
###Code
win_events_gdf = cudf.read_csv("win_events_features_preproc.csv")
win_events_gdf.head()
###Output
_____no_output_____
###Markdown
Split the dataset into training and test sets using cuML `train_test_split` functionColumn 19 contains the ground truth about each machine's function that the logs come from. i.e. DC, SQL, WEB, DHCP, MAIL and SAP. Hence it will be used as a label.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(win_events_gdf, "label", train_size=0.9)
X_train["label"] = Y_train
X_train.head()
Y_train.unique()
###Output
_____no_output_____
###Markdown
Print LabelsMaking sure the test set contains all labels
###Code
Y_test.unique()
###Output
_____no_output_____
###Markdown
Training Asset Classification training uses the fastai tabular model. More details can be found at https://github.com/fastai/fastai/blob/master/fastai/tabular/models.py#L6 Feature columns will be embedded so that they can be used as categorical values. The limit can be changed depending on the accuracy of the dataset. Adam is the optimizer used in the training process; it is popular because it produces good results in various tasks. In its paper, computing the first and the second moment estimates and updating the parameters are summarized as follows $$\alpha_{t}=\alpha \cdot \sqrt{1-\beta_{2}^{t}} /\left(1-\beta_{1}^{t}\right)$$ More details on Adam can be found at https://arxiv.org/pdf/1412.6980.pdf We have found that the way we partition the dataframes with a 10000 batch size gives us the optimum data loading capability. The **batch_size** argument can be adjusted for different sizes of datasets.
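The optimization itself is handled inside `train_model` (via PyTorch), but as a rough illustration of the update the formula above refers to, here is a minimal NumPy sketch of a single Adam step. The `lr`, `beta1`, `beta2` and `eps` values are simply the common defaults from the Adam paper, not settings taken from this notebook.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # biased first and second moment estimates of the gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias-corrected step size: alpha_t = lr * sqrt(1 - beta2^t) / (1 - beta1^t)
    alpha_t = lr * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    # parameter update using the moment estimates
    return param - alpha_t * m / (np.sqrt(v) + eps), m, v

# one toy step on a 3-element parameter vector
p, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
p, m, v = adam_step(p, np.array([0.1, -0.2, 0.3]), m, v, t=1)
print(p)
```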
###Code
cat_cols.remove("label")
ac.train_model(X_train, cat_cols, cont_cols, "label", batch_size, epochs, lr=0.01, wd=0.0)
###Output
/opt/conda/envs/rapids/lib/python3.7/site-packages/cudf/io/dlpack.py:74: UserWarning: WARNING: cuDF to_dlpack() produces column-major (Fortran order) output. If the output tensor needs to be row major, transpose the output of this function.
return libdlpack.to_dlpack(gdf_cols)
###Markdown
Evaluation
###Code
pred_results = ac.predict(X_test, cat_cols, cont_cols).to_array()
true_results = Y_test.to_array()
f1_score_ = f1_score(pred_results, true_results, average='micro')
print('micro F1 score: %s'%(f1_score_))
torch.cuda.empty_cache()
labels = ["DC","DHCP","MAIL","SAP","SQL","WEB"]
a = confusion_matrix(true_results, pred_results)
pd.DataFrame(a, index=labels, columns=labels)
###Output
_____no_output_____
###Markdown
CLX Asset Classification (Supervised) Authors- Eli Fajardo (NVIDIA)- Görkem Batmaz (NVIDIA)- Bhargav Suryadevara (NVIDIA) Table of Contents * Introduction* Dataset* Reading in the datasets* Training and inference* References Introduction In this notebook, we will show how to predict the function of a server with Windows Event Logs using cudf, cuml and pytorch. The machines are labeled as DC, SQL, WEB, DHCP, MAIL and SAP. The dependent variable will be the type of the machine. The features are selected from Windows Event Logs which is in a tabular format. This is a first step to learn the behaviours of certain types of machines in data-centres by classifying them probabilistically. It could help to detect unusual behaviour in a data-centre. For example, some compromised computers might be acting as web/database servers but with their original tag. This work could be expanded by using different log types or different events from the machines as features to improve accuracy. Various labels can be selected to cover different types of machines or data-centres. Library imports
###Code
from clx.analytics.asset_classification import AssetClassification
import cudf
from cuml.preprocessing import train_test_split
from cuml.preprocessing import LabelEncoder
import torch
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
import pandas as pd
from os import path
import s3fs
###Output
_____no_output_____
###Markdown
Initialize variables 10000 is chosen as the batch size to optimise the performance for this dataset. It can be changed depending on the data loading mechanism or the setup used. EPOCH should also be adjusted depending on convergence for a specific dataset. label_col indicates the total number of features used plus the dependent variable. Feature names are listed below.
###Code
batch_size = 10000
label_col = '19'
epochs = 15
ac = AssetClassification()
###Output
_____no_output_____
###Markdown
Read the dataset into a GPU dataframe with `cudf.read_csv()` The original data had many other fields. Many of them were either static or mostly blank. After filtering those, there were 18 meaningful columns left. In this notebook we use a fake continuous feature to show the inclusion of continuous features too. When you are using raw data, the cell below needs to be uncommented.
###Code
# win_events_gdf = cudf.read_csv("raw_features_and_labels.csv")
###Output
_____no_output_____
###Markdown
```
win_events_gdf.dtypes
eventcode                                                        int64
keywords                                                        object
privileges                                                      object
message                                                         object
sourcename                                                      object
taskcategory                                                    object
account_for_which_logon_failed_account_domain                   object
detailed_authentication_information_authentication_package      object
detailed_authentication_information_key_length                 float64
detailed_authentication_information_logon_process               object
detailed_authentication_information_package_name_ntlm_only      object
logon_type                                                      float64
network_information_workstation_name                            object
new_logon_security_id                                            object
impersonation_level                                              object
network_information_protocol                                    float64
network_information_direction                                   object
filter_information_layer_name                                   object
cont1                                                             int64
label                                                            object
dtype: object
``` Define categorical and continuous feature columns.
###Code
cat_cols = [
"eventcode",
"keywords",
"privileges",
"message",
"sourcename",
"taskcategory",
"account_for_which_logon_failed_account_domain",
"detailed_authentication_information_authentication_package",
"detailed_authentication_information_key_length",
"detailed_authentication_information_logon_process",
"detailed_authentication_information_package_name_ntlm_only",
"logon_type",
"network_information_workstation_name",
"new_logon_security_id",
"impersonation_level",
"network_information_protocol",
"network_information_direction",
"filter_information_layer_name",
"label"
]
cont_cols = [
"cont1"
]
###Output
_____no_output_____
###Markdown
The following are functions used to preprocess categorical and continuous feature columns. This can vary depending on what best fits your application and data.
###Code
def categorize_columns(cat_gdf):
for col in cat_gdf.columns:
cat_gdf[col] = cat_gdf[col].astype('str')
cat_gdf[col] = cat_gdf[col].fillna("NA")
cat_gdf[col] = LabelEncoder().fit_transform(cat_gdf[col])
cat_gdf[col] = cat_gdf[col].astype('int16')
return cat_gdf
def normalize_conts(cont_gdf):
means, stds = (cont_gdf.mean(0), cont_gdf.std(ddof=0))
cont_gdf = (cont_gdf - means) / stds
return cont_gdf
###Output
_____no_output_____
###Markdown
Preprocessing steps below are not executed in this notebook, because we release already preprocessed data.
###Code
#win_events_gdf[cat_cols] = categorize_columns(win_events_gdf[cat_cols])
#win_events_gdf[cont_cols] = normalize_conts(win_events_gdf[cont_cols])
###Output
_____no_output_____
###Markdown
Read Windows Event data already preprocessed by above steps
###Code
S3_BASE_PATH = "rapidsai-data/cyber/clx"
WINEVT_PREPROC_CSV = "win_events_features_preproc.csv"
# Download Zeek conn log
if not path.exists(WINEVT_PREPROC_CSV):
fs = s3fs.S3FileSystem(anon=True)
fs.get(S3_BASE_PATH + "/" + WINEVT_PREPROC_CSV, WINEVT_PREPROC_CSV)
win_events_gdf = cudf.read_csv("win_events_features_preproc.csv")
win_events_gdf.head()
###Output
_____no_output_____
###Markdown
Split the dataset into training and test sets using cuML `train_test_split` functionColumn 19 contains the ground truth about each machine's function that the logs come from. i.e. DC, SQL, WEB, DHCP, MAIL and SAP. Hence it will be used as a label.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(win_events_gdf, "label", train_size=0.9)
X_train["label"] = Y_train
X_train.head()
Y_train.unique()
###Output
_____no_output_____
###Markdown
Print LabelsMaking sure the test set contains all labels
###Code
Y_test.unique()
###Output
_____no_output_____
###Markdown
Training Asset Classification training uses the fastai tabular model. More details can be found at https://github.com/fastai/fastai/blob/master/fastai/tabular/models.py#L6 Feature columns will be embedded so that they can be used as categorical values. The limit can be changed depending on the accuracy of the dataset. Adam is the optimizer used in the training process; it is popular because it produces good results in various tasks. In its paper, computing the first and the second moment estimates and updating the parameters are summarized as follows $$\alpha_{t}=\alpha \cdot \sqrt{1-\beta_{2}^{t}} /\left(1-\beta_{1}^{t}\right)$$ More details on Adam can be found at https://arxiv.org/pdf/1412.6980.pdf We have found that the way we partition the dataframes with a 10000 batch size gives us the optimum data loading capability. The **batch_size** argument can be adjusted for different sizes of datasets.
###Code
cat_cols.remove("label")
ac.train_model(X_train, cat_cols, cont_cols, "label", batch_size, epochs, lr=0.01, wd=0.0)
###Output
/opt/conda/envs/rapids/lib/python3.7/site-packages/cudf/io/dlpack.py:74: UserWarning: WARNING: cuDF to_dlpack() produces column-major (Fortran order) output. If the output tensor needs to be row major, transpose the output of this function.
return libdlpack.to_dlpack(gdf_cols)
###Markdown
Evaluation
###Code
pred_results = ac.predict(X_test, cat_cols, cont_cols).to_array()
true_results = Y_test.to_array()
f1_score_ = f1_score(pred_results, true_results, average='micro')
print('micro F1 score: %s'%(f1_score_))
torch.cuda.empty_cache()
labels = ["DC","DHCP","MAIL","SAP","SQL","WEB"]
a = confusion_matrix(true_results, pred_results)
pd.DataFrame(a, index=labels, columns=labels)
###Output
_____no_output_____
###Markdown
CLX Asset Classification (Supervised) Authors- Eli Fajardo (NVIDIA)- Görkem Batmaz (NVIDIA)- Bhargav Suryadevara (NVIDIA) Table of Contents * Introduction* Dataset* Reading in the datasets* Training and inference* References Introduction In this notebook, we will show how to predict the function of a server with Windows Event Logs using cudf, cuml and pytorch. The machines are labeled as DC, SQL, WEB, DHCP, MAIL and SAP. The dependent variable will be the type of the machine. The features are selected from Windows Event Logs which is in a tabular format. This is a first step to learn the behaviours of certain types of machines in data-centres by classifying them probabilistically. It could help to detect unusual behaviour in a data-centre. For example, some compromised computers might be acting as web/database servers but with their original tag. This work could be expanded by using different log types or different events from the machines as features to improve accuracy. Various labels can be selected to cover different types of machines or data-centres. Library imports
###Code
from clx.analytics.asset_classification import AssetClassification
import cudf
from cuml.preprocessing import train_test_split
from cuml.preprocessing import LabelEncoder
import torch
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
import pandas as pd
from os import path
import s3fs
###Output
_____no_output_____
###Markdown
Initialize variables 10000 is chosen as the batch size to optimise the performance for this dataset. It can be changed depending on the data loading mechanism or the setup used. EPOCH should also be adjusted depending on convergence for a specific dataset. label_col indicates the total number of features used plus the dependent variable. Feature names are listed below.
###Code
batch_size = 3000
label_col = '19'
epochs = 15
ac = AssetClassification()
###Output
_____no_output_____
###Markdown
Read the dataset into a GPU dataframe with `cudf.read_csv()` The original data had many other fields. Many of them were either static or mostly blank. After filtering those, there were 18 meaningful columns left. In this notebook we use a fake continuous feature to show the inclusion of continuous features too. When you are using raw data, the cell below needs to be uncommented.
###Code
# win_events_gdf = cudf.read_csv("raw_features_and_labels.csv")
###Output
_____no_output_____
###Markdown
```
win_events_gdf.dtypes
eventcode                                                        int64
keywords                                                        object
privileges                                                      object
message                                                         object
sourcename                                                      object
taskcategory                                                    object
account_for_which_logon_failed_account_domain                   object
detailed_authentication_information_authentication_package      object
detailed_authentication_information_key_length                 float64
detailed_authentication_information_logon_process               object
detailed_authentication_information_package_name_ntlm_only      object
logon_type                                                      float64
network_information_workstation_name                            object
new_logon_security_id                                            object
impersonation_level                                              object
network_information_protocol                                    float64
network_information_direction                                   object
filter_information_layer_name                                   object
cont1                                                             int64
label                                                            object
dtype: object
``` Define categorical and continuous feature columns.
###Code
cat_cols = [
"eventcode",
"keywords",
"privileges",
"message",
"sourcename",
"taskcategory",
"account_for_which_logon_failed_account_domain",
"detailed_authentication_information_authentication_package",
"detailed_authentication_information_key_length",
"detailed_authentication_information_logon_process",
"detailed_authentication_information_package_name_ntlm_only",
"logon_type",
"network_information_workstation_name",
"new_logon_security_id",
"impersonation_level",
"network_information_protocol",
"network_information_direction",
"filter_information_layer_name",
"label"
]
cont_cols = [
"cont1"
]
###Output
_____no_output_____
###Markdown
The following are functions used to preprocess categorical and continuous feature columns. This can vary depending on what best fits your application and data.
###Code
def categorize_columns(cat_gdf):
for col in cat_gdf.columns:
cat_gdf[col] = cat_gdf[col].astype('str')
cat_gdf[col] = cat_gdf[col].fillna("NA")
cat_gdf[col] = LabelEncoder().fit_transform(cat_gdf[col])
cat_gdf[col] = cat_gdf[col].astype('int16')
return cat_gdf
def normalize_conts(cont_gdf):
means, stds = (cont_gdf.mean(0), cont_gdf.std(ddof=0))
cont_gdf = (cont_gdf - means) / stds
return cont_gdf
###Output
_____no_output_____
###Markdown
Preprocessing steps below are not executed in this notebook, because we release already preprocessed data.
###Code
#win_events_gdf[cat_cols] = categorize_columns(win_events_gdf[cat_cols])
#win_events_gdf[cont_cols] = normalize_conts(win_events_gdf[cont_cols])
###Output
_____no_output_____
###Markdown
Read Windows Event data already preprocessed by above steps
###Code
S3_BASE_PATH = "rapidsai-data/cyber/clx"
WINEVT_PREPROC_CSV = "win_events_features_preproc.csv"
# Download Zeek conn log
if not path.exists(WINEVT_PREPROC_CSV):
fs = s3fs.S3FileSystem(anon=True)
fs.get(S3_BASE_PATH + "/" + WINEVT_PREPROC_CSV, WINEVT_PREPROC_CSV)
win_events_gdf = cudf.read_csv("win_events_features_preproc.csv")
win_events_gdf.head()
###Output
_____no_output_____
###Markdown
Split the dataset into training and test sets using cuML `train_test_split` functionColumn 19 contains the ground truth about each machine's function that the logs come from. i.e. DC, SQL, WEB, DHCP, MAIL and SAP. Hence it will be used as a label.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(win_events_gdf, "label", train_size=0.9)
X_train["label"] = Y_train
X_train.head()
Y_train.unique()
###Output
_____no_output_____
###Markdown
Print LabelsMaking sure the test set contains all labels
###Code
Y_test.unique()
###Output
_____no_output_____
###Markdown
Training Asset Classification training uses the fastai tabular model. More details can be found at https://github.com/fastai/fastai/blob/master/fastai/tabular/models.py#L6 Feature columns will be embedded so that they can be used as categorical values. The limit can be changed depending on the accuracy of the dataset. Adam is the optimizer used in the training process; it is popular because it produces good results in various tasks. In its paper, computing the first and the second moment estimates and updating the parameters are summarized as follows $$\alpha_{t}=\alpha \cdot \sqrt{1-\beta_{2}^{t}} /\left(1-\beta_{1}^{t}\right)$$ More details on Adam can be found at https://arxiv.org/pdf/1412.6980.pdf We have found that the way we partition the dataframes with a 10000 batch size gives us the optimum data loading capability. The **batch_size** argument can be adjusted for different sizes of datasets.
###Code
cat_cols.remove("label")
ac.train_model(X_train, cat_cols, cont_cols, "label", batch_size, epochs, lr=0.01, wd=0.0)
###Output
training loss: 1.0111992062101818
valid loss 0.583 and accuracy 0.814
training loss: 0.4620857691055216
valid loss 0.391 and accuracy 0.876
training loss: 0.33254117653094556
valid loss 0.312 and accuracy 0.902
training loss: 0.28072822153262583
valid loss 0.279 and accuracy 0.910
training loss: 0.2554837790583415
valid loss 0.263 and accuracy 0.914
training loss: 0.2408174945388092
valid loss 0.250 and accuracy 0.915
training loss: 0.23049960875422962
valid loss 0.244 and accuracy 0.916
training loss: 0.2221764272199862
valid loss 0.238 and accuracy 0.918
training loss: 0.2154606228360371
valid loss 0.234 and accuracy 0.919
training loss: 0.210641215422796
valid loss 0.233 and accuracy 0.921
training loss: 0.2069480326228095
valid loss 0.234 and accuracy 0.922
training loss: 0.20380194447335698
valid loss 0.238 and accuracy 0.923
training loss: 0.20021527777256393
valid loss 0.236 and accuracy 0.923
training loss: 0.19645206474967966
valid loss 0.230 and accuracy 0.923
training loss: 0.1930822879757292
valid loss 0.231 and accuracy 0.923
###Markdown
Evaluation
###Code
pred_results = ac.predict(X_test, cat_cols, cont_cols).to_arrow().to_pylist()
true_results = Y_test.to_arrow().to_pylist()
f1_score_ = f1_score(pred_results, true_results, average='micro')
print('micro F1 score: %s'%(f1_score_))
torch.cuda.empty_cache()
labels = ["DC","DHCP","MAIL","SAP","SQL","WEB"]
a = confusion_matrix(true_results, pred_results)
pd.DataFrame(a, index=labels, columns=labels)
###Output
_____no_output_____ |
Energy and Momentum/MomentumBarGraph2D.ipynb | ###Markdown
Momentum Bar Charts: PH211 Why?Momentum bar charts are analogous to energy bar charts as a tool for tracking terms in our conservation laws. In some ways momentum bar charts are a little less complex since all of the bars represent the same calculation $\bar{p} = m\bar{v}$, although there is the issue of tracking components in whatever coordinate system we are using. This notebook is a modification of the energy bar chart notebook. LibrariesThere are a number of different widget libraries. In the end ipywidgets was the most adaptable to my purposes. I suspect this would change if I were seeking to build this tool as a webpage. References that I used in sorting this all out are given in my [InteractiveStudy notebook](https://github.com/smithrockmaker/ENGR212/blob/main/InteractiveStudy.ipynb). At the moment (2/21) this is miserably documented but the references contained therein are much better if they are still live.
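Before setting up the graph, here is a tiny stand-alone sketch (the numbers are made up and are not tied to the widgets below) of the $\bar{p} = m\bar{v}$ calculation, tracked component by component, that each bar represents for two objects:

```python
import numpy as np

# masses (kg) and velocity components (m/s) for two objects -- made-up values
m1, m2 = 0.5, 1.2
v1 = np.array([2.0, -1.0])   # (vx, vy) of object 1
v2 = np.array([-0.5, 3.0])   # (vx, vy) of object 2

p1 = m1 * v1                 # momentum components of object 1
p2 = m2 * v2                 # momentum components of object 2
p_net = p1 + p2              # net momentum, component by component

print("p1 =", p1, "p2 =", p2, "net =", p_net)
```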
###Code
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
from ipywidgets import interact, interactive, fixed, interact_manual, Layout
###Output
_____no_output_____
###Markdown
Setting Up the Bar GraphThis is where the decisions about how many bars and how they will be labelled are made. In the end I opted to create an enormous text str to label the bars, which is the barLabels variable. The locate and locateShift lists articulate x values (locations) for each of the bars. This involves skipping values to leave space for the vertical dividers that help this all make sense to me conceptually.
###Code
# set up locations for bars and other objects
# start with how objects and assume possibility of 2D
numObjects = 2
xBars0 = numObjects
yBars0 = numObjects
xBarsf = numObjects
yBarsf = numObjects
# formatting - 4 dividers, x0; y0; initial/final; xf; yf; netx/nety
numDividers = 5
# total number of bars that are interactive. Gaps and other spacing issues handled at end of cell
# last 2 are the netx and net y bars
Nbase = xBars0 + yBars0 + xBarsf + yBarsf + numDividers + 2
locate = np.arange(Nbase)
# shifted locations for labels
locateShift = locate - 0.4
# the x locations for the groups
# Having them in separate lists allows me to choose different colors for each section
# of the bar graph more easily (without creating a color list that I need to edit)
x0Loc = locate[0:xBars0]
y0Loc = locate[xBars0+1:xBars0 + yBars0 +1]
xfLoc = locate[xBars0 + yBars0 + 2:xBars0 + yBars0 + xBarsf + 2]
yfLoc = locate[xBars0 + yBars0 + xBarsf + 3:xBars0 + yBars0 + xBarsf + yBarsf + 3]
netLoc = locate[Nbase - 2:Nbase]
# check alignments -- I had a lot of trouble making sure that everything lined up
# appropriately. These are diagnostic print statements to be sure I'm visualizing
# the bar and divider locations correctly.
print("x0 Bars:",x0Loc)
print("y0 Bars:",y0Loc)
print("xf Bars:",xfLoc)
print("yf Bars:",yfLoc)
print("Net Bars:",netLoc)
print("locate:",locate)
# Structure bar width - this is a proportional value apparently
# it scales with plot figure size.
width = 0.4
# bar labels
labelx10 = 'p10x' # initial
labelx20 = 'p20x' # initial
labely10 = 'p10y' # initial
labely20 = 'p20y' # initial
labelx1f = 'p1fx' # initial
labelx2f = 'p2fx' # initial
labely1f = 'p1fy' # initial
labely2f = 'p2fy' # initial
labelnetX = 'netX' # final
labelnetY = 'netY' # final
vertBar = ''
lSpace = ' '
lScale = 7
# assemble labels for each section. Spacing is most easily adjusted using the lScale variabkel above
#initialLabels = labelKEi + (lScale)*lSpace + labelPEgi + (lScale)*lSpace
#transLabels = labelPM1 + lScale*lSpace + labelPM2 + (lScale)*lSpace + labelPM3 + (lScale +1)*lSpace
#finalLabels = labelKEf + lScale*lSpace + labelPEgf + (lScale)*lSpace
#netLabels = labelNet
#vertLabel = vertBar
# put it all together for labels
#barLabels = initialLabels + lScale*lSpace + transLabels + lScale*lSpace + finalLabels + lScale*lSpace + netLabels + lScale*lSpace
# check the label string if needed.
#print("barlabels:", barLabels)
###Output
x0 Bars: [0 1]
y0 Bars: [3 4]
xf Bars: [6 7]
yf Bars: [ 9 10]
Net Bars: [13 14]
locate: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
###Markdown
Energy Bar Graph FunctionThis may not be the only or best way to do this but eventually it seemed easiest given my experience or lack of it. I tested everything using fixed values for the bars (you can see this in an early version of this notebook). Because I decided I wanted to update the values of each bar on the plot I also needed to generate a dynamic text string that depended on the bar values passed to the plotting function. barValues represents this aspect of the plot. The plot scales vertically relatively smoothly. It will **NOT** scale horizontally since the text strings probably won't follow the bars properly. I can imagine how to sort that out but it's not important enough to take that time at this point. A very basic intro to bar plots is linked below. [pyplot.bar documentation](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.bar.html)
###Code
def energyBar(KE0, KEf, PEg0, PEgf, WF1, WF2, WF3):  # KE0 matches the names used in the body and in the widget mapping below
# create array of bar heights (energy)
initialHeights = [KE0, PEg0]
transHeights = [WF1, WF2, WF3]
finalHeights = [KEf, PEgf]
netEnergy = KE0 + PEg0 +WF1 + WF2 + WF3 - (KEf + PEgf)
netHeights = [netEnergy]
# truncate current bar values and create value array to display current value under each bar
# for creating text string for labels
sLabel = ' '
sScale = 7
# initial values
KE0Val = str(np.trunc(KE0))
PEg0Val = str(np.trunc(PEg0))
initialValues =KE0Val + (sScale)*sLabel + PEg0Val + (sScale+1)*sLabel
# add/remove values
WF1Val = str(np.trunc(WF1))
WF2Val = str(np.trunc(WF2))
WF3Val = str(np.trunc(WF3))
# WF4Val = str(np.trunc(WF4))
transValues = WF1Val + sScale*sLabel + WF2Val + sScale*sLabel + WF3Val + (sScale+2)*sLabel
# final values
KEfVal = str(np.trunc(KEf))
PEgfVal = str(np.trunc(PEgf))
finalValues =KEfVal + (sScale)*sLabel + PEg0Val + (sScale+1)*sLabel
# net value
netValue = str(np.trunc(netEnergy))
# current value string
barValues = initialValues + (sScale-1)*sLabel + transValues + (sScale-1)*sLabel + finalValues + (sScale-1)*sLabel + netValue
# determine plot max/min
initMax = np.max(initialHeights)
transMax = np.max(transHeights)
finalMax = np.max(finalHeights)
# include 10 as a lower limit on the top of plot
collectMax = [initMax,transMax,finalMax, 10]
globalMax = 1.1*np.max(collectMax)
initMin = np.min(initialHeights)
transMin= np.min(transHeights)
finalMin = np.min(finalHeights)
collectMin = [initMin,transMin,finalMin, -5.]
globalMin = 1.1*np.min(collectMin)
if np.abs(globalMin) < globalMax:
yLim = globalMax
else:
yLim = np.abs(globalMin)
# create the plot
fig1, ax1 = plt.subplots()
# bar graph sections
ax1.bar(initialLoc,
initialHeights,
width,
color = 'red',
label= 'initial energy',
alpha = 0.4)
ax1.bar(transLoc,
transHeights,
width,
color = 'purple',
label= 'added/removed',
alpha = 0.4)
ax1.bar(finalLoc,
finalHeights,
width,
color = 'blue',
label= 'final energy',
alpha = 0.4)
ax1.bar(netLoc,
netHeights,
width,
color = 'green',
label= 'net energy',
alpha = 0.4)
# dividing lines
ax1.vlines(vlineLoc, -.95*yLim, .95*yLim, linestyles= 'dashed', color = 'navy')
ax1.vlines(vline2Loc, -.95*yLim, .95*yLim, linestyles= '-', color = 'red')
# limits of plot
plt.xlim(-1, Nbase)
plt.ylim(-yLim, yLim)
# turn on plot grid
ax1.grid()
# labeling stuff
#ax1.tick_params(axis="x",direction="in", pad=-200)
#plt.xticks(locateShift, barLabels, fontsize = 12)
plt.text(-.5, -.1*yLim, barLabels)
plt.text(-.5, -.2*yLim, barValues)
#ax1.tick_params(axis="x",direction="in", pad=-170)
#plt.xticks(locate, barLabels, fontsize = 12)
# axis labels
# currently forcing plt.legend to put legend top right for consistency
plt.xlabel('energy type', fontsize = 20)
plt.ylabel('energy', fontsize = 20)
plt.title('Energy Bar Chart', fontsize = 20)
plt.legend(loc = 1)
# Set the size of my plot for better visibility
fig1.set_size_inches(12, 6)
#fig.savefig("myplot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Setting up widgets and interactivityOnce the active function is defined then we define the interactive widgets which are mostly sliders for visual connection to the bar graph. In hindsight I might have done well to make the sliders vertical so they move in the same direction as the bars but hey .... got to save something for a rainy day.The cap variables are strings for labeling the different sections of the slider array. Hbox and VBox are used to lay out the panel. Last two lines pull the trigger and set up the interactivity.
###Code
# Set up widgetsm - captions
cap1 = widgets.Label(value='.....Initial Energy')
cap2 = widgets.Label(value=' Add/Removed')
cap3 = widgets.Label(value='.....Final Energy')
cap4 = widgets.Label(value='Object 1:')
cap5 = widgets.Label(value='Force 1:')
cap6 = widgets.Label(value='Force 2:')
cap7 = widgets.Label(value='Force 3:')
cap8 = widgets.Label(value='Force 4:')
cap9 = widgets.Label(value='Net Energy:')
# kinetic energy sliders
KE0=widgets.FloatText(min=0, max=100, value=.1, description = 'Initial KE',continuous_update=False,
layout=Layout(width='60%'))
KEf=widgets.FloatText(min=0, max=100, value=.1, description = 'Final KE',continuous_update=False,
layout=Layout(width='60%'))
# gravitational energy sliders
PEg0=widgets.FloatText(min=-100, max=100, value=.1, description = 'Initial PE_g',continuous_update=False,
layout=Layout(width='60%'))
PEgf=widgets.FloatText(min=-100, max=100, value=.1, description = 'Final PE_g',continuous_update=False,
layout=Layout(width='60%'))
# nonconservative force - energy sliders
WF1=widgets.FloatText(min=-100, max=100, value=.1, description = 'Work F1',continuous_update=False,
layout=Layout(width='60%'))
WF2=widgets.FloatText(min=-100, max=100, value=.1, description = 'Work F2',continuous_update=False,
layout=Layout(width='60%'))
WF3=widgets.FloatText(min=-100, max=100, value=.1, description = 'Work F3',continuous_update=False,
layout=Layout(width='60%'))
# An HBox lays out its children horizontally, VBox lays them out vertically
col1 = widgets.VBox([cap1, cap4, KE0, PEg0])
col2 = widgets.VBox([cap2, cap5, WF1, cap6, WF2, cap7, WF3])
col3 = widgets.VBox([cap3, cap4, KEf, PEgf])
panel = widgets.HBox([col1, col2, col3])
out = widgets.interactive_output(energyBar, {'KE0': KE0, 'KEf': KEf,
'PEg0': PEg0, 'PEgf': PEgf,
'WF1': WF1,'WF2': WF2,
'WF3': WF3})
display(out, panel)
###Output
_____no_output_____ |
example_notebooks/hyperparameter_example.ipynb | ###Markdown
[](https://colab.research.google.com/github/ourownstory/neural_prophet/blob/master/example_notebooks/autoregression_yosemite_temps.ipynb) Hyperparameter optimization with Ray TuneWe introduce the module for hyperparameter optimization with Ray Tune. It supports automatic tuning, with hyperparameter sets predefined by us, as well as manual tuning with a user-provided configuration of the parameters. Firstly, we will show how it works with the NP model in automated mode.
###Code
# install NeuralProphet from our repository
!pip install git+https://github.com/adasegroup/neural_prophet.git # may take a while
!pip install tensorboardX
import pandas as pd
import numpy as np
from neuralprophet import NeuralProphet
from neuralprophet.hyperparameter_tuner import tune_hyperparameters
if 'google.colab' in str(get_ipython()):
data_location = "https://raw.githubusercontent.com/adasegroup/neural_prophet/master/"
else:
data_location = "../"
df = pd.read_csv(data_location + "example_data/yosemite_temps.csv")
df.head(3)
freq = '5min'
best_params, results_df = tune_hyperparameters('NP',
df,
freq)
###Output
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[36m(pid=18489)[0m INFO - (NP.config.__post_init__) - Trend reg lambda ignored due to no changepoints.
[2m[36m(pid=18489)[0m INFO - (NP.config.__post_init__) - Note: Fourier-based seasonality regularization is experimental.
[2m[36m(pid=18489)[0m INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed.
[2m[36m(pid=18489)[0m INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 64
[2m[36m(pid=18492)[0m INFO - (NP.config.__post_init__) - Note: Fourier-based seasonality regularization is experimental.
[2m[36m(pid=18492)[0m INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed.
0%| | 0/100 [00:00<?, ?it/s]GPU available: False, used: False
[2m[36m(pid=18489)[0m TPU available: False, using: 0 TPU cores
[2m[36m(pid=18489)[0m GPU available: False, used: False
[2m[36m(pid=18489)[0m TPU available: False, using: 0 TPU cores
[2m[36m(pid=18492)[0m INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 64
[2m[36m(pid=18486)[0m INFO - (NP.config.__post_init__) - Trend reg lambda ignored due to no changepoints.
[2m[36m(pid=18486)[0m INFO - (NP.config.__post_init__) - Note: Fourier-based seasonality regularization is experimental.
[2m[36m(pid=18489)[0m
[2m[36m(pid=18489)[0m | Name | Type | Params
[2m[36m(pid=18489)[0m ------------------------------------------------
[2m[36m(pid=18489)[0m 0 | season_params | ParameterDict | 6
[2m[36m(pid=18489)[0m 1 | ar_net | ModuleList | 18.0 K
[2m[36m(pid=18489)[0m 2 | loss_func | MSELoss | 0
[2m[36m(pid=18489)[0m ------------------------------------------------
[2m[36m(pid=18489)[0m 18.1 K Trainable params
[2m[36m(pid=18489)[0m 0 Non-trainable params
[2m[36m(pid=18489)[0m 18.1 K Total params
[2m[36m(pid=18489)[0m 0.072 Total estimated model params size (MB)
[2m[36m(pid=18489)[0m WARNING - (py.warnings._showwarnmsg) - /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
[2m[36m(pid=18489)[0m warnings.warn(*args, **kwargs)
[2m[36m(pid=18489)[0m
[2m[36m(pid=18489)[0m WARNING - (py.warnings._showwarnmsg) - /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
[2m[36m(pid=18489)[0m warnings.warn(*args, **kwargs)
[2m[36m(pid=18489)[0m
[2m[36m(pid=18486)[0m INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed.
###Markdown
This function by default outputs the dictionary of the best hyperparameters chosen. It will additionally output the dataframe with the detailed results of each trial if return_results is set to True.
###Code
results_df[['config.growth', 'config.n_changepoints', 'config.changepoints_range',
'config.trend_reg', 'config.yearly_seasonality',
'config.weekly_seasonality', 'config.daily_seasonality',
'config.seasonality_mode', 'config.seasonality_reg', 'config.n_lags',
'config.d_hidden', 'config.num_hidden_layers', 'config.ar_sparsity',
'config.learning_rate', 'config.loss_func', 'config.normalize']]
best_params
###Output
_____no_output_____
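The dictionary shown above can be plugged back into the model. A hypothetical sketch (added here for illustration, not part of the original notebook), assuming its keys map onto `NeuralProphet` constructor arguments:

```python
from neuralprophet import NeuralProphet

# Sketch only: assumes every key in best_params is a valid NeuralProphet argument.
m = NeuralProphet(**best_params)
metrics = m.fit(df, freq=freq)   # refit on the full dataset with the tuned settings
forecast = m.predict(df)
```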
###Markdown
This dictionary can further be used in the initialization of a NeuralProphet model. This function also has additional parameters:- **num_epochs**: Max possible number of epochs to train each model.- **num_samples**: Number of samples from the hyperparameter spaces to check.- **resources_per_trial**: Resources per trial setting for ray.tune.run, {'cpu': 1, 'gpu': 2} for example Manual modeIn case of manual mode, a user must provide a config dictionary with hyperparameter spaces compatible with the Ray Tune API. We provide a minimal example below; for more information on Search Spaces visit this link https://docs.ray.io/en/master/tune/api_docs/search_space.html?highlight=tune.choice
###Code
from ray import tune
config = {'n_lags': tune.grid_search([10, 20, 30]),
'learning_rate': tune.loguniform(1e-4, 1e-1),
'num_hidden_layers': tune.choice([2, 8, 16])}
freq = '5min'
best_params, results_df = tune_hyperparameters('NP',
df,
freq,
mode = 'manual',
config = config)
results_df
best_params
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/ourownstory/neural_prophet/blob/master/example_notebooks/autoregression_yosemite_temps.ipynb) Hyperparameter optimization with Ray TuneWe introduce the module for hyperparameter optimization with Ray Tune. It supports automatic tuning, with hyperparameter sets predefined by us, as well as manual tuning with a user-provided configuration of the parameters. Firstly, we will show how it works with the NP model in automated mode.
###Code
import pandas as pd
import numpy as np
from neuralprophet import NeuralProphet
from neuralprophet.hyperparameter_tuner import tune_hyperparameters
if 'google.colab' in str(get_ipython()):
!pip install git+https://github.com/ourownstory/neural_prophet.git # may take a while
#!pip install neuralprophet # much faster, but may not have the latest upgrades/bugfixes
data_location = "https://raw.githubusercontent.com/ourownstory/neural_prophet/master/"
else:
data_location = "../"
df = pd.read_csv(data_location + "example_data/yosemite_temps.csv")
df.head(3)
freq = '5min'
best_params, results_df = tune_hyperparameters('NP',
df,
freq)
###Output
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[33m(raylet)[0m /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
[2m[33m(raylet)[0m "update your install command.", FutureWarning)
[2m[36m(pid=18489)[0m INFO - (NP.config.__post_init__) - Trend reg lambda ignored due to no changepoints.
[2m[36m(pid=18489)[0m INFO - (NP.config.__post_init__) - Note: Fourier-based seasonality regularization is experimental.
[2m[36m(pid=18489)[0m INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed.
[2m[36m(pid=18489)[0m INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 64
[2m[36m(pid=18492)[0m INFO - (NP.config.__post_init__) - Note: Fourier-based seasonality regularization is experimental.
[2m[36m(pid=18492)[0m INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed.
0%| | 0/100 [00:00<?, ?it/s]GPU available: False, used: False
[2m[36m(pid=18489)[0m TPU available: False, using: 0 TPU cores
[2m[36m(pid=18489)[0m GPU available: False, used: False
[2m[36m(pid=18489)[0m TPU available: False, using: 0 TPU cores
[2m[36m(pid=18492)[0m INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 64
[2m[36m(pid=18486)[0m INFO - (NP.config.__post_init__) - Trend reg lambda ignored due to no changepoints.
[2m[36m(pid=18486)[0m INFO - (NP.config.__post_init__) - Note: Fourier-based seasonality regularization is experimental.
[2m[36m(pid=18489)[0m
[2m[36m(pid=18489)[0m | Name | Type | Params
[2m[36m(pid=18489)[0m ------------------------------------------------
[2m[36m(pid=18489)[0m 0 | season_params | ParameterDict | 6
[2m[36m(pid=18489)[0m 1 | ar_net | ModuleList | 18.0 K
[2m[36m(pid=18489)[0m 2 | loss_func | MSELoss | 0
[2m[36m(pid=18489)[0m ------------------------------------------------
[2m[36m(pid=18489)[0m 18.1 K Trainable params
[2m[36m(pid=18489)[0m 0 Non-trainable params
[2m[36m(pid=18489)[0m 18.1 K Total params
[2m[36m(pid=18489)[0m 0.072 Total estimated model params size (MB)
[2m[36m(pid=18489)[0m WARNING - (py.warnings._showwarnmsg) - /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
[2m[36m(pid=18489)[0m warnings.warn(*args, **kwargs)
[2m[36m(pid=18489)[0m
[2m[36m(pid=18489)[0m WARNING - (py.warnings._showwarnmsg) - /Users/polina/.conda/envs/neural_prophet/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
[2m[36m(pid=18489)[0m warnings.warn(*args, **kwargs)
[2m[36m(pid=18489)[0m
[2m[36m(pid=18486)[0m INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed.
###Markdown
This function by default outputs the dictionary of the best hyperparameters chosen. It will additionally output the dataframe with the detailed results of each trial if return_results is set to True.
###Code
results_df[['config.growth', 'config.n_changepoints', 'config.changepoints_range',
'config.trend_reg', 'config.yearly_seasonality',
'config.weekly_seasonality', 'config.daily_seasonality',
'config.seasonality_mode', 'config.seasonality_reg', 'config.n_lags',
'config.d_hidden', 'config.num_hidden_layers', 'config.ar_sparsity',
'config.learning_rate', 'config.loss_func', 'config.normalize']]
best_params
###Output
_____no_output_____
###Markdown
This dictionary can further be used in the initialization of a NeuralProphet model. This function also has additional parameters:- **num_epochs**: Max possible number of epochs to train each model.- **num_samples**: Number of samples from the hyperparameter spaces to check.- **resources_per_trial**: Resources per trial setting for ray.tune.run, {'cpu': 1, 'gpu': 2} for example Manual modeIn case of manual mode, a user must provide a config dictionary with hyperparameter spaces compatible with the Ray Tune API. We provide a minimal example below; for more information on Search Spaces visit this link https://docs.ray.io/en/master/tune/api_docs/search_space.html?highlight=tune.choice
###Code
from ray import tune
config = {'n_lags': tune.grid_search([10, 20, 30]),
'learning_rate': tune.loguniform(1e-4, 1e-1),
'num_hidden_layers': tune.choice([2, 8, 16])}
freq = '5min'
best_params, results_df = tune_hyperparameters('NP',
df,
freq,
mode = 'manual',
config = config)
results_df
best_params
###Output
_____no_output_____ |
docs/notebooks/06_component_circuit_mask.ipynb | ###Markdown
Component -> Circuit -> Maskgdsfactory easily enables you to go from a Component, to a higher level Component (circuit), or an even higher level Component (Mask)For a component it's important that you spend some time early to parametrize it correctly. Don't be afraid to spend some time using pen and paper and choosing easy to understand names.Let's for example define a ring resonator. A ring resonator is already a circuit made of waveguides, bends and couplers. ComponentYou can define any new Component as a function that returns a component
###Code
from typing import Optional
import gdsfactory as gf
from gdsfactory.component import Component
from gdsfactory.components.bend_euler import bend_euler
from gdsfactory.components.coupler90 import coupler90 as coupler90function
from gdsfactory.components.coupler_straight import (
coupler_straight as coupler_straight_function,
)
from gdsfactory.cross_section import strip
from gdsfactory.snap import assert_on_2nm_grid
from gdsfactory.types import ComponentFactory, CrossSectionFactory
@gf.cell
def coupler_ring(
gap: float = 0.2,
radius: float = 5.0,
length_x: float = 4.0,
coupler90: ComponentFactory = coupler90function,
bend: Optional[ComponentFactory] = None,
coupler_straight: ComponentFactory = coupler_straight_function,
cross_section: CrossSectionFactory = strip,
**kwargs
) -> Component:
r"""Coupler for ring.
Args:
gap: spacing between parallel coupled straight waveguides.
radius: of the bends.
length_x: length of the parallel coupled straight waveguides.
coupler90: straight coupled to a 90deg bend.
straight: library for straight waveguides.
bend: library for bend
coupler_straight: two parallel coupled straight waveguides.
cross_section:
kwargs: cross_section settings
.. code::
2 3
| |
\ /
\ /
---=========---
1 length_x 4
"""
bend = bend or bend_euler
c = Component()
assert_on_2nm_grid(gap)
# define subcells
coupler90_component = (
coupler90(
gap=gap, radius=radius, bend=bend, cross_section=cross_section, **kwargs
)
if callable(coupler90)
else coupler90
)
coupler_straight_component = (
coupler_straight(
gap=gap, length=length_x, cross_section=cross_section, **kwargs
)
if callable(coupler_straight)
else coupler_straight
)
# add references to subcells
cbl = c << coupler90_component
cbr = c << coupler90_component
cs = c << coupler_straight_component
# connect references
y = coupler90_component.y
cs.connect(port="o4", destination=cbr.ports["o1"])
cbl.reflect(p1=(0, y), p2=(1, y))
cbl.connect(port="o2", destination=cs.ports["o2"])
c.absorb(cbl)
c.absorb(cbr)
c.absorb(cs)
c.add_port("o1", port=cbl.ports["o3"])
c.add_port("o2", port=cbl.ports["o4"])
c.add_port("o3", port=cbr.ports["o3"])
c.add_port("o4", port=cbr.ports["o4"])
c.auto_rename_ports()
return c
coupler = coupler_ring(cache=False)
coupler.plot()
###Output
_____no_output_____
###Markdown
CircuitsYou can also define a circuit with a function, and some of the parameters can also be functions that return other components. For example, let's define a ring function that also accepts other component functions (straight, coupler, bend) Circuit function
###Code
import gdsfactory as gf
@gf.cell
def ring_single(
gap: float = 0.2,
radius: float = 10.0,
length_x: float = 4.0,
length_y: float = 0.6,
coupler_ring: gf.types.ComponentFactory = coupler_ring,
straight: gf.types.ComponentFactory = gf.components.straight,
bend: gf.types.ComponentFactory = gf.components.bend_euler,
cross_section: gf.types.CrossSectionFactory = gf.cross_section.strip,
**kwargs
) -> gf.Component:
"""Single bus ring made of a ring coupler (cb: bottom)
connected with two vertical straights (sl: left, sr: right)
two bends (bl, br) and horizontal straight (wg: top)
Args:
gap: gap between for coupler
radius: for the bend and coupler
length_x: ring coupler length
length_y: vertical straight length
coupler_ring: ring coupler function
straight: straight function
bend: 90 degrees bend function
cross_section:
**kwargs: cross_section settings
.. code::
bl-st-br
| |
sl sr length_y
| |
--==cb==-- gap
length_x
"""
gf.snap.assert_on_2nm_grid(gap)
coupler_ring = gf.partial(
coupler_ring,
bend=bend,
gap=gap,
radius=radius,
length_x=length_x,
cross_section=cross_section,
**kwargs
)
straight_side = gf.partial(
straight, length=length_y, cross_section=cross_section, **kwargs
)
straight_top = gf.partial(
straight, length=length_x, cross_section=cross_section, **kwargs
)
bend = gf.partial(bend, radius=radius, cross_section=cross_section, **kwargs)
c = gf.Component()
cb = c << coupler_ring()
sl = c << straight_side()
sr = c << straight_side()
bl = c << bend()
br = c << bend()
st = c << straight_top()
# st.mirror(p1=(0, 0), p2=(1, 0))
sl.connect(port="o1", destination=cb.ports["o2"])
bl.connect(port="o2", destination=sl.ports["o2"])
st.connect(port="o2", destination=bl.ports["o1"])
br.connect(port="o2", destination=st.ports["o1"])
sr.connect(port="o1", destination=br.ports["o1"])
sr.connect(port="o2", destination=cb.ports["o3"])
c.add_port("o2", port=cb.ports["o4"])
c.add_port("o1", port=cb.ports["o1"])
return c
ring = ring_single()
ring.plot()
###Output
_____no_output_____
###Markdown
How do you customize components?You can use `functools.partial` to customize the default settings from any component
###Code
ring_single3 = gf.partial(ring_single, radius=3)
ring_single3()
###Output
_____no_output_____
###Markdown
Circuit netlistSometimes, when a component is mostly composed of sub-components adjacent to each other, it can be easier to define the component by sub-component connections and by which ports are part of the new component. This can be done using a netlist-based approach where these 3 parts are defined:- components: a dictionary of `{component reference name: (component, transform)}`- connections: a list of `(component ref name 1, port name A, component ref name 2, port name B)`- ports_map: a dictionary of which ports are being exposed together with their new name `{port_name: (component ref name, port name)}`The code below illustrates how a simple MZI can be formed using this method.
###Code
import gdsfactory as gf
yaml = """
name: simple_mzi
instances:
mmi1:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi2:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
straight:
component: straight
placements:
mmi2:
x: 100
mirror: True
straight:
x: 40
y: 40
routes:
route_name1:
links:
mmi1,o3: mmi2,o3
route_name2:
links:
mmi1,o2: straight,o1
route_name3:
links:
mmi2,o2: straight,o2
"""
mzi = gf.read.from_yaml(yaml)
mzi.show()
mzi.plot()
###Output
_____no_output_____
###Markdown
Exporting the connectivity map from a GDS is the first step towards verification:- Adding ports to *every* cell in the GDS- Generating the netlist
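In addition to the `plot_netlist()` visualization in the next cell, the netlist can be inspected programmatically. The snippet below is only a sketch: it assumes `Component.get_netlist()` is available in this gdsfactory version and that `mzi` was built from the YAML definition above.

```python
# Sketch only: inspect the extracted netlist programmatically.
# Assumes gdsfactory's Component.get_netlist() exists in this version
# and that `mzi` is the component built from the YAML cell above.
netlist = mzi.get_netlist()
print(netlist.keys())  # typically includes the instances, their connections and the exposed ports
```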
###Code
mzi.plot_netlist()
###Output
_____no_output_____
###Markdown
MaskOnce you have your components and circuits defined, you can add them into a mask that you will send to the foundry for fabrication. You will need to consider:- what design variations do you want to include in the mask? You need to define your Design Of Experiment or DOE- obey DRC (design rule checking) foundry rules for manufacturability. The foundry usually provides those rules for each layer (min width, min space, min density, max density)- make sure you will be able to test the devices after fabrication. Obey DFT (design for testing) rules. For example, if your test setup works only with a fiber array, what is the fiber array spacing (127 or 250 um)?- if you plan to package your device, make sure you follow the packaging guidelines from your packaging house (min pad size, min pad pitch, max number of rows for wire bonding ...)
###Code
import toolz
import gdsfactory as gf
ring_te = toolz.compose(gf.routing.add_fiber_array, gf.components.ring_single)
rings = gf.grid([ring_te(radius=r) for r in [10, 20, 50]])
@gf.cell
def mask(size=(1000, 1000)):
c = gf.Component()
c << gf.components.die(size=size)
c << rings
return c
m = mask(cache=False)
m
gdspath = m.write_gds_with_metadata(gdspath="mask.gds")
###Output
_____no_output_____
###Markdown
Make sure you save the GDS with metadata so when the chip comes back you remember what you put on it. You can also save the labels for automatic testing.
###Code
labels_path = gdspath.with_suffix('.csv')
gf.mask.write_labels(gdspath=gdspath, layer_label=(66, 0))
mask_metadata = gf.mask.read_metadata(gdspath=gdspath)
tm = gf.mask.merge_test_metadata(mask_metadata=mask_metadata, labels_path=labels_path)
tm.keys()
###Output
_____no_output_____
###Markdown
Component -> Circuit -> Maskgdsfactory easily enables you to go from a Component, to a higher-level Component (circuit), or an even higher-level Component (Mask). For a component it's important that you spend some time early to parametrize it correctly. Don't be afraid to spend some time using pen and paper and choosing easy-to-understand names. Let's, for example, define a ring resonator. A ring resonator is already a circuit made of waveguides, bends and couplers. ComponentYou can define any new Component as a function that returns a component
###Code
from typing import Optional
import gdsfactory as gf
from gdsfactory.component import Component
from gdsfactory.components.bend_euler import bend_euler
from gdsfactory.components.coupler90 import coupler90 as coupler90function
from gdsfactory.components.coupler_straight import (
coupler_straight as coupler_straight_function,
)
from gdsfactory.cross_section import strip
from gdsfactory.snap import assert_on_2nm_grid
from gdsfactory.types import ComponentFactory, CrossSectionFactory
@gf.cell
def coupler_ring(
gap: float = 0.2,
radius: float = 5.0,
length_x: float = 4.0,
coupler90: ComponentFactory = coupler90function,
bend: Optional[ComponentFactory] = None,
coupler_straight: ComponentFactory = coupler_straight_function,
cross_section: CrossSectionFactory = strip,
**kwargs
) -> Component:
r"""Coupler for ring.
Args:
gap: spacing between parallel coupled straight waveguides.
radius: of the bends.
length_x: length of the parallel coupled straight waveguides.
coupler90: straight coupled to a 90deg bend.
straight: library for straight waveguides.
bend: library for bend
coupler_straight: two parallel coupled straight waveguides.
cross_section:
kwargs: cross_section settings
.. code::
2 3
| |
\ /
\ /
---=========---
1 length_x 4
"""
bend = bend or bend_euler
c = Component()
assert_on_2nm_grid(gap)
# define subcells
coupler90_component = (
coupler90(
gap=gap, radius=radius, bend=bend, cross_section=cross_section, **kwargs
)
if callable(coupler90)
else coupler90
)
coupler_straight_component = (
coupler_straight(
gap=gap, length=length_x, cross_section=cross_section, **kwargs
)
if callable(coupler_straight)
else coupler_straight
)
# add references to subcells
cbl = c << coupler90_component
cbr = c << coupler90_component
cs = c << coupler_straight_component
# connect references
y = coupler90_component.y
cs.connect(port="o4", destination=cbr.ports["o1"])
cbl.reflect(p1=(0, y), p2=(1, y))
cbl.connect(port="o2", destination=cs.ports["o2"])
c.absorb(cbl)
c.absorb(cbr)
c.absorb(cs)
c.add_port("o1", port=cbl.ports["o3"])
c.add_port("o2", port=cbl.ports["o4"])
c.add_port("o3", port=cbr.ports["o3"])
c.add_port("o4", port=cbr.ports["o4"])
c.auto_rename_ports()
return c
coupler = coupler_ring(cache=False)
coupler
###Output
_____no_output_____
###Markdown
CircuitsYou can also define a circuit with a function, and some of the parameters can themselves be functions that return other components. For example, let's define a ring function that also accepts other component functions (straight, coupler, bend). Circuit function
###Code
import gdsfactory as gf
@gf.cell
def ring_single(
gap: float = 0.2,
radius: float = 10.0,
length_x: float = 4.0,
length_y: float = 0.6,
coupler_ring: gf.types.ComponentFactory = coupler_ring,
straight: gf.types.ComponentFactory = gf.c.straight,
bend: gf.types.ComponentFactory = gf.c.bend_euler,
cross_section: gf.types.CrossSectionFactory = gf.cross_section.strip,
**kwargs
) -> gf.Component:
"""Single bus ring made of a ring coupler (cb: bottom)
connected with two vertical straights (sl: left, sr: right)
two bends (bl, br) and horizontal straight (wg: top)
Args:
        gap: gap for the ring coupler
radius: for the bend and coupler
length_x: ring coupler length
length_y: vertical straight length
coupler_ring: ring coupler function
straight: straight function
bend: 90 degrees bend function
cross_section:
**kwargs: cross_section settings
.. code::
bl-st-br
| |
sl sr length_y
| |
--==cb==-- gap
length_x
"""
gf.snap.assert_on_2nm_grid(gap)
coupler_ring = gf.partial(
coupler_ring,
bend=bend,
gap=gap,
radius=radius,
length_x=length_x,
cross_section=cross_section,
**kwargs
)
straight_side = gf.partial(
straight, length=length_y, cross_section=cross_section, **kwargs
)
straight_top = gf.partial(
straight, length=length_x, cross_section=cross_section, **kwargs
)
bend = gf.partial(bend, radius=radius, cross_section=cross_section, **kwargs)
c = gf.Component()
cb = c << coupler_ring()
sl = c << straight_side()
sr = c << straight_side()
bl = c << bend()
br = c << bend()
st = c << straight_top()
# st.mirror(p1=(0, 0), p2=(1, 0))
sl.connect(port="o1", destination=cb.ports["o2"])
bl.connect(port="o2", destination=sl.ports["o2"])
st.connect(port="o2", destination=bl.ports["o1"])
br.connect(port="o2", destination=st.ports["o1"])
sr.connect(port="o1", destination=br.ports["o1"])
sr.connect(port="o2", destination=cb.ports["o3"])
c.add_port("o2", port=cb.ports["o4"])
c.add_port("o1", port=cb.ports["o1"])
return c
ring = ring_single()
ring.plot()
###Output
_____no_output_____
###Markdown
How do you customize components? You can use `functools.partial` to customize the default settings of any component
###Code
ring_single3 = gf.partial(ring_single, radius=3)
ring_single3()
###Output
_____no_output_____
###Markdown
Circuit netlistSometimes, when a component is mostly composed of sub-components adjacent to each other, it can be easier to define the component by sub-component connections and by which ports are part of the new component. This can be done using a netlist-based approach where these 3 parts are defined:- components: a dictionary of `{component reference name: (component, transform)}`- connections: a list of `(component ref name 1, port name A, component ref name 2, port name B)`- ports_map: a dictionary of which ports are being exposed together with their new name `{port_name: (component ref name, port name)}`The code below illustrates how a simple MZI can be formed using this method.
###Code
import gdsfactory as gf
yaml = """
instances:
mmi1:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi2:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
straight:
component: straight
placements:
mmi2:
x: 100
mirror: True
straight:
x: 40
y: 40
routes:
route_name1:
links:
mmi1,o3: mmi2,o3
route_name2:
links:
mmi1,o2: straight,o1
route_name3:
links:
mmi2,o2: straight,o2
"""
mzi = gf.read.from_yaml(yaml)
mzi.show()
mzi.plot()
###Output
_____no_output_____
###Markdown
Exporting the connectivity map from a GDS is the first step towards verification:- Adding ports to *every* cell in the GDS- Generating the netlist
###Code
mzi.plot_netlist()
###Output
_____no_output_____
###Markdown
MaskOnce you have your components and circuits defined, you can add them into a mask that you will send to the foundry for fabrication. You will need to consider:- what design variations do you want to include in the mask? You need to define your Design Of Experiment or DOE- obey DRC (design rule checking) foundry rules for manufacturability. The foundry usually provides those rules for each layer (min width, min space, min density, max density)- make sure you will be able to test the devices after fabrication. Obey DFT (design for testing) rules. For example, if your test setup works only with a fiber array, what is the fiber array spacing (127 or 250 um)?- if you plan to package your device, make sure you follow the packaging guidelines from your packaging house (min pad size, min pad pitch, max number of rows for wire bonding ...)
###Code
import toolz
import gdsfactory as gf
ring_te = toolz.compose(gf.routing.add_fiber_array, gf.c.ring_single)
rings = gf.grid([ring_te(radius=r) for r in [10, 20, 50]])
@gf.cell
def mask(size=(1000, 1000)):
c = gf.Component()
c << gf.components.die(size=size)
c << rings
return c
m = mask(cache=False)
m
gdspath = m.write_gds_with_metadata("mask.gds")
###Output
_____no_output_____
###Markdown
Make sure you save the GDS with metadata so when the chip comes back you remember what you put on it. You can also save the labels for automatic testing.
###Code
gf.mask.write_labels(gdspath, label_layer=(66, 0))
metadata = gf.mask.merge_test_metadata(gdspath)
metadata.keys()
###Output
_____no_output_____
###Markdown
Component -> Circuit -> Maskgdsfactory easily enables you to go from a Component, to a higher-level Component (circuit), or an even higher-level Component (Mask). For a component it's important that you spend some time early to parametrize it correctly. Don't be afraid to spend some time using pen and paper and choosing easy-to-understand names. Let's, for example, define a ring resonator. A ring resonator is already a circuit made of waveguides, bends and couplers. ComponentYou can define any new Component as a function that returns a component
###Code
from typing import Optional
import gdsfactory as gf
from gdsfactory.component import Component
from gdsfactory.components.bend_euler import bend_euler
from gdsfactory.components.coupler90 import coupler90 as coupler90function
from gdsfactory.components.coupler_straight import (
coupler_straight as coupler_straight_function,
)
from gdsfactory.cross_section import strip
from gdsfactory.snap import assert_on_2nm_grid
from gdsfactory.types import ComponentFactory, CrossSectionFactory
@gf.cell
def coupler_ring(
gap: float = 0.2,
radius: float = 5.0,
length_x: float = 4.0,
coupler90: ComponentFactory = coupler90function,
bend: Optional[ComponentFactory] = None,
coupler_straight: ComponentFactory = coupler_straight_function,
cross_section: CrossSectionFactory = strip,
**kwargs
) -> Component:
r"""Coupler for ring.
Args:
gap: spacing between parallel coupled straight waveguides.
radius: of the bends.
length_x: length of the parallel coupled straight waveguides.
coupler90: straight coupled to a 90deg bend.
straight: library for straight waveguides.
bend: library for bend
coupler_straight: two parallel coupled straight waveguides.
cross_section:
kwargs: cross_section settings
.. code::
2 3
| |
\ /
\ /
---=========---
1 length_x 4
"""
bend = bend or bend_euler
c = Component()
assert_on_2nm_grid(gap)
# define subcells
coupler90_component = (
coupler90(
gap=gap, radius=radius, bend=bend, cross_section=cross_section, **kwargs
)
if callable(coupler90)
else coupler90
)
coupler_straight_component = (
coupler_straight(
gap=gap, length=length_x, cross_section=cross_section, **kwargs
)
if callable(coupler_straight)
else coupler_straight
)
# add references to subcells
cbl = c << coupler90_component
cbr = c << coupler90_component
cs = c << coupler_straight_component
# connect references
y = coupler90_component.y
cs.connect(port="o4", destination=cbr.ports["o1"])
cbl.reflect(p1=(0, y), p2=(1, y))
cbl.connect(port="o2", destination=cs.ports["o2"])
c.absorb(cbl)
c.absorb(cbr)
c.absorb(cs)
c.add_port("o1", port=cbl.ports["o3"])
c.add_port("o2", port=cbl.ports["o4"])
c.add_port("o3", port=cbr.ports["o3"])
c.add_port("o4", port=cbr.ports["o4"])
c.auto_rename_ports()
return c
coupler = coupler_ring(cache=False)
coupler
###Output
_____no_output_____
###Markdown
CircuitsYou can also define a circuit with a function, and some of the parameters can themselves be functions that return other components. For example, let's define a ring function that also accepts other component functions (straight, coupler, bend). Circuit function
###Code
import gdsfactory as gf
@gf.cell
def ring_single(
gap: float = 0.2,
radius: float = 10.0,
length_x: float = 4.0,
length_y: float = 0.6,
coupler_ring: gf.types.ComponentFactory = coupler_ring,
straight: gf.types.ComponentFactory = gf.c.straight,
bend: gf.types.ComponentFactory = gf.c.bend_euler,
cross_section: gf.types.CrossSectionFactory = gf.cross_section.strip,
**kwargs
) -> gf.Component:
"""Single bus ring made of a ring coupler (cb: bottom)
connected with two vertical straights (sl: left, sr: right)
two bends (bl, br) and horizontal straight (wg: top)
Args:
        gap: gap for the ring coupler
radius: for the bend and coupler
length_x: ring coupler length
length_y: vertical straight length
coupler_ring: ring coupler function
straight: straight function
bend: 90 degrees bend function
cross_section:
**kwargs: cross_section settings
.. code::
bl-st-br
| |
sl sr length_y
| |
--==cb==-- gap
length_x
"""
gf.snap.assert_on_2nm_grid(gap)
coupler_ring = gf.partial(
coupler_ring,
bend=bend,
gap=gap,
radius=radius,
length_x=length_x,
cross_section=cross_section,
**kwargs
)
straight_side = gf.partial(
straight, length=length_y, cross_section=cross_section, **kwargs
)
straight_top = gf.partial(
straight, length=length_x, cross_section=cross_section, **kwargs
)
bend = gf.partial(bend, radius=radius, cross_section=cross_section, **kwargs)
c = gf.Component()
cb = c << coupler_ring()
sl = c << straight_side()
sr = c << straight_side()
bl = c << bend()
br = c << bend()
st = c << straight_top()
# st.mirror(p1=(0, 0), p2=(1, 0))
sl.connect(port="o1", destination=cb.ports["o2"])
bl.connect(port="o2", destination=sl.ports["o2"])
st.connect(port="o2", destination=bl.ports["o1"])
br.connect(port="o2", destination=st.ports["o1"])
sr.connect(port="o1", destination=br.ports["o1"])
sr.connect(port="o2", destination=cb.ports["o3"])
c.add_port("o2", port=cb.ports["o4"])
c.add_port("o1", port=cb.ports["o1"])
return c
ring = ring_single()
ring.plot()
###Output
_____no_output_____
###Markdown
How do you customize components? You can use `functools.partial` to customize the default settings of any component
###Code
ring_single3 = gf.partial(ring_single, radius=3)
ring_single3()
###Output
_____no_output_____
###Markdown
Circuit netlistSometimes, when a component is mostly composed of sub-components adjacent to each other, it can be easier to define the component by sub-component connections and by which ports are part of the new component. This can be done using a netlist-based approach where these 3 parts are defined:- components: a dictionary of `{component reference name: (component, transform)}`- connections: a list of `(component ref name 1, port name A, component ref name 2, port name B)`- ports_map: a dictionary of which ports are being exposed together with their new name `{port_name: (component ref name, port name)}`The code below illustrates how a simple MZI can be formed using this method.
###Code
import gdsfactory as gf
yaml = """
instances:
mmi1:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi2:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
straight:
component: straight
placements:
mmi2:
x: 100
mirror: True
straight:
x: 40
y: 40
routes:
route_name1:
links:
mmi1,o3: mmi2,o3
route_name2:
links:
mmi1,o2: straight,o1
route_name3:
links:
mmi2,o2: straight,o2
"""
mzi = gf.read.from_yaml(yaml)
mzi.show()
mzi.plot()
###Output
_____no_output_____
###Markdown
Exporting the connectivity map from a GDS is the first step towards verification:- Adding ports to *every* cell in the GDS- Generating the netlist
###Code
mzi.plot_netlist()
###Output
_____no_output_____
###Markdown
MaskOnce you have your components and circuits defined, you can add them into a mask that you will send to the foundry for fabrication. You will need to consider:- what design variations do you want to include in the mask? You need to define your Design Of Experiment or DOE- obey DRC (design rule checking) foundry rules for manufacturability. The foundry usually provides those rules for each layer (min width, min space, min density, max density)- make sure you will be able to test the devices after fabrication. Obey DFT (design for testing) rules. For example, if your test setup works only with a fiber array, what is the fiber array spacing (127 or 250 um)?- if you plan to package your device, make sure you follow the packaging guidelines from your packaging house (min pad size, min pad pitch, max number of rows for wire bonding ...)
###Code
import toolz
import gdsfactory as gf
ring_te = toolz.compose(gf.routing.add_fiber_array, gf.c.ring_single)
rings = gf.grid([ring_te(radius=r) for r in [10, 20, 50]])
@gf.cell
def mask(size=(1000, 1000)):
c = gf.Component()
c << gf.components.die(size=size)
c << rings
return c
m = mask(cache=False)
m
gdspath = m.write_gds_with_metadata("mask.gds")
labels_path = gdspath.with_suffix('.csv')
###Output
_____no_output_____
###Markdown
Make sure you save the GDS with metadata so when the chip comes back you remember what you put on it. You can also save the labels for automatic testing.
###Code
gf.mask.write_labels(gdspath, layer_label=(66, 0))
metadata = gf.mask.merge_metadata(gdspath=gdspath)
tm = gf.mask.merge_test_metadata(mask_metadata=metadata, labels_path=labels_path)
tm.keys()
###Output
_____no_output_____ |
hackernews-post-analysis/hacker-news-post-analysis.ipynb | ###Markdown
Correlation between the Types of Posts and User Interest in Hacker NewsHacker News is a site started by the startup incubator Y Combinator, where user-submitted stories (known as "posts") are voted and commented upon, similar to reddit. Hacker News is extremely popular in technology and startup circles, and posts that make it to the top of Hacker News' listings can get hundreds of thousands of visitors as a result. IntroductionBelow are descriptions of the columns:- `id`: The unique identifier from Hacker News for the post- `title`: The title of the post- `url`: The URL that the posts links to, if the post has a URL- `num_points`: The number of points the post acquired, calculated as the total number of upvotes minus the total number of downvotes- `num_comments`: The number of comments that were made on the post- `author`: The username of the person who submitted the post- `created_at`: The date and time at which the post was submittedThe titles of some posts begin with either `Ask HN` or `Show HN`. Users submit `Ask HN` posts to ask the Hacker News community a specific question. Below are a couple examples:>Ask HN: How to improve my personal website?Ask HN: Am I the only one outraged by Twitter shutting down share counts?Ask HN: Aby recent changes to CSS that broke mobile?Likewise, users submit `Show HN` posts to show the Hacker News community a project, product, or just generally something interesting. Below are a couple of examples:>Show HN: Wio Link ESP8266 Based Web of Things Hardware Development Platform'Show HN: Something pointless I madeShow HN: Shanhu.io, a programming playground powered by e8vmThese two types of posts will be compared to determine the following:Do `Ask HN` or `Show HN` receive more comments on average?Do posts created at a certain time receive more comments on average?First, the necessary libraries are imported, and the data set is converted into a list of lists.
###Code
from csv import reader
hn = open('hn-2016-dataset.csv', encoding="utf-8")
hn = reader(hn)
hn = list(hn)
print(hn[:5])
###Output
[['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at'], ['12579008', 'You have two days to comment if you want stem cells to be classified as your own', 'http://www.regulations.gov/document?D=FDA-2015-D-3719-0018', '1', '0', 'altstar', '9/26/2016 3:26'], ['12579005', 'SQLAR the SQLite Archiver', 'https://www.sqlite.org/sqlar/doc/trunk/README.md', '1', '0', 'blacksqr', '9/26/2016 3:24'], ['12578997', 'What if we just printed a flatscreen television on the side of our boxes?', 'https://medium.com/vanmoof/our-secrets-out-f21c1f03fdc8#.ietxmez43', '1', '0', 'pavel_lishin', '9/26/2016 3:19'], ['12578989', 'algorithmic music', 'http://cacm.acm.org/magazines/2011/7/109891-algorithmic-composition/fulltext', '1', '0', 'poindontcare', '9/26/2016 3:16']]
###Markdown
Removing HeadersWhen the first five rows of the data set are printed, it is found that the first inner list contains the column headers, and each of the following lists contains the data for one row. Thus, the row containing the column headers has to be removed.
###Code
headers = hn[0]  # the first inner list holds the column headers
hn = hn[1:]
print(headers)
print(hn[:5])
###Output
['12579008', 'You have two days to comment if you want stem cells to be classified as your own', 'http://www.regulations.gov/document?D=FDA-2015-D-3719-0018', '1', '0', 'altstar', '9/26/2016 3:26']
[['12579008', 'You have two days to comment if you want stem cells to be classified as your own', 'http://www.regulations.gov/document?D=FDA-2015-D-3719-0018', '1', '0', 'altstar', '9/26/2016 3:26'], ['12579005', 'SQLAR the SQLite Archiver', 'https://www.sqlite.org/sqlar/doc/trunk/README.md', '1', '0', 'blacksqr', '9/26/2016 3:24'], ['12578997', 'What if we just printed a flatscreen television on the side of our boxes?', 'https://medium.com/vanmoof/our-secrets-out-f21c1f03fdc8#.ietxmez43', '1', '0', 'pavel_lishin', '9/26/2016 3:19'], ['12578989', 'algorithmic music', 'http://cacm.acm.org/magazines/2011/7/109891-algorithmic-composition/fulltext', '1', '0', 'poindontcare', '9/26/2016 3:16'], ['12578979', 'How the Data Vault Enables the Next-Gen Data Warehouse and Data Lake', 'https://www.talend.com/blog/2016/05/12/talend-and-Â\x93the-data-vaultÂ\x94', '1', '0', 'markgainor1', '9/26/2016 3:14']]
###Markdown
Extracting Ask HN and Show HN PostsThe posts were distributed into three different categories:- `ask_posts`, which includes the `Ask HN` posts,- `show_posts`, which includes the `Show HN` posts,- `other_posts`, which includes the rest of the posts.Then, the number of posts in each category was printed:
###Code
ask_posts = []
show_posts = []
other_posts = []
for row in hn:
title = row[1]
title = title.lower()
if title.startswith("ask hn"):
ask_posts.append(row)
elif title.startswith("show hn"):
show_posts.append(row)
else:
other_posts.append(row)
print(len(ask_posts), len(show_posts), len(other_posts))
###Output
9139 10158 273822
###Markdown
Calculating the Average Number of Comments for Each CategoryNext, the average number of comments in each category of posts was calculated.
###Code
total_ask_comments = 0
for row in ask_posts:
num_comments = int(row[4])
total_ask_comments += num_comments
avg_ask_comments = total_ask_comments / len(ask_posts)
total_show_comments = 0
for row in show_posts:
num_comments = int(row[4])
total_show_comments += num_comments
avg_show_comments = total_show_comments / len(show_posts)
print(avg_ask_comments)
print(avg_show_comments)
###Output
10.393478498741656
4.886099625910612
###Markdown
Ask posts received about 10 comments per post on average, while show posts received about 5 comments per post on average. Since ask posts are more likely to receive comments, the remaining analysis will focus on these posts. Finding the Number of Ask Posts and Comments by Hour CreatedThe next goal is to find out whether ask posts created at a certain *time* are more likely to attract comments. The following steps will be used to perform this analysis:- Calculate the number of ask posts created in each hour of the day, along with the number of comments received.- Then, calculate the average number of comments ask posts receive by hour created.The code below performs the first step: it finds the number of ask posts created per hour, along with the total number of comments.
###Code
from datetime import *
result_list = []
for row in ask_posts:
l = [row[6], int(row[4])]
result_list.append(l)
counts_by_hour = {}
comments_by_hour = {}
for row in result_list:
created_at_int = row[0]
created_at_dt = datetime.strptime(created_at_int, "%m/%d/%Y %H:%M")
h = created_at_dt.hour
if h not in counts_by_hour:
counts_by_hour[h] = 1
comments_by_hour[h] = row[1]
else:
counts_by_hour[h] += 1
comments_by_hour[h] += row[1]
###Output
_____no_output_____
###Markdown
Here, two dictionaries were created:- `counts_by_hour`: contains the number of ask posts created during each hour of the day.- `comments_by_hour`: contains the corresponding number of comments ask posts created at each hour received. Calculating the Average Number of Comments for Ask HN Posts by HourThe two dictionaries created above were used to calculate the average number of comments for posts created during each hour of day. The printed result is a list of lists whose first elements are hours and second elements are the corresponding average number of comments.
###Code
avg_by_hour = []
for key in counts_by_hour:
avg_comments = comments_by_hour[key] / counts_by_hour[key]
l = [key, avg_comments]
avg_by_hour.append(l)
print(avg_by_hour)
###Output
[[2, 11.137546468401487], [1, 7.407801418439717], [22, 8.804177545691905], [21, 8.687258687258687], [19, 7.163043478260869], [17, 9.449744463373083], [15, 28.676470588235293], [14, 9.692007797270955], [13, 16.31756756756757], [11, 8.96474358974359], [10, 10.684397163120567], [9, 6.653153153153153], [7, 7.013274336283186], [3, 7.948339483394834], [23, 6.696793002915452], [20, 8.749019607843136], [16, 7.713298791018998], [8, 9.190661478599221], [0, 7.5647840531561465], [18, 7.94299674267101], [12, 12.380116959064328], [4, 9.7119341563786], [6, 6.782051282051282], [5, 8.794258373205741]]
###Markdown
Sorting and Printing Values from a List of ListsSince it is difficult to identify the hours with the highest values from the printed result, the list of lists was sorted so that the five highest values can be printed in a format that is easier to read.
###Code
swap_avg_by_hour = []
for row in avg_by_hour:
l = [row[1], row[0]]
swap_avg_by_hour.append(l)
sorted_swap = sorted(swap_avg_by_hour, reverse = True)
print("<Top 5 Hours for Asks Posts Comments>")
for row in sorted_swap[:5]:
form = "{}: {:.2f} average comments per post"
time_dt = datetime.strptime(str(row[1]), "%H")
time_str = time_dt.strftime("%H:%M")
text = form.format(time_str, row[0])
print(text)
###Output
<Top 5 Hours for Asks Posts Comments>
15:00: 28.68 average comments per post
13:00: 16.32 average comments per post
12:00: 12.38 average comments per post
02:00: 11.14 average comments per post
10:00: 10.68 average comments per post
|
Quadcopter-RL/home/.ipynb_checkpoints/Quadcopter_Project-checkpoint.ipynb | ###Markdown
Project: Train a Quadcopter How to FlyDesign an agent to fly a quadcopter, and then train it using a reinforcement learning algorithm of your choice! Try to apply the techniques you have learnt, but also feel free to come up with innovative ideas and test them. InstructionsTake a look at the files in the directory to better understand the structure of the project. - `task.py`: Define your task (environment) in this file.- `agents/`: Folder containing reinforcement learning agents. - `policy_search.py`: A sample agent has been provided here. - `agent.py`: Develop your agent here.- `physics_sim.py`: This file contains the simulator for the quadcopter. **DO NOT MODIFY THIS FILE**.For this project, you will define your own task in `task.py`. Although we have provided a example task to get you started, you are encouraged to change it. Later in this notebook, you will learn more about how to amend this file.You will also design a reinforcement learning agent in `agent.py` to complete your chosen task. You are welcome to create any additional files to help you to organize your code. For instance, you may find it useful to define a `model.py` file defining any needed neural network architectures. Controlling the QuadcopterWe provide a sample agent in the code cell below to show you how to use the sim to control the quadcopter. This agent is even simpler than the sample agent that you'll examine (in `agents/policy_search.py`) later in this notebook!The agent controls the quadcopter by setting the revolutions per second on each of its four rotors. The provided agent in the `Basic_Agent` class below always selects a random action for each of the four rotors. These four speeds are returned by the `act` method as a list of four floating-point numbers. For this project, the agent that you will implement in `agents/agent.py` will have a far more intelligent method for selecting actions!
###Code
import random
class Basic_Agent():
def __init__(self, task):
self.task = task
def act(self):
new_thrust = random.gauss(450., 25.)
return [new_thrust + random.gauss(0., 1.) for x in range(4)]
###Output
_____no_output_____
###Markdown
Run the code cell below to have the agent select actions to control the quadcopter. Feel free to change the provided values of `runtime`, `init_pose`, `init_velocities`, and `init_angle_velocities` below to change the starting conditions of the quadcopter.The `labels` list below annotates statistics that are saved while running the simulation. All of this information is saved in a text file `data.txt` and stored in the dictionary `results`.
###Code
%load_ext autoreload
%autoreload 2
import csv
import numpy as np
from task import Task
# Modify the values below to give the quadcopter a different starting position.
runtime = 5. # time limit of the episode
init_pose = np.array([0., 0., 10., 0., 0., 0.]) # initial pose
init_velocities = np.array([0., 0., 0.]) # initial velocities
init_angle_velocities = np.array([0., 0., 0.]) # initial angle velocities
file_output = 'data.txt' # file name for saved results
# Setup
task = Task(init_pose, init_velocities, init_angle_velocities, runtime)
agent = Basic_Agent(task)
done = False
labels = ['time', 'x', 'y', 'z', 'phi', 'theta', 'psi', 'x_velocity',
'y_velocity', 'z_velocity', 'phi_velocity', 'theta_velocity',
'psi_velocity', 'rotor_speed1', 'rotor_speed2', 'rotor_speed3', 'rotor_speed4']
results = {x : [] for x in labels}
# Run the simulation, and save the results.
with open(file_output, 'w') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(labels)
while True:
rotor_speeds = agent.act()
_, _, done = task.step(rotor_speeds)
to_write = [task.sim.time] + list(task.sim.pose) + list(task.sim.v) + list(task.sim.angular_v) + list(rotor_speeds)
for ii in range(len(labels)):
results[labels[ii]].append(to_write[ii])
writer.writerow(to_write)
if done:
break
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Run the code cell below to visualize how the position of the quadcopter evolved during the simulation.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(results['time'], results['x'], label='x')
plt.plot(results['time'], results['y'], label='y')
plt.plot(results['time'], results['z'], label='z')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
The next code cell visualizes the velocity of the quadcopter.
###Code
plt.plot(results['time'], results['x_velocity'], label='x_hat')
plt.plot(results['time'], results['y_velocity'], label='y_hat')
plt.plot(results['time'], results['z_velocity'], label='z_hat')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Next, you can plot the Euler angles (the rotation of the quadcopter over the $x$-, $y$-, and $z$-axes),
###Code
plt.plot(results['time'], results['phi'], label='phi')
plt.plot(results['time'], results['theta'], label='theta')
plt.plot(results['time'], results['psi'], label='psi')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
before plotting the velocities (in radians per second) corresponding to each of the Euler angles.
###Code
plt.plot(results['time'], results['phi_velocity'], label='phi_velocity')
plt.plot(results['time'], results['theta_velocity'], label='theta_velocity')
plt.plot(results['time'], results['psi_velocity'], label='psi_velocity')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Finally, you can use the code cell below to print the agent's choice of actions.
###Code
plt.plot(results['time'], results['rotor_speed1'], label='Rotor 1 revolutions / second')
plt.plot(results['time'], results['rotor_speed2'], label='Rotor 2 revolutions / second')
plt.plot(results['time'], results['rotor_speed3'], label='Rotor 3 revolutions / second')
plt.plot(results['time'], results['rotor_speed4'], label='Rotor 4 revolutions / second')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
When specifying a task, you will derive the environment state from the simulator. Run the code cell below to print the values of the following variables at the end of the simulation:- `task.sim.pose` (the position of the quadcopter in ($x,y,z$) dimensions and the Euler angles),- `task.sim.v` (the velocity of the quadcopter in ($x,y,z$) dimensions), and- `task.sim.angular_v` (radians/second for each of the three Euler angles).
###Code
# the pose, velocity, and angular velocity of the quadcopter at the end of the episode
print(task.sim.pose)
print(task.sim.v)
print(task.sim.angular_v)
###Output
[ -9.48740601e+00 1.01003910e+01 3.19880308e+01 4.52192329e-01
4.37935829e-04 0.00000000e+00]
[-3.9280001 7.05265681 5.05802481]
[ 0.22851258 -0.08241196 0. ]
###Markdown
In the sample task in `task.py`, we use the 6-dimensional pose of the quadcopter to construct the state of the environment at each timestep. However, when amending the task for your purposes, you are welcome to expand the size of the state vector by including the velocity information. You can use any combination of the pose, velocity, and angular velocity - feel free to tinker here, and construct the state to suit your task. The TaskA sample task has been provided for you in `task.py`. Open this file in a new window now. The `__init__()` method is used to initialize several variables that are needed to specify the task. - The simulator is initialized as an instance of the `PhysicsSim` class (from `physics_sim.py`). - Inspired by the methodology in the original DDPG paper, we make use of action repeats. For each timestep of the agent, we step the simulation `action_repeats` timesteps. If you are not familiar with action repeats, please read the **Results** section in [the DDPG paper](https://arxiv.org/abs/1509.02971).- We set the number of elements in the state vector. For the sample task, we only work with the 6-dimensional pose information. To set the size of the state (`state_size`), we must take action repeats into account. - The environment will always have a 4-dimensional action space, with one entry for each rotor (`action_size=4`). You can set the minimum (`action_low`) and maximum (`action_high`) values of each entry here.- The sample task in this provided file is for the agent to reach a target position. We specify that target position as a variable.The `reset()` method resets the simulator. The agent should call this method every time the episode ends. You can see an example of this in the code cell below.The `step()` method is perhaps the most important. It accepts the agent's choice of action `rotor_speeds`, which is used to prepare the next state to pass on to the agent. Then, the reward is computed from `get_reward()`. The episode is considered done if the time limit has been exceeded, or the quadcopter has travelled outside of the bounds of the simulation.In the next section, you will learn how to test the performance of an agent on this task. The AgentThe sample agent given in `agents/policy_search.py` uses a very simplistic linear policy to directly compute the action vector as a dot product of the state vector and a matrix of weights. Then, it randomly perturbs the parameters by adding some Gaussian noise, to produce a different policy. Based on the average reward obtained in each episode (`score`), it keeps track of the best set of parameters found so far, how the score is changing, and accordingly tweaks a scaling factor to widen or tighten the noise.Run the code cell below to see how the agent performs on the sample task.
###Code
import sys
import pandas as pd
from agents.policy_search import PolicySearch_Agent
from task import Task
num_episodes = 1000
target_pos = np.array([0., 0., 10.])
task = Task(target_pos=target_pos)
agent = PolicySearch_Agent(task)
for i_episode in range(1, num_episodes+1):
state = agent.reset_episode() # start a new episode
while True:
action = agent.act(state)
next_state, reward, done = task.step(action)
agent.step(reward, done)
state = next_state
if done:
print("\rEpisode = {:4d}, score = {:7.3f} (best = {:7.3f}), noise_scale = {}".format(
i_episode, agent.score, agent.best_score, agent.noise_scale), end="") # [debug]
break
sys.stdout.flush()
###Output
Episode = 1000, score = -0.387 (best = 0.050), noise_scale = 3.2625
###Markdown
This agent should perform very poorly on this task. And that's where you come in! Define the Task, Design the Agent, and Train Your Agent!Amend `task.py` to specify a task of your choosing. If you're unsure what kind of task to specify, you may like to teach your quadcopter to takeoff, hover in place, land softly, or reach a target pose. After specifying your task, use the sample agent in `agents/policy_search.py` as a template to define your own agent in `agents/agent.py`. You can borrow whatever you need from the sample agent, including ideas on how you might modularize your code (using helper methods like `act()`, `learn()`, `reset_episode()`, etc.).Note that it is **highly unlikely** that the first agent and task that you specify will learn well. You will likely have to tweak various hyperparameters and the reward function for your task until you arrive at reasonably good behavior.As you develop your agent, it's important to keep an eye on how it's performing. Use the code above as inspiration to build in a mechanism to log/save the total rewards obtained in each episode to file. If the episode rewards are gradually increasing, this is an indication that your agent is learning.
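As a concrete illustration of the reward shaping discussed above, the sketch below penalizes distance from the target position and keeps the reward bounded. This is only an example under the assumption that the task exposes the simulator pose (`sim.pose`) and stores a `target_pos`, as in the provided sample task; it is not necessarily the reward used by the agent trained below.

```python
import numpy as np

def example_reward(sim_pose, target_pos):
    """Illustrative reward: closer to the target (x, y, z) means a larger reward.

    sim_pose is the 6-element pose (position plus Euler angles) described earlier;
    target_pos is the 3-element goal position. tanh keeps the reward in (0, 1],
    which tends to make learning more stable than an unbounded penalty.
    """
    distance = np.linalg.norm(sim_pose[:3] - target_pos)
    return 1.0 - np.tanh(0.05 * distance)

# Example: pose 10 m above a ground-level target
print(example_reward(np.array([0., 0., 10., 0., 0., 0.]), np.array([0., 0., 0.])))
```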
###Code
## TODO: Train your agent here.
import sys
import pandas as pd
from agents.actor import Actor
from agents.ddpg import DDPG
import math
from new_task import new_task
num_episodes = 100
target_pos = np.array([0., 0., 0.])
init_pose = np.array([0.,0.,10., 0., 0., 0.]) #Start in the sky
task = new_task(target_pos=target_pos, init_pose=init_pose)
agent = DDPG(task)
best_reward = -float("inf")
best_episode_reward = -float("inf")
cumSum = 0
labels = ['episode','cummulative']
results = {x : [] for x in labels}
for i_episode in range(1, num_episodes+1):
state = agent.reset_episode()
best_episode_reward = -float("inf")
cumSum = 0
while True:
action = agent.act(state)
next_state, reward, done = task.step(action)
agent.step(action, reward, next_state, done)
state = next_state
cumSum += reward
if reward > best_episode_reward:
best_episode_reward = reward
if done:
if best_episode_reward > best_reward:
best_reward = best_episode_reward
to_write = [i_episode] + [cumSum]
for ii in range(len(labels)):
results[labels[ii]].append(to_write[ii])
print("\rEpisode = {:4d} Reward = {:4f} Best Reward = {:4f} Cum Reward = {:4f} \n".format(i_episode, best_episode_reward, best_reward, cumSum), task.getPose(), "\n", end="") # [debug]
break
sys.stdout.flush()
###Output
Episode = 1 Reward = 0.198791 Best Reward = 0.198791 Cum Reward = 1.272701
[ 2.66808305 -0.22662194 0. 4.30460425 3.65648794 0. ]
Episode = 2 Reward = 0.210882 Best Reward = 0.210882 Cum Reward = 0.907985
[-1.24195115 3.69755734 0. 2.72506853 1.9835916 0. ]
Episode = 3 Reward = 0.310303 Best Reward = 0.310303 Cum Reward = 1.332669
[-2.69562405 0.94427432 0. 4.96098465 2.8870313 0. ]
Episode = 4 Reward = 0.169272 Best Reward = 0.310303 Cum Reward = 1.625834
[ 3.08838128 -0.58887041 0. 4.75938954 5.23371212 0. ]
Episode = 5 Reward = 0.277901 Best Reward = 0.310303 Cum Reward = 2.372596
[ 0.96085847 -0.18431353 0. 0.07546354 1.90263923 0. ]
Episode = 6 Reward = 0.282522 Best Reward = 0.310303 Cum Reward = 6.400214
[-39.46560347 3.41814538 65.93023228 0.09450943 1.9486924 0. ]
Episode = 7 Reward = 0.285035 Best Reward = 0.310303 Cum Reward = 7.757076
[ -1.78343091e+01 -5.96849959e-04 7.90387871e+01 2.09576738e-05
8.23552540e-01 0.00000000e+00]
Episode = 8 Reward = 0.285120 Best Reward = 0.310303 Cum Reward = 8.057289
[ -1.61229196e+01 -2.66876270e-03 8.07688805e+01 6.28302582e+00
3.26005403e-01 0.00000000e+00]
Episode = 9 Reward = 0.285051 Best Reward = 0.310303 Cum Reward = 8.605699
[ 3.21589818e+00 -2.06058060e-02 8.28746106e+01 6.28271859e+00
1.87121603e-01 0.00000000e+00]
Episode = 10 Reward = 0.285055 Best Reward = 0.310303 Cum Reward = 6.567760
[ 5.16891176e+01 -1.63805585e-04 7.07668178e+01 1.75343413e+00
3.51705054e+00 0.00000000e+00]
Episode = 11 Reward = 0.285026 Best Reward = 0.310303 Cum Reward = 6.004605
[ 2.50854125 -34.43705324 6.18630194 6.07092338 3.77550404 0. ]
Episode = 12 Reward = 0.186920 Best Reward = 0.310303 Cum Reward = 1.035227
[-0.41994739 -1.37029449 0. 5.1981457 3.33379535 0. ]
Episode = 13 Reward = 0.444476 Best Reward = 0.444476 Cum Reward = 2.112656
[ 1.58604073 3.06361469 0. 3.14237701 5.6773452 0. ]
Episode = 14 Reward = 0.162935 Best Reward = 0.444476 Cum Reward = 1.589677
[-1.87873616 -0.80866738 0. 3.44042969 4.39246799 0. ]
Episode = 15 Reward = 0.148756 Best Reward = 0.444476 Cum Reward = 0.881774
[ 4.72563563 3.77149491 0. 5.43750989 0.72339939 0. ]
Episode = 16 Reward = 0.207593 Best Reward = 0.444476 Cum Reward = 1.368750
[-1.15416778 -1.81557899 0. 5.74311433 0.85146842 0. ]
Episode = 17 Reward = 0.342128 Best Reward = 0.444476 Cum Reward = 1.774171
[ 2.26529527 -0.6927653 0. 0.8131096 0.44671889 0. ]
Episode = 18 Reward = 0.187197 Best Reward = 0.444476 Cum Reward = 1.357437
[ 1.43975899 -0.94126761 0. 2.13088906 0.5588805 0. ]
Episode = 19 Reward = 0.200068 Best Reward = 0.444476 Cum Reward = 1.582813
[ 1.22975541 2.58026505 0. 2.16261627 0.16836526 0. ]
Episode = 20 Reward = 0.139913 Best Reward = 0.444476 Cum Reward = 1.095508
[ 3.6696293 5.3704383 0. 4.31731499 1.95623702 0. ]
Episode = 21 Reward = 0.233715 Best Reward = 0.444476 Cum Reward = 1.280166
[ 0.79562931 -5.01776684 0. 3.86407553 0.73764828 0. ]
Episode = 22 Reward = 0.262418 Best Reward = 0.444476 Cum Reward = 1.237240
[ 3.32258964 2.28493837 0. 4.32521726 5.71375712 0. ]
Episode = 23 Reward = 0.243358 Best Reward = 0.444476 Cum Reward = 1.985342
[ 0.91501302 1.76511681 0. 5.50125129 3.50742436 0. ]
Episode = 24 Reward = 0.141347 Best Reward = 0.444476 Cum Reward = 0.705949
[-0.12792347 0.42090981 0. 4.67585768 5.72316668 0. ]
Episode = 25 Reward = 0.180501 Best Reward = 0.444476 Cum Reward = 1.463545
[ 2.50978948 1.2751261 0. 5.51745857 6.26215387 0. ]
Episode = 26 Reward = 0.322249 Best Reward = 0.444476 Cum Reward = 1.565962
[ 2.92392586 0.86629334 0. 3.06643067 0.19655503 0. ]
Episode = 27 Reward = 0.196472 Best Reward = 0.444476 Cum Reward = 1.633840
[ 2.75181164 0.39894745 0. 0.73809765 0.67752911 0. ]
Episode = 28 Reward = 0.149209 Best Reward = 0.444476 Cum Reward = 1.336947
[-1.11944199 -1.55675899 0. 3.35776204 2.05991955 0. ]
Episode = 29 Reward = 0.142673 Best Reward = 0.444476 Cum Reward = 1.389588
[ 4.12113947 -0.24131562 0. 4.11263691 1.18902855 0. ]
Episode = 30 Reward = 0.200572 Best Reward = 0.444476 Cum Reward = 1.525492
[ 2.36472244 -4.8265725 0. 0.95657263 4.90182456 0. ]
Episode = 31 Reward = 0.572462 Best Reward = 0.572462 Cum Reward = 2.200963
[ 0.84189743 -0.44118671 0. 1.34704775 1.59781403 0. ]
Episode = 32 Reward = 0.391441 Best Reward = 0.572462 Cum Reward = 1.873877
[-1.09412772 0.04611258 0. 3.08535756 1.35435805 0. ]
Episode = 33 Reward = 0.174175 Best Reward = 0.572462 Cum Reward = 1.301660
[-1.16536733 -0.17016733 0. 1.84451195 0.64708975 0. ]
Episode = 34 Reward = 0.284259 Best Reward = 0.572462 Cum Reward = 1.727923
[ 4.39913843 2.80716622 0. 3.19390843 3.93693358 0. ]
Episode = 35 Reward = 0.233913 Best Reward = 0.572462 Cum Reward = 1.349647
[ 3.08988734 -0.47913227 0. 0.0193289 2.32908385 0. ]
Episode = 36 Reward = 0.116871 Best Reward = 0.572462 Cum Reward = 1.148061
[ 4.35662381 0.54682847 0. 3.25818235 0.47999363 0. ]
Episode = 37 Reward = 0.151415 Best Reward = 0.572462 Cum Reward = 1.363803
[ 3.16038455 -1.5215697 0. 0.12827564 4.90305059 0. ]
Episode = 38 Reward = 0.163402 Best Reward = 0.572462 Cum Reward = 1.196699
[ 2.30274369 4.28698242 0. 1.63632837 2.30165852 0. ]
Episode = 39 Reward = 0.389189 Best Reward = 0.572462 Cum Reward = 2.139658
[ 1.35481375 -1.41097382 0. 3.83932708 2.85397999 0. ]
Episode = 40 Reward = 0.152247 Best Reward = 0.572462 Cum Reward = 1.232004
[ 4.59371254 5.33801087 0. 5.16050294 0.38942923 0. ]
Episode = 41 Reward = 0.160985 Best Reward = 0.572462 Cum Reward = 1.084832
[ 3.36614855 0.99700342 0. 1.58833928 2.89347162 0. ]
Episode = 42 Reward = 0.220161 Best Reward = 0.572462 Cum Reward = 1.911477
[ 1.48502628 -1.92417526 0. 1.03686251 6.24379044 0. ]
Episode = 43 Reward = 0.486563 Best Reward = 0.572462 Cum Reward = 2.130241
[ 1.42454795 1.00684194 0. 4.64620017 1.90139383 0. ]
Episode = 44 Reward = 0.153447 Best Reward = 0.572462 Cum Reward = 1.248452
[ 2.68682935 0.8639355 0. 4.82615625 3.98491409 0. ]
Episode = 45 Reward = 0.072461 Best Reward = 0.572462 Cum Reward = 0.585212
[ 2.82726688 2.90877218 0. 4.83877653 2.68659196 0. ]
Episode = 46 Reward = 0.134962 Best Reward = 0.572462 Cum Reward = 1.147013
[ 2.68072921 2.35609466 0. 4.82854939 3.44706476 0. ]
Episode = 47 Reward = 0.434222 Best Reward = 0.572462 Cum Reward = 1.550955
[-0.10936113 2.59756661 0. 6.0150795 2.20613619 0. ]
Episode = 48 Reward = 0.232564 Best Reward = 0.572462 Cum Reward = 2.082141
[ 0.77717356 -0.33106069 0. 5.95986452 1.677397 0. ]
Episode = 49 Reward = 0.138586 Best Reward = 0.572462 Cum Reward = 0.991336
[ 4.48589566 6.19862658 0. 5.39969437 0.41845109 0. ]
Episode = 50 Reward = 0.211470 Best Reward = 0.572462 Cum Reward = 1.161052
[ 3.41962135 2.52689247 0. 5.17885825 1.44304798 0. ]
Episode = 51 Reward = 0.245943 Best Reward = 0.572462 Cum Reward = 2.028080
[ 3.00903069 2.12945199 0. 5.60034918 5.25189783 0. ]
Episode = 52 Reward = 0.177330 Best Reward = 0.572462 Cum Reward = 1.196609
[ 0.110324 -4.56197755 0. 4.14057839 0.04565839 0. ]
###Markdown
Plot the RewardsOnce you are satisfied with your performance, plot the episode rewards, either from a single run, or averaged over multiple runs.
###Code
## TODO: Plot the rewards.
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(results['episode'], results['cummulative'], label='reward')
plt.xlabel('episode')
plt.ylabel('cumulative reward')
plt.legend()
###Output
_____no_output_____
###Markdown
Reflections**Question 1**: Describe the task that you specified in `task.py`. How did you design the reward function?**Answer**: I worked only with the z-axis of the quadcopter. I implemented a landing task in which the quadcopter begins at a specific height and the target point is on the ground. **Question 2**: Discuss your agent briefly, using the following questions as a guide:- What learning algorithm(s) did you try? What worked best for you?- What was your final choice of hyperparameters (such as $\alpha$, $\gamma$, $\epsilon$, etc.)?- What neural network architecture did you use (if any)? Specify layers, sizes, activation functions, etc.**Answer**: I used the provided Actor-Critic (DDPG) template. In the DDPG agent I changed theta to 0.085, gamma to 0.70, and alpha to 0.15. I kept the network architecture that was provided as a template. **Question 3**: Using the episode rewards plot, discuss how the agent learned over time.- Was it an easy task to learn or hard?- Was there a gradual learning curve, or an aha moment?- How good was the final performance of the agent? (e.g. mean rewards over the last 10 episodes)**Answer**: It was a hard task to learn; I think setting up the reward function was the most important part, and the hyperparameters had to be tuned multiple times. The mean reward over the last 10 episodes is computed in the code cell below.
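For context on the `theta` value mentioned in the answer above: DDPG agents commonly generate exploration noise with an Ornstein-Uhlenbeck process, in which theta controls how strongly the noise is pulled back toward its mean and sigma scales the random perturbation. The sketch below is illustrative only; the actual noise class used by this agent lives in `agents/` and is not shown in this notebook, and the sigma value here is an assumption.

```python
import numpy as np

class OUNoiseSketch:
    """Illustrative Ornstein-Uhlenbeck process (not the agent's actual class).

    theta: pull strength toward the mean mu (e.g. the 0.085 mentioned above).
    sigma: scale of the random perturbation (assumed value here).
    """
    def __init__(self, size, mu=0.0, theta=0.085, sigma=0.2):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.state = np.copy(self.mu)

    def sample(self):
        # Mean-reverting step plus Gaussian perturbation
        dx = self.theta * (self.mu - self.state) + self.sigma * np.random.randn(len(self.state))
        self.state = self.state + dx
        return self.state

noise = OUNoiseSketch(size=4)  # one noise value per rotor
print(noise.sample())
```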
###Code
# Final performance
last_mean = np.mean(results['cummulative'][-10:])
print("Mean rewards over the last 10 episodes: ",last_mean)
###Output
The mean over the last 10 episodes is: 1.50777547097
|
3.2_interlude.ipynb | ###Markdown
Module 3 Interlude: Chisel Standard Library**Prev: [Generators: Collections](3.2_collections.ipynb)****Next: [Higher-Order Functions](3.3_higher-order_functions.ipynb)** MotivationChisel is all about re-use, so it only makes sense to provide a standard library of interfaces (encouraging interoperability of RTL) and generators for commonly-used hardware blocks. Setup
###Code
val path = System.getProperty("user.dir") + "/source/load-ivy.sc"
interp.load.module(ammonite.ops.Path(java.nio.file.FileSystems.getDefault().getPath(path)))
import chisel3._
import chisel3.util._
import chisel3.iotesters.{ChiselFlatSpec, Driver, PeekPokeTester}
###Output
_____no_output_____
###Markdown
--- The CheatsheetThe [Chisel3 cheatsheet](https://chisel.eecs.berkeley.edu/doc/chisel-cheatsheet3.pdf) contains a summary of all the major hardware construction APIs, including some of the standard library utilities that we'll introduce below. Decoupled: A Standard Ready-Valid InterfaceOne of the commonly used interfaces provided by Chisel is `DecoupledIO`, providing a ready-valid interface for transferring data. The idea is that the source drives the `bits` signal with the data to be transferred and the `valid` signal when there is data to be transferred. The sink drives the `ready` signal when it is ready to accept data, and data is considered transferred when both `ready` and `valid` are asserted on a cycle.This provides a flow control mechanism in both directions for data transfer, including a backpressure mechanism.Note: `ready` and `valid` should not be combinationally coupled, otherwise this may result in unsynthesizable combinational loops. `ready` should only be dependent on whether the sink is able to receive data, and `valid` should only be dependent on whether the source has data. Only after the transaction (on the next clock cycle) should the values update.Any Chisel data can be wrapped in a `DecoupledIO` (used as the `bits` field) as follows:```scalaval myChiselData = UInt(8.W)// or any Chisel data type, such as Bool(), SInt(...), or even custom Bundlesval myDecoupled = Decoupled(myChiselData)```The above creates a new `DecoupledIO` Bundle with fields- `valid`: Output(Bool)- `ready`: Input(Bool)- `bits`: Output(UInt(8.W))___The rest of the section will be structured somewhat differently from the ones before: instead of giving you coding exercises, we're going to give some code examples and testcases that print the circuit state. Try to predict what will be printed before just running the tests. Queues`Queue` creates a FIFO (first-in, first-out) queue with Decoupled interfaces on both sides, allowing backpressure. Both the data type and number of elements are configurable.
###Code
Driver(() => new Module {
// Example circuit using a Queue
val io = IO(new Bundle {
val in = Flipped(Decoupled(UInt(8.W)))
val out = Decoupled(UInt(8.W))
})
val queue = Queue(io.in, 2) // 2-element queue
io.out <> queue
}) { c => new PeekPokeTester(c) {
    // Example test sequence showing the use and behavior of Queue
poke(c.io.out.ready, 0)
poke(c.io.in.valid, 1) // Enqueue an element
poke(c.io.in.bits, 42)
println(s"Starting:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 1) // Enqueue another element
poke(c.io.in.bits, 43)
// What do you think io.out.valid and io.out.bits will be?
println(s"After first enqueue:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
    poke(c.io.in.valid, 1) // Read an element, attempt to enqueue
poke(c.io.in.bits, 44)
poke(c.io.out.ready, 1)
// What do you think io.in.ready will be, and will this enqueue succeed, and what will be read?
println(s"On first read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 0) // Read elements out
poke(c.io.out.ready, 1)
// What do you think will be read here?
println(s"On second read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
// Will a third read produce anything?
println(s"On third read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
} }
###Output
_____no_output_____
###Markdown
ArbitersAn Arbiter routes data from _n_ `DecoupledIO` sources to one `DecoupledIO` sink, given a prioritization. There are two types included in Chisel:- `Arbiter`: prioritizes lower-index producers- `RRArbiter`: runs in round-robin orderNote that Arbiter routing is implemented in combinational logic. The example below demonstrates the use of the priority arbiter (which you will also implement in the next section):
###Code
Driver(() => new Module {
// Example circuit using a priority arbiter
val io = IO(new Bundle {
val in = Flipped(Vec(2, Decoupled(UInt(8.W))))
val out = Decoupled(UInt(8.W))
})
// Arbiter doesn't have a convenience constructor, so it's built like any Module
val arbiter = Module(new Arbiter(UInt(8.W), 2)) // 2 to 1 Priority Arbiter
arbiter.io.in <> io.in
io.out <> arbiter.io.out
}) { c => new PeekPokeTester(c) {
poke(c.io.in(0).valid, 0)
poke(c.io.in(1).valid, 0)
println(s"Start:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 1) // Valid input 1
poke(c.io.in(1).bits, 42)
// What do you think the output will be?
println(s"valid input 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(0).valid, 1) // Valid inputs 0 and 1
poke(c.io.in(0).bits, 43)
// What do you think the output will be? Which inputs will be ready?
println(s"valid inputs 0 and 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 0) // Valid input 0
// What do you think the output will be?
println(s"valid input 0:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
} }
###Output
_____no_output_____
###Markdown
Misc Function BlocksChisel Utils has some helpers that perform stateless functions. Bitwise Utilities PopCountPopCount returns the number of high (1) bits in the input as a `UInt`. ReverseReverse returns the bit-reversed input.
###Code
Driver(() => new Module {
  // Example circuit using PopCount
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := PopCount(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("00000000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11111111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Reverse
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := Reverse(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("01010101", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11110000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
###Output
_____no_output_____
###Markdown
OneHot encoding utilitiesOneHot is an encoding of integers where there is one wire for each value, and exactly one wire is high. This allows the efficient creation of some functions, for example muxes. However, behavior may be undefined if the one-wire-high condition is not held.The below two functions provide conversion between binary (`UInt`) and OneHot encodings, and are inverses of each other:- UInt to OneHot: `UIntToOH`- OneHot to UInt: `OHToUInt`
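One way to think about the binary-to-OneHot direction: `UIntToOH(x)` is assumed to behave like dynamically left-shifting the constant 1 by `x`. The sketch below (module name made up, not exercised by a tester) wires both forms side by side under that assumption:

```scala
class OneHotShiftSketch extends Module {
  val io = IO(new Bundle {
    val in      = Input(UInt(4.W))
    val oneHot  = Output(UInt(16.W))
    val shifted = Output(UInt(16.W))
  })
  io.oneHot  := UIntToOH(io.in)
  io.shifted := 1.U << io.in   // assumed to match the OneHot encoding above
}
```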
###Code
Driver(() => new Module {
// Example circuit using UIntToOH
val io = IO(new Bundle {
val in = Input(UInt(4.W))
val out = Output(UInt(16.W))
})
io.out := UIntToOH(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, 0)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 1)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 8)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 15)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
Driver(() => new Module {
// Example circuit using OHToUInt
val io = IO(new Bundle {
val in = Input(UInt(16.W))
val out = Output(UInt(4.W))
})
io.out := OHToUInt(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, Integer.parseInt("0000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("0000 0000 1000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("1000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Some invalid inputs:
// None high
poke(c.io.in, Integer.parseInt("0000 0000 0000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Multiple high
poke(c.io.in, Integer.parseInt("0001 0100 0010 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
MuxesThese muxes take in a list of values with select signals, and output the value associated with the lowest-index select signal.These can either take a list of (select: Bool, value: Data) tuples, or corresponding lists of selects and values as arguments. For simplicity, the examples below only demonstrate the second form. Priority MuxA `PriorityMux` outputs the value associated with the lowest-index asserted select signal. OneHot MuxA `Mux1H` provides an efficient implementation when it is guaranteed that exactly one of the select signals will be high. Behavior is undefined if the assumption is not true.
###Code
Driver(() => new Module {
// Example circuit using PriorityMux
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := PriorityMux(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select higher index only
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both - arbitration needed
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select lower index only
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Mux1H
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := Mux1H(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select index 1
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select index 0
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select none (invalid)
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both (invalid)
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
Counter`Counter` is a counter that can be incremented once every cycle, up to some specified limit, at which point it overflows. Note that it is **not** a Module, and its value is accessible.
###Code
Driver(() => new Module {
  // Example circuit using Counter
val io = IO(new Bundle {
val count = Input(Bool())
val out = Output(UInt(2.W))
})
val counter = Counter(3) // 3-count Counter (outputs range [0...2])
when(io.count) {
counter.inc()
}
io.out := counter.value
}) { c => new PeekPokeTester(c) {
poke(c.io.count, 1)
println(s"start: counter value=${peek(c.io.out)}")
step(1)
println(s"step 1: counter value=${peek(c.io.out)}")
step(1)
println(s"step 2: counter value=${peek(c.io.out)}")
poke(c.io.count, 0)
step(1)
println(s"step without increment: counter value=${peek(c.io.out)}")
poke(c.io.count, 1)
step(1)
println(s"step again: counter value=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
Module 3 Interlude: Chisel Standard Library**Prev: [Generators: Collections](3.2_collections.ipynb)****Next: [Higher-Order Functions](3.3_higher-order_functions.ipynb)** MotivationChisel is all about re-use, so it only makes sense to provide a standard library of interfaces (encouraging interoperability of RTL) and generators for commonly-used hardware blocks. Setup
###Code
val path = System.getProperty("user.dir") + "/source/load-ivy.sc"
interp.load.module(ammonite.ops.Path(java.nio.file.FileSystems.getDefault().getPath(path)))
import chisel3._
import chisel3.util._
import chisel3.iotesters.{ChiselFlatSpec, Driver, PeekPokeTester}
###Output
_____no_output_____
###Markdown
--- The CheatsheetThe [Chisel3 cheatsheet](https://github.com/freechipsproject/chisel-cheatsheet/releases/latest/download/chisel_cheatsheet.pdf) contains a summary of all the major hardware construction APIs, including some of the standard library utilities that we'll introduce below. Decoupled: A Standard Ready-Valid InterfaceOne of the commonly used interfaces provided by Chisel is `DecoupledIO`, providing a ready-valid interface for transferring data. The idea is that the source drives the `bits` signal with the data to be transferred and the `valid` signal when there is data to be transferred. The sink drives the `ready` signal when it is ready to accept data, and data is considered transferred when both `ready` and `valid` are asserted on a cycle.This provides a flow control mechanism in both directions for data transfer, including a backpressure mechanism.Note: `ready` and `valid` should not be combinationally coupled, otherwise this may result in unsynthesizable combinational loops. `ready` should only be dependent on whether the sink is able to receive data, and `valid` should only be dependent on whether the source has data. Only after the transaction (on the next clock cycle) should the values update.Any Chisel data can be wrapped in a `DecoupledIO` (used as the `bits` field) as follows:```scalaval myChiselData = UInt(8.W)// or any Chisel data type, such as Bool(), SInt(...), or even custom Bundlesval myDecoupled = Decoupled(myChiselData)```The above creates a new `DecoupledIO` Bundle with fields- `valid`: Output(Bool)- `ready`: Input(Bool)- `bits`: Output(UInt(8.W))___The rest of the section will be structured somewhat differently from the ones before: instead of giving you coding exercises, we're going to give some code examples and testcases that print the circuit state. Try to predict what will be printed before just running the tests. Queues`Queue` creates a FIFO (first-in, first-out) queue with Decoupled interfaces on both sides, allowing backpressure. Both the data type and number of elements are configurable.
###Code
Driver(() => new Module {
// Example circuit using a Queue
val io = IO(new Bundle {
val in = Flipped(Decoupled(UInt(8.W)))
val out = Decoupled(UInt(8.W))
})
val queue = Queue(io.in, 2) // 2-element queue
io.out <> queue
}) { c => new PeekPokeTester(c) {
  // Example test sequence showing the use and behavior of Queue
poke(c.io.out.ready, 0)
poke(c.io.in.valid, 1) // Enqueue an element
poke(c.io.in.bits, 42)
println(s"Starting:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 1) // Enqueue another element
poke(c.io.in.bits, 43)
// What do you think io.out.valid and io.out.bits will be?
println(s"After first enqueue:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
  poke(c.io.in.valid, 1) // Read an element, attempt to enqueue
poke(c.io.in.bits, 44)
poke(c.io.out.ready, 1)
// What do you think io.in.ready will be, and will this enqueue succeed, and what will be read?
println(s"On first read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 0) // Read elements out
poke(c.io.out.ready, 1)
// What do you think will be read here?
println(s"On second read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
// Will a third read produce anything?
println(s"On third read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
} }
###Output
_____no_output_____
###Markdown
ArbitersArbiters route data from _n_ `DecoupledIO` sources to one `DecoupledIO` sink, given a prioritization.There are two types included in Chisel:- `Arbiter`: prioritizes lower-index producers- `RRArbiter`: runs in round-robin orderNote that Arbiter routing is implemented in combinational logic.The example below demonstrates the use of the priority arbiter (which you will also implement in the next section):
###Code
Driver(() => new Module {
// Example circuit using a priority arbiter
val io = IO(new Bundle {
val in = Flipped(Vec(2, Decoupled(UInt(8.W))))
val out = Decoupled(UInt(8.W))
})
// Arbiter doesn't have a convenience constructor, so it's built like any Module
val arbiter = Module(new Arbiter(UInt(8.W), 2)) // 2 to 1 Priority Arbiter
arbiter.io.in <> io.in
io.out <> arbiter.io.out
}) { c => new PeekPokeTester(c) {
poke(c.io.in(0).valid, 0)
poke(c.io.in(1).valid, 0)
println(s"Start:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 1) // Valid input 1
poke(c.io.in(1).bits, 42)
// What do you think the output will be?
println(s"valid input 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(0).valid, 1) // Valid inputs 0 and 1
poke(c.io.in(0).bits, 43)
// What do you think the output will be? Which inputs will be ready?
println(s"valid inputs 0 and 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 0) // Valid input 0
// What do you think the output will be?
println(s"valid input 0:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
} }
###Output
_____no_output_____
###Markdown
Misc Function BlocksChisel Utils has some helpers that perform stateless functions. Bitwise Utilities PopCountPopCount returns the number of high (1) bits in the input as a `UInt`. ReverseReverse returns the bit-reversed input.
###Code
Driver(() => new Module {
  // Example circuit using PopCount
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := PopCount(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("00000000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11111111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Reverse
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := Reverse(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("01010101", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11110000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
###Output
_____no_output_____
###Markdown
OneHot encoding utilitiesOneHot is an encoding of integers where there is one wire for each value, and exactly one wire is high. This allows the efficient creation of some functions, for example muxes. However, behavior may be undefined if the one-wire-high condition is not held.The below two functions provide conversion between binary (`UInt`) and OneHot encodings, and are inverses of each other:- UInt to OneHot: `UIntToOH`- OneHot to UInt: `OHToUInt`
###Code
Driver(() => new Module {
// Example circuit using UIntToOH
val io = IO(new Bundle {
val in = Input(UInt(4.W))
val out = Output(UInt(16.W))
})
io.out := UIntToOH(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, 0)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 1)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 8)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 15)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
Driver(() => new Module {
// Example circuit using OHToUInt
val io = IO(new Bundle {
val in = Input(UInt(16.W))
val out = Output(UInt(4.W))
})
io.out := OHToUInt(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, Integer.parseInt("0000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("0000 0000 1000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("1000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Some invalid inputs:
// None high
poke(c.io.in, Integer.parseInt("0000 0000 0000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Multiple high
poke(c.io.in, Integer.parseInt("0001 0100 0010 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
MuxesThese muxes take in a list of values with select signals, and output the value associated with the lowest-index select signal.These can either take a list of (select: Bool, value: Data) tuples, or corresponding lists of selects and values as arguments. For simplicity, the examples below only demonstrate the second form. Priority MuxA `PriorityMux` outputs the value associated with the lowest-index asserted select signal. OneHot MuxA `Mux1H` provides an efficient implementation when it is guaranteed that exactly one of the select signals will be high. Behavior is undefined if the assumption is not true.
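The first (tuple) form is assumed to take a sequence of (select, value) pairs, as in the sketch below; the module name `PriorityMuxPairs` is made up and the code is not exercised by the testers that follow:

```scala
class PriorityMuxPairs extends Module {
  val io = IO(new Bundle {
    val a   = Input(Bool())
    val b   = Input(Bool())
    val out = Output(UInt(8.W))
  })
  io.out := PriorityMux(Seq(
    io.a -> 10.U,   // index 0 wins if both selects are high
    io.b -> 20.U
  ))
}
```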
###Code
Driver(() => new Module {
// Example circuit using PriorityMux
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := PriorityMux(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select higher index only
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both - arbitration needed
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select lower index only
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Mux1H
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := Mux1H(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select index 1
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select index 0
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select none (invalid)
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both (invalid)
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
Counter`Counter` is a counter that can be incremented once every cycle, up to some specified limit, at which point it overflows. Note that it is **not** a Module, and its value is accessible.
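Besides the explicit `counter.inc()` style used below, `Counter` is assumed to offer a one-line form that takes an enable condition and a limit and returns the current value together with a wrap flag; a sketch with a made-up module name:

```scala
class CounterWrapSketch extends Module {
  val io = IO(new Bundle {
    val enable = Input(Bool())
    val value  = Output(UInt(2.W))
    val wrap   = Output(Bool())
  })
  val (value, wrap) = Counter(io.enable, 3)  // counts 0, 1, 2, then wraps
  io.value := value
  io.wrap  := wrap   // high on the cycle the counter rolls over
}
```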
###Code
Driver(() => new Module {
  // Example circuit using Counter
val io = IO(new Bundle {
val count = Input(Bool())
val out = Output(UInt(2.W))
})
val counter = Counter(3) // 3-count Counter (outputs range [0...2])
when(io.count) {
counter.inc()
}
io.out := counter.value
}) { c => new PeekPokeTester(c) {
poke(c.io.count, 1)
println(s"start: counter value=${peek(c.io.out)}")
step(1)
println(s"step 1: counter value=${peek(c.io.out)}")
step(1)
println(s"step 2: counter value=${peek(c.io.out)}")
poke(c.io.count, 0)
step(1)
println(s"step without increment: counter value=${peek(c.io.out)}")
poke(c.io.count, 1)
step(1)
println(s"step again: counter value=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
Module 3 Interlude: Chisel Standard Library**Prev: [Generators: Collections](3.2_collections.ipynb)****Next: [Higher-Order Functions](3.3_higher-order_functions.ipynb)** MotivationChisel is all about re-use, so it only makes sense to provide a standard library of interfaces (encouraging interoperability of RTL) and generators for commonly-used hardware blocks. Setup
###Code
val path = System.getProperty("user.dir") + "/source/load-ivy.sc"
interp.load.module(ammonite.ops.Path(java.nio.file.FileSystems.getDefault().getPath(path)))
import chisel3._
import chisel3.util._
import chisel3.iotesters.{ChiselFlatSpec, Driver, PeekPokeTester}
###Output
_____no_output_____
###Markdown
--- The CheatsheetThe [Chisel3 cheatsheet](https://github.com/freechipsproject/chisel-cheatsheet/releases/latest/download/chisel_cheatsheet.pdf) contains a summary of all the major hardware construction APIs, including some of the standard library utilities that we'll introduce below. Decoupled: A Standard Ready-Valid InterfaceOne of the commonly used interfaces provided by Chisel is `DecoupledIO`, providing a ready-valid interface for transferring data. The idea is that the source drives the `bits` signal with the data to be transferred and the `valid` signal when there is data to be transferred. The sink drives the `ready` signal when it is ready to accept data, and data is considered transferred when both `ready` and `valid` are asserted on a cycle.This provides a flow control mechanism in both directions for data transfer, including a backpressure mechanism.Note: `ready` and `valid` should not be combinationally coupled, otherwise this may result in unsynthesizable combinational loops. `ready` should only be dependent on whether the sink is able to receive data, and `valid` should only be dependent on whether the source has data. Only after the transaction (on the next clock cycle) should the values update.Any Chisel data can be wrapped in a `DecoupledIO` (used as the `bits` field) as follows:```scalaval myChiselData = UInt(8.W)// or any Chisel data type, such as Bool(), SInt(...), or even custom Bundlesval myDecoupled = Decoupled(myChiselData)```The above creates a new `DecoupledIO` Bundle with fields- `valid`: Output(Bool)- `ready`: Input(Bool)- `bits`: Output(UInt(8.W))___The rest of the section will be structured somewhat differently from the ones before: instead of giving you coding exercises, we're going to give some code examples and testcases that print the circuit state. Try to predict what will be printed before just running the tests. Queues`Queue` creates a FIFO (first-in, first-out) queue with Decoupled interfaces on both sides, allowing backpressure. Both the data type and number of elements are configurable.
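As a sketch of wrapping a custom Bundle, the `Packet` fields and the always-valid `PacketSource` below are made-up illustration names:

```scala
class Packet extends Bundle {
  val header = UInt(4.W)
  val data   = UInt(8.W)
}

class PacketSource extends Module {
  val io = IO(new Bundle {
    val out = Decoupled(new Packet)   // producer side: valid and bits are outputs
  })
  io.out.valid       := true.B        // always has something to offer
  io.out.bits.header := 1.U
  io.out.bits.data   := 42.U
}
```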
###Code
Driver(() => new Module {
// Example circuit using a Queue
val io = IO(new Bundle {
val in = Flipped(Decoupled(UInt(8.W)))
val out = Decoupled(UInt(8.W))
})
val queue = Queue(io.in, 2) // 2-element queue
io.out <> queue
}) { c => new PeekPokeTester(c) {
  // Example test sequence showing the use and behavior of Queue
poke(c.io.out.ready, 0)
poke(c.io.in.valid, 1) // Enqueue an element
poke(c.io.in.bits, 42)
println(s"Starting:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 1) // Enqueue another element
poke(c.io.in.bits, 43)
// What do you think io.out.valid and io.out.bits will be?
println(s"After first enqueue:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
  poke(c.io.in.valid, 1) // Read an element, attempt to enqueue
poke(c.io.in.bits, 44)
poke(c.io.out.ready, 1)
// What do you think io.in.ready will be, and will this enqueue succeed, and what will be read?
println(s"On first read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 0) // Read elements out
poke(c.io.out.ready, 1)
// What do you think will be read here?
println(s"On second read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
// Will a third read produce anything?
println(s"On third read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
} }
###Output
_____no_output_____
###Markdown
ArbitersArbiters route data from _n_ `DecoupledIO` sources to one `DecoupledIO` sink, given a prioritization.There are two types included in Chisel:- `Arbiter`: prioritizes lower-index producers- `RRArbiter`: runs in round-robin orderNote that Arbiter routing is implemented in combinational logic.The example below demonstrates the use of the priority arbiter (which you will also implement in the next section):
###Code
Driver(() => new Module {
// Example circuit using a priority arbiter
val io = IO(new Bundle {
val in = Flipped(Vec(2, Decoupled(UInt(8.W))))
val out = Decoupled(UInt(8.W))
})
// Arbiter doesn't have a convenience constructor, so it's built like any Module
val arbiter = Module(new Arbiter(UInt(8.W), 2)) // 2 to 1 Priority Arbiter
arbiter.io.in <> io.in
io.out <> arbiter.io.out
}) { c => new PeekPokeTester(c) {
poke(c.io.in(0).valid, 0)
poke(c.io.in(1).valid, 0)
println(s"Start:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 1) // Valid input 1
poke(c.io.in(1).bits, 42)
// What do you think the output will be?
println(s"valid input 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(0).valid, 1) // Valid inputs 0 and 1
poke(c.io.in(0).bits, 43)
// What do you think the output will be? Which inputs will be ready?
println(s"valid inputs 0 and 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 0) // Valid input 0
// What do you think the output will be?
println(s"valid input 0:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
} }
###Output
_____no_output_____
###Markdown
Misc Function BlocksChisel Utils has some helpers that perform stateless functions. Bitwise Utilities PopCountPopCount returns the number of high (1) bits in the input as a `UInt`. ReverseReverse returns the bit-reversed input.
###Code
Driver(() => new Module {
  // Example circuit using PopCount
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := PopCount(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("00000000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11111111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Reverse
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := Reverse(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("01010101", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11110000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
###Output
_____no_output_____
###Markdown
OneHot encoding utilitiesOneHot is an encoding of integers where there is one wire for each value, and exactly one wire is high. This allows the efficient creation of some functions, for example muxes. However, behavior may be undefined if the one-wire-high condition is not held.The below two functions provide conversion between binary (`UInt`) and OneHot encodings, and are inverses of each other:- UInt to OneHot: `UIntToOH`- OneHot to UInt: `OHToUInt`
###Code
Driver(() => new Module {
// Example circuit using UIntToOH
val io = IO(new Bundle {
val in = Input(UInt(4.W))
val out = Output(UInt(16.W))
})
io.out := UIntToOH(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, 0)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 1)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 8)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 15)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
Driver(() => new Module {
// Example circuit using OHToUInt
val io = IO(new Bundle {
val in = Input(UInt(16.W))
val out = Output(UInt(4.W))
})
io.out := OHToUInt(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, Integer.parseInt("0000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("0000 0000 1000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Some invalid inputs:
// None high
poke(c.io.in, Integer.parseInt("0000 0000 0000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Multiple high
poke(c.io.in, Integer.parseInt("0001 0100 0010 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
MuxesThese muxes take in a list of values with select signals, and output the value associated with the lowest-index select signal.These can either take a list of (select: Bool, value: Data) tuples, or corresponding lists of selects and values as arguments. For simplicity, the examples below only demonstrate the second form. Priority MuxA `PriorityMux` outputs the value associated with the lowest-index asserted select signal. OneHot MuxA `Mux1H` provides an efficient implementation when it is guaranteed that exactly one of the select signals will be high. Behavior is undefined if the assumption is not true.
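In addition to the Vec-of-Bools select used below, `Mux1H` is assumed to also accept a one-hot `UInt` select together with a sequence of values; a sketch with a made-up module name:

```scala
class Mux1HUIntSketch extends Module {
  val io = IO(new Bundle {
    val sel = Input(UInt(2.W))    // expected to be one-hot: "01" or "10"
    val out = Output(UInt(8.W))
  })
  io.out := Mux1H(io.sel, Seq(10.U, 20.U))
}
```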
###Code
Driver(() => new Module {
// Example circuit using PriorityMux
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := PriorityMux(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select higher index only
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both - arbitration needed
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select lower index only
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Mux1H
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := Mux1H(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select index 1
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select index 0
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select none (invalid)
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both (invalid)
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
Counter`Counter` is a counter that can be incremented once every cycle, up to some specified limit, at which point it overflows. Note that it is **not** a Module, and its value is accessible.
###Code
Driver(() => new Module {
  // Example circuit using Counter
val io = IO(new Bundle {
val count = Input(Bool())
val out = Output(UInt(2.W))
})
val counter = Counter(3) // 3-count Counter (outputs range [0...2])
when(io.count) {
counter.inc()
}
io.out := counter.value
}) { c => new PeekPokeTester(c) {
poke(c.io.count, 1)
println(s"start: counter value=${peek(c.io.out)}")
step(1)
println(s"step 1: counter value=${peek(c.io.out)}")
step(1)
println(s"step 2: counter value=${peek(c.io.out)}")
poke(c.io.count, 0)
step(1)
println(s"step without increment: counter value=${peek(c.io.out)}")
poke(c.io.count, 1)
step(1)
println(s"step again: counter value=${peek(c.io.out)}")
} }
###Output
_____no_output_____
###Markdown
Module 3 Interlude: Chisel Standard Library**Prev: [Generators: Collections](3.2_collections.ipynb)****Next: [Higher-Order Functions](3.3_higher-order_functions.ipynb)** MotivationChisel is all about re-use, so it only makes sense to provide a standard library of interfaces (encouraging interoperability of RTL) and generators for commonly-used hardware blocks. Setup
###Code
val path = System.getProperty("user.dir") + "/source/load-ivy.sc"
interp.load.module(ammonite.ops.Path(java.nio.file.FileSystems.getDefault().getPath(path)))
import chisel3._
import chisel3.util._
import chisel3.iotesters.{ChiselFlatSpec, Driver, PeekPokeTester}
###Output
_____no_output_____
###Markdown
--- The CheatsheetThe [Chisel3 cheatsheet](https://chisel.eecs.berkeley.edu/doc/chisel-cheatsheet3.pdf) contains a summary of all the major hardware construction APIs, including some of the standard library utilities that we'll introduce below. Decoupled: A Standard Ready-Valid InterfaceOne of the commonly used interfaces provided by Chisel is `DecoupledIO`, providing a ready-valid interface for transferring data. The idea is that the source drives the `bits` signal with the data to be transferred and the `valid` signal when there is data to be transferred. The sink drives the `ready` signal when it is ready to accept data, and data is considered transferred when both `ready` and `valid` are asserted on a cycle.This provides a flow control mechanism in both directions for data transfer, including a backpressure mechanism.Note: `ready` and `valid` should not be combinationally coupled, otherwise this may result in unsynthesizable combinational loops. `ready` should only be dependent on whether the sink is able to receive data, and `valid` should only be dependent on whether the source has data. Only after the transaction (on the next clock cycle) should the values update.Any Chisel data can be wrapped in a `DecoupledIO` (used as the `bits` field) as follows:```scalaval myChiselData = UInt(8.W)// or any Chisel data type, such as Bool(), SInt(...), or even custom Bundlesval myDecoupled = Decoupled(myChiselData)```The above creates a new `DecoupledIO` Bundle with fields- `valid`: Output(Bool)- `ready`: Input(Bool)- `bits`: Output(UInt(8.W))___The rest of the section will be structured somewhat differently from the ones before: instead of giving you coding exercises, we're going to give some code examples and testcases that print the circuit state. Try to predict what will be printed before just running the tests. Queues`Queue` creates a FIFO (first-in, first-out) queue with Decoupled interfaces on both sides, allowing backpressure. Both the data type and number of elements are configurable.
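On the configurability point: `Queue` is assumed to also accept optional `pipe` and `flow` flags alongside the entry count; the sketch below only illustrates the call shape and is not exercised by the tester that follows:

```scala
class FlowQueueSketch extends Module {
  val io = IO(new Bundle {
    val in  = Flipped(Decoupled(UInt(8.W)))
    val out = Decoupled(UInt(8.W))
  })
  // flow = true is assumed to let data appear at the output on the same cycle it is enqueued
  io.out <> Queue(io.in, entries = 4, pipe = false, flow = true)
}
```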
###Code
Driver(() => new Module {
// Example circuit using a Queue
val io = IO(new Bundle {
val in = Flipped(Decoupled(UInt(8.W)))
val out = Decoupled(UInt(8.W))
})
val queue = Queue(io.in, 2) // 2-element queue
io.out <> queue
}) { c => new PeekPokeTester(c) {
  // Example test sequence showing the use and behavior of Queue
poke(c.io.out.ready, 0)
poke(c.io.in.valid, 1) // Enqueue an element
poke(c.io.in.bits, 42)
println(s"Starting:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 1) // Enqueue another element
poke(c.io.in.bits, 43)
// What do you think io.out.valid and io.out.bits will be?
println(s"After first enqueue:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
  poke(c.io.in.valid, 1) // Read an element, attempt to enqueue
poke(c.io.in.bits, 44)
poke(c.io.out.ready, 1)
// What do you think io.in.ready will be, and will this enqueue succeed, and what will be read?
println(s"On first read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
poke(c.io.in.valid, 0) // Read elements out
poke(c.io.out.ready, 1)
// What do you think will be read here?
println(s"On second read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
// Will a third read produce anything?
println(s"On third read:")
println(s"\tio.in: ready=${peek(c.io.in.ready)}")
println(s"\tio.out: valid=${peek(c.io.out.valid)}, bits=${peek(c.io.out.bits)}")
step(1)
} }
###Output
[[35minfo[0m] [0.001] Elaborating design...
[[35minfo[0m] [0.141] Done elaborating.
Total FIRRTL Compile Time: 461.3 ms
Total FIRRTL Compile Time: 86.3 ms
End of dependency graph
Circuit state created
[[35minfo[0m] [0.002] SEED 1536195652625
[[35minfo[0m] [0.003] Starting:
[[35minfo[0m] [0.007] io.in: ready=1
[[35minfo[0m] [0.007] io.out: valid=0, bits=255
[[35minfo[0m] [0.010] After first enqueue:
[[35minfo[0m] [0.012] io.in: ready=1
[[35minfo[0m] [0.012] io.out: valid=1, bits=42
[[35minfo[0m] [0.014] On first read:
[[35minfo[0m] [0.016] io.in: ready=0
[[35minfo[0m] [0.017] io.out: valid=1, bits=42
[[35minfo[0m] [0.021] On second read:
[[35minfo[0m] [0.023] io.in: ready=1
[[35minfo[0m] [0.023] io.out: valid=1, bits=43
[[35minfo[0m] [0.026] On third read:
[[35minfo[0m] [0.026] io.in: ready=1
[[35minfo[0m] [0.026] io.out: valid=0, bits=42
test cmd2WrapperHelperanonfun1anon1 Success: 0 tests passed in 10 cycles taking 0.061678 seconds
[[35minfo[0m] [0.029] RAN 5 CYCLES PASSED
###Markdown
ArbitersArbiters route data from _n_ `DecoupledIO` sources to one `DecoupledIO` sink, given a prioritization.There are two types included in Chisel:- `Arbiter`: prioritizes lower-index producers- `RRArbiter`: runs in round-robin orderNote that Arbiter routing is implemented in combinational logic.The example below demonstrates the use of the priority arbiter (which you will also implement in the next section):
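The arbiter also reports which input it granted; the `chosen` field used in the sketch below is assumed to be a `UInt` index on the arbiter's IO, and the module name is made up:

```scala
class ArbiterChosenSketch extends Module {
  val io = IO(new Bundle {
    val in     = Flipped(Vec(2, Decoupled(UInt(8.W))))
    val out    = Decoupled(UInt(8.W))
    val chosen = Output(UInt(1.W))
  })
  val arb = Module(new Arbiter(UInt(8.W), 2))
  arb.io.in <> io.in
  io.out <> arb.io.out
  io.chosen := arb.io.chosen   // index of the granted input
}
```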
###Code
Driver(() => new Module {
// Example circuit using a priority arbiter
val io = IO(new Bundle {
val in = Flipped(Vec(2, Decoupled(UInt(8.W))))
val out = Decoupled(UInt(8.W))
})
// Arbiter doesn't have a convenience constructor, so it's built like any Module
val arbiter = Module(new Arbiter(UInt(8.W), 2)) // 2 to 1 Priority Arbiter
arbiter.io.in <> io.in
io.out <> arbiter.io.out
}) { c => new PeekPokeTester(c) {
poke(c.io.in(0).valid, 0)
poke(c.io.in(1).valid, 0)
println(s"Start:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 1) // Valid input 1
poke(c.io.in(1).bits, 42)
// What do you think the output will be?
println(s"valid input 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(0).valid, 1) // Valid inputs 0 and 1
poke(c.io.in(0).bits, 43)
// What do you think the output will be? Which inputs will be ready?
println(s"valid inputs 0 and 1:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
poke(c.io.in(1).valid, 0) // Valid input 0
// What do you think the output will be?
println(s"valid input 0:")
println(s"\tin(0).ready=${peek(c.io.in(0).ready)}, in(1).ready=${peek(c.io.in(1).ready)}")
println(s"\tout.valid=${peek(c.io.out.valid)}, out.bits=${peek(c.io.out.bits)}")
} }
###Output
[[35minfo[0m] [0.000] Elaborating design...
[[35minfo[0m] [0.039] Done elaborating.
Total FIRRTL Compile Time: 82.1 ms
Total FIRRTL Compile Time: 30.5 ms
End of dependency graph
Circuit state created
[[35minfo[0m] [0.000] SEED 1536195737049
[[35minfo[0m] [0.002] Start:
[[35minfo[0m] [0.004] in(0).ready=1, in(1).ready=1
[[35minfo[0m] [0.004] out.valid=0, out.bits=113
[[35minfo[0m] [0.004] valid input 1:
[[35minfo[0m] [0.005] in(0).ready=1, in(1).ready=1
[[35minfo[0m] [0.005] out.valid=1, out.bits=42
[[35minfo[0m] [0.005] valid inputs 0 and 1:
[[35minfo[0m] [0.006] in(0).ready=1, in(1).ready=0
[[35minfo[0m] [0.007] out.valid=1, out.bits=43
[[35minfo[0m] [0.007] valid input 0:
[[35minfo[0m] [0.009] in(0).ready=1, in(1).ready=0
[[35minfo[0m] [0.010] out.valid=1, out.bits=43
test cmd3WrapperHelperanonfun1anon1 Success: 0 tests passed in 5 cycles taking 0.020403 seconds
[[35minfo[0m] [0.011] RAN 0 CYCLES PASSED
###Markdown
Misc Function BlocksChisel Utils has some helpers that perform stateless functions. Bitwise Utilities PopCountPopCount returns the number of high (1) bits in the input as a `UInt`. ReverseReverse returns the bit-reversed input.
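PopCount is assumed to also accept a collection of `Bool`s (for example a `Vec`), which is handy for counting asserted flags; a small sketch with a made-up module name:

```scala
class FlagCountSketch extends Module {
  val io = IO(new Bundle {
    val flags = Input(Vec(4, Bool()))
    val count = Output(UInt(3.W))
  })
  io.count := PopCount(io.flags)   // number of asserted flags, 0 to 4
}
```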
###Code
Driver(() => new Module {
  // Example circuit using PopCount
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := PopCount(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("00000000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("11111111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Reverse
val io = IO(new Bundle {
val in = Input(UInt(8.W))
val out = Output(UInt(8.W))
})
io.out := Reverse(io.in)
}) { c => new PeekPokeTester(c) {
  // Integer.parseInt is used to create an Integer from a binary specification
poke(c.io.in, Integer.parseInt("01010101", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("00001111", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11110000", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, Integer.parseInt("11001010", 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
###Output
[[35minfo[0m] [0.000] Elaborating design...
[[35minfo[0m] [0.006] Done elaborating.
Total FIRRTL Compile Time: 35.9 ms
Total FIRRTL Compile Time: 33.6 ms
End of dependency graph
Circuit state created
[[35minfo[0m] [0.000] SEED 1536195807957
[[35minfo[0m] [0.001] in=0b1010101, out=0b10101010
[[35minfo[0m] [0.002] in=0b1111, out=0b11110000
[[35minfo[0m] [0.003] in=0b11110000, out=0b1111
[[35minfo[0m] [0.004] in=0b11001010, out=0b1010011
test cmd5WrapperHelperanonfun1anon1 Success: 0 tests passed in 5 cycles taking 0.009305 seconds
[[35minfo[0m] [0.004] RAN 0 CYCLES PASSED
###Markdown
OneHot encoding utilitiesOneHot is an encoding of integers where there is one wire for each value, and exactly one wire is high. This allows the efficient creation of some functions, for example muxes. However, behavior may be undefined if the one-wire-high condition is not held.The below two functions provide conversion between binary (`UInt`) and OneHot encodings, and are inverses of each other:- UInt to OneHot: `UIntToOH`- OneHot to UInt: `OHToUInt`
###Code
Driver(() => new Module {
// Example circuit using UIntToOH
val io = IO(new Bundle {
val in = Input(UInt(4.W))
val out = Output(UInt(16.W))
})
io.out := UIntToOH(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, 0)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 1)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 8)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
poke(c.io.in, 15)
println(s"in=${peek(c.io.in)}, out=0b${peek(c.io.out).toInt.toBinaryString}")
} }
Driver(() => new Module {
// Example circuit using OHToUInt
val io = IO(new Bundle {
val in = Input(UInt(16.W))
val out = Output(UInt(4.W))
})
io.out := OHToUInt(io.in)
}) { c => new PeekPokeTester(c) {
poke(c.io.in, Integer.parseInt("0000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("0000 0000 1000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
poke(c.io.in, Integer.parseInt("1000 0000 0000 0001".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Some invalid inputs:
// None high
poke(c.io.in, Integer.parseInt("0000 0000 0000 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
// Multiple high
poke(c.io.in, Integer.parseInt("0001 0100 0010 0000".replace(" ", ""), 2))
println(s"in=0b${peek(c.io.in).toInt.toBinaryString}, out=${peek(c.io.out)}")
} }
###Output
[[35minfo[0m] [0.000] Elaborating design...
[[35minfo[0m] [0.005] Done elaborating.
Total FIRRTL Compile Time: 17.9 ms
Total FIRRTL Compile Time: 17.5 ms
End of dependency graph
Circuit state created
[[35minfo[0m] [0.000] SEED 1536195870680
[[35minfo[0m] [0.001] in=0b1, out=0
[[35minfo[0m] [0.002] in=0b10000000, out=7
[[35minfo[0m] [0.002] in=0b1000000000000001, out=15
[[35minfo[0m] [0.003] in=0b0, out=0
[[35minfo[0m] [0.003] in=0b1010000100000, out=15
test cmd7WrapperHelperanonfun1anon1 Success: 0 tests passed in 5 cycles taking 0.006347 seconds
[[35minfo[0m] [0.004] RAN 0 CYCLES PASSED
###Markdown
MuxesThese muxes take in a list of values with select signals, and output the value associated with the lowest-index select signal.These can either take a list of (select: Bool, value: Data) tuples, or corresponding lists of selects and values as arguments. For simplicity, the examples below only demonstrate the second form. Priority MuxA `PriorityMux` outputs the value associated with the lowest-index asserted select signal. OneHot MuxA `Mux1H` provides an efficient implementation when it is guaranteed that exactly one of the select signals will be high. Behavior is undefined if the assumption is not true.
###Code
Driver(() => new Module {
// Example circuit using PriorityMux
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := PriorityMux(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select higher index only
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both - arbitration needed
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select lower index only
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
Driver(() => new Module {
// Example circuit using Mux1H
val io = IO(new Bundle {
val in_sels = Input(Vec(2, Bool()))
val in_bits = Input(Vec(2, UInt(8.W)))
val out = Output(UInt(8.W))
})
io.out := Mux1H(io.in_sels, io.in_bits)
}) { c => new PeekPokeTester(c) {
poke(c.io.in_bits(0), 10)
poke(c.io.in_bits(1), 20)
// Select index 1
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select index 0
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select none (invalid)
poke(c.io.in_sels(0), 0)
poke(c.io.in_sels(1), 0)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
// Select both (invalid)
poke(c.io.in_sels(0), 1)
poke(c.io.in_sels(1), 1)
println(s"in_sels=${peek(c.io.in_sels)}, out=${peek(c.io.out)}")
} }
###Output
[[35minfo[0m] [0.000] Elaborating design...
[[35minfo[0m] [0.007] Done elaborating.
Total FIRRTL Compile Time: 11.9 ms
Total FIRRTL Compile Time: 12.7 ms
End of dependency graph
Circuit state created
[[35minfo[0m] [0.000] SEED 1536195953397
[[35minfo[0m] [0.001] in_sels=Vector(0, 1), out=20
[[35minfo[0m] [0.001] in_sels=Vector(1, 0), out=10
[[35minfo[0m] [0.002] in_sels=Vector(0, 0), out=0
[[35minfo[0m] [0.002] in_sels=Vector(1, 1), out=30
test cmd9WrapperHelperanonfun1anon1 Success: 0 tests passed in 5 cycles taking 0.003994 seconds
[[35minfo[0m] [0.003] RAN 0 CYCLES PASSED
###Markdown
Counter`Counter` is a counter that can be incremented once every cycle, up to some specified limit, at which point it overflows. Note that it is **not** a Module, and its value is accessible.
###Code
Driver(() => new Module {
  // Example circuit using Counter
val io = IO(new Bundle {
val count = Input(Bool())
val out = Output(UInt(2.W))
})
val counter = Counter(3) // 3-count Counter (outputs range [0...2])
when(io.count) {
counter.inc()
}
io.out := counter.value
}) { c => new PeekPokeTester(c) {
poke(c.io.count, 1)
println(s"start: counter value=${peek(c.io.out)}")
step(1)
println(s"step 1: counter value=${peek(c.io.out)}")
step(1)
println(s"step 2: counter value=${peek(c.io.out)}")
poke(c.io.count, 0)
step(1)
println(s"step without increment: counter value=${peek(c.io.out)}")
poke(c.io.count, 1)
step(1)
println(s"step again: counter value=${peek(c.io.out)}")
} }
###Output
[[35minfo[0m] [0.000] Elaborating design...
[[35minfo[0m] [0.006] Done elaborating.
Total FIRRTL Compile Time: 14.7 ms
Total FIRRTL Compile Time: 15.8 ms
End of dependency graph
Circuit state created
[[35minfo[0m] [0.000] SEED 1536195979735
[[35minfo[0m] [0.001] start: counter value=0
[[35minfo[0m] [0.002] step 1: counter value=1
[[35minfo[0m] [0.002] step 2: counter value=2
[[35minfo[0m] [0.002] step without increment: counter value=2
[[35minfo[0m] [0.003] step again: counter value=0
test cmd10WrapperHelperanonfun2anon1 Success: 0 tests passed in 9 cycles taking 0.004137 seconds
[[35minfo[0m] [0.003] RAN 4 CYCLES PASSED
|
SPY_CREDIT_DATA_ANALYSIS.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('spy_credit.csv')
df.head()
df.shape
df.isna().sum()
df2 = df.dropna()
df2.head()
df2.shape
df3 = df2.drop(["Open", "High", "Low"], axis=1)
df3.head()
df3.dtypes
df3.isna().sum()
df3["HYAS"] = df3.HYAS.convert_objects(convert_numeric=True)
df3["Baa_10Y"] = df3.Baa_10Y.convert_objects(convert_numeric=True)
df3["HYBaa_OS"] = df3.HYBaa_OS.convert_objects(convert_numeric=True)
df3["OBR"] = df3.OBR.convert_objects(convert_numeric=True)
df3["Libor"] = df3.Libor.convert_objects(convert_numeric=True)
df3["Prim_cr_rate"] = df3.Prim_cr_rate.convert_objects(convert_numeric=True)
df3.dtypes
df3.describe()
import matplotlib.pyplot as plt
import seaborn as sns
sns.pairplot(df3)
###Output
/usr/local/lib/python3.6/dist-packages/numpy/lib/histograms.py:824: RuntimeWarning: invalid value encountered in greater_equal
keep = (tmp_a >= first_edge)
/usr/local/lib/python3.6/dist-packages/numpy/lib/histograms.py:825: RuntimeWarning: invalid value encountered in less_equal
keep &= (tmp_a <= last_edge)
|
prepare_model.ipynb | ###Markdown
Prepare regression models- You can create a new regression model with your original database- The algorithms are based on the following paper - https://pubs.acs.org/doi/10.1021/jacs.9b11442 Load settings- Most global settings are set in "settings.yaml" - TODO: Some modules do not refer to "setting_path" in this notebook - the path is hard-coded as "setting.yaml" in some modules
###Code
import joblib
import yaml
import sys
%load_ext autoreload
%autoreload 2
sys.path.append("ion_manager/ion_predictor")
from ion_manager.ion_predictor.ml.auto_trainer import auto_prepare_model
from ion_manager.ion_predictor.ml import pretrain_descriptors
# load global settings
setting_path = "settings.yaml"
with open(setting_path) as file:
settings = yaml.safe_load(file)
###Output
_____no_output_____
###Markdown
Train GNN- You do not always have to run these cells Dump pretraining descriptor data from SMILES data- it takes time- you can tune the number of training molecules by changing "num_learning_molecules" in setting.yaml
###Code
pretrain_descriptors.dump(settings)
###Output
100%|██████████| 1/1 [00:00<00:00, 13.34it/s]
100%|██████████| 300/300 [00:55<00:00, 5.45it/s]
###Markdown
Train GNN model- Some hyperparameters can be changed in settings.yaml- The trained neural net is saved in the cache folder- The code runs on the CPU, but you can accelerate it on a GPU with minor code modification
###Code
auto_prepare_model(settings)
###Output
/home/user/anaconda3/envs/ion/lib/python3.7/site-packages/dgl/base.py:45: DGLWarning: Recommend creating graphs by `dgl.graph(data)` instead of `dgl.DGLGraph(data)`.
return warnings.warn(message, category=category, stacklevel=1)
10%|█ | 21/200 [00:02<00:19, 9.16it/s]
###Markdown
Prepare regression model- You should run these cells to refresh the regression model with your custom database
###Code
from ion_manager.ion_predictor.composite.auto_data_preparer import load_ion_excel
import pandas as pd
from ion_manager.ion_predictor.ml.regressor import initiate_regressor
from ion_manager.ion_predictor.ml.dataset_utils import get_number_and_category_cols
from ion_manager.ion_predictor.django_wrapper.auto_predictor import compensate_columns
# load CSV data dumped by django
composite_path = "database/composite_train.csv"
composite_path = "database/composite_all.csv" #train all: this contains data of composite_test in JACS2020
compound_path = "database/compounds.csv"
composite_df = pd.read_csv(composite_path)
compound_df = pd.read_csv(compound_path)
# calc neural descriptors etc
parsed_df = load_ion_excel(
settings, compound_df=compound_df, composite_df=composite_df)
# check for numeric and category columns
y_label = settings["y_label"]
X = parsed_df.drop([y_label, "ID"], axis=1)
X = X.sort_index(axis=1, ascending=False)
y = parsed_df[[y_label]]
# default model is a pipeline of imputers, scalers, and random forest regressor
number_columns, category_columns = get_number_and_category_cols(
parsed_df, y_label)
model = initiate_regressor(number_columns, category_columns)
# fit
model.fit(X, y)
# dump model for django
joblib.dump([model, list(X.columns)], settings["regressor_path"])
###Output
/home/user/anaconda3/envs/ion/lib/python3.7/site-packages/sklearn/pipeline.py:346: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
self._final_estimator.fit(Xt, y, **fit_params_last_step)
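###Markdown
The comment in the cell above describes `initiate_regressor` as a pipeline of imputers, scalers, and a random forest regressor. The project-internal implementation is not shown here, but the cell below sketches what such a default pipeline could look like in scikit-learn. This is only an illustrative assumption (the function name `sketch_regressor` and all hyperparameters are my own choices), not the actual `initiate_regressor` code.
###Code
# Illustrative sketch only: an imputer/scaler/one-hot + random forest pipeline
# similar in spirit to what the comment above describes.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
def sketch_regressor(number_columns, category_columns):
    # numeric columns: fill missing values with the median, then standardise
    numeric = Pipeline([("impute", SimpleImputer(strategy="median")),
                        ("scale", StandardScaler())])
    # categorical columns: fill missing values with the mode, then one-hot encode
    categorical = Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                            ("onehot", OneHotEncoder(handle_unknown="ignore"))])
    pre = ColumnTransformer([("num", numeric, number_columns),
                             ("cat", categorical, category_columns)])
    return Pipeline([("pre", pre), ("rf", RandomForestRegressor(n_estimators=100))])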
###Markdown
Check prediction
###Code
import numpy as np
composite_test_path = "database/composite_test.csv"
composite_df = pd.read_csv(composite_test_path)
test_df = load_ion_excel(
settings, compound_df=compound_df, composite_df=composite_df)
# add some lacking columns emerged during preprocessing processes
test_X = test_df.drop([y_label, "ID"], axis=1)
test_X = compensate_columns(test_X, X.columns)
test_y = test_df[[y_label]]
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, r2_score
ax = plt
pred_train_y = model.predict(X)
pred_test_y = model.predict(test_X)
ax.scatter(y, pred_train_y, s=2)
ax.scatter(test_y, pred_test_y, s=10)
plt.xlim(-14, 0)
plt.ylim(-14, 0)
ax.plot((-14, 0), (-14, 0), c="black", linewidth=1)
test_mae = mean_absolute_error(test_y, pred_test_y)
ax.text(-10, -1, f"MAE: {test_mae:.2f}")
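# Optional addition (not part of the original cell): r2_score is already imported
# above, so the parity plot can also be annotated with R^2.
test_r2 = r2_score(test_y, pred_test_y)
ax.text(-10, -2, f"R2: {test_r2:.2f}")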
###Output
_____no_output_____ |
Image-Captioning-Project-master/0_Dataset.ipynb | ###Markdown
Computer Vision Nanodegree Project: Image Captioning---The Microsoft **C**ommon **O**bjects in **CO**ntext (MS COCO) dataset is a large-scale dataset for scene understanding. The dataset is commonly used to train and benchmark object detection, segmentation, and captioning algorithms. You can read more about the dataset on the [website](http://cocodataset.org/home) or in the [research paper](https://arxiv.org/pdf/1405.0312.pdf).In this notebook, you will explore this dataset, in preparation for the project. Step 1: Initialize the COCO APIWe begin by initializing the [COCO API](https://github.com/cocodataset/cocoapi) that you will use to obtain the data.
###Code
import os
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
import json
# initialize COCO API for instance annotations
dataDir = 'D:/Datasets/cocoapi'#'/opt/cocoapi'
dataType = 'val2014'
instances_annFile = os.path.join(dataDir, 'annotations','instances_{}.json'.format(dataType))
print(instances_annFile)
#dataset = json.load(open(instances_annFile, 'r'))
coco = COCO(instances_annFile)
# initialize COCO API for caption annotations
captions_annFile = os.path.join(dataDir, 'annotations','captions_{}.json'.format(dataType))
coco_caps = COCO(captions_annFile)
# get image ids
ids = list(coco.anns.keys())
###Output
D:/Datasets/cocoapi\annotations\instances_val2014.json
loading annotations into memory...
Done (t=5.23s)
creating index...
index created!
loading annotations into memory...
Done (t=0.49s)
creating index...
index created!
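###Markdown
Before plotting anything, it can be handy to see what the instance annotations contain. The cell below is an optional exploration step (added here for illustration, not part of the original project notebook); it uses the standard COCO API calls `getCatIds`, `loadCats` and `getImgIds` to list the object categories and count the images for one example category.
###Code
# optional exploration of the instance annotations
cat_ids = coco.getCatIds()
cats = coco.loadCats(cat_ids)
print('number of categories:', len(cats))
print('some category names:', [cat['name'] for cat in cats[:10]])
# count images containing an example category
person_img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=['person']))
print("images containing 'person':", len(person_img_ids))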
###Markdown
Step 2: Plot a Sample ImageNext, we plot a random image from the dataset, along with its five corresponding captions. Each time you run the code cell below, a different image is selected. In the project, you will use this dataset to train your own model to generate captions from images!
###Code
import numpy as np
import skimage.io as io
import matplotlib.pyplot as plt
%matplotlib inline
# pick a random image and obtain the corresponding URL
ann_id = np.random.choice(ids)
img_id = coco.anns[ann_id]['image_id']
img = coco.loadImgs(img_id)[0]
url = img['coco_url']
# print URL and visualize corresponding image
print(url)
I = io.imread(url)
plt.axis('off')
plt.imshow(I)
plt.show()
# load and display captions
annIds = coco_caps.getAnnIds(imgIds=img['id']);
anns = coco_caps.loadAnns(annIds)
coco_caps.showAnns(anns)
###Output
http://images.cocodataset.org/val2014/COCO_val2014_000000401862.jpg
|
jupyter_notebooks/D4/supplementaries/notebooks/subclass/printgraph.ipynb | ###Markdown
Example subclass of the Graph class. Copyright (C) 2004-2015 byAric Hagberg Dan Schult Pieter Swart All rights reserved.BSD license.
###Code
from networkx import Graph
from networkx.exception import NetworkXException, NetworkXError
import networkx.convert as convert
from copy import deepcopy
class PrintGraph(Graph):
"""
Example subclass of the Graph class.
Prints activity log to file or standard output.
"""
def __init__(self, data=None, name='', file=None, **attr):
Graph.__init__(self, data=data, name=name, **attr)
if file is None:
import sys
self.fh=sys.stdout
else:
self.fh=open(file,'w')
def add_node(self, n, attr_dict=None, **attr):
Graph.add_node(self, n, attr_dict=attr_dict, **attr)
self.fh.write("Add node: %s\n"%n)
def add_nodes_from(self, nodes, **attr):
for n in nodes:
self.add_node(n, **attr)
def remove_node(self, n):
Graph.remove_node(self, n)
self.fh.write("Remove node: %s\n"%n)
def remove_nodes_from(self, nodes):
adj = self.adj
for n in nodes:
self.remove_node(n)
def add_edge(self, u, v, attr_dict=None, **attr):
Graph.add_edge(self, u, v, attr_dict=attr_dict, **attr)
self.fh.write("Add edge: %s-%s\n"%(u,v))
def add_edges_from(self, ebunch, attr_dict=None, **attr):
for e in ebunch:
u,v=e[0:2]
self.add_edge(u, v, attr_dict=attr_dict, **attr)
def remove_edge(self, u, v):
Graph.remove_edge(self, u, v)
self.fh.write("Remove edge: %s-%s\n"%(u, v))
def remove_edges_from(self, ebunch):
for e in ebunch:
u,v=e[0:2]
self.remove_edge(u, v)
def clear(self):
self.name = ''
self.adj.clear()
self.node.clear()
self.graph.clear()
self.fh.write("Clear graph\n")
def subgraph(self, nbunch, copy=True):
# subgraph is needed here since it can destroy edges in the
# graph (copy=False) and we want to keep track of all changes.
#
# Also for copy=True Graph() uses dictionary assignment for speed
# Here we use H.add_edge()
bunch = set(self.nbunch_iter(nbunch))
if not copy:
# remove all nodes (and attached edges) not in nbunch
self.remove_nodes_from([n for n in self if n not in bunch])
self.name = "Subgraph of (%s)"%(self.name)
return self
else:
# create new graph and copy subgraph into it
H = self.__class__()
H.name = "Subgraph of (%s)"%(self.name)
# add nodes
H.add_nodes_from(bunch)
# add edges
seen = set()
for u, nbrs in self.adjacency():
if u in bunch:
for v, datadict in nbrs.items():
if v in bunch and v not in seen:
dd = deepcopy(datadict)
H.add_edge(u, v, dd)
seen.add(u)
# copy node and graph attr dicts
H.node = dict( (n, deepcopy(d))
for (n, d) in self.node.items() if n in H)
H.graph = deepcopy(self.graph)
return H
G = PrintGraph()
G.add_node('foo')
G.add_nodes_from('bar', weight=8)
G.remove_node('b')
G.remove_nodes_from('ar')
list(G.nodes(data=True))
G.add_edge(0, 1, weight=10)
list(G.edges(data=True))
G.remove_edge(0, 1)
G.add_edges_from(list(zip(list(range(0o3)), list(range(1, 4)))), weight=10)
list(G.edges(data=True))
G.remove_edges_from(list(zip(list(range(0o3)), list(range(1, 4)))))
list(G.edges(data=True))
G = PrintGraph()
G.add_path(list(range(10)))
H1=G.subgraph(list(range(4)), copy=False)
list(H1.edges())
###Output
_____no_output_____ |
Notebooks/Combinations.ipynb | ###Markdown
Let's play around with combinations of the results from the different models
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Embeddings & LSTM
###Code
emb_lstm1 = pd.read_csv('../Submissions/embeddings-lstm-split1.csv')
emb_lstm1[:3]
emb_lstm2 = pd.read_csv('../Submissions/embeddings-lstm-split2.csv')
emb_lstm2[:3]
emb_lstm = pd.read_csv('../Submissions/embeddings-lstm.csv')
emb_lstm[:3]
###Output
_____no_output_____
###Markdown
1. (First half + Second half + All data * 2) / 4
###Code
emb_lstm_combined = (emb_lstm1.drop('id', axis=1) + emb_lstm2.drop('id', axis=1) + emb_lstm.drop('id', axis=1) * 2) / 4
emb_lstm_combined.index = emb_lstm['id']
emb_lstm_combined[:3]
emb_lstm_combined.to_csv('../Submissions/embeddings-lstm-all-combined.csv')
###Output
_____no_output_____
###Markdown
0.9767 - an improvement of 0.0010 over the combination of the two halves 2. (First half + Second half + All data) / 3 - a lower score (0.9766) TFIDF & Logistic Regression + Embeddings & LSTM
###Code
tfidf_logreg = pd.read_csv('../Submissions/tfidf-logistic-regression.csv')
tfidf_logreg[:3]
logreg_lstm_comb = (emb_lstm.drop('id', axis=1) + tfidf_logreg.drop('id', axis=1)) / 2
logreg_lstm_comb.index = emb_lstm['id']
logreg_lstm_comb[:3]
logreg_lstm_comb.to_csv('../Submissions/logreg_lstm_combined.csv')
###Output
_____no_output_____
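###Markdown
The same averaging pattern is repeated for every blend, so it can be convenient to wrap it in a small helper. The function below is only a sketch (the name `blend_submissions` and the equal default weights are my own choices); it assumes every submission file shares the same `id` column and the same label columns. For example, `blend_submissions(['../Submissions/embeddings-lstm.csv', '../Submissions/tfidf-logistic-regression.csv'])` would reproduce the 50/50 blend above.
###Code
def blend_submissions(paths, weights=None):
    """Weighted average of several submission CSVs that share an 'id' column."""
    frames = [pd.read_csv(path).set_index('id') for path in paths]
    if weights is None:
        weights = [1] * len(frames)
    blended = sum(frame * weight for frame, weight in zip(frames, weights))
    return blended / sum(weights)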
###Markdown
Score 0.9789 - a strong improvement And now the split halves
###Code
tfidf_logreg1 = pd.read_csv('../Submissions/tfidf-logistic-regression-half-clean1.csv')
tfidf_logreg1[:3]
tfidf_logreg2 = pd.read_csv('../Submissions/tfidf-logistic-regression-half-clean2.csv')
tfidf_logreg2[:3]
halves_combined = (emb_lstm1.drop('id', axis=1) + emb_lstm2.drop('id', axis=1) + tfidf_logreg1.drop('id', axis=1) + tfidf_logreg2.drop('id', axis=1)) / 4
halves_combined.index = emb_lstm['id']
halves_combined[:3]
halves_combined.to_csv('../Submissions/halves-combined.csv')
###Output
_____no_output_____ |
units/SLU08_Data_Problems/Examples Notebook - SLU8 - Data Problems.ipynb | ###Markdown
Examples Notebook - SLU8 - Data Problems Imports (feel free to skip)
###Code
import pandas as pd
data = pd.read_csv('data_with_problems.csv', index_col=0);
###Output
_____no_output_____
###Markdown
Getting the number of unique values:
###Code
data.gender.nunique();
###Output
_____no_output_____
###Markdown
Finding unexpected data problems:
###Code
data.gender.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
String operations on a column:
###Code
data.gender.str.lower();
###Output
_____no_output_____
###Markdown
Replacing data uniques (does the same as the above)
###Code
data.gender.replace({"MALE": "male"});
###Output
_____no_output_____
###Markdown
Getting a mask of duplicates:
###Code
duplicated_mask = data.duplicated();
###Output
_____no_output_____
###Markdown
Counting the duplicates:
###Code
duplicated_mask.sum();
###Output
_____no_output_____
###Markdown
Dropping duplicates
###Code
data.drop_duplicates(inplace=True, subset=None, keep='first')
###Output
_____no_output_____
###Markdown
Missing data mask:
###Code
data.isnull();
###Output
_____no_output_____
###Markdown
Counting missing data:
###Code
data.isnull().sum();
###Output
_____no_output_____
###Markdown
Replacing missing data in a Series with a value:
###Code
data.age.fillna(0, inplace=True);
###Output
_____no_output_____
###Markdown
Replacing missing data in a Series with the median of the series:
###Code
data.height.fillna(data.height.median(), inplace=True);
###Output
_____no_output_____
###Markdown
Replacing missing data in a series of strings, with a placeholder string
###Code
data.gender.fillna('unknown', inplace=True);
###Output
_____no_output_____
###Markdown
Binning continuous variables to deal with outliers
###Code
height_bins = pd.qcut(data['height'],
5,
labels=['very short', 'short', 'average', 'tall', 'very tall'])
###Output
_____no_output_____ |
10_ErrorHandling.ipynb | ###Markdown
Please download the new class notes. Step 1 : Navigate to the directory where your files are stored. Open a terminal. Using `cd`, navigate to *inside* the ILAS_Python_for_engineers folder on your computer. Step 3 : Update the course notes by downloading the changesIn the terminal type:>`git add -Agit commit -m "commit"git fetch upstreamgit merge -X theirs upstream/master` Error Handling Syntax Errors Exceptions    Exception Types    Longer Error Messages    Raising Exceptions    Example : Parameter Validity Checking    Catching and Handling Exceptions `try` and `except` Checking Interactive User Input    Re-requesting User Input    Binary Numbers    Binary Numbers    Example: Numpy Integer Overflow    Example: Error Handling with Integer Type Conversion `finally` Extension Topic: A very brief introduction to the IDE debugger Summary Test-Yourself Exercises Review Exercises Lesson Goal__Understand the meaning__ of errors generated by your programs and take logical steps to solve them.Be able to __write exceptions__ to prevent your program from allowing erroneous code to proceed undetected. Fundamental programming concepts - Understand the difference between *syntax* errors and *exceptions*. - Understand how to interpret an error message. - Be able to generate exceptions of your own. - Be able to write code that *catches* exceptions to prevent your program from quitting if something unexpected happens. When writing code, you make mistakes; it happens to everyone.An important part of learning to program is learning to:- fix things when they go wrong.- anticipate things that might go wrong and prepare for them. To identify errors (or bugs), it often helps to: - test small parts of the code separately (Jupyter notebook is very useful for this). - write lots of print statements. Let's look at an example error message:
###Code
for i in range(4):
print i
###Output
_____no_output_____
###Markdown
*First*, error messages show you __where__ the error occurred.Python prints the line(s) in which the error occurred. *Second*, error messages print information that is designed to tell you __what__ you are doing wrong. The strategy to find out what is going on is to read the last sentence of the error message. Sometimes it is easy for Python to determine what is wrong and the error message is very informative. Other times you make a more confusing error.In this case Python often generates an error message that gives little explanation of what you did wrong. Let's look at some examples of error messages that you are likely to encounter and may have already encountered. Errors (or *bugs*) can be divided into two types: *syntax errors* and *exceptions*... Syntax ErrorsSyntax errors occur when the code you write does not conform to the rules of the language. You will probably have seen many of these syntax error messages by now! `invalid syntax`A common error message is `invalid syntax`. This means you have coded something that Python doesn't understand. For example, this is often: - a typo, which you can often spot by looking carefully at the code. - a missing symbol (e.g. when expressing a conditional or a loop) Example : `invalid syntax`The code below should: - check the value of `a` - print the message if the value of `a` is 7. What's wrong with the code below?
###Code
a = 7
if a = 7:
print('the value of a equals 7')
###Output
_____no_output_____
###Markdown
Python uses the `^` symbol to point to the part of your line of code that it doesn't understand. __Try it yourself__Write the corrected code in the cell below and run it again: Example : `^`Use the `^` symbol to work out what is wrong with the code below:
###Code
avalue = 7
if avalue < 10
print('the value of avalue is smaller than 10')
###Output
_____no_output_____
###Markdown
__Try it yourself__Fix the code and re-run it in the cell below
###Code
###Output
_____no_output_____
###Markdown
Example : `invalid syntax`Other times, the syntax error message may be less obvious... What is wrong with this code?
###Code
plt.plot([1,2,3]
plt.title('Nice plot')
###Output
_____no_output_____
###Markdown
Python reads `plt.title('Nice plot')` as part of the `plt.plot` function. In this context, `plt.title('Nice plot')` makes no sense so the position of the error `^` is indicated here. __Try it yourself__Fix the code and re-run it in the cell below ExceptionsExceptions are when the *syntax* is correct but something unexpected or anomalous occurs during the execution of a program. Python detects some instances of this automatically, e.g.: - attempting to divide by zero. Attempting to divide by zero:
###Code
a = 1/0
###Output
_____no_output_____
###Markdown
Attempting to compute the dot product of two vectors of different lengths.
###Code
import numpy as np
a = [1, 2, 3]
b = [1, 2, 3, 4]
c = np.dot(a, b)
###Output
_____no_output_____
###Markdown
Exception TypesThe error message contains: - the __exception type__ designed to tell you the nature of the problem. - a message designed to tell you what you are doing wrong.A full list of Python exception types can be found here: https://docs.python.org/3/library/exceptions.html Here are a few definitions of exception types: - `ValueError` : when a function argument has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. - `TypeError` : when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch. - `IndexError` : when a sequence subscript is out of range. - `SyntaxError` : when the syntax used is not recognised by Python Let's look at a few examples of errors generated by Python automatically. `IndexError: list index out of range`
###Code
x = [1, 2, 3]
for i in range(4):
print(x[i])
###Output
1
2
3
###Markdown
Error message:`IndexError: list index out of range`The length of the array `x` is 3 (so `x[0]`, `x[1]`, and `x[2]`), while you are trying to print `x[3]`. An ----> arrow points to where this problem was encountered. __Try it yourself__In the cell below, fix the code and run it again. Longer Error MessagesRemember that error messages *first* show you __where__ the error occurred.If the code you write contains imported modules, this message appears as a *traceback* from the function that generates the error, all the way down to the code that you wrote. Python will show the step that was violated in every file between the original function and your code.If the code you write contains imported modules that themselves import modules, this message can be very long. For each file, it prints a few lines of the code to the screen and points to the line where the error occurred with an ---> arrow. In the code below, the error occurs in the line `plt.plot(xdata, ydata)`, which calls a function in the `matplotlib` package.The matplotlib function generates the error when it tries to plot `y` vs. `x`. *Note:* this is a generic error message from `matplotlib`; it doesn't substitute the names of the arrays you have assigned in your code.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def func(x, a=2, b=3):
    y = b * np.exp(-a * x)
xdata = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
ydata = func(xdata, b=4, a=1)
print(ydata)
plt.plot(xdata, ydata);
###Output
None
###Markdown
The problem is that `x and y must not be None`. In this case `x` and `y` refer to `xdata` and `ydata`, because that is what the variables are called in the `matplotlib` function. Let's print `xdata` and `ydata` to see what is wrong:
###Code
print(xdata)
print(ydata)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
None
###Markdown
`xdata` is indeed an array with 10 values.`ydata` is equal to `None` i.e. it exists but has no value assigned to it. Why is `ydata` equal to `None`?Look carefully at the function again to find what needs correcting: ```Pythondef func(x, a=2, b=3): y = b * np.exp(-a * x)``` __Try it yourself__Re-write the function in the cell below and run the code again: When you have resolved all the errors that Python has detected, your code will run.Unfortunately, this doesn't necessarily mean that your program will do what you want it to... Raising ExceptionsBecause the intended functionality of the program is only known by the programmer, exceptions can require more effort to detect than syntax errors. Examples, where the code will run but the output will be *incorrect*: - receiving negative data when only positive data is permitted, e.g. a negative integer for the number of students in a class.- unexpected integer overflows If invalid data is encountered, the program should output an informative message, just like when an error is detected automatically. Example : Parameter Validity Checking __Hydrostatic Pressure 静水圧__The hydrostatic pressure on a submerged object due to the overlying fluid can be found by:$$P = \rho g h$$Units Pa = Nm$^{-2}$ = kg m$^{-1}$s$^{-2}$$g$ = acceleration due to gravity, m s$^{-2}$ $\rho $ = fluid density, kg m$^{-3}$ $h$ = height of the fluid above the object, m.
###Code
def hp(h, rho = 1000, g = 9.81):
"""
Computes the hydrostatic pressure acting on a submerged object given:
- the density of the fluid in which is it submerged, rho
    - the acceleration due to gravity, g
- the height of fluid above the object, h
"""
return rho * g * h
###Output
_____no_output_____
###Markdown
This expression makes sense only for $\rho$, $g$ and $h > 0$.However, we can input negative values for any of these parameters without raising an error.
###Code
hp(-300, -20)
###Output
_____no_output_____
###Markdown
It is easy to input negative values by mistake, for example : - *the user makes a mistake* - *another function takes the same quantity expressed using the opposite sign.* ```Python def position(t, r0, v0=0.0, a=-9.81): return r0 + (v0 * t) + (0.5 * a * t**2) ``` Rather than return an incorrect result, which could easily be overlooked, we can raise an exception in the case of invalid data. How to Raise an Exception - The keyword `raise` - The type of the exception - A string saying what caused it in () parentheses.
###Code
def hp(h, rho = 1000, g = 9.81):
"""
Computes the hydrostatic pressure acting on a submerged object.
h = height of fluid above object, rho = fluid density, g = gravity
"""
if h < 0:
raise ValueError("Height of fluid, h, must be greater than or equal to zero")
if rho < 0:
raise ValueError("Density of fluid, rho, must be greater than or equal to zero")
if g < 0:
raise ValueError("Acceleration due to gravity, g, must be greater than or equal to zero")
return rho * g * h
###Output
_____no_output_____
###Markdown
The type of exception must be one that Python recognises.It must appear in the list of built-in Python exceptions: https://docs.python.org/3/library/exceptions.html(You can even write your own exception types but that is outside the scope of this course.) There are no fixed rules about which error type to use. Choose the one that is most appropriate. Above, we have used the exception type `ValueError`. - `ValueError` : when a function argument has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. Note: These are the same types that are generated when Python automatically raises an error. Now if we run the same function again...
###Code
hp(-300, -20)
###Output
_____no_output_____
###Markdown
Note that only the *first* exception that Python encounters gets raised. The program exits at the first error, just like automatically generated errors. Catching and Handling ExceptionsWe don't always want the programs we write to exit when an error is encountered.Sometimes we want the program to 'catch' the exception and then continue to do something else. Let's use a real-world example to illustrate this: USS Yorktown was a US Navy "Smart Ship" with a computer system fitted to operate a control centre from the ship's bridge. In 1997, a crew member entered data into the system that led to an attempted division by zero. The program exited, causing the ship's computer systems and the ship's propulsion systems to shut down. Code similar to that shown in the following cell would have been used to accept a user input and divide a number by that input. If we input a non-zero numerical value, the code works.If we enter zero, it generates an error.
###Code
# Input a value and convert it from a string to a numerical type
val = int(input("input a number "))
new_val = 1 / val
###Output
input a number 0
###Markdown
It is undesirable for the ship's software to: - __stop__ if input data leads to a divide-by-zero. - __proceed erroneously__ and without warning. The software needs to 'catch' the divide-by-zero exception, and do something else. What could we make the program do instead of exiting? One solution might be to: - reduce the propulsion force. - ask for revised input. `try` and `except`In Python, the key words `try` and `except` are used to catch errors:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught``` So for the Smart Ship, `try` and `except` could have been used to prevent the program from exiting if a `ZeroDivisionError` was generated:
###Code
val = 0
try:
new_val = 1 / val
print(f"new number = {new_val}")
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
###Output
Zero is not a valid input. Reducing propulsion force...
###Markdown
Several `except` statements can be used to take care of different errors.This can include assigning several exception types to a single `except` statement by placing them inside of a tuple. The following pseudo-code shows an example with a series of `except` statements.
###Code
try:
# do something
pass
except ValueError:
# handle ValueError exception
pass
except (TypeError, ZeroDivisionError):
# handle multiple exceptions
# TypeError and ZeroDivisionError
pass
except:
# handle all other exceptions
pass
###Output
_____no_output_____
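###Markdown
As a concrete, runnable illustration of the pseudo-code above (my own example, not part of the original notes), the function below reacts differently depending on which exception type is raised while converting and inverting a value.
###Code
def safe_invert(value):
    try:
        return 1 / float(value)
    except ValueError:
        # handle ValueError, e.g. float("abc")
        print(f"Cannot convert {value!r} to a number")
    except (TypeError, ZeroDivisionError):
        # handle TypeError (e.g. float(None)) and ZeroDivisionError (value == 0)
        print(f"Cannot invert {value!r}")
safe_invert("abc")   # raises ValueError internally
safe_invert(None)    # raises TypeError internally
safe_invert(0)       # raises ZeroDivisionError internally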
###Markdown
Checking Interactive User InputIn the case of the smart ship, the input value is given by the user:
###Code
try:
    # Ship's computer system requests number from user
val = int(input("input a number "))
new_val = 1 / val
print(f"new number = {new_val}")
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
###Output
input a number 0
Zero is not a valid input. Reducing propulsion force...
###Markdown
By catching the exception, we avoid running the part of the code that will generate the error and stop the program.However, that means we have not created a variable called new_val, which the problem code section was intended to do.This can cause problems later in the program. Re-requesting User InputRecall our example error-catching solution for the smart ship - if an error is generated: - reduce the propulsion force. - __ask for revised input.__ One way to do this is to use a `while` loop with a `break` statement.We keep requesting user input until valid input is given.At that point, the `break` statement exits the loop.
###Code
while True:
try:
x = int(input("Please enter an even number: "))
if (x % 2 != 0):
raise ValueError("Odd number entered")
break
except ValueError:
print("Not a valid number. Try again...")
###Output
_____no_output_____
###Markdown
To make our program more readable we can also encapsulate the code in a __recursive__ function.For example, for the smart ship:
###Code
def SmartShip():
try:
        # Ship's computer system requests number from user
val = int(input("input a number "))
new_val = 1 / val
return new_val
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
# Request new input by re-running the function.
return SmartShip()
new_val = SmartShip()
print(f"new_val = {new_val}")
###Output
input a number 0
Zero is not a valid input. Reducing propulsion force...
input a number 3
new_val = 0.3333333333333333
###Markdown
This first example features an exception that *prevents* Python's default response to the error (i.e. exiting the code). __Try it yourself__Using the same format as the `SmartShip` example:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught```write a function that:- asks the user to input their age.- returns the user's age.- raises an exception if the user's age is less than 0 and asks the user to try again. Checking Automatically Generated ValuesIt can also be useful to check values that are generated automatically (e.g. due to imported data such as files or sensor readings). Background: bits and bytesThe smallest unit of computer memory is the *bit*; and each bit can take on one of two values; 0 or 1. For many computer architectures the smallest usable 'block' is a *byte*.One byte is made up of 8 bits. (e.g. a 64-bit operating system, a 32-bit operating system ... the number of bits will almost always be a multiple of 8 (one byte).) The 'bigger' a thing we want to store, the more bytes we need. In calculations, 'bigger' can mean:- how large or small the number can be.- the accuracy with which we want to store a number. Binary NumbersWhen using the binary system each number is represented by summing a combination of base 2 numbers ($2^0, 2^1, 2^2....$). For example, the table shows the binary representation of the numbers 0 to 15 (the maximum number that can be represented by 4 bits).The sum of the base 2 columns marked with a 1 is found as the decimal number in the left hand column.The combination of 1s and 0s used to generate this decimal number, is its binary representation.|Decimal| Binary |||||:------------:|:-----------:|:-----------:|:-----------:|:---------:|| |$2^3=8$ |$2^2=4$ |$2^1=2$ |$2^0=1$ | |0 |0 |0 |0 |0 | |1 |0 |0 |0 |1 | |2 |0 |0 |1 |0 | |3 |0 |0 |1 |1 | |4 |0 |1 |0 |0 | |5 |0 |1 |0 |1 | |6 |0 |1 |1 |0 | |7 |0 |1 |1 |1 | |8 |1 |0 |0 |0 | |9 |1 |0 |0 |1 | |10 |1 |0 |1 |0 | |11 |1 |0 |1 |1 | |12 |1 |1 |0 |0 | |13 |1 |1 |0 |1 | |14 |1 |1 |1 |0 | |15 |1 |1 |1 |1 | The __largest number__ that can be represented by $n$ bits is:$2^{n} - 1$We can see this from the table. The -1 comes from the fact that we start counting at 0 (i.e. $2^0$), rather than at 1 (i.e. $2^{1}$). Another way to think about this is by considering what happens when we reach the __largest number__ that can be represented by $n$ bits.If we want to store a larger number, we need more bits. Let's increase our 4 bit number to a 5 bit number.The binary number `10000` (5 bits) represents the decimal number $2^4$.From the pattern of 1s and 0s in the table, we can see that by subtracting 1:$2^4-1$ we should get the 4 bit binary number `1111`.
The __largest positive integer__ that can be represented by $n$ bits is:$2^{n-1} - 1$The power $n-1$ is because there is one less bit available when storing a *signed* integer.One bit is used to store the sign; + positive or - negative (represented as a 0 or a 1) The __largest negative integer__ (i.e. the most negative value) that can be represented by $n$ bits is:$-2^{n-1}$ The first number when counting in the positive direction (0000 in the 4 bit example above) is zero.Zero does not need a second representation in the negative scale.Therefore, when counting in the negative direction: - 0000 = -1 (not 0) - 0001 = -2 - .... __Examples: 4 bit numbers__The __largest unsigned integer__ that can be represented by 4 bits is:$2^{4} - 1 = 15$The __largest positive signed integer__ that can be represented by 4 bits is:$2^{4-1} - 1 = 7$The __largest negative signed integer__ that can be represented by 4 bits is:$-2^{4-1} = -8$ Integer Storage and OverflowIn most languages (C, C++ etc), a default number of bits is used to store a given type of number.Python is different in that it *automatically* assigns a variable type to a variable. Therefore it also automatically assigns the number of bits used to store the variable. This means it will assign as many bytes as needed to represent the number entered by the user. It starts with a 32 bit number and assigns more bytes as needed. The largest (and smallest! - we will see how decimals are stored in next week's seminar) number that Python can store is theoretically infinite. The number size is, however, limited by the computer's memory. However, when using the mathematics package Numpy, C-style fixed precision integers are used.It is possible for an integer to *overflow* as when using C. We will use the Numpy package to demonstrate this.
###Code
import numpy as np
###Output
_____no_output_____
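###Markdown
As a quick check of the formulas above (an added aside, not part of the original notes), NumPy's `np.iinfo` reports the limits of each fixed-width integer type, and these match $-2^{n-1}$ and $2^{n-1}-1$.
###Code
# compare the signed-integer formulas with the limits NumPy reports
for n, dtype in [(8, np.int8), (16, np.int16), (32, np.int32), (64, np.int64)]:
    info = np.iinfo(dtype)
    print(n, "bits:", info.min, "to", info.max, "| formula:", -2**(n - 1), "to", 2**(n - 1) - 1)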
###Markdown
In this case, a maximum size of 64 bits is used.$2^{64-1} - 1 = 9.223372037 \times 10^{18}$So if we use a number greater than $2^{64-1} - 1$ the integer will *overflow*. Example: Numpy Integer Overflow In the array below:- The value with index `a[0]` is $2^{63} - 1$, the maximum storable value.- the data type is specified to make sure it is stored as an int.
###Code
a = np.array([2**63 - 1], dtype=int)
print(a, a.dtype)
###Output
[9223372036854775807] int64
###Markdown
The `bin` function prints the number in binary form, as a string.(prefix `0b` for positive numbers, prefix `-0b` for negative numbers)It is important to note that values are represented as regular binary numbers, NOT using their signed storage representation.e.g. 0b101 = 5, -0b101 = -5
###Code
print(bin(5), bin(-5))
print(a, a.dtype)
print(bin(a[0]))
print(type(a[0]))
print(len(bin(a[0]))) # Number of characters in binary string representation
###Output
[9223372036854775807] int64
0b111111111111111111111111111111111111111111111111111111111111111
<class 'numpy.int64'>
65
###Markdown
[9223372036854775807] int64 0b111111111111111111111111111111111111111111111111111111111111111 65There are 65 characters in the string.The first two show:- `0` : positive number.- `b` : binary number.The 63 characters that follow are all `1`.Therefore the number is $2^{63}-1$.($2^{63}-1$ is the largest value that can be stored by a 64 bit signed integer). Adding 1 to the array will cause it to overflow.Overflow means that the number's value loops round to start again from its smallest possible value.
###Code
a += 1
print(bin(a[0]))
print(type(a[0]))
print(len(bin(a[0]))) # Number of characters in binary string representation
###Output
-0b1000000000000000000000000000000000000000000000000000000000000000
<class 'numpy.int64'>
67
###Markdown
-0b1000000000000000000000000000000000000000000000000000000000000000 67There are 67 characters in the string.The first *three* show:`-0` : negative number.`b` : binary number.The *64* characters that follow tell us that the number is $2^{63}$.$-(2^{63})$ is the lowest value that can be stored by a 64 bit signed integer. Remember that, when printed, values are represented as regular binary numbers, NOT using their signed storage representation.e.g. 0b101 = 5, -0b101 = -5 Hence, printed this way, the lowest negative number that a 64 bit integer can store needs one extra character compared to the largest positive number: a negative sign followed by 64 binary digits (a `1` and then 63 zeros), even though it still fits in 64 bits in memory. To see the number of bits required to store a number, use the bit_length method.
###Code
b = 8**12
print(b, type(b))
print(b.bit_length(), end="\n\n")
b = 8**24
print(b, type(b))
print(b.bit_length())
###Output
68719476736 <class 'int'>
37
4722366482869645213696 <class 'int'>
73
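###Markdown
These bit lengths follow directly from the powers involved (an added note): $8^{12} = (2^3)^{12} = 2^{36}$, which needs 37 bits, and $8^{24} = 2^{72}$, which needs 73 bits. A quick check:
###Code
# 8**12 and 2**36 are the same number, so they need the same number of bits
print((8**12).bit_length(), (2**36).bit_length())   # both 37
print((8**24).bit_length(), (2**72).bit_length())   # both 73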
###Markdown
Example: Error Handling with Integer Type ConversionAn un-caught error due to storage limits led to the explosion of an un-manned rocket, *Ariane 5* (European Space Agency), shortly after lift-off (1996).We will reproduce the precise mistake the developers of the Ariane 5 software made. The Ariane 5 rocket explosion was caused by an integer overflow. The speed of the rocket was stored as a 64-bit float.This was converted in the navigation software to a 16-bit integer. However, the value of the float was greater than $2^{16-1}-1 = 32767$, (the largest number a 16-bit integer can represent).This led to an overflow that in turn caused the navigation system to fail and the rocket to explode. We can demonstrate what happened in the rocket program. Consider a speed of 40000.44 stored as a `float` (64 bits)(units are unnecessary for demonstrating this process):
###Code
speed_float = 40000.44
###Output
_____no_output_____
###Markdown
Let's first convert the float to a 32-bit `int`.We can use NumPy to cast the variable as an integer with a fixed number of bits.
###Code
speed_int = np.int32(speed_float)
print(speed_int)
print(bin(speed_int))
###Output
40000
0b1001110001000000
###Markdown
40000 can be represented using 32 bits$40000 < 2^{32-1}-1$$40000 < 2,147,483,647$ The conversion behaves as we would expect. Now, if we convert the speed from the `float` to a 16-bit integer...
###Code
speed_int = np.int16(speed_float)
print(speed_int)
print(bin(speed_int))
###Output
-25536
-0b110001111000000
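###Markdown
The value -25536 is exactly what wrap-around produces (an added note): a 16-bit signed integer runs from -32768 to 32767, so 40000 comes out as $40000 - 2^{16} = -25536$. A quick check of that arithmetic:
###Code
print(40000 - 2**16)   # -25536, the wrapped-around 16-bit value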
###Markdown
We see clearly the result of an integer overflow since the 16-bit integer has too few bits to represent the number 40000. What can we do to avoid the integer overflow? In this example, a 16 bit integer was chosen. Minimising memory usage was clearly an objective when writing the program. One solution is to incrementally step through increasing integer sizes (16 bit, 32 bit, 64 bit ... ).When we find an integer size that is large enough to hold the variable, we store the variable. This means we:- always select the minimum possible variable size.- avoid overflow errors. One way to do this is using `if` and `else`.This is known as LBYL (look before you leap) programming.
###Code
speed_float = 32_10.0 # (small enough for a 16-bit int)
speed_float = 42_767.0 # (too large for a 16-bit int)
speed_float = 2_147_500_000.0 # (too large for a 32-bit int)
# Check if the number to store will fit in a 16 bit integer.
if abs(speed_float) <= (2**(16-1) - 1):
vel = np.int16(abs(speed_float))
# Check if the number to store will fit in a 32 bit integer.
elif abs(speed_float) <= (2**(32-1) - 1):
vel = np.int32(abs(speed_float))
else:
raise OverflowError("Value too large for 32-bit int.")
###Output
_____no_output_____
###Markdown
We can use `try` and `except` to do the same thing. In general, the main advantages of using `try` and `except`:- speed-ups (e.g. preventing extra lookups: `if...and...and...and...`)- cleaner code (less lines/easier to read)- jumping more than one level of logic (e.g. where a break doesn't go far enough)- where the outcome is likely to be unexpected (e.g. it is difficult to define `if` and `elif` conditional statements). This is known as EAFP (easier to ask for forgiveness than permission) programming. Remember the `try` and `except` structure:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught``` Let's write two functions to try:
###Code
def cast_v_16(v):
"Convert to a 16-bit int."
if abs(v) <= (2**(16-1) - 1):
return np.int16(v)
else:
raise OverflowError("Value too large for 16-bit int.")
def cast_v_32(v):
"Convert to a 32-bit int."
if abs(v) <= (2**(32-1) - 1):
return np.int32(v)
else:
raise OverflowError("Value too large for 32-bit int.")
###Output
_____no_output_____
###Markdown
Then use each of the functions in the `try`/`except` structure.
###Code
v = 32_10.0 # (small enough for a 16-bit int)
v = 42_767.0 # (too large for a 16-bit int)
v = 2_147_500_000.0 # (too large for a 32-bit int)
try:
# Try to cast v as 16-bit int using function.
vel = cast_v_16(v)
print(vel)
except OverflowError:
# If cast as 16-bit int failed, raise exception.
# Try to cast v as 32-bit int, using function.
try:
vel = cast_v_32(v)
print(vel)
except OverflowError:
# If cast as 32-bit int failed, raise exception
raise RuntimeError("Could not cast velocity to an available int type.")
print(type(vel))
###Output
_____no_output_____
###Markdown
This block of code can itself be placed inside of a function to make the code more concise.The only change made is returning the cast variable instead of storing it as the variable `vel`.
###Code
def cast_velocity(v):
try:
# Try to cast v to a 16-bit int
return cast_v_16(v)
except OverflowError:
# If cast to 16-bit int failed (and exception raised), try casting to a 32-bit int
try:
return cast_v_32(v)
except OverflowError:
# If cast to 32-bit int failed, raise exception
raise RuntimeError("Could cast v to an available int type.")
# v fits into a 16-bit int
v_int = cast_velocity(32_10.0)
print(v_int, type(v_int))
# v too large for a 16-bit int
v_int = cast_velocity(42_767.0)
print(v_int, type(v_int))
# v too large for a 32-bit int
v_int = cast_velocity(2_147_500_000.0)
print(v_int, type(v_int))
###Output
3210 <class 'numpy.int16'>
42767 <class 'numpy.int32'>
###Markdown
Gangnam StyleIn 2014, Google switched from 32-bit integers to 64-bit integers to count views when the video "Gangnam Style" was viewed more than 2,147,483,647 times, the limit of 32-bit integers. Note: We can replace the calculation for the maximum value storable by an integer type with the method `np.iinfo(TYPE).max`, replacing `TYPE` with the integer type. e.g.For example:```pythondef cast_v_16(v): "Convert to a 16-bit int." if abs(v) <= (2**(16-1) - 1): return np.int16(v) ```can be written:```pythondef cast_v_16(v): "Convert to a 16-bit int." if abs(v) <= np.iinfo(np.int16).max: return np.int16(v) ``` `finally`The `try` statement in Python can have an optional `finally` clause. The indented code following finally is executed, regardless of the outcome of the preceding `try` (and `except`).
###Code
def cast_velocity(v):
try:
# Try to cast v to a 16-bit int
return cast_v_16(v)
except OverflowError:
# If cast to 16-bit int failed (and exception raised), try casting to a 32-bit int
try:
return cast_v_32(v)
except OverflowError:
# If cast to 32-bit int failed, raise exception
raise RuntimeError("Could cast v to an available int type.")
finally:
print("32 bit integer tried")
finally:
print("16 bit integer tried")
v_int = cast_velocity(42_767.0)
v_int = cast_velocity(2_147_500_000.0)
###Output
32 bit integer tried
16 bit integer tried
32 bit integer tried
16 bit integer tried
###Markdown
This is often used to "clean up".For example, we may be working with a file.```Pythontry: f = open("test.txt") perform file operationsfinally: f.close() ``` Extension Topic: A very brief introduction to the IDE debuggerMany IDEs such as Spyder, MATLAB and PyCharm feature a debugger mode; a mode of running your code that is designed to make removing errors easier. The underlying idea is to break your code into smaller chunks and run them sequentially.This is a little like running a sequence of Jupyter notebook cell one after the other.Running your code in this way can make it easier to spot where a bug occurs and wheat is causing it. BreakpointsA breakpoint can be added next to a line of code.In Spyder, and in many other IDEs, a break point is added by double clicking in the margin, to the left of the line number. Every time the line with the break point is reached, the program will pause.When the programmer presses resume, the code will advance until the next break point.This is a bit like running the individual cells of a Jupyter notebook. You can add as many breakpoints as you like.To remove the breakpoint simply click on it. So that you can switch easily between running the code with and without breakpoints, there are seperate buttons to run the code with and without break points.In Spyder:the button to run the code normally is: the button to run the code in debugger mode is: the button to advance the code to the next breakpoint is: All of these can be found in the toolbar at the top of the main window. On the main advantages of running your code using breakpoints, is that you can check the value of variables at different points in your program. For example, as we saw earlier, the following code will automatically raise a `ZeroDivisionError`: a = 0 a = 1 / a If we, for example, unknowlingly import a variable with value zero from an external file, it can be difficult to spot the source of error.
###Code
import numpy as np
a = np.loadtxt('sample_data/sample_data_seminar10.dat')
a = int(a[0][0])
a = 1 / a
print(a)
###Output
_____no_output_____
###Markdown
In this case, if we run the code, we can see that as `a = 0`, `a = 1 / a` raised an exception.It does not reveal that the imported value was the origin of the `ZeroDivisionError`. If we place a break point on the line: a = int(a[0][0]) we see that the value of `a` *immediately before* the line was run was an imported array of values equal to zero.The line that will run when we click advance is highlighted in pink. Our next break point is on the line that generates the error a = 1 / aThe value of `a` is 0.If we click advance, we generate the error as expected; however, we now know where the zero value that is causing the error came from. The Spyder debugger mode is a little difficult to use and minimal documentation is provided.For those of you wishing to run Python using an IDE, I highly recommend PyCharm: https://www.jetbrains.com/pycharm/ It is free to download if you have a university email address.Clear, step-by-step instructions for running the PyCharm debugger mode (along with many other tutorials) can be found here: https://www.jetbrains.com/help/pycharm/step-2-debugging-your-first-python-application.html Summary - Errors (or *bugs*) can be divided into two types: *syntax errors* and *exceptions*. - Syntax errors occur when the code you write does not conform to the rules of the Python language. - Exceptions are when the *syntax* is correct but something unexpected occurs during the execution of a program. - Python detects some instances of this automatically. - The keyword `raise` causes Python to stop the program and generate an error message. - The keywords `try` and `except` can be used to *catch* exceptions; preventing anticipated errors from stopping the program. - `try` is optionally followed by the keyword `finally` (somewhere in the same block of code) which executes code regardless of the outcome of the `try` statement. Test-Yourself ExercisesComplete the Test-Yourself exercises below.Save your answers as .py files and email them to:[email protected] Test-Yourself Exercise: Identifying and fixing syntax errors.Each example contains one or two syntactical errors. Copy and paste the section of code in the cell below the example (so that you retain the original version with errors for comparison).Fix the error so that the code runs properly. Note that you will need to make changes to only one or two lines in each example. Example 1
###Code
# Example 1
y = (xvalues + 2) * (xvalues - 1) * (xvalues - 2)
xvalues = linspace(-3, 3, 100)
plt.plot(xvalues, y, 'r--')
plt.plot([-2, 1, 2], [0 ,0, 0], 'bo', markersize=10)
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.title('Nice Python figure!')
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 2
###Code
# Example 2
def test(x, alpha):
return np.exp(-alpha * x) * np.cos(x)
x = np.linspace(0, 10np.pi, 100)
alpha = 0.2
y = test(x)
plt.plot(x, y, 'b')
plt.xlabel('x')
plt.ylabel('f(x)')
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 3
###Code
# Example 3
a = np.array([2, 2, 4, 2, 4, 4])
for i in range(a):
if a[i] < 3: # replace value with 77 when value equals 2
a[i] = 77
else: # otherwise replace value with -77
a[i] = -77
print('modified a:' a)
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 4
###Code
# Example 4
y = np.zeros(20, 20)
y[8:13] = 10
plt.matshow(y)
plt.title(image of array y);
# Copy and paste code here
###Output
_____no_output_____
Please download the new class notes. Step 1 : Navigate to the directory where your files are stored. Open a terminal. Using `cd`, navigate to *inside* the ILAS_Python_for_engineers folder on your computer. Step 3 : Update the course notes by downloading the changesIn the terminal type:>`git add -Agit commit -m "commit"git fetch upstreamgit merge -X theirs upstream/master` Error Handling Syntax Errors Exceptions    Exception Types    Longer Error Messages    Raising Exceptions    Example : Parameter Validity Checking    Catching and Handling Exceptions `try` and `except` Checking Interactive User Input    Re-requesting User Input    Binary Numbers    Binary Numbers    Example: Numpy Integer Overflow    Example: Error Handling with Integer Type Conversion `finally` Extension Topic: A very brief introduction to the IDE debugger Summary Test-Yourself Exercises Review Exercises Lesson Goal__Understand the meaning__ of errors generated by your programs and take logical steps to solve them.Be able to __write exceptions__ to prevent your program from allowing erroneous code to proceed undetected. Fundamental programming concepts - Understand the diffrence between *syntax* errors and *exceptions*. - Understand how to interpret an error message. - Be able to generate exceptions of your own. - Be able to write code that *catches* exceptions to prevent your program from quitting if something unexpected happens. When writing code, you make mistakes; it happens to everyone.An important part of learing to program is learning to:- fix things when they go wrong.- anticipate things that might go wrong and prepare for them. To identify errors (or bugs), it often helps to: - test small parts of the code separately (Jupyter notebook is very useful for this). - write lots of print statements. Let's look at an example error message:
###Code
for i in range(4):
print i
###Output
_____no_output_____
###Markdown
*First*, error messages show you __where__ the error occurred.Python prints the line(s) in which the error occurred. *Second*, error messages print information that is designed to tell you __what__ you are doing wrong. The strategy to find out what is going on is to read the last sentence of the error message. Sometimes it is easy for Python to determine what is wrong and the error message is very informative. Other times you make a more confusing error.In this case Python often generates an error message gives little explanation of what you did wrong. Let's look at some examples of error messages that you are likely to encounter and may have already encountered. Errors (or *bugs*) can be divided into two types: *syntax errors* and *exceptions*... Syntax ErrorsSyntax errors occur when the code you write does not conform to the rules of the language. You will probably have seen many of syntax error messages by now! `invalid syntax`A common error message is `invalid syntax`. This means you have coded something that Python doesn't understand. For example, this is often: - a typo, which you can often spot by looking carefully at the code. - a missing symbol (e.g. when expressing a conditional or a loop) Example : `invalid syntax`The code below should: - check the value of `a` - print the message if the value of `a` is 7. What's wrong with the code below?
###Code
a = 7
if a = 7:
print('the value of a equals 7')
###Output
_____no_output_____
###Markdown
Python shows with the `^` symbol to point to which part of your line of code it doesn't understand. __Try it yourself__Write the corrected code in the cell below and run it again: Example : `^`Use the `^` symbol to work out what is wrong with the code below:
###Code
avalue = 7
if avalue < 10
print('the value of avalue is smaller than 10')
###Output
_____no_output_____
###Markdown
__Try it yourself__Fix the code and re-run it in the cell below
###Code
###Output
_____no_output_____
###Markdown
Example : `invalid syntax`Other times, the syntax error message may be less obvious... What is wrong with this code?
###Code
plt.plot([1,2,3]
plt.title('Nice plot')
###Output
_____no_output_____
###Markdown
Python reads `plt.title('Nice plot')` as part of the `plt.plot` function. In this context, `plt.title('Nice plot')` makes no sense so the position of the error `^` is indicated here. __Try it yourself__Fix the code and re-run it in the cell below ExceptionsExceptions are when the *syntax* is correct but something unexpected or anomalous occurs during the execution of a program. Python detects some instances of this automatically, e.g.: - attempting to divide by zero. Attempting to divide by zero:
###Code
a = 1/0
###Output
_____no_output_____
###Markdown
Attempting to compute the dot product of two vectors of different lengths.
###Code
a = [1, 2, 3]
b = [1, 2, 3, 4]
c = np.dot(a, b)
###Output
_____no_output_____
###Markdown
Exception TypesThe error message contains: - the __exception type__ designed to tell you the nature of the problem. - a message designed to tell you what you are doing wrong.A full list of Python exception types can be found here: https://docs.python.org/3/library/exceptions.html Here are a few definitions of exception types: - `ValueError` : when a function argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. - `TypeError` : when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch. - `IndexError` : when a sequence subscript is out of range. - `SyntaxError` : when the syntax used is not recognised by Python Let's look at a few examples of errors generated by Python automatically. `IndexError: list index out of range`
###Code
x = [1, 2, 3]
for i in range(4):
print(x[i])
###Output
1
2
3
###Markdown
Error message:`IndexError: list index out of range`The length of the array `x` is 3 (so `x[0]`, `x[1]`, and `x[2]`), while you are trying to print `x[3]`. An ----> arrow points to where this problem was encountered. __Try it yourself__In the cell below, fix the code and run it again. Longer Error MessagesRemember that error messages *first* show you __where__ the error occurred.If the code you write contains imported modules, this message appears as a *traceback* from the function that generates the error, all the way down to the code that you wrote. Python will show the step that was violated in every file between the original function and your code.If the code you write contains imported modules that themselves import modules, this message can be very long. For each file, it prints a few lines of the code to the screen and points to the line where the error occurred with an ---> arrow. In the code below, the error occurs in the line `plt.plot(xdata, ydata)`, which calls a function in the `matplotlib` package.The matplotlib function generates the error when it tries to plot `y` vs. `x`. *Note:* the is a generic error message from `matplotlib`; it doesn't substitute the names of the arrays you have assigned in your code.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def func(x, a=2, b=3):
y = b * -a * x
return y
xdata = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
ydata = func(xdata, b=4, a=1)
print(ydata)
plt.plot(xdata, ydata);
###Output
[ -4 -8 -12 -16 -20 -24 -28 -32 -36 -40]
###Markdown
The problem is that `x and y must not be None`. In this case `x` and `y` refer to `xdata` and `ydata`, because that is what the variables are called in the `matplotlib` function. Let's print `xdata` and `ydata` to see what is wrong:
###Code
print(xdata)
print(ydata)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
None
###Markdown
`xdata` is indeed an array with 10 values.`ydata` is equal to `None` i.e. it exists but has no value assigned to it. Why is `ydata` equal to `None`?Look carefully at the function again to find what needs correcting: ```Pythondef func(x, a=2, b=3): y = b * np.exp(-a * x)``` __Try it yourself__Re-write the function in the cell below and run the code again: When you have resolved all the errors that Python has detected, your code will run.Unfortunatley, this doesn't necessarily mean that your program will do what you want it to... Raising ExceptionsBecause the intended functionality of the program is only known by the programmer, exceptions can require more effort to detect than syntax errors. Examples, where the code will run but the output will be *incorrect*: - receiving negative data when only positive data is permitted, e.g. a negative integer for the number students in a class.- unexpected integer overflows If invalid data is encountered, the program should output an informative message, just like when an error is detected automatically. Example : Parameter Validity Checking __Hydrostatic Pressure 静水圧__The hydrostatic pressure on a submerged object due to the overlying fluid can be found by:$$P = \rho g h$$Units Pa = Nm$^{-2}$ = kg m$^{-1}$s$^{-2}$$g$ = acceleration due to gravity, m s$^{-2}$ $\rho $ = fluid density, kg m$^{-3}$ $h$ = height of the fluid above the object, m.
###Code
def hp(h, rho = 1000, g = 9.81):
"""
Computes the hydrostatic pressure acting on a submerged object given:
    - the density of the fluid in which it is submerged, rho
    - the acceleration due to gravity, g
- the height of fluid above the object, h
"""
return rho * g * h
###Output
_____no_output_____
###Markdown
This expression makes sense only for $\rho g$ and $h > 0$.However, we can input negative values for any of these parameters without raising an error.
###Code
hp(-300, -20)
###Output
_____no_output_____
###Markdown
It is easy to input negative values by mistake, for example : - *the user makes a mistake* - *another function takes the same quantity expressed using the opposite sign.* ```Python def position(t, r0, v0=0.0, a=-9.81): return r0 + (v0 * t) + (0.5 * a * t**2) ``` Rather than return an incorrect result, which could easily be overlooked, we can raise an exception in the case of invalid data. How to Raise an Exception - The keyword `raise` - The type of the exception - A string saying what caused it in () parentheses.
###Code
def hp(h, rho = 1000, g = 9.81):
"""
Computes the hydrostatic pressure acting on a submerged object.
h = height of fluid above object, rho = fluid density, g = gravity
"""
if h < 0:
raise ValueError("Height of fluid, h, must be greater than or equal to zero")
if rho < 0:
raise ValueError("Density of fluid, rho, must be greater than or equal to zero")
if g < 0:
raise ValueError("Acceleration due to gravity, g, must be greater than or equal to zero")
return rho * g * h
###Output
_____no_output_____
###Markdown
The type of exception must be one that Python recognises.It must appear of the list of built-in Python exceptions: https://docs.python.org/3/library/exceptions.html(You can even write your own exception types but that is outside the scope of this course.) There are no fixed rules about which error type to use. Choose the one that is most appropriate. Above, we have used the exception type `ValueError`. - `ValueError` : when a function argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. Note: These are the same types that are generated when Python automatically raises an error. Now if we run the same function again...
###Code
hp(-300, -20)
###Output
_____no_output_____
###Markdown
Note that only the *first* exception that Python encounters gets raised. The program exits at the first error, just like automatically generated errors. Catching and Handling ExceptionsWe don't always want the programs we write to exit when an error is encountered.Sometimes we want the program to 'catch' the exception and then continue to do something else. Let's use a real-world example to illustrate this: USS Yorktown was a US Navy "Smart Ship" with a computer system fitted to operate a control centre from the ship's bridge. In 1997, a crew member entered data into the system that led to an attempted division by zero. The program exited, causing the ship's computer systems and the ship's propulsion systems to shut down. Code similar to that shown in the following cell would have been used to accept a user input and divide a number by that input. If we input a non-zero numerical value, the code works.If we enter zero, it generates an error.
###Code
# Input a value and convert it from a string to a numerical type
val = int(input("input a number "))
new_val = 1 / val
###Output
input a number 0
###Markdown
It is undesirable for the ships software to: - __stop__ if input data leads to a divide-by-zero. - __proceed erroneously__ and without warning. The software needs to 'catch' the divide-by-zero exception, and do something else. What could we make the to program do instead of exiting? One solution might be to: - reduce the propulsion force. - ask for revised input. `try` and `except`In Python, the key words `try` and `except` are used to catch errors:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught``` So for the Smart Ship, `try` and `except` could have been used to prevent the program from exiting if a `ZeroDivisionError` was generated:
###Code
val = 0
try:
new_val = 1 / val
print(f"new number = {new_val}")
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
###Output
Zero is not a valid input. Reducing propulsion force...
###Markdown
Several `except` statements can be used to take care of different errors.This can include assigning several exception types to a single `except` statement by placing them inside of a tuple. The following pseudo-code shows an example with a series of `except` statements.
###Code
try:
# do something
pass
except ValueError:
# handle ValueError exception
pass
except (TypeError, ZeroDivisionError):
# handle multiple exceptions
# TypeError and ZeroDivisionError
pass
except:
# handle all other exceptions
pass
###Output
_____no_output_____
###Markdown
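As a concrete, runnable counterpart to the pseudo-code above (my own sketch rather than part of the original notes), different inputs can be routed to different handlers:

```python
# Sketch: route different exception types to different handlers.
def safe_invert(value):
    try:
        return 1 / int(value)
    except ValueError:
        print("Could not convert", repr(value), "to an integer")
    except (TypeError, ZeroDivisionError):
        print("Cannot invert", repr(value))

print(safe_invert("10"))   # works: prints 0.1
safe_invert("ten")         # handled by the ValueError branch
safe_invert(0)             # handled by the ZeroDivisionError branch
safe_invert(None)          # handled by the TypeError branch
```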
Checking Interactive User InputIn the case of the smart ship, the input value is given by the user:
###Code
try:
# Ships computer system requests number from user
val = int(input("input a number "))
new_val = 1 / val
print(f"new number = {new_val}")
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
###Output
input a number 0
Zero is not a valid input. Reducing propulsion force...
###Markdown
By catching the exception, we avoid running the part of the code that will generate the error and stop the program.However, that means we have not created a variable called new_val, which the problem code section was intended to do.This can cause problems later in the program. Re-requesting User InputRecall our example error-catching solution for the smart ship - if an error is generated: - reduce the propulsion force. - __ask for revised input.__ One way to do this is to use a `while` loop with a `break` statement.We keep requesting user input until valid input is given.At that point, the `break` statement exits the loop.
###Code
while True:
try:
x = int(input("Please enter an even number: "))
if (x % 2 != 0):
raise ValueError("Odd number entered")
break
except ValueError:
print("Not a valid number. Try again...")
###Output
Please enter an even number: 3
Not a valid number. Try again...
Please enter an even number: 5
Not a valid number. Try again...
Please enter an even number: 7
Not a valid number. Try again...
Please enter an even number: 7
Not a valid number. Try again...
Please enter an even number: 8
###Markdown
To make our program more readable we can also encapsulate the code in a __recursive__ function.For example, for the smart ship:
###Code
def SmartShip():
try:
# Ships computer system requests number from user
val = int(input("input a number "))
new_val = 1 / val
return new_val
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
# Request new input by re-running the function.
return SmartShip()
new_val = SmartShip()
print(f"new_val = {new_val}")
###Output
input a number 0
Zero is not a valid input. Reducing propulsion force...
input a number 3
new_val = 0.3333333333333333
###Markdown
This first example features an exception that *prevents* Python's default response to the error (i.e. exiting the code). __Try it yourself__Using the same format as the `SmartShip` example:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught```write a function that:- asks the user to input their age.- returns the users age.- raises an exception if the user's age is >0 and asks the user to try again. Checking Automatically Generated ValuesIt can also be useful to check values that are generated automatically (e.g. due to imported data such as files or sensor readings). Background: bits and bytesThe smallest unit of computer memory is the *bit*; and each bit can take on one of two values; 0 or 1. For many computer architectures the smallest usable 'block' is a *byte*.One byte is made up of 8 bits. (e.g. a 64-bit operating system, a 32-bit operating system ... the number of bits will almost always be a multiple of 8 (one byte).) The 'bigger' a thing we want to store, the more bytes we need. In calculations, 'bigger' can mean:- how large or small the number can be.- the accuracy with which we want to store a number. Binary NumbersWhen using the binary system each number is represented by summing a combination of base 2 numbers ($2^0, 2^1, 2^2....$). For example, the table show the binary representation of number 0 to 15 (the maximum number that can be represeted by 4 bits.The sum of the base 2 columns marked with a 1 is found as the decimal number in the left hand column.The combination of 1s and 0a used to generate this decimal number, is its binary representation.|Decimal| Binary |||||:------------:|:-----------:|:-----------:|:-----------:|:---------:|| |$2^3=8$ |$2^2=4$ |$2^1=2$ |$2^0=1$ | |0 |0 |0 |0 |0 | |1 |0 |0 |0 |1 | |2 |0 |0 |1 |0 | |3 |0 |0 |1 |1 | |4 |0 |1 |0 |0 | |5 |0 |1 |0 |1 | |6 |0 |1 |1 |0 | |7 |0 |1 |1 |1 | |8 |1 |0 |0 |0 | |9 |1 |0 |0 |1 | |10 |1 |0 |1 |0 | |11 |1 |0 |1 |1 | |12 |1 |1 |0 |0 | |13 |1 |1 |0 |1 | |14 |1 |1 |1 |0 | |15 |1 |1 |1 |1 | The __largest number__ that can be represented by $n$ bits is:$2^{n} - 1$We can see this from the table. The -1 comes from the fact that we start counting at 0 (i.e. $2^0$), rather than at 1 (i.e. $2^{1}$). Another way to think about this is by considering what happens when we reach the __largest number__ that can be represented by $n$ bits.If we want to store a larger number, we need more bits. Let's increase our 4 bit number to a 5 bit number.The binary number `10000` (5 bits) represents the decimal number $2^4$.From the pattern of 1s and 0s in the table, we can see that by subtracting 1:$2^4-1$ we should get the 4 bit number binary number `1111`. 
The __largest positive integer__ that can be represented by $n$ bits is:$2^{n-1} - 1$The power $n-1$ is because there is one less bit available when storing a *signed* integer.One bit is used to store the sign; + positive or - negative (represented as a 0 or a 1) The __largest negative integer__ that can be represented by $n$ bits is:$2^{n-1}$ The first number when counting in the positive direction (0000 in the 4 bit example above) is zero.Zero does not need a second representation in the negative scale.Therefore, when counting in the negative direction: - 0000 = -1 (not 0) - 0001 = -2 - .... __Examples: 4 bit numbers__The __largest unsigned integer__ that can be represented by 4 bits is:$2^{4} - 1 = 15$The __largest positive signed integer__ that can be represented by 4 bits is:$2^{4-1} - 1 = 7$The __largest negative signed integer__ that can be represented by 4 bits is:$2^{4-1} = 8$ Integer Storage and OverflowIn most languages (C, C++ etc), a default number of bits are used to store a given type of number.Python is different in that it *automatically* assigns a variable type to a variable. Therefore it also automatically assigns the number of bits used to store the variable. This means it will assign as many bytes as needed to represent the number entered by the user. It starts with a 32 bit number and assigns more bytes as needed. The largest (and smallest! - we will see how decimals are stored in next week's seminar) number that Python can store is theoretically infinite. The number size is, however, limited by the computer's memory. However, when using the mathematics package Numpy, C-style fixed precision integers are used.It is possible for an integer to *overflow* as when using C. We will use the Numpy package to demonstrate this.
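The signed limits described here can be checked with NumPy's `np.iinfo` (again, a quick sketch of my own rather than part of the notes):

```python
import numpy as np

# For an n-bit signed type, min is -(2**(n-1)) and max is 2**(n-1) - 1.
for t in (np.int8, np.int16, np.int32, np.int64):
    info = np.iinfo(t)
    print(t.__name__, info.min, info.max)
```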
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
In this case, a maximum size of 64 bits is used.$2^{64-1} - 1 = 9.223372037 \times 10^{18}$So if we use a number greater than $2^{64-1} - 1$ the integer will *overflow*. Example: Numpy Integer Overflow In the array below:- The value with index `a[0]` is $2^{63} - 1$, the maximum storable value.- the data type is specified to make sure it is an int.
###Code
a = np.array([2**63 - 1], dtype=int)
print(a, a.dtype)
###Output
[9223372036854775807] int64
###Markdown
The `bin` function prints the number in binary form, as a string.(prefix `0b` for positive numbers, prefix `-0b` for negative numbers)It is important to note that values are represented as regular binary numbers, NOT using their signed storage representation.e.g. 0b101 = 5, -0b101 = -5
###Code
print(bin(5), bin(-5))
print(a, a.dtype)
print(bin(a[0]))
print(type(a[0]))
print(len(bin(a[0]))) # Number of characters in binary string representation
###Output
[9223372036854775807] int64
0b111111111111111111111111111111111111111111111111111111111111111
<class 'numpy.int64'>
65
###Markdown
[9223372036854775807] int64 0b111111111111111111111111111111111111111111111111111111111111111 65There are 65 characters in the string.The first two show:- `0` : positive number.- `b` : binary number.The 63 characters that follow are all `1`.Therefore the number is $2^{63}-1$.($2^{63}-1$ is the largest value that can be stored by a 64 bit signed integer). Adding 1 to the array will cause it to overflow.Overflow means that the number's value loops round to start again from its smallest possible value.
###Code
a += 1
print(bin(a[0]))
print(type(a[0]))
print(len(bin(a[0]))) # Number of characters in binary string representation
###Output
-0b1000000000000000000000000000000000000000000000000000000000000000
<class 'numpy.int64'>
67
###Markdown
-0b1000000000000000000000000000000000000000000000000000000000000000 67There are 67 characters in the string.The first *three* show:`-0` : negative number.`b` : binary number.The *64* characters that follow tell us that the number is $2^{63}$.$-(2^{63})$ is the lowest value that can be stored by a 64 bit signed integer. Remember that, when printed, values are represented as regular binary number, NOT using their signed storage representation.e.g. 0b101 = 5, -0b101 = -5 Hence an extra bit (65 bits) is needed to represent the lowest negative number that can be stored by a 64 bit number (1 bit: negative sign, 63 bits: `1`) To see the number of bits required to store a number, use the bit_length method.
###Code
b = 8**12
print(b, type(b))
print(b.bit_length(), end="\n\n")
b = 8**24
print(b, type(b))
print(b.bit_length())
###Output
68719476736 <class 'int'>
37
4722366482869645213696 <class 'int'>
73
###Markdown
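As a quick check of the output above: $8^{12} = (2^3)^{12} = 2^{36}$, which in binary is a 1 followed by 36 zeros and therefore needs $36 + 1 = 37$ bits, while $8^{24} = 2^{72}$ needs $72 + 1 = 73$ bits, matching the two `bit_length()` results.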
Example: Error Handling with Integer Type ConversionAn un-caught error due to storage limits led to the explosion of an un-manned rocket, *Ariane 5* (European Space Agency), shortly after lift-off (1996).We will reproduce the precise mistake the developers of the Ariane 5 software made. The Ariane 5 rocket explosion was caused by an integer overflow. The speed of the rocket was stored as a 64-bit float.This was converted in the navigation software to a 16-bit integer. However, the value of the float was greater than $2^{16-1}-1 = 32767$, (the largest number a 16-bit integer can represent).This led to an overflow that in turn caused the navigation system to fail and the rocket to explode. We can demonstrate what happened in the rocket program. Consider a speed of 40000.44 stored as a `float` (64 bits)(units are unecessary for demonstrating this process):
###Code
speed_float = 40000.44
###Output
_____no_output_____
###Markdown
Let's first convert the float to a 32-bit `int`.We can use NumPy to cast the variable as an integer with a fixed number of bits.
###Code
speed_int = np.int32(speed_float)
print(speed_int)
print(bin(speed_int))
###Output
40000
0b1001110001000000
###Markdown
40000 can be represented using 32 bits$40000 < 2^{32-1}-1$$40000 < 2,147,483,647$ The conversion behaves as we would expect. Now, if we convert the speed from the `float` to a 16-bit integer...
###Code
speed_int = np.int16(speed_float)
print(speed_int)
print(bin(speed_int))
###Output
-25536
-0b110001111000000
###Markdown
We see clearly the result of an integer overflow since the 16-bit integer has too few bits to represent the number 40000. What can we do to avoid the integer overflow? In this example, a 16 bit integer was chosen. Minimising memory usage was clearly an objective when writing the program. One solution is to incrementally step through increasing integer sizes (16 bit, 32 bit, 64 bit ... ).When we find an integer size that is large enough to hold the variable, we store the variable. This means we:- always select the minimum possible variable size.- avoid overflow errors. One way to do this is using `if` and `else`.This is known as LBYL (look before you leap) programming.
###Code
speed_float = 32_10.0 # (small enough for a 16-bit int)
speed_float = 42_767.0 # (too large for a 16-bit int)
speed_float = 2_147_500_000.0 # (too large for a 32-bit int)
# Check if the number to store will fit in a 16 bit integer.
if abs(speed_float) <= (2**(16-1) - 1):
vel = np.int16(abs(speed_float))
# Check if the number to store will fit in a 32 bit integer.
elif abs(speed_float) <= (2**(32-1) - 1):
vel = np.int32(abs(speed_float))
else:
raise OverflowError("Value too large for 32-bit int.")
###Output
_____no_output_____
###Markdown
We can use `try` and `except` to do the same thing. In general, the main advantages of using `try` and `except`:- speed-ups (e.g. preventing extra lookups: `if...and...and...and...`)- cleaner code (less lines/easier to read)- jumping more than one level of logic (e.g. where a break doesn't go far enough)- where the outcome is likely to be unexpected (e.g. it is difficult to define `if` and `elif` conditional statements). This is known as EAFP (easier to ask for forgiveness than permission) programming. Remember the `try` and `except` structure:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught``` Let's write two functions to try:
###Code
def cast_v_16(v):
"Convert to a 16-bit int."
if abs(v) <= (2**(16-1) - 1):
return np.int16(v)
else:
raise OverflowError("Value too large for 16-bit int.")
def cast_v_32(v):
"Convert to a 32-bit int."
if abs(v) <= (2**(32-1) - 1):
return np.int32(v)
else:
raise OverflowError("Value too large for 32-bit int.")
###Output
_____no_output_____
###Markdown
Then use each of the functions in the `try` except structure.
###Code
v = 32_10.0 # (small enough for a 16-bit int)
v = 42_767.0 # (too large for a 16-bit int)
v = 2_147_500_000.0 # (too large for a 32-bit int)
try:
# Try to cast v as 16-bit int using function.
vel = cast_v_16(v)
print(vel)
except OverflowError:
# If cast as 16-bit int failed, raise exception.
# Try to cast v as 32-bit int, using function.
try:
vel = cast_v_32(v)
print(vel)
except OverflowError:
# If cast as 32-bit int failed, raise exception
raise RuntimeError("Could not cast velocity to an available int type.")
print(type(vel))
###Output
_____no_output_____
###Markdown
This block of code can itself be placed inside of a function to make the code more concise.The only change made is returning the cast variable instead of storing it as the variable `vel`.
###Code
def cast_velocity(v):
try:
# Try to cast v to a 16-bit int
return cast_v_16(v)
except OverflowError:
# If cast to 16-bit int failed (and exception raised), try casting to a 32-bit int
try:
return cast_v_32(v)
except OverflowError:
# If cast to 32-bit int failed, raise exception
raise RuntimeError("Could cast v to an available int type.")
# v fits into a 16-bit int
v_int = cast_velocity(32_10.0)
print(v_int, type(v_int))
# v too large for a 16-bit int
v_int = cast_velocity(42_767.0)
print(v_int, type(v_int))
# # v too large for a 32-bit int
v_int = cast_velocity(2_147_500_000.0)
print(v_int, type(v_int))
###Output
3210 <class 'numpy.int16'>
42767 <class 'numpy.int32'>
###Markdown
Gangnam StyleIn 2014, Google switched from 32-bit integers to 64-bit integers to count views when the video "Gangnam Style" was viewed more than 2,147,483,647 times, the limit of 32-bit integers. Note: We can replace the calculation for the maximum value storable by an integer type with the method `np.iinfo(TYPE).max`, replacing `TYPE` with the integer type. e.g.For example:```pythondef cast_v_16(v): "Convert to a 16-bit int." if abs(v) <= (2**(16-1) - 1): return np.int16(v) ```can be written:```pythondef cast_v_16(v): "Convert to a 16-bit int." if abs(v) <= np.iinfo(np.int16).max: return np.int16(v) ``` `finally`The `try` statement in Python can have an optional `finally` clause. The indented code following finally is executed, regardless of the outcome of the preceding `try` (and `except`).
###Code
def cast_velocity(v):
try:
# Try to cast v to a 16-bit int
return cast_v_16(v)
except OverflowError:
# If cast to 16-bit int failed (and exception raised), try casting to a 32-bit int
try:
return cast_v_32(v)
except OverflowError:
# If cast to 32-bit int failed, raise exception
raise RuntimeError("Could cast v to an available int type.")
finally:
print("32 bit integer tried")
finally:
print("16 bit integer tried")
v_int = cast_velocity(42_767.0)
v_int = cast_velocity(2_147_500_000.0)
###Output
32 bit integer tried
16 bit integer tried
32 bit integer tried
16 bit integer tried
###Markdown
This is often used to "clean up".For example, we may be working with a file.```Pythontry: f = open("test.txt") perform file operationsfinally: f.close() ``` Extension Topic: A very brief introduction to the IDE debuggerMany IDEs such as Spyder, MATLAB and PyCharm feature a debugger mode; a mode of running your code that is designed to make removing errors easier. The underlying idea is to break your code into smaller chunks and run them sequentially.This is a little like running a sequence of Jupyter notebook cells one after the other.Running your code in this way can make it easier to spot where a bug occurs and what is causing it. BreakpointsA breakpoint can be added next to a line of code.In Spyder, and in many other IDEs, a break point is added by double clicking in the margin, to the left of the line number. Every time the line with the break point is reached, the program will pause.When the programmer presses resume, the code will advance until the next break point.This is a bit like running the individual cells of a Jupyter notebook. You can add as many breakpoints as you like.To remove the breakpoint simply click on it. So that you can switch easily between running the code with and without breakpoints, there are separate buttons to run the code with and without break points.In Spyder:the button to run the code normally is: the button to run the code in debugger mode is: the button to advance the code to the next breakpoint is: All of these can be found in the toolbar at the top of the main window. One of the main advantages of running your code using breakpoints is that you can check the value of variables at different points in your program. For example, as we saw earlier, the following code will automatically raise a `ZeroDivisionError`: a = 0 a = 1 / a If we, for example, unknowingly import a variable with value zero from an external file, it can be difficult to spot the source of error.
###Code
import numpy as np
a = np.loadtxt('sample_data/sample_data_seminar10.dat')
a = int(a[0][0])
a = 1 / a
print(a)
###Output
_____no_output_____
###Markdown
In this case, if we run the code, we can see that as `a = 0`, `a = 1 / a` raised an exception.It does not reveal that the imported value was the origin of the `ZeroDivisionError`. If we place a break point on the line: a = int(a[0][0]) we see that the value of `a` *immediately before* the line was run was an imported array of values equal to zero.The line that will run when we click advance is highlighted in pink. Our next break point is on the line that generates the error a = 1 / aThe value of `a` is 0.If we click advance, we generate the error as expected, however, we now know where the zero value came from that is causing the error. The Spyder debugger mode is a little difficult to use and minimal documentation is provided.For those of you wishing to run Python using an IDE, I highly recommend PyCharm: https://www.jetbrains.com/pycharm/ It is free to download if you have a university email address.Clear, step-by-step instructions for running the PyCharm debugger mode (along with many other tutorials) can be found here: https://www.jetbrains.com/help/pycharm/step-2-debugging-your-first-python-application.html Summary - Errors (or *bugs*) can be divided into two types: *syntax errors* and *exceptions*. - Syntax errors occur when the code you write does not conform to the rules of the Python language. - Exceptions are when the *syntax* is correct but something unexpected occurs during the execution of a program. - Python detects some instances of this automatically. - The keyword `raise` causes Python to stop the program and generate an error message. - The keywords `try` and `except` can be used to *catch* exceptions; preventing anticipated errors from stopping the program. - `try` is optionally followed by the keyword `finally` (somewhere in the same block of code) which executes code regardless of the outcome of the `try` statement. Test-Yourself ExercisesComplete the Test-Yourself exercises below.Save your answers as .py files and email them to:[email protected] Test-Yourself Exercise: Identifying and fixing syntax errors.Each example contains one or two syntactical errors. Copy and paste the section of code in the cell below the example (so that you retain the original version with errors for comparison).Fix the error so that the code runs properly. Note that you will need to make changes to only one or two lines in each example. Example 1
###Code
# Example 1
y = (xvalues + 2) * (xvalues - 1) * (xvalues - 2)
xvalues = linspace(-3, 3, 100)
plt.plot(xvalues, y, 'r--')
plt.plot([-2, 1, 2], [0 ,0, 0], 'bo', markersize=10)
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.title('Nice Python figure!')
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 2
###Code
# Example 2
def test(x, alpha):
return np.exp(-alpha * x) * np.cos(x)
x = np.linspace(0, 10np.pi, 100)
alpha = 0.2
y = test(x)
plt.plot(x, y, 'b')
plt.xlabel('x')
plt.ylabel('f(x)')
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 3
###Code
# Example 3
a = np.array([2, 2, 4, 2, 4, 4])
for i in range(a):
if a[i] < 3: # replace value with 77 when value equals 2
a[i] = 77
else: # otherwise replace value with -77
a[i] = -77
print('modified a:' a)
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 4
###Code
# Example 4
y = np.zeros(20, 20)
y[8:13] = 10
plt.matshow(y)
plt.title(image of array y);
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Please download the new class notes. Step 1 : Navigate to the directory where your files are stored. Open a terminal. Using `cd`, navigate to *inside* the ILAS_Python_for_engineers folder on your computer. Step 3 : Update the course notes by downloading the changes. In the terminal type: >`git add -A`, `git commit -m "commit"`, `git fetch upstream`, `git merge -X theirs upstream/master` Error Handling Syntax Errors Exceptions Exception Types Longer Error Messages Raising Exceptions Example : Parameter Validity Checking Catching and Handling Exceptions `try` and `except` Checking Interactive User Input Re-requesting User Input Binary Numbers Binary Numbers Example: Numpy Integer Overflow Example: Error Handling with Integer Type Conversion `finally` Extension Topic: A very brief introduction to the IDE debugger Summary Test-Yourself Exercises Review Exercises Lesson Goal__Understand the meaning__ of errors generated by your programs and take logical steps to solve them.Be able to __write exceptions__ to prevent your program from allowing erroneous code to proceed undetected. Fundamental programming concepts - Understand the difference between *syntax* errors and *exceptions*. - Understand how to interpret an error message. - Be able to generate exceptions of your own. - Be able to write code that *catches* exceptions to prevent your program from quitting if something unexpected happens. When writing code, you make mistakes; it happens to everyone.An important part of learning to program is learning to:- fix things when they go wrong.- anticipate things that might go wrong and prepare for them. To identify errors (or bugs), it often helps to: - test small parts of the code separately (Jupyter notebook is very useful for this). - write lots of print statements. Let's look at an example error message:
###Code
for i in range(4):
print (i)
###Output
0
1
2
3
###Markdown
*First*, error messages show you __where__ the error occurred.Python prints the line(s) in which the error occurred. *Second*, error messages print information that is designed to tell you __what__ you are doing wrong. The strategy to find out what is going on is to read the last sentence of the error message. Sometimes it is easy for Python to determine what is wrong and the error message is very informative. Other times you make a more confusing error.In this case Python often generates an error message gives little explanation of what you did wrong. Let's look at some examples of error messages that you are likely to encounter and may have already encountered. Errors (or *bugs*) can be divided into two types: *syntax errors* and *exceptions*... Syntax ErrorsSyntax errors occur when the code you write does not conform to the rules of the language. You will probably have seen many of syntax error messages by now! `invalid syntax`A common error message is `invalid syntax`. This means you have coded something that Python doesn't understand. For example, this is often: - a typo, which you can often spot by looking carefully at the code. - a missing symbol (e.g. when expressing a conditional or a loop) Example : `invalid syntax`The code below should: - check the value of `a` - print the message if the value of `a` is 7. What's wrong with the code below?
###Code
a = 7
if a == 7:
print('the value of a equals 7')
###Output
the value of a equals 7
###Markdown
Python shows with the `^` symbol to point to which part of your line of code it doesn't understand. __Try it yourself__Write the corrected code in the cell below and run it again: Example : `^`Use the `^` symbol to work out what is wrong with the code below:
###Code
avalue = 7
if avalue < 10 :
print('the value of avalue is smaller than 10')
###Output
the value of avalue is smaller than 10
###Markdown
__Try it yourself__Fix the code and re-run it in the cell below
###Code
###Output
_____no_output_____
###Markdown
Example : `invalid syntax`Other times, the syntax error message may be less obvious... What is wrong with this code?
###Code
plt.plot([1,2,3])
plt.title('Nice plot')
###Output
_____no_output_____
###Markdown
Python reads `plt.title('Nice plot')` as part of the `plt.plot` function. In this context, `plt.title('Nice plot')` makes no sense so the position of the error `^` is indicated here. __Try it yourself__Fix the code and re-run it in the cell below ExceptionsExceptions are when the *syntax* is correct but something unexpected or anomalous occurs during the execution of a program. Python detects some instances of this automatically, e.g.: - attempting to divide by zero. Attempting to divide by zero:
###Code
a = 1/0
###Output
_____no_output_____
###Markdown
Attempting to compute the dot product of two vectors of different lengths.
###Code
a = [1, 2, 3]
b = [1, 2, 3, 4]
c = np.dot(a, b)
###Output
_____no_output_____
###Markdown
Exception TypesThe error message contains: - the __exception type__ designed to tell you the nature of the problem. - a message designed to tell you what you are doing wrong.A full list of Python exception types can be found here: https://docs.python.org/3/library/exceptions.html Here are a few definitions of exception types: - `ValueError` : when a function argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. - `TypeError` : when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch. - `IndexError` : when a sequence subscript is out of range. - `SyntaxError` : when the syntax used is not recognised by Python Let's look at a few examples of errors generated by Python automatically. `IndexError: list index out of range`
###Code
x = [1, 2, 3]
for i in range(4):
print(x[i])
###Output
1
2
3
###Markdown
Error message:`IndexError: list index out of range`The length of the array `x` is 3 (so `x[0]`, `x[1]`, and `x[2]`), while you are trying to print `x[3]`. An ----> arrow points to where this problem was encountered. __Try it yourself__In the cell below, fix the code and run it again. Longer Error MessagesRemember that error messages *first* show you __where__ the error occurred.If the code you write contains imported modules, this message appears as a *traceback* from the function that generates the error, all the way down to the code that you wrote. Python will show the step that was violated in every file between the original function and your code.If the code you write contains imported modules that themselves import modules, this message can be very long. For each file, it prints a few lines of the code to the screen and points to the line where the error occurred with an ---> arrow. In the code below, the error occurs in the line `plt.plot(xdata, ydata)`, which calls a function in the `matplotlib` package.The matplotlib function generates the error when it tries to plot `y` vs. `x`. *Note:* the is a generic error message from `matplotlib`; it doesn't substitute the names of the arrays you have assigned in your code.
###Code
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
def func(x, a=2, b=3):
y = b * -a * x
return y
xdata = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
ydata = func(xdata, b=4, a=1)
print(ydata)
plt.plot(xdata, ydata);
###Output
[ -4 -8 -12 -16 -20 -24 -28 -32 -36 -40]
###Markdown
The problem is that `x and y must not be None`. In this case `x` and `y` refer to `xdata` and `ydata`, because that is what the variables are called in the `matplotlib` function. Let's print `xdata` and `ydata` to see what is wrong:
###Code
print(xdata)
print(ydata)
###Output
[ 1 2 3 4 5 6 7 8 9 10]
[ -4 -8 -12 -16 -20 -24 -28 -32 -36 -40]
###Markdown
`xdata` is indeed an array with 10 values.`ydata` is equal to `None` i.e. it exists but has no value assigned to it. Why is `ydata` equal to `None`?Look carefully at the function again to find what needs correcting: ```Pythondef func(x, a=2, b=3): y = b * np.exp(-a * x)``` __Try it yourself__Re-write the function in the cell below and run the code again: When you have resolved all the errors that Python has detected, your code will run.Unfortunatley, this doesn't necessarily mean that your program will do what you want it to... Raising ExceptionsBecause the intended functionality of the program is only known by the programmer, exceptions can require more effort to detect than syntax errors. Examples, where the code will run but the output will be *incorrect*: - receiving negative data when only positive data is permitted, e.g. a negative integer for the number students in a class.- unexpected integer overflows If invalid data is encountered, the program should output an informative message, just like when an error is detected automatically. Example : Parameter Validity Checking __Hydrostatic Pressure 静水圧__The hydrostatic pressure on a submerged object due to the overlying fluid can be found by:$$P = \rho g h$$Units Pa = Nm$^{-2}$ = kg m$^{-1}$s$^{-2}$$g$ = acceleration due to gravity, m s$^{-2}$ $\rho $ = fluid density, kg m$^{-3}$ $h$ = height of the fluid above the object, m.
###Code
def hp(h, rho = 1000, g = 9.81):
"""
Computes the hydrostatic pressure acting on a submerged object given:
    - the density of the fluid in which it is submerged, rho
    - the acceleration due to gravity, g
- the height of fluid above the object, h
"""
if h<0:
raise ValueError("H is lessthan zero")
return rho * g * h
###Output
_____no_output_____
###Markdown
This expression makes sense only for $\rho g$ and $h > 0$.However, we can input negative values for any of these parameters without raising an error.
###Code
hp(-300, -20)
###Output
_____no_output_____
###Markdown
It is easy to input negative values by mistake, for example : - *the user makes a mistake* - *another function takes the same quantity expressed using the opposite sign.* ```Python def position(t, r0, v0=0.0, a=-9.81): return r0 + (v0 * t) + (0.5 * a * t**2) ``` Rather than return an incorrect result, which could easily be overlooked, we can raise an exception in the case of invalid data. How to Raise an Exception - The keyword `raise` - The type of the exception - A string saying what caused it in () parentheses.
###Code
def hp(h, rho = 1000, g = 9.81):
"""
Computes the hydrostatic pressure acting on a submerged object.
h = height of fluid above object, rho = fluid density, g = gravity
"""
if h < 0:
raise ValueError("Height of fluid, h, must be greater than or equal to zero")
if rho < 0:
raise ValueError("Density of fluid, rho, must be greater than or equal to zero")
if g < 0:
raise ValueError("Acceleration due to gravity, g, must be greater than or equal to zero")
return rho * g * h
###Output
_____no_output_____
###Markdown
The type of exception must be one that Python recognises.It must appear of the list of built-in Python exceptions: https://docs.python.org/3/library/exceptions.html(You can even write your own exception types but that is outside the scope of this course.) There are no fixed rules about which error type to use. Choose the one that is most appropriate. Above, we have used the exception type `ValueError`. - `ValueError` : when a function argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. Note: These are the same types that are generated when Python automatically raises an error. Now if we run the same function again...
###Code
hp(-300, -20)
###Output
_____no_output_____
###Markdown
Note that only the *first* exception that Python encounters gets raised. The program exits at the first error, just like automaticaly generated errors. Catching and Handling ExceptionsWe don't always want the programs we write to exit when an error is encountered.Sometimes we want the program to 'catch' the exception and then continue to do something else. Let's use a real-world example to illustrate this: USS Yorktown was a US Navy "Smart Ship" with a computer system fitted to operate a control centre from the ship's bridge. In 1997, a crew member entered data into the system that led to an attempted division by zero. The program exited, causing the ship's computer systems and the ship's propulsion systems to shut down. Code similar to that shown in the following cell would have been used to accept a user input and divide a number by that input. If we input a non-zero numerical value, the, code works.If we enter zero, it generates an error.
###Code
# Input a value and convert it from a string to a numerical type
val = int(input("input a number "))
new_val = 1 / val
###Output
input a number 0
###Markdown
It is undesirable for the ships software to: - __stop__ if input data leads to a divide-by-zero. - __proceed erroneously__ and without warning. The software needs to 'catch' the divide-by-zero exception, and do something else. What could we make the to program do instead of exiting? One solution might be to: - reduce the propulsion force. - ask for revised input. `try` and `except`In Python, the key words `try` and `except` are used to catch errors:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught``` So for the Smart Ship, `try` and `except` could have been used to prevent the program from exiting if a `ZeroDivisionError` was generated:
###Code
val = 0
try:
new_val = 1 / val
print(f"new number = {new_val}")
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
###Output
Zero is not a valid input. Reducing propulsion force...
###Markdown
Several `except` statements can be used to take care of different errors.This can include assigning several exception types to a single `except` statement by placing them inside of a tuple. The following pseudo-code shows an example with a series of `except` statements.
###Code
try:
# do something
pass
except ValueError:
# handle ValueError exception
pass
except (TypeError, ZeroDivisionError):
# handle multiple exceptions
# TypeError and ZeroDivisionError
pass
except:
# handle all other exceptions
pass
###Output
_____no_output_____
###Markdown
Checking Interactive User InputIn the case of the smart ship, the input value is given by the user:
###Code
try:
# Ships computer system requests number from user
val = int(input("input a number "))
new_val = 1 / val
print(f"new number = {new_val}")
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
###Output
input a number 0
Zero is not a valid input. Reducing propulsion force...
###Markdown
By catching the exception, we avoid running the part of the code that will generate the error and stop the program.However, that means we have not created a variable called new_val, which the problem code section was intended to do.This can cause problems later in the program. Re-requesting User InputRecall our example error-catching solution for the smart ship - if an error is generated: - reduce the propulsion force. - __ask for revised input.__ One way to do this is to use a `while` loop with a `break` statement.We keep requesting user input until valid input is given.At that point, the `break` statement exits the loop.
###Code
while True:
try:
x = int(input("Please enter an even number: "))
if (x % 2 != 0):
raise ValueError("Odd number entered")
break
except ValueError:
print("Not a valid number. Try again...")
###Output
Please enter an even number: 3
Not a valid number. Try again...
Please enter an even number: 5
Not a valid number. Try again...
Please enter an even number: 7
Not a valid number. Try again...
Please enter an even number: 7
Not a valid number. Try again...
Please enter an even number: 8
###Markdown
To make our program more readable we can also encapsulate the code in a __recursive__ function.For example, for the smart ship:
###Code
def SmartShip():
try:
# Ships computer system requests number from user
val = int(input("input a number "))
new_val = 1 / val
return new_val
except ZeroDivisionError:
print("Zero is not a valid input. Reducing propulsion force...")
# Request new input by re-running the function.
return SmartShip()
new_val = SmartShip()
print(f"new_val = {new_val}")
###Output
input a number 1
new_val = 1.0
###Markdown
This first example features an exception that *prevents* Python's default response to the error (i.e. exiting the code). __Try it yourself__Using the same format as the `SmartShip` example:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught```write a function that:- asks the user to input their age.- returns the user's age.- raises an exception if the user's age is < 0 and asks the user to try again.
###Code
try
###Output
_____no_output_____
###Markdown
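For reference, one possible approach to the exercise above (a sketch only, deliberately written to mirror the recursive `SmartShip` pattern):

```python
def ask_age():
    try:
        age = int(input("Please enter your age: "))
        if age < 0:
            raise ValueError("Age cannot be negative")
        return age
    except ValueError:
        # Covers both non-numeric input and negative ages.
        print("Not a valid age. Try again...")
        return ask_age()
```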
Checking Automatically Generated ValuesIt can also be useful to check values that are generated automatically (e.g. due to imported data such as files or sensor readings). Background: bits and bytesThe smallest unit of computer memory is the *bit*; and each bit can take on one of two values; 0 or 1. For many computer architectures the smallest usable 'block' is a *byte*.One byte is made up of 8 bits. (e.g. a 64-bit operating system, a 32-bit operating system ... the number of bits will almost always be a multiple of 8 (one byte).) The 'bigger' a thing we want to store, the more bytes we need. In calculations, 'bigger' can mean:- how large or small the number can be.- the accuracy with which we want to store a number. Binary NumbersWhen using the binary system each number is represented by summing a combination of base 2 numbers ($2^0, 2^1, 2^2....$). For example, the table show the binary representation of number 0 to 15 (the maximum number that can be represeted by 4 bits.The sum of the base 2 columns marked with a 1 is found as the decimal number in the left hand column.The combination of 1s and 0a used to generate this decimal number, is its binary representation.|Decimal| Binary |||||:------------:|:-----------:|:-----------:|:-----------:|:---------:|| |$2^3=8$ |$2^2=4$ |$2^1=2$ |$2^0=1$ | |0 |0 |0 |0 |0 | |1 |0 |0 |0 |1 | |2 |0 |0 |1 |0 | |3 |0 |0 |1 |1 | |4 |0 |1 |0 |0 | |5 |0 |1 |0 |1 | |6 |0 |1 |1 |0 | |7 |0 |1 |1 |1 | |8 |1 |0 |0 |0 | |9 |1 |0 |0 |1 | |10 |1 |0 |1 |0 | |11 |1 |0 |1 |1 | |12 |1 |1 |0 |0 | |13 |1 |1 |0 |1 | |14 |1 |1 |1 |0 | |15 |1 |1 |1 |1 | The __largest number__ that can be represented by $n$ bits is:$2^{n} - 1$We can see this from the table. The -1 comes from the fact that we start counting at 0 (i.e. $2^0$), rather than at 1 (i.e. $2^{1}$). Another way to think about this is by considering what happens when we reach the __largest number__ that can be represented by $n$ bits.If we want to store a larger number, we need more bits. Let's increase our 4 bit number to a 5 bit number.The binary number `10000` (5 bits) represents the decimal number $2^4$.From the pattern of 1s and 0s in the table, we can see that by subtracting 1:$2^4-1$ we should get the 4 bit number binary number `1111`. The __largest postitive integer__ that can be represented by $n$ bits is:$2^{n-1} - 1$The power $n-1$ is becuase there is one less bit available when storing a *signed* integer.One bit is used to store the sign; + positive or - negative (represented as a 0 or a 1) The __largest negative integer__ that can be represented by $n$ bits is:$2^{n-1}$ The first number when counting in the positive direction (0000 in the 4 bit example above) is zero.Zero does not need a second representation in the negative scale.Therefore, when counting in the negative direction: - 0000 = -1 (not 0) - 0001 = -2 - .... __Examples: 4 bit numbers__The __largest unsigned integer__ that can be represented by 4 bits is:$2^{4} - 1 = 15$The __largest positive signed integer__ that can be represented by 4 bits is:$2^{4-1} - 1 = 7$The __largest negative signed integer__ that can be represented by 4 bits is:$2^{4-1} = 8$ Integer Storage and OverflowIn most languages (C, C++ etc), a default number of bits are used to store a given type of number.Python is different in that it *automatically* assigns a variable type to a variable. Therefore it also automatically assigns the number of bits used to store the variable. This means it will assign as many bytes as needed to represent the number entered by the user. 
It starts with a 32 bit number and assigns more bytes as needed. The largest (and smallest! - we will see how decimals are stored in next week's seminar) number that Python can store is theoretically infinite. The number size is, however, limited by the computer's memory. However, when using the mathematics package Numpy, C-style fixed precision integers are used.It is possible for an integer to *overflow* as when using C. We will use the Numpy package to demonstrate this.
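To see the difference in action (a short illustration added here), a plain Python `int` happily grows beyond 64 bits, whereas the fixed-width NumPy integers introduced below cannot:

```python
# Plain Python integers use arbitrary precision, so this does not overflow.
x = 2**100
print(x)
print(x.bit_length())   # 101 bits, far more than a 64-bit integer could hold
```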
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
In this case, a maximum size of 64 bits is used.$2^{64-1} - 1 = 9.223372037 \times 10^{18}$So if we use a number greater than $2^{64-1} - 1$ the integer will *overflow*. Example: Numpy Integer Overflow In the array below:- The value with index `a[0]` is $2^{63} - 1$, the maximum storable value.- the data type is specified to make sure it is an int.
###Code
a = np.array([2**63 - 1], dtype=int)
print(a, a.dtype)
###Output
[9223372036854775807] int64
###Markdown
The `bin` function prints the number in binary form, as a string.(prefix `0b` for positive numbers, prefix `-0b` for negative numbers)It is important to note that values are represented as regular binary numbers, NOT using their signed storage representation.e.g. 0b101 = 5, -0b101 = -5
###Code
print(bin(5), bin(-5))
print(a, a.dtype)
print(bin(a[0]))
print(type(a[0]))
print(len(bin(a[0]))) # Number of characters in binary string representation
###Output
[9223372036854775807] int64
0b111111111111111111111111111111111111111111111111111111111111111
<class 'numpy.int64'>
65
###Markdown
[9223372036854775807] int64 0b111111111111111111111111111111111111111111111111111111111111111 65There are 65 characters in the string.The first two show:- `0` : positive number.- `b` : binary number.The 63 characters that follow are all `1`.Therefore the number is $2^{63}-1$.($2^{63}-1$ is the largest value that can be stored by a 64 bit signed integer). Adding 1 to the array will cause it to overflow.Overflow means that the number's value loops round to start again from its smallest possible value.
###Code
a += 1
print(bin(a[0]))
print(type(a[0]))
print(len(bin(a[0]))) # Number of characters in binary string representation
###Output
-0b1000000000000000000000000000000000000000000000000000000000000000
<class 'numpy.int64'>
67
###Markdown
-0b1000000000000000000000000000000000000000000000000000000000000000 67There are 67 characters in the string.The first *three* show:`-0` : negative number.`b` : binary number.The *64* characters that follow tell us that the number is $2^{63}$.$-(2^{63})$ is the lowest value that can be stored by a 64 bit signed integer. Remember that, when printed, values are represented as regular binary number, NOT using their signed storage representation.e.g. 0b101 = 5, -0b101 = -5 Hence an extra bit (65 bits) is needed to represent the lowest negative number that can be stored by a 64 bit number (1 bit: negative sign, 63 bits: `1`) To see the number of bits required to store a number, use the bit_length method.
###Code
b = 8**12
print(b, type(b))
print(b.bit_length(), end="\n\n")
b = 8**24
print(b, type(b))
print(b.bit_length())
###Output
68719476736 <class 'int'>
37
4722366482869645213696 <class 'int'>
73
###Markdown
Example: Error Handling with Integer Type ConversionAn un-caught error due to storage limits led to the explosion of an un-manned rocket, *Ariane 5* (European Space Agency), shortly after lift-off (1996).We will reproduce the precise mistake the developers of the Ariane 5 software made. The Ariane 5 rocket explosion was caused by an integer overflow. The speed of the rocket was stored as a 64-bit float.This was converted in the navigation software to a 16-bit integer. However, the value of the float was greater than $2^{16-1}-1 = 32767$, (the largest number a 16-bit integer can represent).This led to an overflow that in turn caused the navigation system to fail and the rocket to explode. We can demonstrate what happened in the rocket program. Consider a speed of 40000.44 stored as a `float` (64 bits)(units are unecessary for demonstrating this process):
###Code
speed_float = 40000.44
###Output
_____no_output_____
###Markdown
Let's first convert the float to a 32-bit `int`.We can use NumPy to cast the variable as an integer with a fixed number of bits.
###Code
speed_int = np.int32(speed_float)
print(speed_int)
print(bin(speed_int))
###Output
40000
0b1001110001000000
###Markdown
40000 can be represented using 32 bits$40000 < 2^{32-1}-1$$40000 < 2,147,483,647$ The conversion behaves as we would expect. Now, if we convert the speed from the `float` to a 16-bit integer...
###Code
speed_int = np.int16(speed_float)
print(speed_int)
print(bin(speed_int))
###Output
-25536
-0b110001111000000
###Markdown
We see clearly the result of an integer overflow since the 16-bit integer has too few bits to represent the number 40000. What can we do to avoid the integer overflow? In this example, a 16 bit integer was chosen. Minimising memory usage was clearly an objective when writing the program. One solution is to incrementally step through increasing integer sizes (16 bit, 32 bit, 64 bit ... ).When we find an integer size that is large enough to hold the variable, we store the variable. This means we:- always select the minimimum possible variable size.- avoid overflow errors. One way to do this is using `if` and `else`.This is known as LBYL (look before you leap) programming.
###Code
speed_float = 32_10.0 # (small enough for a 16-bit int)
speed_float = 42_767.0 # (too large for a 16-bit int)
speed_float = 2_147_500_000.0 # (too large for a 32-bit int)
# Check if the number to store will fit in a 16 bit integer.
if abs(speed_float) <= (2**(16-1) - 1):
vel = np.int16(abs(speed_float))
# Check if the number to store will fit in a 32 bit integer.
elif abs(speed_float) <= (2**(32-1) - 1):
vel = np.int32(abs(speed_float))
else:
raise OverflowError("Value too large for 32-bit int.")
###Output
_____no_output_____
###Markdown
We can use `try` and `except` to do the same thing. In general, the main advantages of using `try` and `except`:- speed-ups (e.g. preventing extra lookups: `if...and...and...and...`)- cleaner code (less lines/easier to read)- jumping more than one level of logic (e.g. where a break doesn't go far enough)- where the outcome is likely to be unexpected (e.g. it is difficult to define `if` and `elif` conditional statements). This is known as EAFP (easier to ask for forgiveness than permission) programming. Remember the `try` and `except` structure:```pythontry: Attempt to do something here that might raise an exception If no 'FooError' exception is raised: - Run this indented code. - Skip the indented code after except except FooError: If a 'FooError' exception is raised above: - Skip the indented code after try. - Run this indented code. For exception types other than FooError: - the exception will not be caught. - the program will stop. - the error message will be printed. If FooError is omitted, ANY exception type will be caught``` Let's write two functions to try:
###Code
def cast_v_16(v):
"Convert to a 16-bit int."
if abs(v) <= (2**(16-1) - 1):
return np.int16(v)
else:
raise OverflowError("Value too large for 16-bit int.")
def cast_v_32(v):
"Convert to a 32-bit int."
if abs(v) <= (2**(32-1) - 1):
return np.int32(v)
else:
raise OverflowError("Value too large for 32-bit int.")
###Output
_____no_output_____
###Markdown
Then use each of the functions in the `try` except structure.
###Code
v = 32_10.0 # (small enough for a 16-bit int)
v = 42_767.0 # (too large for a 16-bit int)
v = 2_147_500_000.0 # (too large for a 32-bit int)
try:
# Try to cast v as 16-bit int using function.
vel = cast_v_16(v)
print(vel)
except OverflowError:
# If cast as 16-bit int failed, raise exception.
# Try to cast v as 32-bit int, using function.
try:
vel = cast_v_32(v)
print(vel)
except OverflowError:
# If cast as 32-bit int failed, raise exception
raise RuntimeError("Could not cast velocity to an available int type.")
print(type(vel))
###Output
_____no_output_____
###Markdown
This block of code can itself be placed inside of a function to make the code more concise.The only change made is returning the cast variable instead of storing it as the variable `vel`.
###Code
def cast_velocity(v):
try:
# Try to cast v to a 16-bit int
return cast_v_16(v)
except OverflowError:
# If cast to 16-bit int failed (and exception raised), try casting to a 32-bit int
try:
return cast_v_32(v)
except OverflowError:
# If cast to 32-bit int failed, raise exception
raise RuntimeError("Could cast v to an available int type.")
# v fits into a 16-bit int
v_int = cast_velocity(32_10.0)
print(v_int, type(v_int))
# v too large for a 16-bit int
v_int = cast_velocity(42_767.0)
print(v_int, type(v_int))
# # v too large for a 32-bit int
v_int = cast_velocity(2_147_500_000.0)
print(v_int, type(v_int))
###Output
3210 <class 'numpy.int16'>
42767 <class 'numpy.int32'>
###Markdown
Gangnam StyleIn 2014, Google switched from 32-bit integers to 64-bit integers to count views when the video "Gangnam Style" was viewed more than 2,147,483,647 times, the limit of 32-bit integers. Note: We can replace the calculation for the maximum value storable by an integer type with the method `np.iinfo(TYPE).max`, replacing `TYPE` with the integer type. For example:```pythondef cast_v_16(v): "Convert to a 16-bit int." if abs(v) <= (2**(16-1) - 1): return np.int16(v) ```can be written:```pythondef cast_v_16(v): "Convert to a 16-bit int." if abs(v) <= np.iinfo(np.int16).max: return np.int16(v) ``` `finally`The `try` statement in Python can have an optional `finally` clause. The indented code following finally is executed, regardless of the outcome of the preceding `try` (and `except`).
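As a quick aside before the `finally` example below, here is a minimal, self-contained sketch of querying integer limits with `np.iinfo` instead of computing them by hand:

```python
import numpy as np

# Print the representable range of each standard integer width.
for int_type in (np.int16, np.int32, np.int64):
    info = np.iinfo(int_type)
    print(int_type.__name__, info.min, info.max)
```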
###Code
def cast_velocity(v):
try:
# Try to cast v to a 16-bit int
return cast_v_16(v)
except OverflowError:
# If cast to 16-bit int failed (and exception raised), try casting to a 32-bit int
try:
return cast_v_32(v)
except OverflowError:
# If cast to 32-bit int failed, raise exception
raise RuntimeError("Could cast v to an available int type.")
finally:
print("32 bit integer tried")
finally:
print("16 bit integer tried")
v_int = cast_velocity(42_767.0)
v_int = cast_velocity(2_147_500_000.0)
###Output
32 bit integer tried
16 bit integer tried
32 bit integer tried
16 bit integer tried
###Markdown
This is often used to "clean up".For example, we may be working with a file.```Pythontry: f = open("test.txt") perform file operationsfinally: f.close() ``` Extension Topic: A very brief introduction to the IDE debuggerMany IDEs such as Spyder, MATLAB and PyCharm feature a debugger mode; a mode of running your code that is designed to make removing errors easier. The underlying idea is to break your code into smaller chunks and run them sequentially.This is a little like running a sequence of Jupyter notebook cells one after the other.Running your code in this way can make it easier to spot where a bug occurs and what is causing it. BreakpointsA breakpoint can be added next to a line of code.In Spyder, and in many other IDEs, a break point is added by double clicking in the margin, to the left of the line number. Every time the line with the break point is reached, the program will pause.When the programmer presses resume, the code will advance until the next break point.This is a bit like running the individual cells of a Jupyter notebook. You can add as many breakpoints as you like.To remove the breakpoint simply click on it. So that you can switch easily between running the code with and without breakpoints, there are separate buttons to run the code with and without break points.In Spyder:the button to run the code normally is: the button to run the code in debugger mode is: the button to advance the code to the next breakpoint is: All of these can be found in the toolbar at the top of the main window. One of the main advantages of running your code using breakpoints is that you can check the value of variables at different points in your program. For example, as we saw earlier, the following code will automatically raise a `ZeroDivisionError`: a = 0 a = 1 / a If we, for example, unknowingly import a variable with value zero from an external file, it can be difficult to spot the source of error.
###Code
import numpy as np
a = np.loadtxt('sample_data/sample_data_seminar10.dat')
a = int(a[0][0])
a = 1 / a
print(a)
###Output
_____no_output_____
###Markdown
In this case, if we run the code, we can see that as `a = 0`, `a = 1 / a` raised an exception.It does not reveal that the imported value was the origin of the `ZeroDivisionError`. If we place a break point on the line: a = int(a[0][0]) we see that the value of `a` *immediately before* the line was run was an imported array of values equal to zero.The line that will run when we click advance is highlighted in pink. Our next break point is on the line that generates the error a = 1 / aThe value of `a` is 0.If we click advance, we generate the error as expected, however, we now know where the zero value that is causing the error came from. The Spyder debugger mode is a little difficult to use and minimal documentation is provided.For those of you wishing to run Python using an IDE, I highly recommend PyCharm: https://www.jetbrains.com/pycharm/ It is free to download if you have a university email address.Clear, step-by-step instructions for running the PyCharm debugger mode (along with many other tutorials) can be found here: https://www.jetbrains.com/help/pycharm/step-2-debugging-your-first-python-application.html Summary - Errors (or *bugs*) can be divided into two types: *syntax errors* and *exceptions*. - Syntax errors occur when the code you write does not conform to the rules of the Python language. - Exceptions are when the *syntax* is correct but something unexpected occurs during the execution of a program. - Python detects some instances of this automatically. - The keyword `raise` causes Python to stop the program and generate an error message. - The keywords `try` and `except` can be used to *catch* exceptions; preventing anticipated errors from stopping the program. - `try` is optionally followed by the keyword `finally` (somewhere in the same block of code) which executes code regardless of the outcome of the `try` statement. Test-Yourself ExercisesComplete the Test-Yourself exercises below.Save your answers as .py files and email them to:[email protected] Test-Yourself Exercise: Identifying and fixing syntax errors.Each example contains one or two syntactical errors. Copy and paste the section of code in the cell below the example (so that you retain the original version with errors for comparison).Fix the error so that the code runs properly. Note that you will need to make changes to only one or two lines in each example. Example 1
###Code
# Example 1
y = (xvalues + 2) * (xvalues - 1) * (xvalues - 2)
xvalues = linspace(-3, 3, 100)
plt.plot(xvalues, y, 'r--')
plt.plot([-2, 1, 2], [0 ,0, 0], 'bo', markersize=10)
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.title('Nice Python figure!')
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 2
###Code
# Example 2
def test(x, alpha):
return np.exp(-alpha * x) * np.cos(x)
x = np.linspace(0, 10np.pi, 100)
alpha = 0.2
y = test(x)
plt.plot(x, y, 'b')
plt.xlabel('x')
plt.ylabel('f(x)')
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 3
###Code
# Example 3
a = np.array([2, 2, 4, 2, 4, 4])
for i in range(a):
if a[i] < 3: # replace value with 77 when value equals 2
a[i] = 77
else: # otherwise replace value with -77
a[i] = -77
print('modified a:' a)
# Copy and paste code here
###Output
_____no_output_____
###Markdown
Example 4
###Code
# Example 4
y = np.zeros(20, 20)
y[8:13] = 10
plt.matshow(y)
plt.title(image of array y);
# Copy and paste code here
###Output
_____no_output_____ |
playbook/tactics/defense-evasion/T1553.001.ipynb | ###Markdown
T1553.001 - Subvert Trust Controls: Gatekeeper BypassAdversaries may modify file attributes that signify programs are from untrusted sources to subvert Gatekeeper controls. In macOS and OS X, when applications or programs are downloaded from the internet, there is a special attribute set on the file called com.apple.quarantine. This attribute is read by Apple's Gatekeeper defense program at execution time and provides a prompt to the user to allow or deny execution. Apps loaded onto the system from USB flash drive, optical disk, external hard drive, or even from a drive shared over the local network won’t set this flag. Additionally, it is possible to avoid setting this flag using [Drive-by Compromise](https://attack.mitre.org/techniques/T1189). This completely bypasses the built-in Gatekeeper check. (Citation: Methods of Mac Malware Persistence) The presence of the quarantine flag can be checked by the xattr command xattr /path/to/MyApp.app for com.apple.quarantine. Similarly, given sudo access or elevated permission, this attribute can be removed with xattr as well, sudo xattr -r -d com.apple.quarantine /path/to/MyApp.app. (Citation: Clearing quarantine attribute) (Citation: OceanLotus for OS X) In typical operation, a file will be downloaded from the internet and given a quarantine flag before being saved to disk. When the user tries to open the file or application, macOS’s gatekeeper will step in and check for the presence of this flag. If it exists, then macOS will then prompt the user to confirmation that they want to run the program and will even provide the URL where the application came from. However, this is all based on the file being downloaded from a quarantine-savvy application. (Citation: Bypassing Gatekeeper) Atomic Tests
###Code
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
###Output
_____no_output_____
###Markdown
Atomic Test 1 - Gatekeeper BypassGatekeeper Bypass via command line**Supported Platforms:** macosElevation Required (e.g. root or admin) Attack Commands: Run with `sh````shsudo xattr -r -d com.apple.quarantine myapp.appsudo spctl --master-disable```
###Code
Invoke-AtomicTest T1553.001 -TestNumbers 1
###Output
_____no_output_____ |
week10_rl/reinforce_pytorch.ipynb | ###Markdown
REINFORCE in pytorchJust like we did before for q-learning, this time we'll design a pytorch network to learn `CartPole-v0` via policy gradient (REINFORCE).Most of the code in this notebook is taken from approximate qlearning, so you'll find it more or less familiar and even simpler.
###Code
%env THEANO_FLAGS='floatX=float32'
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import gym
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
###Output
_____no_output_____
###Markdown
Building the network for REINFORCE For REINFORCE algorithm, we'll need a model that predicts action probabilities given states. Let's define such a model below.
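For reference, one possible minimal architecture is sketched below (the hidden size of 64 is an arbitrary assumption; the sketch relies on `state_dim` and `n_actions` defined above, and the last layer returns raw logits with no softmax):

```python
import torch.nn as nn

# A reference sketch only -- one way such a model could look.
example_agent = nn.Sequential(
    nn.Linear(state_dim[0], 64),  # CartPole observations have 4 numbers
    nn.ReLU(),
    nn.Linear(64, n_actions),     # one logit per action
)
```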
###Code
import torch, torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
# Build a simple neural network that predicts policy logits. Keep it simple: CartPole isn't worth deep architectures.
agent = nn.Sequential()
< YOUR CODE HERE: define a neural network that predicts policy logits >
###Output
_____no_output_____
###Markdown
Predict function
###Code
def predict_proba(states):
"""
Predict action probabilities given states.
:param states: numpy array of shape [batch, state_shape]
:returns: numpy array of shape [batch, n_actions]
"""
# convert states, compute logits, use softmax to get probability
<your code here>
return < your code >
test_states = np.array([env.reset() for _ in range(5)])
test_probas = predict_proba(test_states)
assert isinstance(test_probas, np.ndarray), "you must return np array and not %s" % type(test_probas)
assert tuple(test_probas.shape) == (test_states.shape[0], n_actions), "wrong output shape: %s" % np.shape(test_probas)
assert np.allclose(np.sum(test_probas, axis = 1), 1), "probabilities do not sum to 1"
###Output
_____no_output_____
###Markdown
Play the gameWe can now use our newly built agent to play the game.
###Code
def generate_session(t_max=1000):
"""
play a full session with REINFORCE agent and train at the session end.
    returns sequences of states, actions and rewards
"""
#arrays to record session
states,actions,rewards = [],[],[]
s = env.reset()
for t in range(t_max):
#action probabilities array aka pi(a|s)
action_probas = predict_proba(np.array([s]))[0]
a = <sample action with given probabilities>
new_s,r,done,info = env.step(a)
#record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done: break
return states, actions, rewards
# test it
states, actions, rewards = generate_session()
###Output
_____no_output_____
###Markdown
Computing cumulative rewards
###Code
def get_cumulative_rewards(rewards, #rewards at each step
gamma = 0.99 #discount for reward
):
"""
take a list of immediate rewards r(s,a) for the whole session
compute cumulative returns (a.k.a. G(s,a) in Sutton '16)
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
The simple way to compute cumulative rewards is to iterate from last to first time tick
and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
<your code here>
return <array of cumulative rewards>
get_cumulative_rewards(rewards)
assert len(get_cumulative_rewards(list(range(100)))) == 100
assert np.allclose(get_cumulative_rewards([0,0,1,0,0,1,0],gamma=0.9),[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(get_cumulative_rewards([0,0,1,-2,3,-4,0],gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(get_cumulative_rewards([0,0,1,2,3,4,0],gamma=0), [0, 0, 1, 2, 3, 4, 0])
print("looks good!")
###Output
looks good!
###Markdown
Loss function and updatesWe now need to define objective and update over policy gradient.Our objective function is$$ J \approx { 1 \over N } \sum _{s_i,a_i} \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$Following the REINFORCE algorithm, we can define our objective as follows: $$ \hat J \approx { 1 \over N } \sum _{s_i,a_i} log \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$When you compute gradient of that function over network weights $ \theta $, it will become exactly the policy gradient.
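For reference, a minimal self-contained sketch of assembling this objective from log-probabilities and cumulative returns (the tensors below are dummy stand-ins chosen for illustration, not the variables defined in the next cell):

```python
import torch

# Dummy stand-ins for log pi(a_i|s_i) and the returns G(s_i,a_i), assumed shape [T].
logprobas_for_actions = torch.tensor([-0.3, -0.7, -0.2])
cumulative_returns = torch.tensor([2.0, 1.5, 1.0])

# REINFORCE surrogate: mean of log-prob times return; we minimise its negative.
J_hat = torch.mean(logprobas_for_actions * cumulative_returns)
loss = -J_hat
print(loss.item())
```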
###Code
def to_one_hot(y, n_dims=None):
""" Take an integer vector (tensor of variable) and convert it to 1-hot matrix. """
y_tensor = y.data if isinstance(y, Variable) else y
y_tensor = y_tensor.type(torch.LongTensor).view(-1, 1)
n_dims = n_dims if n_dims is not None else int(torch.max(y_tensor)) + 1
y_one_hot = torch.zeros(y_tensor.size()[0], n_dims).scatter_(1, y_tensor, 1)
return Variable(y_one_hot) if isinstance(y, Variable) else y_one_hot
# Your code: define optimizers
def train_on_session(states, actions, rewards, gamma = 0.99):
"""
Takes a sequence of states, actions and rewards produced by generate_session.
Updates agent's weights by following the policy gradient above.
Please use Adam optimizer with default parameters.
"""
# cast everything into a variable
states = Variable(torch.FloatTensor(states))
actions = Variable(torch.IntTensor(actions))
cumulative_returns = np.array(get_cumulative_rewards(rewards, gamma))
cumulative_returns = Variable(torch.FloatTensor(cumulative_returns))
# predict logits, probas and log-probas using an agent.
logits = <your code here>
probas = <your code here>
logprobas = <your code here>
assert all(isinstance(v, Variable) for v in [logits, probas, logprobas]), \
"please use compute using torch tensors and don't use predict_proba function"
# select log-probabilities for chosen actions, log pi(a_i|s_i)
logprobas_for_actions = torch.sum(logprobas * to_one_hot(actions), dim = 1)
# REINFORCE objective function
J_hat = <policy objective as in the formula for J_hat. Please use mean, not sum.>
#regularize with entropy
entropy_reg = <compute mean entropy of probas. Don't forget the sign!>
loss = - J_hat - 0.1 * entropy_reg
# Gradient descent step
< your code >
# technical: return session rewards to print them later
return np.sum(rewards)
###Output
_____no_output_____
###Markdown
The actual training
###Code
for i in range(100):
rewards = [train_on_session(*generate_session()) for _ in range(100)] #generate new sessions
print ("mean reward:%.3f"%(np.mean(rewards)))
if np.mean(rewards) > 500:
print ("You Win!") # but you can train even further
break
###Output
mean reward:21.380
mean reward:23.330
mean reward:50.980
mean reward:112.400
mean reward:144.060
mean reward:95.870
mean reward:183.300
mean reward:127.050
mean reward:120.070
mean reward:123.760
mean reward:142.540
mean reward:587.320
You Win!
###Markdown
Video
###Code
#record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),directory="videos",force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
###Output
_____no_output_____
###Markdown
REINFORCE in pytorchJust like we did before for q-learning, this time we'll design a pytorch network to learn `CartPole-v0` via policy gradient (REINFORCE).Most of the code in this notebook is taken from approximate qlearning, so you'll find it more or less familiar and even simpler.
###Code
# # in google colab uncomment this
# import os
# os.system('apt-get install -y xvfb')
# os.system('wget https://raw.githubusercontent.com/yandexdataschool/Practical_DL/fall18/xvfb -O ../xvfb')
# os.system('apt-get install -y python-opengl ffmpeg')
# os.system('pip install pyglet==1.2.4')
# os.system('python -m pip install -U pygame --user')
# print('setup complete')
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
%env DISPLAY=:1
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
plt.imshow(env.render("rgb_array"))
###Output
_____no_output_____
###Markdown
Building the network for REINFORCE For REINFORCE algorithm, we'll need a model that predicts action probabilities given states. Let's define such a model below.
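For reference, one possible minimal model is sketched below (the hidden size of 64 is an arbitrary assumption; the final layer returns raw logits, which suits the `log_softmax` applied later in training):

```python
import torch.nn as nn

# A reference sketch only -- one way such a model could look.
example_model = nn.Sequential(
    nn.Linear(env.observation_space.shape[0], 64),  # CartPole observations have 4 numbers
    nn.ReLU(),
    nn.Linear(64, env.action_space.n),              # one logit per action
)
```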
###Code
import torch
import torch.nn as nn
# Build a simple neural network that predicts policy logits.
# Keep it simple: CartPole isn't worth deep architectures.
model = nn.Sequential(
< YOUR CODE HERE: define a neural network that predicts policy logits >
)
###Output
_____no_output_____
###Markdown
Predict function
###Code
def predict_probs(states):
"""
Predict action probabilities given states.
:param states: numpy array of shape [batch, state_shape]
:returns: numpy array of shape [batch, n_actions]
"""
# convert states, compute logits, use softmax to get probability
<your code here >
return < your code >
test_states = np.array([env.reset() for _ in range(5)])
test_probas = predict_probs(test_states)
assert isinstance(
test_probas, np.ndarray), "you must return np array and not %s" % type(test_probas)
assert tuple(test_probas.shape) == (
test_states.shape[0], env.action_space.n), "wrong output shape: %s" % np.shape(test_probas)
assert np.allclose(np.sum(test_probas, axis=1),
1), "probabilities do not sum to 1"
###Output
_____no_output_____
###Markdown
Play the gameWe can now use our newly built agent to play the game.
###Code
def generate_session(t_max=1000):
"""
play a full session with REINFORCE agent and train at the session end.
    returns sequences of states, actions and rewards
"""
# arrays to record session
states, actions, rewards = [], [], []
s = env.reset()
for t in range(t_max):
# action probabilities array aka pi(a|s)
action_probs = predict_probs(np.array([s]))[0]
# Sample action with given probabilities.
a = < your code >
new_s, r, done, info = env.step(a)
# record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done:
break
return states, actions, rewards
# test it
states, actions, rewards = generate_session()
###Output
_____no_output_____
###Markdown
Computing cumulative rewards
###Code
def get_cumulative_rewards(rewards, # rewards at each step
gamma=0.99 # discount for reward
):
"""
take a list of immediate rewards r(s,a) for the whole session
compute cumulative returns (a.k.a. G(s,a) in Sutton '16)
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
The simple way to compute cumulative rewards is to iterate from last to first time tick
and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
<your code here >
return < array of cumulative rewards >
get_cumulative_rewards(rewards)
assert len(get_cumulative_rewards(list(range(100)))) == 100
assert np.allclose(get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9), [
1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(get_cumulative_rewards(
[0, 0, 1, -2, 3, -4, 0], gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(get_cumulative_rewards(
[0, 0, 1, 2, 3, 4, 0], gamma=0), [0, 0, 1, 2, 3, 4, 0])
print("looks good!")
###Output
_____no_output_____
###Markdown
Loss function and updatesWe now need to define objective and update over policy gradient.Our objective function is$$ J \approx { 1 \over N } \sum _{s_i,a_i} \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$Following the REINFORCE algorithm, we can define our objective as follows: $$ \hat J \approx { 1 \over N } \sum _{s_i,a_i} log \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$When you compute gradient of that function over network weights $ \theta $, it will become exactly the policy gradient.
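For reference, a minimal self-contained sketch of this objective, including the small entropy bonus used in the cell below (all tensors here are dummy stand-ins chosen for illustration):

```python
import torch

# Dummy stand-ins: log pi(a_i|s_i) for chosen actions, full action probabilities, and returns G(s_i,a_i).
log_probs_for_actions = torch.tensor([-0.3, -0.7, -0.2])
probs = torch.tensor([[0.6, 0.4], [0.5, 0.5], [0.8, 0.2]])
cumulative_returns = torch.tensor([2.0, 1.5, 1.0])

entropy = -torch.sum(probs * torch.log(probs), dim=1).mean()   # mean policy entropy
J_hat = torch.mean(log_probs_for_actions * cumulative_returns)
loss = -J_hat - 1e-2 * entropy                                 # entropy bonus encourages exploration
print(loss.item())
```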
###Code
def to_one_hot(y_tensor, ndims):
""" helper: take an integer vector and convert it to 1-hot matrix. """
y_tensor = y_tensor.type(torch.LongTensor).view(-1, 1)
y_one_hot = torch.zeros(
y_tensor.size()[0], ndims).scatter_(1, y_tensor, 1)
return y_one_hot
# Your code: define optimizers
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
def train_on_session(states, actions, rewards, gamma=0.99, entropy_coef=1e-2):
"""
Takes a sequence of states, actions and rewards produced by generate_session.
Updates agent's weights by following the policy gradient above.
Please use Adam optimizer with default parameters.
"""
# cast everything into torch tensors
states = torch.tensor(states, dtype=torch.float32)
actions = torch.tensor(actions, dtype=torch.int32)
cumulative_returns = np.array(get_cumulative_rewards(rewards, gamma))
cumulative_returns = torch.tensor(cumulative_returns, dtype=torch.float32)
# predict logits, probas and log-probas using an agent.
logits = model(states)
probs = nn.functional.softmax(logits, -1)
log_probs = nn.functional.log_softmax(logits, -1)
assert all(isinstance(v, torch.Tensor) for v in [logits, probs, log_probs]), \
"please use compute using torch tensors and don't use predict_probs function"
# select log-probabilities for chosen actions, log pi(a_i|s_i)
log_probs_for_actions = torch.sum(
log_probs * to_one_hot(actions, env.action_space.n), dim=1)
    # Compute loss here. Don't forget entropy regularization with `entropy_coef`
entropy = < your code >
    loss = < your code >
# Gradient descent step
< your code >
# technical: return session rewards to print them later
return np.sum(rewards)
###Output
_____no_output_____
###Markdown
The actual training
###Code
for i in range(100):
rewards = [train_on_session(*generate_session())
for _ in range(100)] # generate new sessions
print("mean reward:%.3f" % (np.mean(rewards)))
if np.mean(rewards) > 500:
print("You Win!") # but you can train even further
break
###Output
_____no_output_____
###Markdown
Video
###Code
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be the _last_ video. Try other indices
###Output
_____no_output_____ |
notebooks/0106-LINQ.ipynb | ###Markdown
Session 6: LINQ and Extension Methods[LINQ (Language Integrated Query)](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/?WT.mc_id=visualstudio-twitch-jefritz) is a collection of methods and language features that allow you to interact with collections of data. In our last session, we focused on **LINQ to Objects** which allows us to use method predicates to interact with those collections.Let's setup our `Card` class and `FritzSet` collection object to work with again in this workbook
###Code
class Card {
public Card(string def) {
var values = def.Split('-');
Rank = values[0];
Suit = values[1];
}
public string Rank;
public int RankValue {
get {
var faceCards = new Dictionary<string,int> { {"J", 11}, {"Q", 12}, {"K", 13}, {"A", 14} };
return faceCards.ContainsKey(Rank) ? faceCards[Rank] : int.Parse(Rank);
}
}
public string Suit;
public override string ToString() {
return $"{Rank}-{Suit}";
}
private static bool IsLegalCardNotation(string notation) {
var segments = notation.Split('-');
if (segments.Length != 2) return false;
var validSuits = new [] {"c","d","h","s"};
if (!validSuits.Any(s => s == segments[1])) return false;
var validRanks = new [] {"A","2","3","4","5","6","7","8","9","10","J","Q","K"};
if (!validRanks.Any(r => r == segments[0])) return false;
return true;
}
public static implicit operator Card(string id) {
if (IsLegalCardNotation(id)) return new Card(id);
return null;
}
}
class FritzSet<T> : IEnumerable<T> {
private List<T> _Inner = new List<T>();
public IEnumerator<T> GetEnumerator()
{
return _Inner.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return _Inner.GetEnumerator();
}
public FritzSet<T> Add(T newItem) {
var insertAt = _Inner.Count == 0 ? 0 : new Random().Next(0,_Inner.Count+1);
_Inner.Insert(insertAt, newItem);
return this;
}
public FritzSet<T> Shuffle() {
_Inner = _Inner.OrderBy(_ => Guid.NewGuid()).ToList();
return this;
}
}
var TheDeck = new FritzSet<Card>();
TheDeck.Add("A-c").Add("A-d");TheDeck.Add("A-h");TheDeck.Add("A-s");TheDeck.Add("2-c");TheDeck.Add("2-d");TheDeck.Add("2-h");TheDeck.Add("2-s");TheDeck.Add("3-c");TheDeck.Add("3-d");TheDeck.Add("3-h");TheDeck.Add("3-s");TheDeck.Add("4-c");TheDeck.Add("4-d");TheDeck.Add("4-h");TheDeck.Add("4-s");
TheDeck.Add("5-c");TheDeck.Add("5-d");TheDeck.Add("5-h");TheDeck.Add("5-s");TheDeck.Add("6-c");TheDeck.Add("6-d");TheDeck.Add("6-h");TheDeck.Add("6-s");TheDeck.Add("7-c");TheDeck.Add("7-d");TheDeck.Add("7-h");TheDeck.Add("7-s");TheDeck.Add("8-c");TheDeck.Add("8-d");TheDeck.Add("8-h");TheDeck.Add("8-s");
TheDeck.Add("9-c");TheDeck.Add("9-d");TheDeck.Add("9-h");TheDeck.Add("9-s");TheDeck.Add("10-c");TheDeck.Add("10-d");TheDeck.Add("10-h");TheDeck.Add("10-s");TheDeck.Add("J-c");TheDeck.Add("J-d");TheDeck.Add("J-h");TheDeck.Add("J-s");
TheDeck.Add("Q-c");TheDeck.Add("Q-d");TheDeck.Add("Q-h");TheDeck.Add("Q-s");TheDeck.Add("K-c");TheDeck.Add("K-d");TheDeck.Add("K-h");TheDeck.Add("K-s");
// TheDeck
TheDeck.Shuffle().Shuffle().Shuffle().Shuffle().Shuffle();
//TheDeck
Card PriyanksCard = "Joker"; // Fix this
//display(PriyanksCard ?? "No card assigned");
###Output
_____no_output_____
###Markdown
In review, we can write a little bit of code to work with this collection to deal cards appropriately for a Texas Hold 'em poker game:
###Code
var ourDeck = TheDeck.Shuffle().Shuffle();
var hand1 = new List<Card>();
var hand2 = new List<Card>();
var hand3 = new List<Card>();
hand1.Add(ourDeck.Skip(1).First());
hand2.Add(ourDeck.Skip(2).First());
hand3.Add(ourDeck.Skip(3).First());
hand1.Add(ourDeck.Skip(4).First());
hand2.Add(ourDeck.Skip(5).First());
hand3.Add(ourDeck.Skip(6).First());
display("Hand 1");
display(hand1);
display("Hand 2");
display(hand2);
display("Hand 3");
display(hand3);
// Burn a card and deal the next 3 cards called 'the flop'
display("The Flop");
display(ourDeck.Skip(8).Take(3));
// Burn a card and take one card called 'the turn'
display("The Turn");
display(ourDeck.Skip(12).First());
// Burn a card and take the final card called 'the river'
display("The River");
display(ourDeck.Skip(14).First());
###Output
_____no_output_____
###Markdown
Language Integrated QueryYou can build [expressions](https://docs.microsoft.com/dotnet/csharp/linq/query-expression-basics?WT.mc_id=visualstudio-twitch-jefritzwhat-is-a-query-and-what-does-it-do) in the middle of your C# code that _LOOKS_ like SQL turned sideways. Query Expressions begin with a `from` clause and there's also a mandatory `select` clause to specify the values to return. By convention, many C# developers who use this syntax align the clauses to the right of the `=` symbol. Let's dig into that syntax a bit more:
###Code
// The simplest query
var outValues = from card in TheDeck // the required collection we are querying
select card; // the values to be returned
outValues
###Output
_____no_output_____
###Markdown
Where and OrderBy clausesThat's a boring and non-productive query. You can start to make queries more interesting by adding a [where](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/where-clause?WT.mc_id=visualstudio-twitch-jefritz) clause with an appropriate test in a format similar to that you would find in an `if` statement. You can also optionally add an [orderby](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/orderby-clause?WT.mc_id=visualstudio-twitch-jefritz) clause with an **ALSO** optional [descending](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/descending?WT.mc_id=visualstudio-twitch-jefritz) keyword. Tinker with the query in the next block to learn more about these clauses
###Code
var results = from card in TheDeck
where card.Suit == "h" // Return just the Hearts
orderby card.RankValue descending
select card;
results
###Output
_____no_output_____
###Markdown
Additionally, nothing is requiring you to return the object in the collection. You can return different properties and values by changing up the `select` clause:
###Code
var results = from card in TheDeck
where card.Suit == "h" && card.RankValue > 10
select card.Rank;
results
###Output
_____no_output_____
###Markdown
JoinsJust like SQL syntax, you can correlate two collections and work with the combined result. The [Join keyword](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/join-clause?WT.mc_id=visualstudio-twitch-jefritz) allows you to relate two collections based on a matching key value in each collection. There is a similar [Join method in LINQ to Objects](https://docs.microsoft.com/dotnet/api/system.linq.enumerable.join?view=netcore-3.1&WT.mc_id=visualstudio-twitch-jefritz) that delivers the same feature. Joins are slightly more involved and can be confusing topic, and we've embedded the official sample from the docs here. This sample relates `Person` records to their `Pets` that they own. The `Join` method receives each collection and uses two expression bodied members to select the key properties from each collection. Finally, it provides a projection method to create the resultant object.I have annotated this sample and the `Join` method to make it clearer
###Code
class Person
{
public string Name { get; set; }
}
class Pet
{
public string Name { get; set; }
public Person Owner { get; set; }
}
Person magnus = new Person { Name = "Hedlund, Magnus" };
Person terry = new Person { Name = "Adams, Terry" };
Person charlotte = new Person { Name = "Weiss, Charlotte" };
// Declare the set of 4 pets and their owners
Pet barley = new Pet { Name = "Barley", Owner = terry };
Pet boots = new Pet { Name = "Boots", Owner = terry };
Pet whiskers = new Pet { Name = "Whiskers", Owner = charlotte };
Pet daisy = new Pet { Name = "Daisy", Owner = magnus };
List<Person> people = new List<Person> { magnus, terry, charlotte };
List<Pet> pets = new List<Pet> { barley, boots, whiskers, daisy };
// Create a list of Person-Pet pairs where
// each element is an anonymous type that contains a
// Pet's name and the name of the Person that owns the Pet.
var query =
people.Join(pets, // Join the People and Pets collections
person => person, // We will match the Person object
pet => pet.Owner, // with the Owner property in the Pet record
(person, pet) => // The combined output of Person and Pet
// is an object with OwnerName and the Pet's Name
new { OwnerName = person.Name, Pet = pet.Name });
foreach (var obj in query)
{
display(string.Format("{0} - {1}",
obj.OwnerName,
obj.Pet));
}
###Output
_____no_output_____
###Markdown
Grouping data with the Group clauseData in your query can be grouped together using the [group clause](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/group-clause?WT.mc_id=visualstudio-twitch-jefritz). The `group` clause can be used in place of the `select` clause or can be used with the `select` clause to aggregate data in various groupings. Let's try using the `group` keywords
###Code
var results = from card in TheDeck
group card by card.Suit;
display(results.GetType());
results
###Output
_____no_output_____
###Markdown
Interestingly, we are returned a collection with all of the cards grouped by their suits. If we also wanted to select the suit and create a grouped result we could expand our query like this:
###Code
var results = from card in TheDeck
group card by card.Suit into suit
select new {TheSuit=suit.Key, suit};
display(results.GetType());
results
###Output
_____no_output_____
###Markdown
Now this is **VERY INTERESTING**: we have created an [Anonymous Type](https://docs.microsoft.com/dotnet/csharp/programming-guide/classes-and-structs/anonymous-types?WT.mc_id=visualstudio-twitch-jefritz), a type on the fly that contains a string field for `TheSuit` and a collection of `Card` objects in a field called `suit`. We'll get more into **Anonymous Types** next week, but you need to know that you can use the `new` keyword with curly braces `{ }` to create a type and make it available in your code. Many C# veterans will recommend against exposing the anonymous type outside of the method it is created in and instead suggest creating a concrete type to return in that `select` clause. Our groupings can take some interesting calculations. Let's write a grouping for all of the face cards (and the Ace too):
###Code
var results = from card in TheDeck
group card by card.RankValue > 10 into facecards
select new {TheSuit=facecards.Key, facecards};
results
###Output
_____no_output_____
###Markdown
That looks strange, but we have two groups: 1 group that are the numeric cards and a second group that are the face cards. Let's tinker with that method a little more:
###Code
var results = from card in TheDeck
where card.RankValue > 10
group card by card.Rank into facecards
select new {Face=facecards.Key, facecards};
results
###Output
_____no_output_____
###Markdown
Now this sets up for a simplified **Sam the Bellhop** classic card trick. Take a few minutes and enjoy magician and former Philadelphia Eagles player [Jon Dorenbos performing this trick](https://www.youtube.com/watch?v=fwKPDrtgXRs) where he sorts and finds cards while telling the story of Sam the Bellhop. Loading data from CSVWe've worked with objects and data that we've specified here in the notebook. Let's use an external library, in .NET we call them **NuGet Packages** from www.nuget.org called [LINQtoCSV](https://www.nuget.org/packages/LinqToCsv/) to load Atlantic Hurricane Season data (courtesy of [Wikipedia](https://en.wikipedia.org/wiki/Atlantic_hurricane_season)).
###Code
#r "nuget:LinqToCsv"
using LINQtoCSV;
class MyDataRow {
[CsvColumn(Name = "Year", FieldIndex = 1)]
public int Year {get; set;}
[CsvColumn(Name = "Number of tropical storms", FieldIndex = 2)]
public byte TropicalStormCount { get; set;}
[CsvColumn(Name = "Number of hurricanes", FieldIndex = 3)]
public byte HurricaneCount { get; set;}
[CsvColumn(Name = "Number of major hurricanes", FieldIndex = 4)]
public byte MajorHurricaneCount { get; set;}
// Accumulated Cyclone Energy
[CsvColumn(Name = "ACE", FieldIndex = 5)]
public decimal ACE { get; set; }
[CsvColumn(Name = "Deaths", FieldIndex = 6)]
public int Deaths { get; set; }
[CsvColumn(Name="Strongest storm", FieldIndex = 7)]
public string StrongestStorm { get; set; }
[CsvColumn(Name = "Damage USD", FieldIndex = 8)]
public string DamageUSD { get; set; }
[CsvColumn(Name = "Retired names", FieldIndex = 9)]
public string RetiredNames { get; set; }
[CsvColumn(Name = "Notes", FieldIndex = 10)]
public string Notes { get; set; }
}
var inputFileDescription = new CsvFileDescription
{
SeparatorChar = ',',
FirstLineHasColumnNames = true
};
var context = new CsvContext();
var hurricanes = context.Read<MyDataRow>("data/atlantic_hurricanes.csv", inputFileDescription);
display(hurricanes.OrderByDescending(h => h.Year).Take(10).Select(h => new {h.Year, h.TropicalStormCount, h.HurricaneCount, h.StrongestStorm}));
var results = from storm in hurricanes
orderby storm.DamageUSD descending
where storm.HurricaneCount >= 10
select new {storm.Year, storm.HurricaneCount, storm.ACE, storm.StrongestStorm, storm.DamageUSD};
results
###Output
_____no_output_____
###Markdown
Session 6: LINQ and Extension Methods[LINQ (Language Integrated Query)](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/?WT.mc_id=visualstudio-twitch-jefritz) is a collection of methods and language features that allow you to interact with collections of data. In our last session, we focused on **LINQ to Objects** which allows us to use method predicates to interact with those collections.Let's setup our `Card` class and `FritzSet` collection object to work with again in this workbook
###Code
class Card {
public Card(string def) {
var values = def.Split('-');
Rank = values[0];
Suit = values[1];
}
public string Rank;
public int RankValue {
get {
var faceCards = new Dictionary<string,int> { {"J", 11}, {"Q", 12}, {"K", 13}, {"A", 14} };
return faceCards.ContainsKey(Rank) ? faceCards[Rank] : int.Parse(Rank);
}
}
public string Suit;
public override string ToString() {
return $"{Rank}-{Suit}";
}
private static bool IsLegalCardNotation(string notation) {
        var segments = notation.Split('-');
        if (segments.Length != 2) return false;
var validSuits = new [] {"c","d","h","s"};
if (!validSuits.Any(s => s == segments[1])) return false;
var validRanks = new [] {"A","2","3","4","5","6","7","8","9","10","J","Q","K"};
if (!validRanks.Any(r => r == segments[0])) return false;
return true;
}
public static implicit operator Card(string id) {
if (IsLegalCardNotation(id)) return new Card(id);
return null;
}
}
class FritzSet<T> : IEnumerable<T> {
private List<T> _Inner = new List<T>();
public IEnumerator<T> GetEnumerator()
{
return _Inner.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return _Inner.GetEnumerator();
}
public FritzSet<T> Add(T newItem) {
var insertAt = _Inner.Count == 0 ? 0 : new Random().Next(0,_Inner.Count+1);
_Inner.Insert(insertAt, newItem);
return this;
}
public FritzSet<T> Shuffle() {
_Inner = _Inner.OrderBy(_ => Guid.NewGuid()).ToList();
return this;
}
}
var TheDeck = new FritzSet<Card>();
TheDeck.Add("A-c").Add("A-d");TheDeck.Add("A-h");TheDeck.Add("A-s");TheDeck.Add("2-c");TheDeck.Add("2-d");TheDeck.Add("2-h");TheDeck.Add("2-s");TheDeck.Add("3-c");TheDeck.Add("3-d");TheDeck.Add("3-h");TheDeck.Add("3-s");TheDeck.Add("4-c");TheDeck.Add("4-d");TheDeck.Add("4-h");TheDeck.Add("4-s");
TheDeck.Add("5-c");TheDeck.Add("5-d");TheDeck.Add("5-h");TheDeck.Add("5-s");TheDeck.Add("6-c");TheDeck.Add("6-d");TheDeck.Add("6-h");TheDeck.Add("6-s");TheDeck.Add("7-c");TheDeck.Add("7-d");TheDeck.Add("7-h");TheDeck.Add("7-s");TheDeck.Add("8-c");TheDeck.Add("8-d");TheDeck.Add("8-h");TheDeck.Add("8-s");
TheDeck.Add("9-c");TheDeck.Add("9-d");TheDeck.Add("9-h");TheDeck.Add("9-s");TheDeck.Add("10-c");TheDeck.Add("10-d");TheDeck.Add("10-h");TheDeck.Add("10-s");TheDeck.Add("J-c");TheDeck.Add("J-d");TheDeck.Add("J-h");TheDeck.Add("J-s");
TheDeck.Add("Q-c");TheDeck.Add("Q-d");TheDeck.Add("Q-h");TheDeck.Add("Q-s");TheDeck.Add("K-c");TheDeck.Add("K-d");TheDeck.Add("K-h");TheDeck.Add("K-s");
// TheDeck
TheDeck.Shuffle().Shuffle().Shuffle().Shuffle().Shuffle();
//TheDeck
Card PriyanksCard = "Joker"; // Fix this
//display(PriyanksCard ?? "No card assigned");
###Output
_____no_output_____
###Markdown
In review, we can write a little bit of code to work with this collection to deal cards appropriately for a Texas Hold 'em poker game:
###Code
var ourDeck = TheDeck.Shuffle().Shuffle();
var hand1 = new List<Card>();
var hand2 = new List<Card>();
var hand3 = new List<Card>();
hand1.Add(ourDeck.Skip(1).First());
hand2.Add(ourDeck.Skip(2).First());
hand3.Add(ourDeck.Skip(3).First());
hand1.Add(ourDeck.Skip(4).First());
hand2.Add(ourDeck.Skip(5).First());
hand3.Add(ourDeck.Skip(6).First());
display("Hand 1");
display(hand1);
display("Hand 2");
display(hand2);
display("Hand 3");
display(hand3);
// Burn a card and deal the next 3 cards called 'the flop'
display("The Flop");
display(ourDeck.Skip(8).Take(3));
// Burn a card and take one card called 'the turn'
display("The Turn");
display(ourDeck.Skip(12).First());
// Burn a card and take the final card called 'the river'
display("The River");
display(ourDeck.Skip(14).First());
###Output
_____no_output_____
###Markdown
Language Integrated QueryYou can build [expressions](https://docs.microsoft.com/dotnet/csharp/linq/query-expression-basics?WT.mc_id=visualstudio-twitch-jefritzwhat-is-a-query-and-what-does-it-do) in the middle of your C# code that _LOOKS_ like SQL turned sideways. Query Expressions begin with a `from` clause and there's also a mandatory `select` clause to specify the values to return. By convention, many C# developers who use this syntax align the clauses to the right of the `=` symbol. Let's dig into that syntax a bit more:
###Code
// The simplest query
var outValues = from card in TheDeck // the required collection we are querying
select card; // the values to be returned
outValues
###Output
_____no_output_____
###Markdown
Where and OrderBy clausesThat's a boring and non-productive query. You can start to make queries more interesting by adding a [where](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/where-clause?WT.mc_id=visualstudio-twitch-jefritz) clause with an appropriate test in a format similar to that you would find in an `if` statement. You can also optionally add an [orderby](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/orderby-clause?WT.mc_id=visualstudio-twitch-jefritz) clause with an **ALSO** optional [descending](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/descending?WT.mc_id=visualstudio-twitch-jefritz) keyword. Tinker with the query in the next block to learn more about these clauses
###Code
var results = from card in TheDeck
where card.Suit == "h" // Return just the Hearts
orderby card.RankValue descending
select card;
results
###Output
_____no_output_____
###Markdown
Additionally, nothing is requiring you to return the object in the collection. You can return different properties and values by changing up the `select` clause:
###Code
var results = from card in TheDeck
where card.Suit == "h" && card.RankValue > 10
select card.Rank;
results
###Output
_____no_output_____
###Markdown
JoinsJust like SQL syntax, you can correlate two collections and work with the combined result. The [Join keyword](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/join-clause?WT.mc_id=visualstudio-twitch-jefritz) allows you to relate two collections based on a matching key value in each collection. There is a similar [Join method in LINQ to Objects](https://docs.microsoft.com/dotnet/api/system.linq.enumerable.join?view=netcore-3.1&WT.mc_id=visualstudio-twitch-jefritz) that delivers the same feature. Joins are slightly more involved and can be confusing topic, and we've embedded the official sample from the docs here. This sample relates `Person` records to their `Pets` that they own. The `Join` method receives each collection and uses two expression bodied members to select the key properties from each collection. Finally, it provides a projection method to create the resultant object.I have annotated this sample and the `Join` method to make it clearer
###Code
class Person
{
public string Name { get; set; }
}
class Pet
{
public string Name { get; set; }
public Person Owner { get; set; }
}
Person magnus = new Person { Name = "Hedlund, Magnus" };
Person terry = new Person { Name = "Adams, Terry" };
Person charlotte = new Person { Name = "Weiss, Charlotte" };
// Declare the set of 4 pets and their owners
Pet barley = new Pet { Name = "Barley", Owner = terry };
Pet boots = new Pet { Name = "Boots", Owner = terry };
Pet whiskers = new Pet { Name = "Whiskers", Owner = charlotte };
Pet daisy = new Pet { Name = "Daisy", Owner = magnus };
List<Person> people = new List<Person> { magnus, terry, charlotte };
List<Pet> pets = new List<Pet> { barley, boots, whiskers, daisy };
// Create a list of Person-Pet pairs where
// each element is an anonymous type that contains a
// Pet's name and the name of the Person that owns the Pet.
var query =
people.Join(pets, // Join the People and Pets collections
person => person, // We will match the Person object
pet => pet.Owner, // with the Owner property in the Pet record
(person, pet) => // The combined output of Person and Pet
// is an object with OwnerName and the Pet's Name
new { OwnerName = person.Name, Pet = pet.Name });
foreach (var obj in query)
{
display(string.Format("{0} - {1}",
obj.OwnerName,
obj.Pet));
}
###Output
_____no_output_____
###Markdown
Grouping data with the Group clauseData in your query can be grouped together using the [group clause](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/group-clause?WT.mc_id=visualstudio-twitch-jefritz). The `group` clause can be used in place of the `select` clause or can be used with the `select` clause to aggregate data in various groupings. Let's try using the `group` keywords
###Code
var results = from card in TheDeck
group card by card.Suit;
display(results.GetType());
results
###Output
_____no_output_____
###Markdown
Interestingly, we are returned a collection with all of the cards grouped by their suits. If we also wanted to select the suit and create a grouped result we could expand our query like this:
###Code
var results = from card in TheDeck
group card by card.Suit into suit
select new {TheSuit=suit.Key, suit};
display(results.GetType());
results
###Output
_____no_output_____
###Markdown
Now this is **VERY INTERESTING**: we have created an [Anonymous Type](https://docs.microsoft.com/dotnet/csharp/programming-guide/classes-and-structs/anonymous-types?WT.mc_id=visualstudio-twitch-jefritz), a type on the fly that contains a string field for `TheSuit` and a collection of `Card` objects in a field called `suit`. We'll get more into **Anonymous Types** next week, but you need to know that you can use the `new` keyword with curly braces `{ }` to create a type and make it available in your code. Many C# veterans will recommend against exposing the anonymous type outside of the method it is created in and instead suggest creating a concrete type to return in that `select` clause. Our groupings can take some interesting calculations. Let's write a grouping for all of the face cards (and the Ace too):
###Code
var results = from card in TheDeck
group card by card.RankValue > 10 into facecards
select new {TheSuit=facecards.Key, facecards};
results
###Output
_____no_output_____
###Markdown
That looks strange, but we have two groups: 1 group that are the numeric cards and a second group that are the face cards. Let's tinker with that method a little more:
###Code
var results = from card in TheDeck
where card.RankValue > 10
group card by card.Rank into facecards
select new {Face=facecards.Key, facecards};
results
###Output
_____no_output_____
###Markdown
Now this sets up for a simplified **Sam the Bellhop** classic card trick. Take a few minutes and enjoy magician and former Philadelphia Eagles player [Jon Dorenbos performing this trick](https://www.youtube.com/watch?v=fwKPDrtgXRs) where he sorts and finds cards while telling the story of Sam the Bellhop. Loading data from CSVWe've worked with objects and data that we've specified here in the notebook. Let's use an external library, in .NET we call them **NuGet Packages** from www.nuget.org called [LINQtoCSV](https://www.nuget.org/packages/LinqToCsv/) to load Atlantic Hurricane Season data (courtesy of [Wikipedia](https://en.wikipedia.org/wiki/Atlantic_hurricane_season)).
###Code
#r "nuget:LinqToCsv"
using LINQtoCSV;
class MyDataRow {
[CsvColumn(Name = "Year", FieldIndex = 1)]
public int Year {get; set;}
[CsvColumn(Name = "Number of tropical storms", FieldIndex = 2)]
public byte TropicalStormCount { get; set;}
[CsvColumn(Name = "Number of hurricanes", FieldIndex = 3)]
public byte HurricaneCount { get; set;}
[CsvColumn(Name = "Number of major hurricanes", FieldIndex = 4)]
public byte MajorHurricaneCount { get; set;}
// Accumulated Cyclone Energy
[CsvColumn(Name = "ACE", FieldIndex = 5)]
public decimal ACE { get; set; }
[CsvColumn(Name = "Deaths", FieldIndex = 6)]
public int Deaths { get; set; }
[CsvColumn(Name="Strongest storm", FieldIndex = 7)]
public string StrongestStorm { get; set; }
[CsvColumn(Name = "Damage USD", FieldIndex = 8)]
public string DamageUSD { get; set; }
[CsvColumn(Name = "Retired names", FieldIndex = 9)]
public string RetiredNames { get; set; }
[CsvColumn(Name = "Notes", FieldIndex = 10)]
public string Notes { get; set; }
}
var inputFileDescription = new CsvFileDescription
{
SeparatorChar = ',',
FirstLineHasColumnNames = true
};
var context = new CsvContext();
var hurricanes = context.Read<MyDataRow>("data/atlantic_hurricanes.csv", inputFileDescription);
display(hurricanes.OrderByDescending(h => h.Year).Take(10).Select(h => new {h.Year, h.TropicalStormCount, h.HurricaneCount, h.StrongestStorm}));
var results = from storm in hurricanes
orderby storm.DamageUSD descending
where storm.HurricaneCount >= 10
select new {storm.Year, storm.HurricaneCount, storm.ACE, storm.StrongestStorm, storm.DamageUSD};
results
###Output
_____no_output_____
###Markdown
Session 6: LINQ and Extension Methods[LINQ (Language Integrated Query)](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/?WT.mc_id=visualstudio-twitch-jefritz) is a collection of methods and language features that allow you to interact with collections of data. In our last session, we focused on **LINQ to Objects** which allows us to use method predicates to interact with those collections.Let's setup our `Card` class and `FritzSet` collection object to work with again in this workbook
###Code
class Card {
public Card(string def) {
var values = def.Split('-');
Rank = values[0];
Suit = values[1];
}
public string Rank;
public int RankValue {
get {
var faceCards = new Dictionary<string,int> { {"J", 11}, {"Q", 12}, {"K", 13}, {"A", 14} };
return faceCards.ContainsKey(Rank) ? faceCards[Rank] : int.Parse(Rank);
}
}
public string Suit;
public override string ToString() {
return $"{Rank}-{Suit}";
}
private static bool IsLegalCardNotation(string notation) {
var segments = notation.Split('-');
if (segments.Length != 2) return false;
var validSuits = new [] {"c","d","h","s"};
if (!validSuits.Any(s => s == segments[1])) return false;
var validRanks = new [] {"A","2","3","4","5","6","7","8","9","10","J","Q","K"};
if (!validRanks.Any(r => r == segments[0])) return false;
return true;
}
public static implicit operator Card(string id) {
if (IsLegalCardNotation(id)) return new Card(id);
return null;
}
}
class FritzSet<T> : IEnumerable<T> {
private List<T> _Inner = new List<T>();
public IEnumerator<T> GetEnumerator()
{
return _Inner.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return _Inner.GetEnumerator();
}
public FritzSet<T> Add(T newItem) {
var insertAt = _Inner.Count == 0 ? 0 : new Random().Next(0,_Inner.Count+1);
_Inner.Insert(insertAt, newItem);
return this;
}
public FritzSet<T> Shuffle() {
_Inner = _Inner.OrderBy(_ => Guid.NewGuid()).ToList();
return this;
}
}
var TheDeck = new FritzSet<Card>();
TheDeck.Add("A-c").Add("A-d");TheDeck.Add("A-h");TheDeck.Add("A-s");TheDeck.Add("2-c");TheDeck.Add("2-d");TheDeck.Add("2-h");TheDeck.Add("2-s");TheDeck.Add("3-c");TheDeck.Add("3-d");TheDeck.Add("3-h");TheDeck.Add("3-s");TheDeck.Add("4-c");TheDeck.Add("4-d");TheDeck.Add("4-h");TheDeck.Add("4-s");
TheDeck.Add("5-c");TheDeck.Add("5-d");TheDeck.Add("5-h");TheDeck.Add("5-s");TheDeck.Add("6-c");TheDeck.Add("6-d");TheDeck.Add("6-h");TheDeck.Add("6-s");TheDeck.Add("7-c");TheDeck.Add("7-d");TheDeck.Add("7-h");TheDeck.Add("7-s");TheDeck.Add("8-c");TheDeck.Add("8-d");TheDeck.Add("8-h");TheDeck.Add("8-s");
TheDeck.Add("9-c");TheDeck.Add("9-d");TheDeck.Add("9-h");TheDeck.Add("9-s");TheDeck.Add("10-c");TheDeck.Add("10-d");TheDeck.Add("10-h");TheDeck.Add("10-s");TheDeck.Add("J-c");TheDeck.Add("J-d");TheDeck.Add("J-h");TheDeck.Add("J-s");
TheDeck.Add("Q-c");TheDeck.Add("Q-d");TheDeck.Add("Q-h");TheDeck.Add("Q-s");TheDeck.Add("K-c");TheDeck.Add("K-d");TheDeck.Add("K-h");TheDeck.Add("K-s");
// TheDeck
TheDeck.Shuffle().Shuffle().Shuffle().Shuffle().Shuffle();
//TheDeck
Card PriyanksCard = "Joker"; // Fix this
//display(PriyanksCard ?? "No card assigned");
###Output
_____no_output_____
###Markdown
In review, we can write a little bit of code to work with this collection to deal cards appropriately for a Texas Hold 'em poker game:
###Code
var ourDeck = TheDeck.Shuffle().Shuffle();
var hand1 = new List<Card>();
var hand2 = new List<Card>();
var hand3 = new List<Card>();
hand1.Add(ourDeck.Skip(1).First());
hand2.Add(ourDeck.Skip(2).First());
hand3.Add(ourDeck.Skip(3).First());
hand1.Add(ourDeck.Skip(4).First());
hand2.Add(ourDeck.Skip(5).First());
hand3.Add(ourDeck.Skip(6).First());
display("Hand 1");
display(hand1);
display("Hand 2");
display(hand2);
display("Hand 3");
display(hand3);
// Burn a card and deal the next 3 cards called 'the flop'
display("The Flop");
display(ourDeck.Skip(8).Take(3));
// Burn a card and take one card called 'the turn'
display("The Turn");
display(ourDeck.Skip(12).First());
// Burn a card and take the final card called 'the river'
display("The River");
display(ourDeck.Skip(14).First());
###Output
_____no_output_____
###Markdown
Language Integrated QueryYou can build [expressions](https://docs.microsoft.com/dotnet/csharp/linq/query-expression-basics?WT.mc_id=visualstudio-twitch-jefritzwhat-is-a-query-and-what-does-it-do) in the middle of your C# code that _LOOKS_ like SQL turned sideways. Query Expressions begin with a `from` clause and there's also a mandatory `select` clause to specify the values to return. By convention, many C# developers who use this syntax align the clauses to the right of the `=` symbol. Let's dig into that syntax a bit more:
###Code
// The simplest query
var outValues = from card in TheDeck // the required collection we are querying
select card; // the values to be returned
outValues
###Output
_____no_output_____
###Markdown
Where and OrderBy clausesThat's a boring and non-productive query. You can start to make queries more interesting by adding a [where](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/where-clause?WT.mc_id=visualstudio-twitch-jefritz) clause with an appropriate test in a format similar to that you would find in an `if` statement. You can also optionally add an [orderby](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/orderby-clause?WT.mc_id=visualstudio-twitch-jefritz) clause with an **ALSO** optional [descending](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/descending?WT.mc_id=visualstudio-twitch-jefritz) keyword. Tinker with the query in the next block to learn more about these clauses
###Code
var results = from card in TheDeck
where card.Suit == "h" // Return just the Hearts
orderby card.RankValue descending
select card;
results
###Output
_____no_output_____
###Markdown
Additionally, nothing requires you to return the objects in the collection. You can return different properties and values by changing up the `select` clause:
###Code
var results = from card in TheDeck
where card.Suit == "h" && card.RankValue > 10
select card.Rank;
results
###Output
_____no_output_____
###Markdown
JoinsJust like SQL syntax, you can correlate two collections and work with the combined result. The [Join keyword](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/join-clause?WT.mc_id=visualstudio-twitch-jefritz) allows you to relate two collections based on a matching key value in each collection. There is a similar [Join method in LINQ to Objects](https://docs.microsoft.com/dotnet/api/system.linq.enumerable.join?view=netcore-3.1&WT.mc_id=visualstudio-twitch-jefritz) that delivers the same feature. Joins are slightly more involved and can be a confusing topic, so we've embedded the official sample from the docs here. This sample relates `Person` records to the `Pets` they own. The `Join` method receives each collection and uses two expression-bodied members to select the key properties from each collection. Finally, it provides a projection method to create the resultant object.I have annotated this sample and the `Join` method to make it clearer.
###Code
class Person
{
public string Name { get; set; }
}
class Pet
{
public string Name { get; set; }
public Person Owner { get; set; }
}
Person magnus = new Person { Name = "Hedlund, Magnus" };
Person terry = new Person { Name = "Adams, Terry" };
Person charlotte = new Person { Name = "Weiss, Charlotte" };
// Declare the set of 4 pets and their owners
Pet barley = new Pet { Name = "Barley", Owner = terry };
Pet boots = new Pet { Name = "Boots", Owner = terry };
Pet whiskers = new Pet { Name = "Whiskers", Owner = charlotte };
Pet daisy = new Pet { Name = "Daisy", Owner = magnus };
List<Person> people = new List<Person> { magnus, terry, charlotte };
List<Pet> pets = new List<Pet> { barley, boots, whiskers, daisy };
// Create a list of Person-Pet pairs where
// each element is an anonymous type that contains a
// Pet's name and the name of the Person that owns the Pet.
var query =
people.Join(pets, // Join the People and Pets collections
person => person, // We will match the Person object
pet => pet.Owner, // with the Owner property in the Pet record
(person, pet) => // The combined output of Person and Pet
// is an object with OwnerName and the Pet's Name
new { OwnerName = person.Name, Pet = pet.Name });
foreach (var obj in query)
{
display(string.Format("{0} - {1}",
obj.OwnerName,
obj.Pet));
}
###Output
_____no_output_____
###Markdown
Grouping data with the Group clauseData in your query can be grouped together using the [group clause](https://docs.microsoft.com/dotnet/csharp/language-reference/keywords/group-clause?WT.mc_id=visualstudio-twitch-jefritz). The `group` clause can be used in place of the `select` clause or can be used with the `select` clause to aggregate data in various groupings. Let's try using the `group` keyword:
###Code
var results = from card in TheDeck
group card by card.Suit;
display(results.GetType());
results
###Output
_____no_output_____
###Markdown
Interestingly, we get back a collection with all of the cards grouped by their suits. If we also wanted to select the suit and create a grouped result, we could expand our query like this:
###Code
var results = from card in TheDeck
group card by card.Suit into suit
select new {TheSuit=suit.Key, suit};
display(results.GetType());
results
###Output
_____no_output_____
###Markdown
Now this is **VERY INTERESTING**: we have created an [Anonymous Type](https://docs.microsoft.com/dotnet/csharp/programming-guide/classes-and-structs/anonymous-types?WT.mc_id=visualstudio-twitch-jefritz), a type created on the fly that contains a string field for `TheSuit` and a collection of `Card` objects in a field called `suit`. We'll get more into **Anonymous Types** next week, but you need to know that you can use the `new` keyword with curly braces `{ }` to create a type and make it available in your code. Many C# veterans will recommend against exposing the anonymous type outside of the method it is created in and instead suggest creating a concrete type to return in that `select` clause. Our grouping keys can also be calculated expressions. Let's write a grouping for all of the face cards (and the Ace too):
###Code
var results = from card in TheDeck
group card by card.RankValue > 10 into facecards
select new {TheSuit=facecards.Key, facecards};
results
###Output
_____no_output_____
###Markdown
That looks strange, but we have two groups: one group containing the numeric cards and a second containing the face cards. Let's tinker with that query a little more:
###Code
var results = from card in TheDeck
where card.RankValue > 10
group card by card.Rank into facecards
select new {Face=facecards.Key, facecards};
results
###Output
_____no_output_____
###Markdown
Now this sets up for a simplified **Sam the Bellhop** classic card trick. Take a few minutes and enjoy magician and former Philadelphia Eagles player [Jon Dorenbos performing this trick](https://www.youtube.com/watch?v=fwKPDrtgXRs) where he sorts and finds cards while telling the story of Sam the Bellhop. Loading data from CSVWe've worked with objects and data that we've specified here in the notebook. Let's use an external library (in .NET these are **NuGet Packages** from www.nuget.org) called [LINQtoCSV](https://www.nuget.org/packages/LinqToCsv/) to load Atlantic Hurricane Season data (courtesy of [Wikipedia](https://en.wikipedia.org/wiki/Atlantic_hurricane_season)).
###Code
#r "nuget:LinqToCsv"
using LINQtoCSV;
class MyDataRow {
[CsvColumn(Name = "Year", FieldIndex = 1)]
public int Year {get; set;}
[CsvColumn(Name = "Number of tropical storms", FieldIndex = 2)]
public byte TropicalStormCount { get; set;}
[CsvColumn(Name = "Number of hurricanes", FieldIndex = 3)]
public byte HurricaneCount { get; set;}
[CsvColumn(Name = "Number of major hurricanes", FieldIndex = 4)]
public byte MajorHurricaneCount { get; set;}
// Accumulated Cyclone Energy
[CsvColumn(Name = "ACE", FieldIndex = 5)]
public decimal ACE { get; set; }
[CsvColumn(Name = "Deaths", FieldIndex = 6)]
public int Deaths { get; set; }
[CsvColumn(Name="Strongest storm", FieldIndex = 7)]
public string StrongestStorm { get; set; }
[CsvColumn(Name = "Damage USD", FieldIndex = 8)]
public string DamageUSD { get; set; }
[CsvColumn(Name = "Retired names", FieldIndex = 9)]
public string RetiredNames { get; set; }
[CsvColumn(Name = "Notes", FieldIndex = 10)]
public string Notes { get; set; }
}
var inputFileDescription = new CsvFileDescription
{
SeparatorChar = ',',
FirstLineHasColumnNames = true
};
var context = new CsvContext();
var hurricanes = context.Read<MyDataRow>("data/atlantic_hurricanes.csv", inputFileDescription);
display(hurricanes.OrderByDescending(h => h.Year).Take(10).Select(h => new {h.Year, h.TropicalStormCount, h.HurricaneCount, h.StrongestStorm}));
var results = from storm in hurricanes
orderby storm.DamageUSD descending
where storm.HurricaneCount >= 10
select new {storm.Year, storm.HurricaneCount, storm.ACE, storm.StrongestStorm, storm.DamageUSD};
results
###Output
_____no_output_____ |
examples/npendulum/n-pendulum-control.ipynb | ###Markdown
IntroductionSeveral pieces of the puzzle have come together lately to really demonstrate the power of the scientific python software packages to handle complex dynamic and controls problems (i.e. IPython notebooks, matplotlib animations, python-control, and our software packages: sympy.physics.mechanics and PyDy).This [blog post by Wolfram](http://blog.wolfram.com/2011/03/01/stabilized-n-link-pendulum/) demonstrates Mathematica's ability to symbolically derive the equations of motion for the n-link pendulum and stabilize it with an LQR controller. This blog post inspired us to replicate the example with all free and open source software.In this example problem, we derive the equations of motion of an n-link pendulum on a laterally sliding cart and then develop a controller to stabilize it. Balancing a single inverted pendulum is a classic problem that is often a student's first experience with non-linear dynamics and control. The problem here is extended to a general n-link pendulum in which the equations of motion quickly get messy with greater than 2 links.The diagram below shows the general description of the problem.
###Code
from IPython.display import SVG
SVG(filename='n-pendulum-with-cart.svg')
###Output
_____no_output_____
###Markdown
Setup===This example depends on the following software:- IPython- NumPy- SciPy- SymPy >= 0.7.6- matplotlibThe easiest way to install the Python packages is to use conda:```$ conda install ipython-notebook numpy scipy sympy matplotlib```To create animations you need a video encoder like ffmpeg installed. Equations of Motion===================We'll start by generating the equations of motion for the system with SymPy **[mechanics](http://docs.sympy.org/dev/modules/physics/mechanics/index.html)**. The functionality that mechanics provides is much more in depth than Mathematica's functionality. In the Mathematica example, Lagrangian mechanics were implemented manually with Mathematica's symbolic functionality. **mechanics** provides an assortment of functions and classes to derive the equations of motion for arbitrarily complex (i.e. configuration constraints, nonholonomic motion constraints, etc) multibody systems in a very natural way. First we import the necessary functionality from SymPy.
###Code
from __future__ import division, print_function
import sympy as sm
import sympy.physics.mechanics as me
###Output
_____no_output_____
###Markdown
We can enable mathematical rendering of the resulting equations in the notebook with the following command.
###Code
me.init_vprinting()
###Output
_____no_output_____
###Markdown
Now specify the number of links, $n$. I'll start with 5 since the Wolfram folks only showed four.
###Code
n = 5
###Output
_____no_output_____
###Markdown
**mechanics** will need the generalized coordinates, generalized speeds, and the input force which are all time dependent variables and the bob masses, link lengths, and acceleration due to gravity which are all constants. Time, $t$, is also made available because we will need to differentiate with respect to time.
###Code
q = me.dynamicsymbols('q:{}'.format(n + 1)) # Generalized coordinates
u = me.dynamicsymbols('u:{}'.format(n + 1)) # Generalized speeds
f = me.dynamicsymbols('f') # Force applied to the cart
m = sm.symbols('m:{}'.format(n + 1)) # Mass of each bob
l = sm.symbols('l:{}'.format(n)) # Length of each link
g, t = sm.symbols('g t') # Gravity and time
###Output
_____no_output_____
###Markdown
Now we can create an inertial reference frame $I$ and define the point, $O$, as the origin.
###Code
I = me.ReferenceFrame('I') # Inertial reference frame
O = me.Point('O') # Origin point
O.set_vel(I, 0) # Origin's velocity is zero
###Output
_____no_output_____
###Markdown
Secondly, we define the first point of the pendulum as a particle which has mass. This point can only move laterally and represents the motion of the "cart".
###Code
P0 = me.Point('P0') # Hinge point of top link
P0.set_pos(O, q[0] * I.x) # Set the position of P0
P0.set_vel(I, u[0] * I.x) # Set the velocity of P0
Pa0 = me.Particle('Pa0', P0, m[0]) # Define a particle at P0
###Output
_____no_output_____
###Markdown
Now we can define the $n$ reference frames, particles, gravitational forces, and kinematical differential equations for each of the pendulum links. This is easily done with a loop.
###Code
frames = [I] # List to hold the n + 1 frames
points = [P0] # List to hold the n + 1 points
particles = [Pa0] # List to hold the n + 1 particles
forces = [(P0, f * I.x - m[0] * g * I.y)] # List to hold the n + 1 applied forces, including the input force, f
kindiffs = [q[0].diff(t) - u[0]] # List to hold kinematic ODE's
for i in range(n):
Bi = I.orientnew('B' + str(i), 'Axis', [q[i + 1], I.z]) # Create a new frame
Bi.set_ang_vel(I, u[i + 1] * I.z) # Set angular velocity
frames.append(Bi) # Add it to the frames list
Pi = points[-1].locatenew('P' + str(i + 1), l[i] * Bi.x) # Create a new point
Pi.v2pt_theory(points[-1], I, Bi) # Set the velocity
points.append(Pi) # Add it to the points list
Pai = me.Particle('Pa' + str(i + 1), Pi, m[i + 1]) # Create a new particle
particles.append(Pai) # Add it to the particles list
forces.append((Pi, -m[i + 1] * g * I.y)) # Set the force applied at the point
kindiffs.append(q[i + 1].diff(t) - u[i + 1]) # Define the kinematic ODE: dq_i / dt - u_i = 0
###Output
_____no_output_____
###Markdown
With all of the necessary point velocities and particle masses defined, the `KanesMethod` class can be used to derive the equations of motion of the system automatically.
###Code
kane = me.KanesMethod(I, q_ind=q, u_ind=u, kd_eqs=kindiffs) # Initialize the object
fr, frstar = kane.kanes_equations(forces, particles) # Generate EoM's fr + frstar = 0
###Output
_____no_output_____
###Markdown
The equations of motion are quite long, as can be seen below. This is the general nature of most non-simple multibody problems. That is why SymPy is so useful; no more mistakes in algebra, differentiation, or copying hand-written equations. Note that `trigsimp` can take quite a while to complete for extremely large expressions. Below we print $\tilde{M}$ and $\tilde{f}$ from $\tilde{M}\dot{u}=\tilde{f}$ to show the size of the expressions.
###Code
sm.trigsimp(kane.mass_matrix)
###Output
_____no_output_____
###Markdown
$\tilde{M}$ is a function of the constant parameters and the configuration.
###Code
me.find_dynamicsymbols(kane.mass_matrix)
sm.trigsimp(kane.forcing)
###Output
_____no_output_____
###Markdown
$\tilde{f}$ is a function of the constant parameters, configuration, speeds, and the applied force.
###Code
me.find_dynamicsymbols(kane.forcing)
###Output
_____no_output_____
###Markdown
Simulation==========Now that the symbolic equations of motion are available we can simulate the pendulum's motion. We will need some more SymPy functionality and several NumPy functions, and most importantly the integration function from SciPy, `odeint`.
###Code
import numpy as np
from numpy.linalg import solve
from scipy.integrate import odeint
###Output
_____no_output_____
###Markdown
First, define some numeric values for all of the constant parameters in the problem.
###Code
arm_length = 1. / n # The maximum length of the pendulum is 1 meter
bob_mass = 0.01 / n # The maximum mass of the bobs is 10 grams
parameters = [g, m[0]] # Parameter definitions starting with gravity and the first bob
parameter_vals = [9.81, 0.01 / n] # Numerical values for the first two
for i in range(n): # Then each mass and length
parameters += [l[i], m[i + 1]]
parameter_vals += [arm_length, bob_mass]
###Output
_____no_output_____
###Markdown
Mathematica has a really nice `NDSolve` function for quickly integrating their symbolic differential equations. We make use of SymPy's lambdify function to do something similar, i.e. to create functions that will evaluate the "full" mass matrix, $M$, and "full" forcing vector, $f$ from $M\dot{x} = f(x, r, t)$ as a NumPy function.
###Code
dynamic = q + u # Make a list of the states
dynamic.append(f) # Add the input force
M_func = sm.lambdify(dynamic + parameters, kane.mass_matrix_full) # Create a callable function to evaluate the mass matrix
f_func = sm.lambdify(dynamic + parameters, kane.forcing_full) # Create a callable function to evaluate the forcing vector
###Output
_____no_output_____
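As a quick sanity check, the two generated functions can be evaluated directly at an arbitrary numerical state; a minimal sketch, using the parameter values defined above:

```python
# Minimal sketch: evaluate the generated NumPy functions once to confirm their shapes.
x_test = np.zeros(2 * (n + 1))                   # an arbitrary state vector (all zeros)
args = np.hstack((x_test, 0.0, parameter_vals))  # states, input force, then constants
print(M_func(*args).shape)                       # (2*(n+1), 2*(n+1)) "full" mass matrix
print(f_func(*args).shape)                       # (2*(n+1), 1) "full" forcing vector
```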
###Markdown
To integrate the ODE's we need to define a function that returns the derivatives of the states given the current state and time.
###Code
def right_hand_side(x, t, args):
"""Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
"""
r = 0.0 # The input force is always zero
arguments = np.hstack((x, r, args)) # States, input, and parameters
dx = np.array(solve(M_func(*arguments), # Solving for the derivatives
f_func(*arguments))).T[0]
return dx
###Output
_____no_output_____
###Markdown
Now that we have the right hand side function, the initial conditions are set such that the pendulum is in the vertical equilibrium and a slight initial rate is set for each speed to ensure the pendulum falls. The equations can then be integrated with SciPy's `odeint` function given a time series.
###Code
x0 = np.hstack((0.0, # q0
np.pi / 2 * np.ones(len(q) - 1), # q1...qn+1
1e-3 * np.ones(len(u)))) # u0...un+1
t = np.linspace(0.0, 10.0, num=500) # Time vector
x = odeint(right_hand_side, x0, t, args=(parameter_vals,)) # Numerical integration
###Output
_____no_output_____
###Markdown
Plotting========The results of the simulation can be plotted with matplotlib. First, load the plotting functionality.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(8.0, 6.0)
###Output
_____no_output_____
###Markdown
The coordinate trajectories are plotted below.
###Code
lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
###Output
_____no_output_____
###Markdown
And the generalized speed trajectories.
###Code
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
###Output
_____no_output_____
###Markdown
Animation=========matplotlib now includes very nice animation functions for animating matplotlib plots. First we import the necessary functions for creating the animation.
###Code
from matplotlib import animation
from matplotlib.patches import Rectangle
###Output
_____no_output_____
###Markdown
The following function was modeled from Jake Vanderplas's [post on matplotlib animations](http://jakevdp.github.com/blog/2012/08/18/matplotlib-animation-tutorial/). The default animation writer (typically ffmpeg) is used; you can change it by passing a `writer` argument to the `anim.save` call.
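For example, a minimal sketch of switching the writer inside `animate_pendulum` (the `'pillow'` writer needs only the Pillow package and writes an animated GIF, so no external encoder is required):

```python
# Hypothetical alternative to the ffmpeg-based save call below (assumes Pillow is installed).
anim.save('open-loop.gif', fps=30, writer='pillow')
```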
###Code
def animate_pendulum(t, states, length, filename=None):
"""Animates the n-pendulum and optionally saves it to file.
Parameters
----------
t : ndarray, shape(m)
Time array.
states: ndarray, shape(m,p)
State time history.
length: float
The length of the pendulum links.
filename: string or None, optional
If a filename is given, a movie file of the animation will be saved. This may take some time.
Returns
-------
fig : matplotlib.Figure
The figure.
anim : matplotlib.FuncAnimation
The animation.
"""
# the number of pendulum bobs
numpoints = states.shape[1] // 2
# first set up the figure, the axis, and the plot elements we want to animate
fig = plt.figure()
# some dimensions
cart_width = 0.4
cart_height = 0.2
# set the limits based on the motion
xmin = np.around(states[:, 0].min() - cart_width / 2.0, 1)
xmax = np.around(states[:, 0].max() + cart_width / 2.0, 1)
# create the axes
ax = plt.axes(xlim=(xmin, xmax), ylim=(-1.1, 1.1), aspect='equal')
# display the current time
time_text = ax.text(0.04, 0.9, '', transform=ax.transAxes)
# create a rectangular cart
rect = Rectangle([states[0, 0] - cart_width / 2.0, -cart_height / 2],
cart_width, cart_height, fill=True, color='red',
ec='black')
ax.add_patch(rect)
# blank line for the pendulum
line, = ax.plot([], [], lw=2, marker='o', markersize=6)
# initialization function: plot the background of each frame
def init():
time_text.set_text('')
rect.set_xy((0.0, 0.0))
line.set_data([], [])
return time_text, rect, line,
# animation function: update the objects
def animate(i):
time_text.set_text('time = {:2.2f}'.format(t[i]))
rect.set_xy((states[i, 0] - cart_width / 2.0, -cart_height / 2))
x = np.hstack((states[i, 0], np.zeros((numpoints - 1))))
y = np.zeros((numpoints))
for j in np.arange(1, numpoints):
x[j] = x[j - 1] + length * np.cos(states[i, j])
y[j] = y[j - 1] + length * np.sin(states[i, j])
line.set_data(x, y)
return time_text, rect, line,
# call the animator function
anim = animation.FuncAnimation(fig, animate, frames=len(t), init_func=init,
interval=t[-1] / len(t) * 1000, blit=True, repeat=False)
# save the animation if a filename is given
if filename is not None:
anim.save(filename, fps=30, codec='libx264')
###Output
_____no_output_____
###Markdown
Now we can create the animation of the pendulum. This animation will show the open loop dynamics.
###Code
animate_pendulum(t, x, arm_length, filename="open-loop.mp4")
from IPython.display import HTML
html = \
"""
<video width="640" height="480" controls>
<source src="open-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/Nj3_npq7MZI.
</video>
"""
HTML(html)
###Output
_____no_output_____
###Markdown
Controller Design=================The n-link pendulum can be balanced such that all of the links are inverted above the cart by applying the correct lateral force to the cart. We can design a full state feedback controller based on a linear model of the pendulum about its upright equilibrium point. We'll start by specifying the equilibrium point and parameters in dictionaries. We make sure to use SymPy types in the equilibrium point to ensure proper cancellations in the linearization.
###Code
equilibrium_point = [sm.S(0)] + [sm.pi / 2] * (len(q) - 1) + [sm.S(0)] * len(u)
equilibrium_dict = dict(zip(q + u, equilibrium_point))
equilibrium_dict
###Output
_____no_output_____
###Markdown
The `KanesMethod` class has a method that linearizes the forcing vector about generic state and input perturbation vectors. The equilibrium point and numerical constants can then be substituted in to give the linear system in this form: $M\dot{x}=F_Ax+F_Br$. The state and input matrices, $A$ and $B$, can then be computed by left-multiplying by the inverse of the mass matrix: $A=M^{-1}F_A$ and $B=M^{-1}F_B$.
###Code
M, F_A, F_B, r = kane.linearize(new_method=True, op_point=equilibrium_dict)
sm.simplify(M)
sm.simplify(F_A)
sm.simplify(F_B)
###Output
_____no_output_____
###Markdown
Now the numerical $A$ and $B$ matrices can be formed. First substitute numerical parameter values into $M$, $F_A$, and $F_B$.
###Code
parameter_dict = dict(zip(parameters, parameter_vals))
parameter_dict
M_num = sm.matrix2numpy(M.subs(parameter_dict), dtype=float)
F_A_num = sm.matrix2numpy(F_A.subs(parameter_dict), dtype=float)
F_B_num = sm.matrix2numpy(F_B.subs(parameter_dict), dtype=float)
A = np.linalg.solve(M_num, F_A_num)
B = np.linalg.solve(M_num, F_B_num)
print(A)
print(B)
###Output
[[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 5.00000000e+02]
[ 2.50000000e+03]
[ -1.15648232e-12]
[ 1.09865820e-12]
[ -5.49329101e-13]
[ -0.00000000e+00]]
###Markdown
Also convert `equilibrium_point` to a numeric array:
###Code
equilibrium_point = np.asarray([x.evalf() for x in equilibrium_point], dtype=float)
###Output
_____no_output_____
###Markdown
Now that we have a linear system, the SciPy package can be used to design an optimal controller for the system.
###Code
from numpy.linalg import matrix_rank
from scipy.linalg import solve_continuous_are
###Output
_____no_output_____
###Markdown
First we can check to see if the system is, in fact, controllable. The rank of the controllability matrix must be equal to the number of rows in $A$, but the `matrix_rank` algorithm is numerically ill conditioned and for certain values of $n$ this will fail, as seen below for $n=5$. Nevertheless, the system is controllable, no matter the number of links.
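For reference, the helper defined below assembles the standard controllability matrix

$$\mathcal{C} = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix},$$

and the pair $(A, B)$ is controllable exactly when $\operatorname{rank}(\mathcal{C})$ equals the number of states (here $2(n+1) = 12$).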
###Code
def controllable(a, b):
"""Returns true if the system is controllable and false if not.
Parameters
----------
a : array_like, shape(n,n)
The state matrix.
b : array_like, shape(n,r)
The input matrix.
Returns
-------
controllable : boolean
"""
a = np.matrix(a)
b = np.matrix(b)
n = a.shape[0]
controllability_matrix = []
for i in range(n):
controllability_matrix.append(a ** i * b)
controllability_matrix = np.hstack(controllability_matrix)
return np.linalg.matrix_rank(controllability_matrix) == n
controllable(A, B)
###Output
_____no_output_____
###Markdown
So now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity.
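For reference, `solve_continuous_are` returns the stabilizing solution $S$ of the continuous-time algebraic Riccati equation, and the LQR gain follows from it:

$$A^{T}S + SA - SBR^{-1}B^{T}S + Q = 0, \qquad K = R^{-1}B^{T}S,$$

which is exactly what the next cell computes.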
###Code
Q = np.eye(A.shape[0])
R = np.eye(B.shape[1])
S = solve_continuous_are(A, B, Q, R);
K = np.dot(np.dot(np.linalg.inv(R), B.T), S)
K
###Output
_____no_output_____
###Markdown
The gains can now be used to define the required input during simulation to stabilize the system. The input $r$ is simply the gain vector multiplied by the error in the state vector from the equilibrium point, $r(t)=K(x_{eq} - x(t))$.
###Code
def right_hand_side(x, t, args):
"""Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
"""
r = np.dot(K, equilibrium_point - x) # The controller
arguments = np.hstack((x, r, args)) # States, input, and parameters
dx = np.array(solve(M_func(*arguments), # Solving for the derivatives
f_func(*arguments))).T[0]
return dx
###Output
_____no_output_____
###Markdown
Now we can simulate and animate the system to see if the controller works.
###Code
x0 = np.hstack((0,
np.pi / 2 * np.ones(len(q) - 1),
1 * np.ones(len(u))))
t = np.linspace(0.0, 10.0, num=500)
x = odeint(right_hand_side, x0, t, args=(parameter_vals,))
###Output
_____no_output_____
###Markdown
The plots show that we seem to have a stable system.
###Code
lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
animate_pendulum(t, x, arm_length, filename="closed-loop.mp4")
from IPython.display import HTML
html = \
"""
<video width="640" height="480" controls>
<source src="closed-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/SpgBHqW9om0
</video>
"""
HTML(html)
###Output
_____no_output_____
###Markdown
The video clearly shows that the controller can balance all $n$ of the pendulum links. The weightings in the lqr design can be tweaked to give different performance if needed.This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica. The IPython notebook for this example can be downloaded from https://github.com/pydy/pydy/tree/master/examples/npendulum. You can try out different $n$ values. I've gotten the equations of motion to compute for an open loop simulation of 10 links. My computer ran out of memory when I tried to compute for $n=50$. The controller weightings and initial conditions will probably have to be adjusted for better performance for $n>5$, but it should work.
###Code
# Install with pip install version_information
%load_ext version_information
%version_information numpy, sympy, scipy, matplotlib, control
###Output
_____no_output_____
###Markdown
IntroductionSeveral pieces of the puzzle have come together lately to really demonstrate the power of the scientific python software packages to handle complex dynamic and controls problems (i.e. IPython notebooks, matplotlib animations, python-control, and our software packages: sympy.physics.mechanics and PyDy).This [blog post by Wolfram](http://blog.wolfram.com/2011/03/01/stabilized-n-link-pendulum/) demonstrates Mathematica's ability to symbolically derive the equations of motion for the n-link pendulum and stabilize it with an LQR controller. This blog post inspired us to replicate the example with all free and open source software.In this example problem, we derive the equations of motion of an n-link pendulum on a laterally sliding cart and then develop a controller to stabilize it. Balancing a single inverted pendulum is a classic problem that is often a student's first experience with non-linear dynamics and control. The problem here is extended to a general n-link pendulum in which the equations of motion quickly get messy with greater than 2 links.The diagram below shows the general description of the problem.
###Code
from IPython.display import SVG
SVG(filename='n-pendulum-with-cart.svg')
###Output
_____no_output_____
###Markdown
Setup===This example depends on the following software:- IPython- NumPy- SciPy- SymPy >= 0.7.6- matplotlib- python-control- avconvThe easiest way to install the Python packages is to use conda:```$ conda install ipython-notebook numpy scipy sympy matplotlib$ conda install -c https://conda.binstar.org/cwrowley control````avconv` should be installed as per its recommended procedure for your operating system.Equations of Motion===================We'll start by generating the equations of motion for the system with SymPy **[mechanics](http://docs.sympy.org/dev/modules/physics/mechanics/index.html)**. The functionality that mechanics provides is much more in depth than Mathematica's functionality. In the Mathematica example, Lagrangian mechanics were implemented manually with Mathematica's symbolic functionality. **mechanics** provides an assortment of functions and classes to derive the equations of motion for arbitrarily complex (i.e. configuration constraints, nonholonomic motion constraints, etc) multibody systems in a very natural way. First we import the necessary functionality from SymPy.
###Code
import sympy as sm
import sympy.physics.mechanics as me
###Output
_____no_output_____
###Markdown
We can enable mathematical rendering of the resulting equations in the notebook with the following command.
###Code
me.init_vprinting()
###Output
_____no_output_____
###Markdown
Now specify the number of links, $n$. I'll start with 5 since the Wolfram folks only showed four.
###Code
n = 5
###Output
_____no_output_____
###Markdown
**mechanics** will need the generalized coordinates, generalized speeds, and the input force which are all time dependent variables and the bob masses, link lengths, and acceleration due to gravity which are all constants. Time, $t$, is also made available because we will need to differentiate with respect to time.
###Code
q = me.dynamicsymbols('q:{}'.format(n + 1)) # Generalized coordinates
u = me.dynamicsymbols('u:{}'.format(n + 1)) # Generalized speeds
f = me.dynamicsymbols('f') # Force applied to the cart
m = sm.symbols('m:{}'.format(n + 1)) # Mass of each bob
l = sm.symbols('l:{}'.format(n)) # Length of each link
g, t = sm.symbols('g t') # Gravity and time
###Output
_____no_output_____
###Markdown
Now we can create an inertial reference frame $I$ and define the point, $O$, as the origin.
###Code
I = me.ReferenceFrame('I') # Inertial reference frame
O = me.Point('O') # Origin point
O.set_vel(I, 0) # Origin's velocity is zero
###Output
_____no_output_____
###Markdown
Secondly, we define the first point of the pendulum as a particle which has mass. This point can only move laterally and represents the motion of the "cart".
###Code
P0 = me.Point('P0') # Hinge point of top link
P0.set_pos(O, q[0] * I.x) # Set the position of P0
P0.set_vel(I, u[0] * I.x) # Set the velocity of P0
Pa0 = me.Particle('Pa0', P0, m[0]) # Define a particle at P0
###Output
_____no_output_____
###Markdown
Now we can define the $n$ reference frames, particles, gravitational forces, and kinematical differential equations for each of the pendulum links. This is easily done with a loop.
###Code
frames = [I] # List to hold the n + 1 frames
points = [P0] # List to hold the n + 1 points
particles = [Pa0] # List to hold the n + 1 particles
forces = [(P0, f * I.x - m[0] * g * I.y)] # List to hold the n + 1 applied forces, including the input force, f
kindiffs = [q[0].diff(t) - u[0]] # List to hold kinematic ODE's
for i in range(n):
Bi = I.orientnew('B' + str(i), 'Axis', [q[i + 1], I.z]) # Create a new frame
Bi.set_ang_vel(I, u[i + 1] * I.z) # Set angular velocity
frames.append(Bi) # Add it to the frames list
Pi = points[-1].locatenew('P' + str(i + 1), l[i] * Bi.x) # Create a new point
Pi.v2pt_theory(points[-1], I, Bi) # Set the velocity
points.append(Pi) # Add it to the points list
Pai = me.Particle('Pa' + str(i + 1), Pi, m[i + 1]) # Create a new particle
particles.append(Pai) # Add it to the particles list
forces.append((Pi, -m[i + 1] * g * I.y)) # Set the force applied at the point
kindiffs.append(q[i + 1].diff(t) - u[i + 1]) # Define the kinematic ODE: dq_i / dt - u_i = 0
###Output
_____no_output_____
###Markdown
With all of the necessary point velocities and particle masses defined, the `KanesMethod` class can be used to derive the equations of motion of the system automatically.
###Code
kane = me.KanesMethod(I, q_ind=q, u_ind=u, kd_eqs=kindiffs) # Initialize the object
fr, frstar = kane.kanes_equations(forces, particles) # Generate EoM's fr + frstar = 0
###Output
_____no_output_____
###Markdown
The equations of motion are quite long, as can be seen below. This is the general nature of most non-simple multibody problems. That is why SymPy is so useful; no more mistakes in algebra, differentiation, or copying hand-written equations. Note that `trigsimp` can take quite a while to complete for extremely large expressions. Below we print $\tilde{M}$ and $\tilde{f}$ from $\tilde{M}\dot{u}=\tilde{f}$ to show the size of the expressions.
###Code
sm.trigsimp(kane.mass_matrix)
###Output
_____no_output_____
###Markdown
$\tilde{M}$ is a function of the constant parameters and the configuration.
###Code
me.find_dynamicsymbols(kane.mass_matrix)
sm.trigsimp(kane.forcing)
###Output
_____no_output_____
###Markdown
$\tilde{f}$ is a function of the constant parameters, configuration, speeds, and the applied force.
###Code
me.find_dynamicsymbols(kane.forcing)
###Output
_____no_output_____
###Markdown
Simulation==========Now that the symbolic equations of motion are available we can simulate the pendulum's motion. We will need some more SymPy functionality and several NumPy functions, and most importantly the integration function from SciPy, `odeint`.
###Code
import numpy as np
from numpy.linalg import solve
from scipy.integrate import odeint
###Output
_____no_output_____
###Markdown
First, define some numeric values for all of the constant parameters in the problem.
###Code
arm_length = 1. / n # The maximum length of the pendulum is 1 meter
bob_mass = 0.01 / n # The maximum mass of the bobs is 10 grams
parameters = [g, m[0]] # Parameter definitions starting with gravity and the first bob
parameter_vals = [9.81, 0.01 / n] # Numerical values for the first two
for i in range(n): # Then each mass and length
parameters += [l[i], m[i + 1]]
parameter_vals += [arm_length, bob_mass]
###Output
_____no_output_____
###Markdown
Mathematica has a really nice `NDSolve` function for quickly integrating their symbolic differential equations. We make use of SymPy's lambdify function to do something similar, i.e. to create functions that will evaluate the "full" mass matrix, $M$, and "full" forcing vector, $f$ from $M\dot{x} = f(x, r, t)$ as a NumPy function.
###Code
dynamic = q + u # Make a list of the states
dynamic.append(f) # Add the input force
M_func = sm.lambdify(dynamic + parameters, kane.mass_matrix_full) # Create a callable function to evaluate the mass matrix
f_func = sm.lambdify(dynamic + parameters, kane.forcing_full) # Create a callable function to evaluate the forcing vector
###Output
_____no_output_____
###Markdown
To integrate the ODE's we need to define a function that returns the derivatives of the states given the current state and time.
###Code
def right_hand_side(x, t, args):
"""Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
"""
r = 0.0 # The input force is always zero
arguments = np.hstack((x, r, args)) # States, input, and parameters
dx = np.array(solve(M_func(*arguments), # Solving for the derivatives
f_func(*arguments))).T[0]
return dx
###Output
_____no_output_____
###Markdown
Now that we have the right hand side function, the initial conditions are set such that the pendulum is in the vertical equilibrium and a slight initial rate is set for each speed to ensure the pendulum falls. The equations can then be integrated with SciPy's `odeint` function given a time series.
###Code
x0 = np.hstack((0.0, # q0
np.pi / 2 * np.ones(len(q) - 1), # q1...qn+1
1e-3 * np.ones(len(u)))) # u0...un+1
t = np.linspace(0.0, 10.0, num=500) # Time vector
x = odeint(right_hand_side, x0, t, args=(parameter_vals,)) # Numerical integration
###Output
_____no_output_____
###Markdown
Plotting========The results of the simulation can be plotted with matplotlib. First, load the plotting functionality.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(8.0, 6.0)
###Output
_____no_output_____
###Markdown
The coordinate trajectories are plotted below.
###Code
lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
###Output
_____no_output_____
###Markdown
And the generalized speed trajectories.
###Code
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
###Output
_____no_output_____
###Markdown
Animation=========matplotlib now includes very nice animation functions for animating matplotlib plots. First we import the necessary functions for creating the animation.
###Code
from matplotlib import animation
from matplotlib.patches import Rectangle
###Output
_____no_output_____
###Markdown
The following function was modeled from Jake Vanderplas's [post on matplotlib animations](http://jakevdp.github.com/blog/2012/08/18/matplotlib-animation-tutorial/).
###Code
def animate_pendulum(t, states, length, filename=None):
"""Animates the n-pendulum and optionally saves it to file.
Parameters
----------
t : ndarray, shape(m)
Time array.
states: ndarray, shape(m,p)
State time history.
length: float
The length of the pendulum links.
filename: string or None, optional
If a filename is given, a movie file of the animation will be saved. This may take some time.
Returns
-------
fig : matplotlib.Figure
The figure.
anim : matplotlib.FuncAnimation
The animation.
"""
# the number of pendulum bobs
numpoints = states.shape[1] // 2
# first set up the figure, the axis, and the plot elements we want to animate
fig = plt.figure()
# some dimensions
cart_width = 0.4
cart_height = 0.2
# set the limits based on the motion
xmin = np.around(states[:, 0].min() - cart_width / 2.0, 1)
xmax = np.around(states[:, 0].max() + cart_width / 2.0, 1)
# create the axes
ax = plt.axes(xlim=(xmin, xmax), ylim=(-1.1, 1.1), aspect='equal')
# display the current time
time_text = ax.text(0.04, 0.9, '', transform=ax.transAxes)
# create a rectangular cart
rect = Rectangle([states[0, 0] - cart_width / 2.0, -cart_height / 2],
cart_width, cart_height, fill=True, color='red',
ec='black')
ax.add_patch(rect)
# blank line for the pendulum
line, = ax.plot([], [], lw=2, marker='o', markersize=6)
# initialization function: plot the background of each frame
def init():
time_text.set_text('')
rect.set_xy((0.0, 0.0))
line.set_data([], [])
return time_text, rect, line,
# animation function: update the objects
def animate(i):
time_text.set_text('time = {:2.2f}'.format(t[i]))
rect.set_xy((states[i, 0] - cart_width / 2.0, -cart_height / 2))
x = np.hstack((states[i, 0], np.zeros((numpoints - 1))))
y = np.zeros((numpoints))
for j in np.arange(1, numpoints):
x[j] = x[j - 1] + length * np.cos(states[i, j])
y[j] = y[j - 1] + length * np.sin(states[i, j])
line.set_data(x, y)
return time_text, rect, line,
# call the animator function
anim = animation.FuncAnimation(fig, animate, frames=len(t), init_func=init,
interval=t[-1] / len(t) * 1000, blit=True, repeat=False)
# save the animation if a filename is given
if filename is not None:
anim.save(filename, fps=30, writer="avconv", codec='libx264')
###Output
_____no_output_____
###Markdown
Now we can create the animation of the pendulum. This animation will show the open loop dynamics.
###Code
animate_pendulum(t, x, arm_length, filename="open-loop.mp4")
from IPython.display import HTML
html = \
"""
<video width="640" height="480" controls>
<source src="open-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/Nj3_npq7MZI.
</video>
"""
HTML(html)
###Output
_____no_output_____
###Markdown
Controller Design=================The n-link pendulum can be balanced such that all of the links are inverted above the cart by applying the correct lateral force to the cart. We can design a full state feedback controller based on a linear model of the pendulum about its upright equilibrium point. We'll start by specifying the equilibrium point and parameters in dictionaries. We make sure to use SymPy types in the equilibrium point to ensure proper cancellations in the linearization.
###Code
equilibrium_point = [sm.S(0)] + [sm.pi / 2] * (len(q) - 1) + [sm.S(0)] * len(u)
equilibrium_dict = dict(zip(q + u, equilibrium_point))
equilibrium_dict
###Output
_____no_output_____
###Markdown
The `KanesMethod` class has a method that linearizes the forcing vector about generic state and input perturbation vectors. The equilibrium point and numerical constants can then be substituted in to give the linear system in this form: $M\dot{x}=F_Ax+F_Br$. The state and input matrices, $A$ and $B$, can then be computed by left-multiplying by the inverse of the mass matrix: $A=M^{-1}F_A$ and $B=M^{-1}F_B$.
###Code
M, F_A, F_B, r = kane.linearize(new_method=True, op_point=equilibrium_dict)
sm.simplify(M)
sm.simplify(F_A)
sm.simplify(F_B)
###Output
_____no_output_____
###Markdown
Now the numerical $A$ and $B$ matrices can be formed. First substitute numerical parameter values into $M$, $F_A$, and $F_B$.
###Code
parameter_dict = dict(zip(parameters, parameter_vals))
parameter_dict
M_num = sm.matrix2numpy(M.subs(parameter_dict), dtype=float)
F_A_num = sm.matrix2numpy(F_A.subs(parameter_dict), dtype=float)
F_B_num = sm.matrix2numpy(F_B.subs(parameter_dict), dtype=float)
A = np.linalg.solve(M_num, F_A_num)
B = np.linalg.solve(M_num, F_B_num)
print(A)
print(B)
###Output
[[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 5.00000000e+02]
[ 2.50000000e+03]
[ -1.73472348e-13]
[ 3.46944695e-13]
[ -1.73472348e-13]
[ -0.00000000e+00]]
###Markdown
Now that we have a linear system, the python-control package can be used to design an optimal controller for the system.
###Code
import control
from numpy.linalg import matrix_rank
###Output
_____no_output_____
###Markdown
First we can check to see if the system is, in fact, controllable. The rank of the controllability matrix must be equal to the number of rows in $A$, but the `matrix_rank` algorithm is numerically ill conditioned and for certain values of $n$ this will fail, as seen below for $n=5$. Nevertheless, the system is controllable, no matter the number of links.
###Code
matrix_rank(control.ctrb(A, B)) == A.shape[0]
###Output
_____no_output_____
###Markdown
So now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity.
###Code
K, X, E = control.lqr(A, B, np.eye(A.shape[0]), 1);  # Q = identity, R = 1
###Output
_____no_output_____
###Markdown
The gains can now be used to define the required input during simulation to stabilize the system. The input $r$ is simply the gain vector multiplied by the error in the state vector from the equilibrium point, $r(t)=K(x_{eq} - x(t))$.
###Code
def right_hand_side(x, t, args):
"""Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
"""
r = np.dot(K, equilibrium_point - x) # The controller
arguments = np.hstack((x, r, args)) # States, input, and parameters
dx = np.array(solve(M_func(*arguments), # Solving for the derivatives
f_func(*arguments))).T[0]
return dx
###Output
_____no_output_____
###Markdown
Now we can simulate and animate the system to see if the controller works.
###Code
x0 = np.hstack((0,
np.pi / 2 * np.ones(len(q) - 1),
1 * np.ones(len(u))))
t = np.linspace(0.0, 10.0, num=500)
x = odeint(right_hand_side, x0, t, args=(parameter_vals,))
###Output
_____no_output_____
###Markdown
The plots show that we seem to have a stable system.
###Code
lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
animate_pendulum(t, x, arm_length, filename="closed-loop.mp4")
from IPython.display import HTML
html = \
"""
<video width="640" height="480" controls>
<source src="closed-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/SpgBHqW9om0
</video>
"""
HTML(html)
###Output
_____no_output_____
###Markdown
The video clearly shows that the controller can balance all $n$ of the pendulum links. The weightings in the lqr design can be tweaked to give different performance if needed.This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica. The IPython notebook for this example can be downloaded from https://github.com/pydy/pydy/tree/master/examples/npendulum. You can try out different $n$ values. I've gotten the equations of motion to compute for an open loop simulation of 10 links. My computer ran out of memory when I tried to compute for $n=50$. The controller weightings and initial conditions will probably have to be adjusted for better performance for $n>5$, but it should work.
###Code
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, sympy, scipy, matplotlib, control
###Output
Installed version_information.py. To use it, type:
%load_ext version_information
|
ImageProcessing/3-filtering-spectrum.ipynb | ###Markdown
Filtering Mean filter
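The cells in this notebook rely on an import cell that is not shown here; a plausible minimal set of imports, consistent with the calls used below (module locations such as `scipy.fftpack.fft` for the bare `fft` and `matplotlib.pyplot.imshow` for the bare `imshow` are assumptions), would be:

```python
# Sketch of the imports assumed by the cells below (the actual import cell is not shown).
import wave
from time import time

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import mlab, colors
from matplotlib.pyplot import imshow           # bare `imshow` below is assumed to be plt.imshow

from scipy import ndimage, signal
from scipy.fftpack import fft                  # bare `fft` below is assumed to come from scipy.fftpack
from scipy.misc import derivative              # used by the LoG/DoG cells (older SciPy releases)

from skimage.io import imread
from skimage.color import rgb2gray
from skimage.transform import resize, rotate
from skimage.filters import gaussian, median, sobel, prewitt, roberts, gabor_kernel
from skimage.feature import canny
from skimage.morphology import square
from skimage.restoration import denoise_bilateral, denoise_nl_means

from ipywidgets import interact
from tqdm import tqdm
```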
###Code
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
im[25, 70] = 0
vals = (val_start, val_end, val_step) = 1, 21, 2
val_default = 3
@interact(N=vals)
def g(N=val_default):
fig = plt.figure(figsize=(10, 3))
ax = fig.add_subplot(1, 2, 1)
imshow(im)
# plt.axis('off')
plt.title('original image {0}x{1}'.format(im.shape[0], im.shape[1]))
ax = fig.add_subplot(1, 2, 2)
w = np.ones((N, N)) / (N ** 2) # N×N smoothing (mean) filter
imshow(ndimage.convolve(im, w))
plt.axis('off')
plt.title('{0}x{0} average filter'.format(N))
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian filter
###Code
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
size = min(im.shape[0], im.shape[1])
impulse_response = np.zeros((size, size))
impulse_response[size//2, size//2] = 1
vals = (val_start, val_end, val_step) = 1, 20, 1
val_default = 3
@interact(sigma=vals)
def g(sigma=val_default):
fig = plt.figure(figsize=(13, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image {0}x{1}'.format(im.shape[0], im.shape[1]))
ax = fig.add_subplot(1, 3, 2)
imshow(gaussian(im, sigma=sigma))
plt.axis('off')
plt.title('filterd image')
ax = fig.add_subplot(1, 3, 3)
imshow(gaussian(impulse_response, sigma=sigma))
plt.colorbar()
plt.tight_layout()
plt.title('Gaussian filter with $\sigma$={}'.format(sigma))
plt.show()
###Output
_____no_output_____
###Markdown
Gabor filter
###Code
fig = plt.figure(figsize=(20,9))
for j in tqdm(range(3)):
for i in tqdm(range(5), leave=False):
ax = fig.add_subplot(3, 5, i+1 + j*5)
imshow(gabor_kernel(frequency=0.1, bandwidth=1/(2*j+1), theta=0.4 * i).real, cmap="gray")
plt.tight_layout()
plt.colorbar()
plt.show()
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
fig = plt.figure(figsize=(20,9))
for j in tqdm(range(3)):
for i in tqdm(range(5), leave=False):
ax = fig.add_subplot(3, 5, i+1 + j*5)
gabor = gabor_kernel(frequency=0.1, bandwidth=1/(2*j+1), theta=0.4 * i).real
im_gabor = signal.fftconvolve(im, gabor, mode='same') # use FFT for convolution
imshow(im_gabor, cmap="gray")
plt.tight_layout()
plt.colorbar()
plt.plot()
###Output
_____no_output_____
###Markdown
Derivative filters Sobel filter, Prewitt filter
###Code
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
kernels = {'diff': np.array([[ 0,0,0],
[-1,0,1],
[ 0,0,0]]) / 2,
'prewitt': np.array([[-1,0,1],
[-1,0,1],
[-1,0,1]]) / 6,
'sobel': np.array([[-1,0,1],
[-2,0,2],
[-1,0,1]]) / 8
}
@interact(kernel=['diff', 'prewitt', 'sobel'],
val_max=(0.1, 0.5, 0.1))
def g(kernel='diff', val_max=0.1):
k = kernels[kernel]
imh = ndimage.convolve(im, k)
imv = ndimage.convolve(im, k.T)
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(imh, cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.title('$I_x$')
ax = fig.add_subplot(1, 3, 2)
imshow(imv, cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.title('$I_y$')
ax = fig.add_subplot(1, 3, 3)
imshow(np.sqrt(imv**2 + imh**2), cmap="gray", vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('$\sqrt{I_x^2 + I_y^2}$')
plt.show()
###Output
_____no_output_____
###Markdown
Sobel, Prewitt, Roberts
###Code
@interact(val_max=(0.1, 0.5, 0.1))
def g(val_max=0.1):
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(sobel(im), vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('Sobel')
ax = fig.add_subplot(1, 3, 2)
imshow(prewitt(im), vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('Prewitt')
ax = fig.add_subplot(1, 3, 3)
imshow(roberts(im), vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('Roberts')
plt.show()
###Output
_____no_output_____
###Markdown
Laplacian filter
###Code
L4 = np.array([[0, 1, 0],
[1,-4, 1],
[0, 1, 0]])
imshow(ndimage.convolve(im, L4), cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.show()
imshow(ndimage.convolve(gaussian(im, sigma=1), L4), cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Laplacian of Gaussian (LoG) and zero crossings
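The cells below smooth the image with a Gaussian of width $\sigma$ and then apply the discrete Laplacian kernel `L4`; since convolution commutes with differentiation, this is equivalent to a single convolution with a Laplacian-of-Gaussian kernel,

$$\nabla^{2}\left(G_{\sigma} * I\right) = \left(\nabla^{2} G_{\sigma}\right) * I,$$

and the zero crossings of this response are then drawn as edge candidates.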
###Code
fig = plt.figure(figsize=(20,6))
for i in range(5):
ax = fig.add_subplot(2, 5, i+1)
iml = ndimage.convolve(gaussian(im, sigma=i+1), L4)
m = np.abs(iml).max() / 2
imshow(iml, cmap="bwr", vmin=-m, vmax=m)
plt.axis('off')
ax = fig.add_subplot(2, 5, i+1 + 5)
plt.contour(iml, levels=[0])
plt.gca().invert_yaxis()
plt.axis('off')
plt.show()
@interact(sigma=(0.1,10,0.1))
def g(sigma=2):
fig = plt.figure(figsize=(20,6))
ax = fig.add_subplot(1, 2, 1)
iml = ndimage.convolve(gaussian(im, sigma=sigma), L4)
m = np.abs(iml).max() / 2
imshow(iml, cmap="bwr", vmin=-m, vmax=m)
plt.axis('off')
ax = fig.add_subplot(1, 2, 2)
plt.contour(iml, levels=[0])
plt.gca().invert_yaxis()
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Laplacian-of-Gaussian and Difference-of-Gaussian LoG
###Code
def gauss(x, sigma=1):
return np.exp(- x**2 / 2 / sigma**2) / 2 / np.pi / sigma
def grad_gauss(x, sigma, n=1):
return derivative(gauss, x, dx=1e-6, n=n, args=(sigma,)) # compute the n-th derivative of the Gaussian
@interact(sigma=(0.1, 2, 0.05))
def g(sigma=1):
x = np.arange(-5, 5, 0.1)
plt.plot(x, gauss(x, sigma=sigma), label="f(x)")
plt.plot(x, grad_gauss(x, sigma=sigma), label="f'(x)")
plt.plot(x, grad_gauss(x, sigma=sigma, n=2), label="f''(x)")
plt.title("Gauss f(x) and derivatives f'(x), f''(x)")
plt.xlabel("x")
plt.ylim(-0.5, 0.5)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
DoG
###Code
@interact(sigma1=(0.1, 2, 0.05),
sigma2=(0.1, 2, 0.05))
def g(sigma1=1,sigma2=2):
x = np.arange(-5, 5, 0.1)
plt.plot(x, gauss(x, sigma=sigma1), label="f1(x)")
plt.plot(x, gauss(x, sigma=sigma2), label="f2(x)")
plt.plot(x, gauss(x, sigma=sigma1) - gauss(x, sigma=sigma2), label="f1 - f2")
plt.title("f1(x), f2(x), and f1(x) - f2(x)")
plt.xlabel("x")
plt.ylim(-0.5, 0.5)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Canny edge
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(sigma=(0.1, 10, 0.1),
th_low=(0, 255, 1),
th_high=(0, 255, 1)
)
def g(sigma=5, th_low=0, th_high=40):
fig = plt.figure(figsize=(20,6))
fig.add_subplot(1, 2, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 2, 2)
im_edge = canny(im, sigma=sigma,
low_threshold=th_low/255,
high_threshold=th_high/255)
imshow(im_edge, cmap='gray_r')
plt.axis('off')
plt.title('Canny edge with th_low={0} and th_high={1}'.format(th_low, th_high))
plt.show()
###Output
_____no_output_____
###Markdown
Unsharp masking
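The cell below implements the standard unsharp-masking formula, where $G_{\sigma} * f$ is the Gaussian-blurred image and $k$ controls the amount of sharpening (the result is clipped back to $[0, 1]$):

$$g = f + k\left(f - G_{\sigma} * f\right)$$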
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(sigma=(0, 10, 1), k=(1,10,1))
def g(sigma=7, k=3):
fig = plt.figure(figsize=(15, 5))
im_s = gaussian(im, sigma=sigma)
img1 = im + (im - im_s) * k
img1[img1 > 1] = 1
img1[img1 < 0] = 0
ax = fig.add_subplot(1, 2, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 2, 2)
imshow(img1)
plt.axis('off')
plt.title('shapend image')
plt.show()
def box(x, th=2):
return 1 if np.abs(x) < th else 0
def gauss(x, sigma=1):
return np.exp(- x**2 / 2 / sigma**2) / 2 / np.pi / sigma
@interact(sigma=(0, 2, 0.1), k=(0.1,3,0.1),
show_f=True, show_h=True, show_g=True, show_fg=True, show_result=True )
def g(sigma=0.2, k=3, show_f=True, show_h=True, show_g=True, show_fg=True, show_result=True):
x = np.arange(-5, 5, 0.01)
f = np.array([box(i) for i in x])
h = gauss(x, sigma=sigma)
if show_f: plt.plot(x, f, label="f")
if show_h: plt.plot(x, h, label="h")
g = signal.convolve(f, h, mode='same') / sum(h)
if show_g: plt.plot(x, g, label='g=f*h')
if show_fg: plt.plot(x, f - g, label='f - g')
if show_result: plt.plot(x, f + k * (f - g), label='f + k(f - g)')
plt.ylim(-2, 3)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Nonlinear filters: median filter
###Code
im = imread('salt_and_pepper.png')
@interact(sigma=(0, 10, 1), N=(1, 10, 1))
def g(sigma=2, N=3):
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 3, 2)
imshow(gaussian(im, sigma=sigma))
plt.axis('off')
plt.title('Gaussian filter with $\sigma$={}'.format(sigma))
ax = fig.add_subplot(1, 3, 3)
imshow(median(im, square(N)))
plt.axis('off')
plt.title('Median filter with {0}x{0} patch'.format(N))
plt.show()
###Output
_____no_output_____
###Markdown
Nonlinear filters: bilateral filter
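For reference, `denoise_bilateral` combines a spatial weight (set by `sigma_spatial`) with an intensity weight (set by `sigma_color`); a standard form of the bilateral filter is

$$\hat{I}(p) = \frac{1}{W_p} \sum_{q} G_{\sigma_s}\!\left(\lVert p - q \rVert\right) G_{\sigma_r}\!\left(\lvert I(p) - I(q) \rvert\right) I(q),$$

where $W_p$ is the sum of the weights; the exact kernel used by scikit-image may differ in detail.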
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(sigma_spatial=(0, 15, 1), sigma_color=(0, 0.5, 0.1))
def g(sigma_spatial=1, sigma_color=0.1):
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 3, 2)
imshow(gaussian(im, sigma=sigma_spatial))
plt.axis('off')
plt.title('Gaussian filter with sigma={}'.format(sigma_spatial))
ax = fig.add_subplot(1, 3, 3)
im_denoise = denoise_bilateral(im,
sigma_spatial=sigma_spatial,
sigma_color=sigma_color)
imshow(im_denoise)
plt.axis('off')
plt.title('sigma_spatial={0} simga_color={1}'.format(sigma_spatial, sigma_color))
plt.show()
###Output
_____no_output_____
###Markdown
Nonlinear filters: non-local means filter
###Code
im = rgb2gray(imread('girl.jpg'))
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 3, 2)
im_denoise = denoise_bilateral(im, sigma_spatial=5, sigma_color=0.1)
imshow(im_denoise)
plt.axis('off')
plt.title('bilateral filter')
ax = fig.add_subplot(1, 3, 3)
im_denoise = denoise_nl_means(im, patch_size=7, patch_distance=11)
imshow(im_denoise)
plt.axis('off')
plt.title('non-local mean filter')
plt.show()
###Output
_____no_output_____
###Markdown
Fourier transform of audio data
###Code
def wavread(file, dtype=np.int16):
chunk_size = 1024 * 8
with wave.open(file, 'rb') as f:
nchannels = f.getnchannels()
audio_data = []
while True:
chunk = f.readframes(chunk_size)
audio_data.append(chunk)
if chunk == b'': break
audio_data = b''.join(audio_data)
audio_data = np.frombuffer(audio_data, dtype=dtype)
audio_data = audio_data.reshape((-1, nchannels)).transpose()
return audio_data
audio_data = wavread('start.wav') # 22kHz, 2 channels stereo, 16 bits/sample
sr = 22000 # sampling rate
plt.plot(audio_data[0], label='L channel')
plt.plot(audio_data[1], label='R channel')
plt.title('wave file')
plt.xlabel('time [sec]')
plt.legend()
idx = np.arange(0, audio_data.shape[1], sr * 0.25) # 1/22000 sec per sample, tick every 0.25 sec
plt.xticks(idx, idx / sr)
plt.show()
plt.plot(audio_data[0, :1000], label='L channel')
plt.plot(audio_data[1, :1000], label='R channel')
plt.title('first 1000 sampling points')
plt.xlabel('time [sec]')
plt.legend()
idx = np.arange(0, audio_data.shape[1], sr * 0.01) # 1/22000 sec per sample, tick every 0.01 sec
plt.xticks(idx, idx / sr)
plt.xlim(0, 1000)
plt.show()
power_spec = np.abs(fft(audio_data[0])) # FFT power spectrum (absolute value of complex spectrum)
db_power_spec = np.log10(power_spec) * 20 # in dB
fps = sr / len(db_power_spec) # frequency per sample
tick_idx = np.arange(0, len(db_power_spec), 2000 / fps) # tick every 2000 Hz
tick_label = np.ceil(tick_idx * fps / 1000).astype(int) # in kHz
plt.plot(db_power_spec)
plt.title('power spectrum')
plt.xlabel('frequency [kHz]')
plt.ylabel('power [dB]')
plt.xticks(tick_idx, tick_label)
plt.show()
plt.plot(db_power_spec[:len(db_power_spec)//2])
plt.title('power spectrum')
plt.xlabel('frequency [kHz]')
plt.ylabel('power [dB]')
plt.xticks(tick_idx, tick_label)
plt.xlim(0, len(db_power_spec)//2)
plt.show()
###Output
_____no_output_____
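###Markdown
A note on reading the spectrum: for an $N$-point FFT of a signal sampled at rate $f_s$, bin $k$ corresponds to the frequency $f_k = k \cdot f_s / N$ Hz, which is exactly the conversion done by `fps = sr / len(db_power_spec)` when building the tick labels. Because the input signal is real-valued, the upper half of the spectrum mirrors the lower half, which is why the last plot keeps only the first $N/2$ bins.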
###Markdown
Displaying the spectrogram using the short-time Fourier transform
###Code
sr = 22000 # sampling rate
B, F, T = mlab.specgram(audio_data[0], # left channel
Fs=sr)
imshow(B,
norm=colors.LogNorm(),
cmap='jet')
def find_closest_val(T, t):
X = np.abs(T - t)
idx = np.where(X == X.min())
return idx[0][0]
yticks = np.arange(0, 11, 2)  # 0, 2, ..., 10 kHz ticks for the frequency (y) axis
yidx = [find_closest_val(F/1000, f) for f in yticks]
xticks = np.arange(0, 1.4, 0.25)  # 0, 0.25, ..., 1.25 sec ticks for the time (x) axis
xidx = [find_closest_val(T, t) for t in xticks]
plt.yticks(yidx, yticks)
plt.xticks(xidx, xticks)
plt.xlabel('time [sec]')
plt.ylabel('frequency [kHz]')
plt.gca().invert_yaxis()
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Two-dimensional Fourier transform of an image
###Code
im = rgb2gray(imread('honeycomb.jpg'))
@interact(angle=(0, 360, 5))
def g(angle=0):
fig = plt.figure(figsize=(10,5))
fig.add_subplot(1, 2, 1)
im_rot = rotate(im, angle=angle, preserve_range=True)
imshow(im_rot)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 2, 2)
im_freq = np.fft.fft2(im_rot)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq)) * 20, vmin=0)
plt.axis('off')
plt.title('power spectrum (log scale)')
plt.show()
###Output
_____no_output_____
###Markdown
Comparing the computational cost of FFT-based and direct filtering
###Code
im = rgb2gray(imread('girl.jpg'))
time_ndconv = []
time_sigconv = []
time_sigconvfft = []
time_fftconv = []
N_range = range(3, 31, 2)
for N in N_range:
    w = np.ones((N, N)) / (N ** 2) # N×N average filter
print(w.shape)
st = time()
ndimage.convolve(im, w)
time_ndconv.append(time() - st)
if N < 15:
st = time()
signal.convolve(im, w, method='direct', mode='same')
time_sigconv.append(time() - st)
st = time()
signal.convolve(im, w, method='fft', mode='same')
time_sigconvfft.append(time() - st)
st = time()
signal.fftconvolve(im, w, mode='same')
time_fftconv.append(time() - st)
for yscale,ymin in [('linear', 0), ('log', 0.01)]:
plt.plot(N_range, time_ndconv, label='ndimage.convolve')
plt.plot(N_range[:len(time_sigconv)], time_sigconv, label='signal.convolve')
plt.plot(N_range, time_sigconvfft, label='signal.convolve with FFT')
plt.plot(N_range, time_fftconv, label='signal.fftconvolve')
plt.legend()
plt.ylabel('time [s]')
plt.xlabel('filter size N')
plt.yscale(yscale)
plt.ylim(ymin)
plt.show()
###Output
_____no_output_____
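###Markdown
What the timings illustrate: direct 2-D convolution of a $P \times Q$ image with an $N \times N$ kernel costs $O(PQN^2)$ operations, so it grows quadratically with the filter size, whereas FFT-based convolution costs roughly $O(PQ \log PQ)$ independently of $N$ (plus the transform of the padded kernel). The FFT-based variants (`signal.fftconvolve` and `signal.convolve` with `method='fft'`) are therefore expected to be largely insensitive to $N$, while the direct methods slow down rapidly as the filter grows.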
###Markdown
Low-pass filter: circular box filter
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(radius=(0, 200, 5))
def g(radius=30):
fig = plt.figure(figsize=(20,5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq)) * 20, vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
im_freq2 *= 0.0001
rr, cc = skimage.draw.circle(h//2, w//2, radius)
im_freq2[rr, cc] = im_freq[rr, cc]
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2)) * 20, vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
imshow(np.abs(g))
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian low-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
impulse = np.ones(im.shape) * np.finfo(np.float32).eps # avoid 0-division in log
h, w = im.shape
impulse[h//2, w//2] = 1
@interact(sigma=(1, 50, 5))
def g(sigma=3):
fig = plt.figure(figsize=(20,5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq) * 20), vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
im_freq2 *= gaussian(impulse, sigma=sigma)
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2) * 20), vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
imshow(np.abs(g))
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
High-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(radius=(0, 20, 1))
def g(radius=10):
fig = plt.figure(figsize=(20, 2.5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
fig.add_subplot(1, 4, 2)
imshow(np.log10(np.abs(im_freq) * 20), vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
rr, cc = skimage.draw.circle(h//2, w//2, radius)
im_freq2[rr, cc] = 0.0001
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2) * 20), vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
# imshow(np.abs(g))
imshow(g.real)
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian high-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
impulse = np.ones(im.shape) * np.finfo(np.float32).eps # avoid 0-division in log
h, w = im.shape
impulse[h//2, w//2] = 1
@interact(sigma=(1, 20, 1))
def g(sigma=5):
fig = plt.figure(figsize=(20,5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq) * 20), vmin=0, vmax=5)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
gauss = gaussian(impulse, sigma=sigma)
im_freq2 *= (gauss.max()*1.01 - gauss)
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2) * 20), vmin=0, vmax=5)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
# imshow(np.abs(g))
imshow(g.real)
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Band-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
val_range = (0, 200, 10)
@interact(radius1=val_range,
radius2=val_range)
def g(radius1=60, radius2=20):
fig = plt.figure(figsize=(20, 2.5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq)) * 20, vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
fig.add_subplot(1, 4, 3)
im_freq2 = im_freq.copy()
im_freq2 *= 0.0001
rr, cc = skimage.draw.circle(h//2, w//2, radius1)
im_freq2[rr, cc] = im_freq[rr, cc]
rr, cc = skimage.draw.circle(h//2, w//2, radius2)
im_freq2[rr, cc] = 0.0001
imshow(np.log10(np.abs(im_freq2)) * 20, vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
# imshow(np.abs(g))
imshow(g.real)
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Filtering: average filter
###Code
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
im[25, 70] = 0
vals = (val_start, val_end, val_step) = 1, 21, 2
val_default = 3
@interact(N=vals)
def g(N=val_default):
fig = plt.figure(figsize=(10, 3))
ax = fig.add_subplot(1, 2, 1)
imshow(im)
# plt.axis('off')
plt.title('original image {0}x{1}'.format(im.shape[0], im.shape[1]))
ax = fig.add_subplot(1, 2, 2)
    w = np.ones((N, N)) / (N ** 2) # N×N averaging (smoothing) filter
imshow(ndimage.convolve(im, w))
plt.axis('off')
plt.title('{0}x{0} average filter'.format(N))
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian filter
###Code
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
size = min(im.shape[0], im.shape[1])
impulse_response = np.zeros((size, size))
impulse_response[size//2, size//2] = 1
vals = (val_start, val_end, val_step) = 1, 20, 1
val_default = 3
@interact(sigma=vals)
def g(sigma=val_default):
fig = plt.figure(figsize=(13, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image {0}x{1}'.format(im.shape[0], im.shape[1]))
ax = fig.add_subplot(1, 3, 2)
imshow(gaussian(im, sigma=sigma))
plt.axis('off')
plt.title('filterd image')
ax = fig.add_subplot(1, 3, 3)
    imshow(gaussian(impulse_response, sigma=sigma))
plt.colorbar()
plt.tight_layout()
plt.title('Gaussian filter with $\sigma$={}'.format(sigma))
plt.show()
###Output
_____no_output_____
###Markdown
Gabor filter
###Code
fig = plt.figure(figsize=(20,9))
for j in tqdm(range(3)):
for i in tqdm(range(5), leave=False):
ax = fig.add_subplot(3, 5, i+1 + j*5)
imshow(gabor_kernel(frequency=0.1, bandwidth=1/(2*j+1), theta=0.4 * i).real, cmap="gray")
plt.tight_layout()
plt.colorbar()
plt.show()
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
fig = plt.figure(figsize=(20,9))
for j in tqdm(range(3)):
for i in tqdm(range(5), leave=False):
ax = fig.add_subplot(3, 5, i+1 + j*5)
gabor = gabor_kernel(frequency=0.1, bandwidth=1/(2*j+1), theta=0.4 * i).real
im_gabor = signal.fftconvolve(im, gabor, mode='same') # use FFT for convolution
imshow(im_gabor, cmap="gray")
plt.tight_layout()
plt.colorbar()
plt.plot()
###Output
_____no_output_____
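###Markdown
For reference, a Gabor kernel is a sinusoidal carrier modulated by a Gaussian envelope; in the isotropic case its real part has the form $$g(x, y) \propto \exp\!\Bigl(-\frac{x'^2 + y'^2}{2\sigma^2}\Bigr)\cos(2\pi f x'),$$ where $(x', y')$ are the coordinates rotated by `theta` and $f$ is `frequency` (the exact parametrization, including how `bandwidth` sets $\sigma$, follows scikit-image's conventions for `gabor_kernel`). Convolving with a bank of such kernels, as in the second cell, responds to oriented structures at a particular scale.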
###Markdown
Derivative filters: Sobel filter, Prewitt filter
###Code
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
kernels = {'diff': np.array([[ 0,0,0],
[-1,0,1],
[ 0,0,0]]) / 2,
'prewitt': np.array([[-1,0,1],
[-1,0,1],
[-1,0,1]]) / 6,
'sobel': np.array([[-1,0,1],
[-2,0,2],
[-1,0,1]]) / 8
}
@interact(kernel=['diff', 'prewitt', 'sobel'],
val_max=(0.1, 0.5, 0.1))
def g(kernel='diff', val_max=0.1):
k = kernels[kernel]
imh = ndimage.convolve(im, k)
imv = ndimage.convolve(im, k.T)
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(imh, cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.title('$I_x$')
ax = fig.add_subplot(1, 3, 2)
imshow(imv, cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.title('$I_y$')
ax = fig.add_subplot(1, 3, 3)
imshow(np.sqrt(imv**2 + imh**2), cmap="gray", vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('$\sqrt{I_x^2 + I_y^2}$')
plt.show()
###Output
_____no_output_____
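###Markdown
As a small extension of the cell above, the same pair of derivative images also gives the local gradient orientation $\theta = \operatorname{atan2}(I_y, I_x)$. A sketch using scikit-image's built-in Sobel derivatives (`sobel_h`, `sobel_v`) is shown below; it assumes `girl.jpg` is available in the working directory, as in the other cells.
###Code
import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.filters import sobel, sobel_h, sobel_v
im = rgb2gray(imread('girl.jpg'))
im = resize(im, (im.shape[0]//5, im.shape[1]//5))
gy = sobel_h(im)  # approximates dI/dy (responds to horizontal edges)
gx = sobel_v(im)  # approximates dI/dx (responds to vertical edges)
orientation = np.arctan2(gy, gx)  # gradient orientation in radians, in (-pi, pi]
fig = plt.figure(figsize=(10, 3))
fig.add_subplot(1, 2, 1)
plt.imshow(sobel(im), cmap='gray', vmin=0, vmax=0.3)
plt.axis('off')
plt.title('gradient magnitude (Sobel)')
fig.add_subplot(1, 2, 2)
plt.imshow(orientation, cmap='hsv')
plt.axis('off')
plt.colorbar()
plt.title('gradient orientation [rad]')
plt.show()
###Output
_____no_output_____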
###Markdown
Sobel, Prewitt, Roberts
###Code
@interact(val_max=(0.1, 0.5, 0.1))
def g(val_max=0.1):
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(sobel(im), vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('Sobel')
ax = fig.add_subplot(1, 3, 2)
imshow(prewitt(im), vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('Prewitt')
ax = fig.add_subplot(1, 3, 3)
imshow(roberts(im), vmin=0, vmax=val_max)
plt.axis('off')
plt.colorbar()
plt.title('Roberts')
plt.show()
###Output
_____no_output_____
###Markdown
Laplacian filter
###Code
L4 = np.array([[0, 1, 0],
[1,-4, 1],
[0, 1, 0]])
imshow(ndimage.convolve(im, L4), cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.show()
imshow(ndimage.convolve(gaussian(im, sigma=1), L4), cmap="bwr", vmin=-0.5, vmax=0.5)
plt.axis('off')
plt.colorbar()
plt.show()
###Output
_____no_output_____
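###Markdown
The kernel `L4` above is the standard 4-neighbour discrete approximation of the Laplacian $$\nabla^2 I = \frac{\partial^2 I}{\partial x^2} + \frac{\partial^2 I}{\partial y^2},$$ which responds strongly at edges but also amplifies noise; applying it after Gaussian smoothing (second plot) is equivalent to filtering with a Laplacian of Gaussian, which is the subject of the next section.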
###Markdown
Laplacian of Gaussian (LoG) and zero crossings
###Code
fig = plt.figure(figsize=(20,6))
for i in range(5):
ax = fig.add_subplot(2, 5, i+1)
iml = ndimage.convolve(gaussian(im, sigma=i+1), L4)
m = np.abs(iml).max() / 2
imshow(iml, cmap="bwr", vmin=-m, vmax=m)
plt.axis('off')
ax = fig.add_subplot(2, 5, i+1 + 5)
plt.contour(iml, levels=[0])
plt.gca().invert_yaxis()
plt.axis('off')
plt.show()
@interact(sigma=(0.1,10,0.1))
def g(sigma=2):
fig = plt.figure(figsize=(20,6))
ax = fig.add_subplot(1, 2, 1)
iml = ndimage.convolve(gaussian(im, sigma=sigma), L4)
m = np.abs(iml).max() / 2
imshow(iml, cmap="bwr", vmin=-m, vmax=m)
plt.axis('off')
ax = fig.add_subplot(1, 2, 2)
plt.contour(iml, levels=[0])
plt.gca().invert_yaxis()
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Laplacian-of-Gaussian and Difference-of-Gaussian LoG
###Code
def gauss(x, sigma=1):
return np.exp(- x**2 / 2 / sigma**2) / 2 / np.pi / sigma
def grad_gauss(x, sigma, n=1):
    return derivative(gauss, x, dx=1e-6, n=n, args=(sigma,))  # compute the n-th derivative
@interact(sigma=(0.1, 2, 0.05))
def g(sigma=1):
x = np.arange(-5, 5, 0.1)
plt.plot(x, gauss(x, sigma=sigma), label="f(x)")
plt.plot(x, grad_gauss(x, sigma=sigma), label="f'(x)")
plt.plot(x, grad_gauss(x, sigma=sigma, n=2), label="f''(x)")
plt.title("Gauss f(x) and derivatives f'(x), f''(x)")
plt.xlabel("x")
plt.ylim(-0.5, 0.5)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
DoG
###Code
@interact(sigma1=(0.1, 2, 0.05),
sigma2=(0.1, 2, 0.05))
def g(sigma1=1,sigma2=2):
x = np.arange(-5, 5, 0.1)
plt.plot(x, gauss(x, sigma=sigma1), label="f1(x)")
plt.plot(x, gauss(x, sigma=sigma2), label="f2(x)")
plt.plot(x, gauss(x, sigma=sigma1) - gauss(x, sigma=sigma2), label="f1 - f2")
plt.title("f1(x), f2(x), and f1(x) - f2(x)")
plt.xlabel("x")
plt.ylim(-0.5, 0.5)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Canny edge
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(sigma=(0.1, 10, 0.1),
th_low=(0, 255, 1),
th_high=(0, 255, 1)
)
def g(sigma=5, th_low=0, th_high=40):
fig = plt.figure(figsize=(20,6))
fig.add_subplot(1, 2, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 2, 2)
im_edge = canny(im, sigma=sigma,
low_threshold=th_low/255,
high_threshold=th_high/255)
imshow(im_edge, cmap='gray_r')
plt.axis('off')
plt.title('Canny edge with th_low={0} and th_high={1}'.format(th_low, th_high))
plt.show()
###Output
_____no_output_____
###Markdown
Unsharp masking
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(sigma=(0, 10, 1), k=(1,10,1))
def g(sigma=7, k=3):
fig = plt.figure(figsize=(15, 5))
im_s = gaussian(im, sigma=sigma)
img1 = im + (im - im_s) * k
img1[img1 > 1] = 1
img1[img1 < 0] = 0
ax = fig.add_subplot(1, 2, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 2, 2)
imshow(img1)
plt.axis('off')
    plt.title('sharpened image')
plt.show()
def box(x, th=2):
return 1 if np.abs(x) < th else 0
def gauss(x, sigma=1):
return np.exp(- x**2 / 2 / sigma**2) / 2 / np.pi / sigma
@interact(sigma=(0, 2, 0.1), k=(0.1,3,0.1),
show_f=True, show_h=True, show_g=True, show_fg=True, show_result=True )
def g(sigma=0.2, k=3, show_f=True, show_h=True, show_g=True, show_fg=True, show_result=True):
x = np.arange(-5, 5, 0.01)
f = np.array([box(i) for i in x])
h = gauss(x, sigma=sigma)
if show_f: plt.plot(x, f, label="f")
if show_h: plt.plot(x, h, label="h")
g = signal.convolve(f, h, mode='same') / sum(h)
if show_g: plt.plot(x, g, label='g=f*h')
if show_fg: plt.plot(x, f - g, label='f - g')
if show_result: plt.plot(x, f + k * (f - g), label='f + k(f - g)')
plt.ylim(-2, 3)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Nonlinear filter: median filter
###Code
im = imread('salt_and_pepper.png')
@interact(sigma=(0, 10, 1), N=(1, 10, 1))
def g(sigma=2, N=3):
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 3, 2)
imshow(gaussian(im, sigma=sigma))
plt.axis('off')
plt.title('Gaussian filter with $\sigma$={}'.format(sigma))
ax = fig.add_subplot(1, 3, 3)
imshow(median(im, square(N)))
plt.axis('off')
plt.title('Median filter with {0}x{0} patch'.format(N))
plt.show()
###Output
_____no_output_____
###Markdown
Nonlinear filter: bilateral filter
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(sigma_spatial=(0, 15, 1), sigma_color=(0, 0.5, 0.1))
def g(sigma_spatial=1, sigma_color=0.1):
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 3, 2)
imshow(gaussian(im, sigma=sigma_spatial))
plt.axis('off')
plt.title('Gaussian filter with sigma={}'.format(sigma_spatial))
ax = fig.add_subplot(1, 3, 3)
im_denoise = denoise_bilateral(im,
sigma_spatial=sigma_spatial,
sigma_color=sigma_color)
imshow(im_denoise)
plt.axis('off')
    plt.title('sigma_spatial={0} sigma_color={1}'.format(sigma_spatial, sigma_color))
plt.show()
###Output
_____no_output_____
###Markdown
Nonlinear filter: non-local means filter
###Code
im = rgb2gray(imread('girl.jpg'))
fig = plt.figure(figsize=(15, 3))
ax = fig.add_subplot(1, 3, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
ax = fig.add_subplot(1, 3, 2)
im_denoise = denoise_bilateral(im, sigma_spatial=5, sigma_color=0.1)
imshow(im_denoise)
plt.axis('off')
plt.title('bilateral filter')
ax = fig.add_subplot(1, 3, 3)
im_denoise = denoise_nl_means(im, patch_size=7, patch_distance=11)
imshow(im_denoise)
plt.axis('off')
plt.title('non-local mean filter')
plt.show()
###Output
_____no_output_____
###Markdown
Fourier transform of audio data
###Code
def wavread(file, dtype=np.int16):
chunk_size = 1024 * 8
with wave.open(file, 'rb') as f:
nchannels = f.getnchannels()
audio_data = []
while True:
chunk = f.readframes(chunk_size)
audio_data.append(chunk)
if chunk == b'': break
audio_data = b''.join(audio_data)
audio_data = np.frombuffer(audio_data, dtype=dtype)
audio_data = audio_data.reshape((-1, nchannels)).transpose()
return audio_data
audio_data = wavread('start.wav') # 22kHz, 2 channels stereo, 16 bits/sample
sr = 22000 # sampling rate
plt.plot(audio_data[0], label='L channel')
plt.plot(audio_data[1], label='R channel')
plt.title('wave file')
plt.xlabel('time [sec]')
plt.legend()
idx = np.arange(0, audio_data.shape[1], sr * 0.25) # 1/22000 sec per sample, tick every 0.25 sec
plt.xticks(idx, idx / sr)
plt.show()
plt.plot(audio_data[0, :1000], label='L channel')
plt.plot(audio_data[1, :1000], label='R channel')
plt.title('first 1000 sampling points')
plt.xlabel('time [sec]')
plt.legend()
idx = np.arange(0, audio_data.shape[1], sr * 0.01) # 1/22000 sec per sample, tick every 0.01 sec
plt.xticks(idx, idx / sr)
plt.xlim(0, 1000)
plt.show()
power_spec = np.abs(fft(audio_data[0])) # FFT power spectrum (absolute value of complex spectrum)
db_power_spec = np.log10(power_spec) * 20 # in dB
fps = sr / len(db_power_spec) # frequency per sample
tick_idx = np.arange(0, len(db_power_spec), 2000 / fps) # tick every 2000 Hz
tick_label = np.ceil(tick_idx * fps / 1000).astype(int) # in kHz
plt.plot(db_power_spec)
plt.title('power spectrum')
plt.xlabel('frequency [kHz]')
plt.ylabel('power [dB]')
plt.xticks(tick_idx, tick_label)
plt.show()
plt.plot(db_power_spec[:len(db_power_spec)//2])
plt.title('power spectrum')
plt.xlabel('frequency [kHz]')
plt.ylabel('power [dB]')
plt.xticks(tick_idx, tick_label)
plt.xlim(0, len(db_power_spec)//2)
plt.show()
###Output
_____no_output_____
###Markdown
Displaying the spectrogram using the short-time Fourier transform
###Code
sr = 22000 # sampling rate
B, F, T = mlab.specgram(audio_data[0], # left channel
Fs=sr)
imshow(B,
norm=colors.LogNorm(),
cmap='jet')
def find_closest_val(T, t):
X = np.abs(T - t)
idx = np.where(X == X.min())
return idx[0][0]
yticks = np.arange(0, 11, 2)  # 0, 2, ..., 10 kHz ticks for the frequency (y) axis
yidx = [find_closest_val(F/1000, f) for f in yticks]
xticks = np.arange(0, 1.4, 0.25)  # 0, 0.25, ..., 1.25 sec ticks for the time (x) axis
xidx = [find_closest_val(T, t) for t in xticks]
plt.yticks(yidx, yticks)
plt.xticks(xidx, xticks)
plt.xlabel('time [sec]')
plt.ylabel('frequency [kHz]')
plt.gca().invert_yaxis()
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Two-dimensional Fourier transform of an image
###Code
im = rgb2gray(imread('honeycomb.jpg'))
@interact(angle=(0, 360, 5))
def g(angle=0):
fig = plt.figure(figsize=(10,5))
fig.add_subplot(1, 2, 1)
im_rot = rotate(im, angle=angle, preserve_range=True)
imshow(im_rot)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 2, 2)
im_freq = np.fft.fft2(im_rot)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq)) * 20, vmin=0)
plt.axis('off')
plt.title('power spectrum (log scale)')
plt.show()
###Output
_____no_output_____
###Markdown
Comparing the computational cost of FFT-based and direct filtering
###Code
im = rgb2gray(imread('girl.jpg'))
time_ndconv = []
time_sigconv = []
time_sigconvfft = []
time_fftconv = []
N_range = range(3, 31, 2)
for N in N_range:
    w = np.ones((N, N)) / (N ** 2) # N×N average filter
print(w.shape)
st = time()
ndimage.convolve(im, w)
time_ndconv.append(time() - st)
if N < 15:
st = time()
signal.convolve(im, w, method='direct', mode='same')
time_sigconv.append(time() - st)
st = time()
signal.convolve(im, w, method='fft', mode='same')
time_sigconvfft.append(time() - st)
st = time()
signal.fftconvolve(im, w, mode='same')
time_fftconv.append(time() - st)
for yscale,ymin in [('linear', 0), ('log', 0.01)]:
plt.plot(N_range, time_ndconv, label='ndimage.convolve')
plt.plot(N_range[:len(time_sigconv)], time_sigconv, label='signal.convolve')
plt.plot(N_range, time_sigconvfft, label='signal.convolve with FFT')
plt.plot(N_range, time_fftconv, label='signal.fftconvolve')
plt.legend()
plt.ylabel('time [s]')
plt.xlabel('filter size N')
plt.yscale(yscale)
plt.ylim(ymin)
plt.show()
###Output
_____no_output_____
###Markdown
Low-pass filter: circular box filter
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(radius=(0, 200, 5))
def g(radius=30):
fig = plt.figure(figsize=(20,5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq)) * 20, vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
im_freq2 *= 0.0001
rr, cc = skimage.draw.disk((h//2, w//2), radius)
im_freq2[rr, cc] = im_freq[rr, cc]
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2)) * 20, vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
imshow(np.abs(g))
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian low-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
impulse = np.ones(im.shape) * np.finfo(np.float32).eps # avoid 0-division in log
h, w = im.shape
impulse[h//2, w//2] = 1
@interact(sigma=(1, 50, 5))
def g(sigma=3):
fig = plt.figure(figsize=(20,5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq) * 20), vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
im_freq2 *= gaussian(impulse, sigma=sigma)
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2) * 20), vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
imshow(np.abs(g))
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
High-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
@interact(radius=(0, 20, 1))
def g(radius=10):
fig = plt.figure(figsize=(20, 2.5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
fig.add_subplot(1, 4, 2)
imshow(np.log10(np.abs(im_freq) * 20), vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
rr, cc = skimage.draw.disk((h//2, w//2), radius)
im_freq2[rr, cc] = 0.0001
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2) * 20), vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
# imshow(np.abs(g))
imshow(g.real)
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian high-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
impulse = np.ones(im.shape) * np.finfo(np.float32).eps # avoid 0-division in log
h, w = im.shape
impulse[h//2, w//2] = 1
@interact(sigma=(1, 20, 1))
def g(sigma=5):
fig = plt.figure(figsize=(20,5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq) * 20), vmin=0, vmax=5)
plt.axis('off')
plt.title('fourier spectrum')
im_freq2 = im_freq.copy()
gauss = gaussian(impulse, sigma=sigma)
im_freq2 *= (gauss.max()*1.01 - gauss)
fig.add_subplot(1, 4, 3)
imshow(np.log10(np.abs(im_freq2) * 20), vmin=0, vmax=5)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
# imshow(np.abs(g))
imshow(g.real)
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____
###Markdown
Band-pass filter
###Code
im = rgb2gray(imread('girl.jpg'))
val_range = (0, 200, 10)
@interact(radius1=val_range,
radius2=val_range)
def g(radius1=60, radius2=20):
fig = plt.figure(figsize=(20, 2.5))
fig.add_subplot(1, 4, 1)
imshow(im)
plt.axis('off')
plt.title('original image')
fig.add_subplot(1, 4, 2)
im_freq = np.fft.fft2(im)
h, w = im_freq.shape
# im_freq = np.roll(im_freq, h//2, 0)
# im_freq = np.roll(im_freq, w//2, 1)
im_freq = np.fft.fftshift(im_freq)
imshow(np.log10(np.abs(im_freq)) * 20, vmin=0)
plt.axis('off')
plt.title('fourier spectrum')
fig.add_subplot(1, 4, 3)
im_freq2 = im_freq.copy()
im_freq2 *= 0.0001
rr, cc = skimage.draw.disk((h//2, w//2), radius1)
im_freq2[rr, cc] = im_freq[rr, cc]
rr, cc = skimage.draw.disk((h//2, w//2), radius2)
im_freq2[rr, cc] = 0.0001
imshow(np.log10(np.abs(im_freq2)) * 20, vmin=0)
plt.axis('off')
plt.title('filtered spectrum')
fig.add_subplot(1, 4, 4)
# im_freq2 = np.roll(im_freq2, h//2, 0)
# im_freq2 = np.roll(im_freq2, w//2, 1)
im_freq2 = np.fft.fftshift(im_freq2)
g = np.fft.ifft2(im_freq2)
# imshow(np.abs(g))
imshow(g.real)
plt.axis('off')
plt.title('filtered image')
plt.show()
###Output
_____no_output_____ |
samples/demos/azure-sql-edge-demos/Wind Turbine Demo/ml/wind-turbine-scikit.ipynb | ###Markdown
Wind Turbine: Azure ML with scikit-learn In this notebook, we'll build and analyze a new model to predict wind turbine wake winds. It is important to consider the two main conditions that influence the presence of wind wake: 1. The overall wind farm direction and the turbine wind direction are both between 40° and 45°. 1. There is a high difference (greater than one) between `TurbineSpeedStdDev` and `WindSpeedStdDev` within a one-minute window. The above conditions are well-known features for predicting when `Wind Wake` is affecting the wind turbine. Instructions Before you begin with this lab, please make sure to follow the steps below: 1. Locate the default datastore for the workspace; this can be done by authenticating against the workspace (cell 2) and executing the following command: `ws.get_default_datastore()` 1. Locate the dataset parquet file in the lab materials: `TrainingDataset.parquet` 1. Upload the dataset for this lab to the default datastore for the workspace. You can do this via the Azure Portal or via Microsoft Azure Storage Explorer. 1. Ensure you have the correct versions of `scikit-learn` and `joblib` installed. To install these dependencies, you can execute the cell below; skip this step if the dependencies are already installed. 1. Restart your kernel
###Code
!pip install scikit-learn==0.22.1 joblib==0.14.1
###Output
_____no_output_____
###Markdown
Setup Azure ML In the next cell, we will create a new Workspace config object using the `<subscription_id>`, `<resource_group>`, and `<workspace_name>`. This will fetch the matching Workspace and prompt you for authentication. Please click on the link and input the provided details. For more information on **Workspace**, please visit: [Microsoft Workspace Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py) `<subscription_id>` = You can get this ID from the landing page of your Resource Group. `<resource_group>` = This is the name of your Resource Group. `<workspace_name>` = This is the name of your Workspace.
###Code
from azureml.core.workspace import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication
project_folder = './scripts'
try:
interactive_auth = InteractiveLoginAuthentication(tenant_id="<tenant_id>")
# Get instance of the Workspace and write it to config file
ws = Workspace(
subscription_id = '<subscription_id>',
resource_group = '<resource_group>',
workspace_name = '<workspace_name>',
auth = interactive_auth)
# Writes workspace config file
ws.write_config()
print('Library configuration succeeded')
except Exception as e:
print(e)
print('Workspace not found')
###Output
_____no_output_____
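###Markdown
The instructions at the top of this notebook require `TrainingDataset.parquet` to be present in the workspace's default datastore. If you prefer to do the upload from this notebook rather than the Azure Portal or Storage Explorer, a minimal sketch using the `azureml-core` SDK is shown below; it assumes the parquet file sits next to this notebook and that the workspace config written above is available.
###Code
from azureml.core import Workspace
# Reuse the workspace config written by ws.write_config() in the previous cell
ws = Workspace.from_config()
# Default datastore attached to the workspace (an Azure Blob datastore by default)
datastore = ws.get_default_datastore()
# Upload the local parquet file so the '*.parquet' pattern used later can find it
datastore.upload_files(files=['TrainingDataset.parquet'],
                       overwrite=True,
                       show_progress=True)
###Output
_____no_output_____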
###Markdown
Fetch our data Let's retrieve our dataset from the default workspace Datastore.
###Code
from azureml.core import Dataset
from azureml.data.datapath import DataPath
from azureml.core import Datastore
datastore = ws.get_default_datastore()
datastore_path = [DataPath(datastore, '*.parquet')]
tabular = Dataset.Tabular.from_parquet_files(path=datastore_path)
tabular = tabular.register(workspace=ws,
name='wind_turbine_training',
create_new_version=True)
tabular = Dataset.get_by_name(ws, name='wind_turbine_training')
print(tabular.version)
data = tabular.to_pandas_dataframe()
data.head(5)
###Output
_____no_output_____
###Markdown
Next, we'll take a subset of our data and then proceed to visualize it to better understand any patterns and trends that might exist to drive good ML models.
###Code
subset = tabular.take_sample(probability=0.4, seed=123).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Dataset Description Describe our current dataset. The table below shows the different statistical values for our training subset.
###Code
subset.describe()
###Output
_____no_output_____
###Markdown
Turbine Wind Direction Let's take a look at the Turbine Wind Direction distribution against the Wind Direction Angle. As we can see, there is a considerable concentration of values between 40° and 50°.
###Code
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
hstyle={"rwidth":0.75,'edgecolor':'black'}
# Analyze distribution of TurbineWindDirection in the dataset
fig, ax = plt.subplots()
sns.distplot(subset[['TurbineWindDirection']], ax=ax,
hist_kws=hstyle).set_title("Turbine Wind Direction Distribuition")
ax.set_xlim(0,360)
ax.set(xlabel="Wind Direction Angle")
plt.show()
###Output
_____no_output_____
###Markdown
Turbine Wind Direction vs Alter Blades Let's take a look at how our training dataset behaves for `Alter Blades` against the `Wind Direction Angle`. It is clear that between 40° and 50° we have a spike of `True` values for `Alter Blades`. Keep in mind that the target column for our prediction is `Alter Blades`; this column will enable us to identify a wake condition.
###Code
g = sns.FacetGrid(subset, col='AlterBlades')
g.map(sns.distplot, 'TurbineWindDirection', hist_kws=hstyle)
g.set(xlabel="Wind Direction Angle")
###Output
_____no_output_____
###Markdown
Turbine Speed Let's take a look at the Turbine Speed distribution. In the chart, we can observe the distribution has values between 10 and 25 km/h.
###Code
fig, ax = plt.subplots()
sns.distplot(subset[['TurbineSpeedAverage']], ax=ax,
hist_kws=hstyle).set_title("Average Turbine Speed Distribuition")
ax.set(xlabel="Average Turbine Speed")
plt.show()
###Output
_____no_output_____
###Markdown
Turbine Speed Standard Deviation vs Alter Blades Let's take a look at how our training dataset behaves for `Alter Blades` against the `Turbine Speed Standard Deviation`.
###Code
# Analyze how TurbineSpeedStdDev differs between AlterBlades = True and AlterBlades = False
g = sns.FacetGrid(subset, col='AlterBlades')
g.map(sns.distplot, 'TurbineSpeedStdDev', hist_kws=hstyle)
g.set(xlabel="Turbine Speed Std Dev")
###Output
_____no_output_____
###Markdown
Wind Speed Let's take a look at the Wind Speed distribution. In the chart, we can observe the distribution has values between 10 and 25 km/h.
###Code
fig, ax = plt.subplots()
sns.distplot(subset[['WindSpeedAverage']], ax=ax,
hist_kws=hstyle).set_title("Average Wind Speed Distribuition")
ax.set(xlabel="Average Wind Speed")
plt.show()
###Output
_____no_output_____
###Markdown
Wind Speed Standard Deviation vs Alter Blades Let's take a look at how our training dataset behaves for `Alter Blades` against the `Wind Speed Standard Deviation`.
###Code
# Analyze how WindSpeedStdDev differs between AlterBlades = True and AlterBlades = False
g = sns.FacetGrid(subset, col='AlterBlades')
g.map(sns.distplot, 'WindSpeedStdDev', hist_kws=hstyle)
###Output
_____no_output_____
###Markdown
Isolate rows where AlterBlades is True Let's create a Facet Grid to understand the trends that the `True` values from the `Alter Blades` column have against other features in the dataset such as: 1. Turbine Speed Standard Deviation 1. Turbine Wind Direction 1. Wind Speed Standard Deviation As we are able to see, when `Turbine Wind Direction` is around 40° to 45°, it is a very good indication for an `Alter Blades: True` value. Also, we are able to see that a high `Turbine Speed Standard Deviation` versus a low `Wind Speed Standard Deviation` is also a key feature to achieve a `True` value in the `Alter Blades` column.
###Code
alterBlades = subset.loc[subset.AlterBlades]
g = sns.FacetGrid(alterBlades, col='AlterBlades')
g.map(plt.hist, 'TurbineSpeedStdDev')
g.set(xlabel="Turbine Speed Std Dev")
display(g)
g = sns.FacetGrid(alterBlades, col='AlterBlades')
g.map(plt.hist, 'TurbineWindDirection')
g.set(xlabel="Turbine Wind Direction Angle")
display(g)
g = sns.FacetGrid(alterBlades, col='AlterBlades')
g.map(plt.hist, 'WindSpeedStdDev')
g.set(xlabel="Wind Speed Std Dev")
display(g)
###Output
_____no_output_____
###Markdown
Pairplot Wind Speed Std Dev, Turbine Speed Std Dev and Alter Blades Let's place our key features in a Pair plot to analyze their trends.
###Code
# Analyze how WindSpeedStdDev and TurbineSpeedStdDev relate to each other, colored by AlterBlades
sns.pairplot(subset, vars=['WindSpeedStdDev', 'TurbineSpeedStdDev'], hue='AlterBlades')
###Output
_____no_output_____
###Markdown
Create experiment In our script, there are three distinct sections: 1. Setting up the scikit-learn logistic regression model pipeline (including encoding our features). 1. Analyzing and logging the results of the model training. 1. Running the model explainer to understand the key model drivers.
###Code
%%writefile $project_folder/train.py
from azureml.core import Run
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from utils import *
# Fetch current run
run = Run.get_context()
# Fetch dataset from the run by name
dataset = run.input_datasets['training']
# Convert dataset to Pandas data frame
X_train, X_test, y_train, y_test = split_dataset(dataset)
# Setup scikit-learn pipeline
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])
preprocessor = ColumnTransformer(transformers=[('num', numeric_transformer, list(X_train.columns.values))])
clf = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', LogisticRegression())])
model = clf.fit(X_train, y_train)
# Analyze model performance
analyze_model(clf, X_test, y_test)
# Save model
model_id = save_model(clf)
###Output
_____no_output_____
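###Markdown
The training script above imports `split_dataset`, `analyze_model` and `save_model` from a `utils.py` that ships with the lab materials and is not reproduced here. Purely as an illustration of what `split_dataset` is expected to do (convert the tabular dataset to pandas, separate the `AlterBlades` label from the sensor features, and hold out a test split), a hypothetical sketch follows; the feature list, split ratio and random seed used by the real helper may differ.
###Code
# Hypothetical sketch of utils.split_dataset; the real helper in the lab materials may differ
from sklearn.model_selection import train_test_split
FEATURE_COLUMNS = ['TurbineWindDirection', 'TurbineSpeedAverage', 'TurbineSpeedStdDev',
                   'WindSpeedAverage', 'WindSpeedStdDev']  # assumed feature set
LABEL_COLUMN = 'AlterBlades'
def split_dataset(dataset, test_size=0.2, random_state=42):
    """Convert an Azure ML TabularDataset to pandas and split into train/test sets."""
    df = dataset.to_pandas_dataframe()
    X = df[FEATURE_COLUMNS]
    y = df[LABEL_COLUMN]
    return train_test_split(X, y, test_size=test_size, random_state=random_state)
###Output
_____no_output_____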
###Markdown
Create a Workspace Experiment The Experiment constructor allows you to create an experiment instance. The constructor takes in the current workspace, which is fetched by calling `Workspace.from_config()`, and an experiment name. For more information on **Experiment**, please visit: [Microsoft Experiment Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py)
###Code
from azureml.core.experiment import Experiment
# Get an instance of the Workspace from the config file
ws = Workspace.from_config()
experiment_name = 'wake-detection-experiment'
# Create Experiment
experiment = Experiment(ws, experiment_name)
###Output
_____no_output_____
###Markdown
Create Automated ML Compute cluster Firstly, check for the existence of the cluster. If it already exists, we are able to reuse it. Checking for the existence of the cluster can be performed by calling the constructor `ComputeTarget()` with the current workspace and the name of the cluster. In case the cluster does not exist, the next step will be to provide a configuration for the new AML cluster by calling the function `AmlCompute.provisioning_configuration()`. It takes as parameters the VM size and the max number of nodes that the cluster can scale up to. After the configuration has been created, `ComputeTarget.create()` should be called with the previously created configuration object and the workspace object. For more information on **ComputeTarget**, please visit: [Microsoft ComputeTarget Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.computetarget?view=azure-ml-py) For more information on **AmlCompute**, please visit: [Microsoft AmlCompute Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.akscompute?view=azure-ml-py) **Note:** Please wait for the execution of the cell to finish before moving forward.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Create AML CPU Compute Cluster
try:
compute_target = ComputeTarget(workspace=ws, name='cpucluster')
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_DS12_v2',
max_nodes=4)
compute_target = ComputeTarget.create(ws, 'cpucluster', compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit Experiment We'll use remote compute for this job. We need to install a couple of extra libraries, including those required for model interpretability. The `experiment.submit()` function is called to send the experiment for execution. The only parameter received by this function is the `Estimator` object.
###Code
from azureml.train.sklearn import SKLearn
estimator = SKLearn(source_directory=project_folder,
compute_target=compute_target,
entry_script='train.py',
inputs=[tabular.as_named_input('training')],
pip_packages=['azureml-dataprep[fuse,pandas]','joblib==0.14.1','azureml-interpret','azureml-contrib-interpret','matplotlib','scikit-learn==0.22.1','seaborn'])
run = experiment.submit(estimator)
run
###Output
_____no_output_____
###Markdown
Monitor Experiment The creation of an object of type `Run` will enable us to observe the experiment's progress and results. The object is created by calling the constructor `Run()`. It takes, as arguments, the experiment and the identifier of the run to fetch. After the object has been instantiated, the `RunDetails()` function will retrieve the progress, metrics, and tasks for the specified run. They will be displayed by calling the function `show()` over the mentioned object. **Note:** Please wait for the execution of the cell to finish before moving forward. (Status should be **Completed**)
###Code
from azureml.core import Run
from azureml.widgets import RunDetails
run = Run(experiment, run.id)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Encode dataset and download trained model The first step is to encode our training data into the shape expected by the Onnx converter. Next, download the model obtained from the best run. In order to download it, the function `download_model()` should be called; this will take care of retrieving the model from that run.
###Code
from utils import *
from scripts.utils import *
# Convert dataset to Pandas data frame
X_train, X_test, y_train, y_test = split_dataset(tabular)
model = download_model(run)
###Output
_____no_output_____
###Markdown
Convert model to Onnx format Export the Sklearn model to Onnx format by using `skl2onnx`. This step will output an Onnx model that we will be able to publish to the Azure SQL Edge Database Instance to use along with our `PREDICT` statement.
###Code
import skl2onnx
import onnxmltools
# Convert the scikit model to onnx format
onnx_model = skl2onnx.convert_sklearn(model, 'Wind Turbine Dataset', convert_dataframe_schema(X_train))
# Save the onnx model locally
onnx_model_path = 'windturbinewake.model.onnx'
onnxmltools.utils.save_model(onnx_model, onnx_model_path)
###Output
_____no_output_____ |
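###Markdown
Before publishing the model to Azure SQL Edge, it can be useful to sanity-check the exported file locally. The sketch below assumes the `onnxruntime` package is installed; it simply loads the saved model and lists the input and output tensors that the `PREDICT` statement will have to bind to.
###Code
# Optional local check of the exported ONNX model (assumes onnxruntime is installed)
import onnxruntime as rt
sess = rt.InferenceSession(onnx_model_path)
print('Inputs:')
for inp in sess.get_inputs():
    print(' ', inp.name, inp.shape, inp.type)
print('Outputs:')
for out in sess.get_outputs():
    print(' ', out.name, out.shape, out.type)
###Output
_____no_output_____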
notebooks/Challenge 2 - Calculate bandgap of OLED molecules.ipynb | ###Markdown
Quantum Integer Programming and Quantum Machine Learning 47-779/785, Tepper School of Business. Introduction to Quantum Computing 18-819F, Electrical and Computers Engineering. Fall 2021, Carnegie Mellon University*IBM Quantum Challenge Fall 2021*Challenge 2: Calculate bandgap of OLED molecules Run in **[Google Colab](https://colab.research.google.com/github/bernalde/QuIPML/blob/main/notebooks/Challenge%202%20-%20Calculate%20bandgap%20of%20OLED%20molecules.ipynb)** IntroductionOrganic Light Emitting Diodes or OLEDs have become increasingly popular in recent years as the basis for fabrication of thin, flexible TV and mobile phone displays that emit light upon application of an electric current. Recent studies ([**Gao et al., 2021**](https://www.nature.com/articles/s41524-021-00540-6)) have been looking at electronic transitions of high energy states in phenylsulfonyl-carbazole (PSPCz) molecules, which could be useful thermally activated delayed fluorescence (TADF) emitters for OLED technology. TADF emitters could potentially produce OLEDs that perform with 100 percent internal quantum efficiency (IQE), i.e the fraction of the charge carriers in a circuit or system that emit absorbed photons, compared with conventional fluorophores currently used to make OLEDs whose quantum efficiencies are limited to 25 percent. That large boost in efficiency means manufacturers could produce OLEDs for use in devices requiring low-power consumption, such as cell phones, which could in turn lead to future developments where virtually any surface can be converted into a cheap and energy-efficient lighting source covering vast areas of homes, offices, museums and more! Why quantum?Quantum computers could be invaluable tools for studying the electronic structure and dynamical properties of complex molecules and materials as it makes more sense to model quantum mechanical systems on a quantum device than on a classical computer. A recent joint research project by IBM Quantum and partners was successful in developing methods to improve accuracy for the calculation of excited TADF states for efficient OLEDs, making it the world's first research case of applying quantum computers to the calculation of excited states of commercial materials (see paper linked above for reference). With this background information, we are interested in describing quantum computations of the “excited states,” or high energy states, of industrial chemical compounds that could potentially be used in the fabrication of efficient OLED devices. Challenge**Goal**The goal of this challenge is to use quantum algorithms to reliably predict the excited states energies of these TADF materials. Along the way, this challenge introduces state-of-the-art hybrid classical-quantum embedded chemistry modelling allowing the splitting of the work-load between classical approximations and more accurate quantum calculations. 1. **Challenge 2a & 2b**: Understanding the atomic orbitals (AO), molecular orbitals (MO) and how to reduce the number of orbitals using active space transformation.2. **Challenge 2c & 2d**: Calculating ground state energy of PSPCz molecule using NumPy and Variational Quantum Eigensolver (VQE).3. **Challenge 2e**: Calculating excited state energy of PSPCz module using quantum Equation-of-Motion (QEOM) algorithm.4. 
**Challenge 2f**: Running VQE on the cloud (simulator or real quantum system) using Qiskit Runtime.Before you begin, we recommend watching the [**Qiskit Nature Demo Session with Max Rossmannek**](https://youtu.be/UtMVoGXlz04?t=38) and check out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-nature) to learn how to define electronic structure calculations. 1. DriverThe interfaces to the classical chemistry codes that are available in Qiskit are called drivers. We have for example `PSI4Driver`, `PyQuanteDriver`, `PySCFDriver` are available.By running a driver (Hartree-Fock calculation for a given basis set and molecular geometry), in the cell below, we obtain all the necessary information about our molecule to apply then a quantum algorithm.
###Code
from qiskit_nature.drivers import Molecule
from qiskit_nature.drivers.second_quantization import ElectronicStructureDriverType, ElectronicStructureMoleculeDriver
# PSPCz molecule
geometry = [['C', [ -0.2316640, 1.1348450, 0.6956120]],
['C', [ -0.8886300, 0.3253780, -0.2344140]],
['C', [ -0.1842470, -0.1935670, -1.3239330]],
['C', [ 1.1662930, 0.0801450, -1.4737160]],
['C', [ 1.8089230, 0.8832220, -0.5383540]],
['C', [ 1.1155860, 1.4218050, 0.5392780]],
['S', [ 3.5450920, 1.2449890, -0.7349240]],
['O', [ 3.8606900, 1.0881590, -2.1541690]],
['C', [ 4.3889120, -0.0620730, 0.1436780]],
['O', [ 3.8088290, 2.4916780, -0.0174650]],
['C', [ 4.6830900, 0.1064460, 1.4918230]],
['C', [ 5.3364470, -0.9144080, 2.1705280]],
['C', [ 5.6895490, -2.0818670, 1.5007820]],
['C', [ 5.4000540, -2.2323130, 0.1481350]],
['C', [ 4.7467230, -1.2180160, -0.5404770]],
['N', [ -2.2589180, 0.0399120, -0.0793330]],
['C', [ -2.8394600, -1.2343990, -0.1494160]],
['C', [ -4.2635450, -1.0769890, 0.0660760]],
['C', [ -4.5212550, 0.2638010, 0.2662190]],
['C', [ -3.2669630, 0.9823890, 0.1722720]],
['C', [ -2.2678900, -2.4598950, -0.3287380]],
['C', [ -3.1299420, -3.6058560, -0.3236210]],
['C', [ -4.5179520, -3.4797390, -0.1395160]],
['C', [ -5.1056310, -2.2512990, 0.0536940]],
['C', [ -5.7352450, 1.0074800, 0.5140960]],
['C', [ -5.6563790, 2.3761270, 0.6274610]],
['C', [ -4.4287740, 3.0501460, 0.5083650]],
['C', [ -3.2040560, 2.3409470, 0.2746950]],
['H', [ -0.7813570, 1.5286610, 1.5426490]],
['H', [ -0.7079140, -0.7911480, -2.0611600]],
['H', [ 1.7161320, -0.2933710, -2.3302930]],
['H', [ 1.6308220, 2.0660550, 1.2427990]],
['H', [ 4.4214900, 1.0345500, 1.9875450]],
['H', [ 5.5773000, -0.7951290, 3.2218590]],
['H', [ 6.2017810, -2.8762260, 2.0345740]],
['H', [ 5.6906680, -3.1381740, -0.3739110]],
['H', [ 4.5337010, -1.3031330, -1.6001680]],
['H', [ -1.1998460, -2.5827750, -0.4596910]],
['H', [ -2.6937370, -4.5881470, -0.4657540]],
['H', [ -5.1332290, -4.3740010, -0.1501080]],
['H', [ -6.1752900, -2.1516170, 0.1987120]],
['H', [ -6.6812260, 0.4853900, 0.6017680]],
['H', [ -6.5574610, 2.9529350, 0.8109620]],
['H', [ -4.3980410, 4.1305040, 0.5929440]],
['H', [ -2.2726630, 2.8838620, 0.1712760]]]
molecule = Molecule(geometry=geometry, charge=0, multiplicity=1)
driver = ElectronicStructureMoleculeDriver(molecule=molecule,
basis='631g*',
driver_type=ElectronicStructureDriverType.PYSCF)
###Output
/opt/conda/lib/python3.8/site-packages/pyscf/lib/misc.py:47: H5pyDeprecationWarning: Using default_file_mode other than 'r' is deprecated. Pass the mode to h5py.File() instead.
h5py.get_config().default_file_mode = 'a'
###Markdown
**Challenge 2a** Question: Find out these numbers for the PSPCz molecule. 1. What is the number of C, H, N, O, S atoms? 1. What is the total number of atoms? 1. What is the total number of atomic orbitals (AO)? 1. What is the total number of molecular orbitals (MO)? **How to count atomic orbitals?** The number depends on the basis. The number below is specific to the `631g*` basis which we will use for this challenge. - C: 1s, 2s2p, 3s3p3d = 1+4+9 = 14 - H: 1s, 2s = 1+1 = 2 - N: 1s, 2s2p, 3s3p3d = 1+4+9 = 14 - O: 1s, 2s2p, 3s3p3d = 1+4+9 = 14 - S: 1s, 2s2p, 3s3p3d, 4s4p = 1+4+9+4 = 18
###Code
num_ao = {
'C': 14,
'H': 2,
'N': 14,
'O': 14,
'S': 18,
}
##############################
# Provide your code here
num_C_atom = 0
num_H_atom = 0
num_N_atom = 0
num_O_atom = 0
num_S_atom = 0
for i in range(0, len(geometry)):
if geometry[i][0] == 'C':
num_C_atom = num_C_atom + 1
elif geometry[i][0] == 'H':
num_H_atom = num_H_atom + 1
elif geometry[i][0] == 'N':
num_N_atom = num_N_atom + 1
elif geometry[i][0] == 'O':
num_O_atom = num_O_atom + 1
elif geometry[i][0] == 'S':
num_S_atom = num_S_atom + 1
#num_C_atom =
#num_H_atom =
#num_N_atom =
#num_O_atom =
#num_S_atom =
num_atoms_total = num_C_atom + num_H_atom + num_N_atom + num_O_atom + num_S_atom
num_AO_total = num_ao['C']*num_C_atom + num_ao['H']*num_H_atom+num_ao['N']*num_N_atom+num_ao['O']*num_O_atom+num_ao['S']*num_S_atom
num_MO_total = num_AO_total
##############################
answer_ex2a ={
'C': num_C_atom,
'H': num_H_atom,
'N': num_N_atom,
'O': num_O_atom,
'S': num_S_atom,
'atoms': num_atoms_total,
'AOs': num_AO_total,
'MOs': num_MO_total
}
print(answer_ex2a)
# Check your answer and submit using the following code
from qc_grader import grade_ex2a
grade_ex2a(answer_ex2a)
from qc_grader.util import get_challenge_provider
provider = get_challenge_provider()
if provider:
backend = provider.get_backend('ibmq_qasm_simulator')
###Output
You have been not assigned to a challenge provider yet. Note that you need to pass at least one exercise, and it may take up to 12 hours to get assigned. Meanwhile, please proceed to other exercises and try again later.
###Markdown
As you found out yourself in the exercise above, PSPCz is a large molecule, consisting of many atoms and many atomic orbitals. Direct calculation of a large molecule is out of reach for current quantum systems. However, since we are only interested in the bandgap, calculating the energy of Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) is sufficient. Here we applied a technique called active space transformation to reduce the number of molecular orbitals to only 2 (HOMO and LUMO):$$E_g = E_{LUMO} - E_{HOMO}$$Each circle here represents an electron in an orbital; when light or energy of a high enough frequency is absorbed by an electron in the HOMO, it jumps to the LUMO.For PSPCz molecules, we limit this excited state to just the first singlet and triplet states. In a singlet state, all electrons in a system are spin paired, giving them only one possible orientation in space. A singlet or triplet excited state can form by exciting one of the two electrons to a higher energy level. The excited electron retains the same spin orientation in a singlet excited state, whereas in a triplet excited state, the excited electron has the same spin orientation as the ground state electron. Spin in the ground and excited statesOne set of electron spins is unpaired in a triplet state, meaning there are three possible orientations in space with respect to the axis. LUMO (a-c) and HOMO (e-f) orbitals of the triplet state optimized structures of PSPCz (a, d) and its variants 2F-PSPCz (b, e) and 4F-PSPCz (c, f) respectively would then look something like this.By using the active space transformer method, we will manage to exclude non-core electronic states by restricting calculations to the singlet and triplet, i.e. the smallest possible active space and manage to compute this energy with a small number of qubits while keeping a high-quality description of the system.
###Code
from qiskit_nature.drivers.second_quantization import HDF5Driver
driver_reduced = HDF5Driver("resources/PSPCz_reduced.hdf5")
properties = driver_reduced.run()
from qiskit_nature.properties.second_quantization.electronic import ElectronicEnergy
electronic_energy = properties.get_property(ElectronicEnergy)
print(electronic_energy)
###Output
ElectronicEnergy
(AO) 1-Body Terms:
Alpha
<(430, 430) matrix with 184900 non-zero entries>
[0, 0] = -11.481107571585675
[0, 1] = -2.6982522446048134
[0, 2] = -2.237143188610541
[0, 3] = 0.0017433998087159669
[0, 4] = 0.0007741436199762753
... skipping 184895 entries
Beta
<(430, 430) matrix with 184900 non-zero entries>
[0, 0] = -11.481107571585675
[0, 1] = -2.6982522446048134
[0, 2] = -2.237143188610541
[0, 3] = 0.0017433998087159669
[0, 4] = 0.0007741436199762753
... skipping 184895 entries
(MO) 1-Body Terms:
Alpha
<(2, 2) matrix with 4 non-zero entries>
[0, 0] = -0.4968112637934733
[0, 1] = 0.00027750088691888997
[1, 0] = 0.00027750088691825913
[1, 1] = -0.1843594001763901
Beta
<(2, 2) matrix with 4 non-zero entries>
[0, 0] = -0.4968112637934733
[0, 1] = 0.00027750088691888997
[1, 0] = 0.00027750088691825913
[1, 1] = -0.1843594001763901
(MO) 2-Body Terms:
Alpha-Alpha
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Beta-Alpha
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Beta-Beta
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Alpha-Beta
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Energy Shifts:
ActiveSpaceTransformer = -4042.866322560092
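###Markdown
The reduced problem loaded above was pre-computed and stored in `PSPCz_reduced.hdf5`. For reference, a sketch of how such an active-space reduction can be set up in Qiskit Nature is shown below (2 electrons in the 2 molecular orbitals around the HOMO-LUMO gap). Running it would require the full-size PySCF calculation of the original driver, which is expensive, so the driver line is left commented out.
###Code
# Sketch only: this notebook uses the pre-computed HDF5 file loaded above instead.
from qiskit_nature.transformers.second_quantization.electronic import ActiveSpaceTransformer
# Keep 2 electrons in 2 molecular orbitals (HOMO and LUMO)
active_space = ActiveSpaceTransformer(num_electrons=2, num_molecular_orbitals=2)
# The transformer would be applied through the problem definition, e.g.:
# es_problem_full = ElectronicStructureProblem(driver, transformers=[active_space])
###Output
_____no_output_____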
###Markdown
You can see that `(AO) 1-Body Terms` contains a (430 x 430) matrix which describes the original molecule with 430 atomic orbitals, which translate to 430 molecular orbitals. After `ActiveSpaceTransformation` (pre-calculated), the number of molecular orbitals in `(MO) 1-Body Terms` is reduced to a (2x2) matrix. **Challenge 2b** Question: Use the property framework to find out the answers to the questions below. 1. What is the number of electrons in the system after active space transformation? 1. What is the number of molecular orbitals (MO)? 1. What is the number of spin orbitals (SO)? 1. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?
###Code
from qiskit_nature.properties.second_quantization.electronic import ParticleNumber
##############################
# Provide your code here
particle_number = ParticleNumber(
    num_spin_orbitals = 4,
    num_particles = (1, 1),
)

num_electron = particle_number.num_alpha + particle_number.num_beta
num_SO = particle_number.num_spin_orbitals
num_MO = num_SO // 2    # one molecular orbital per pair of (alpha, beta) spin orbitals
num_qubits = num_SO     # Jordan-Wigner mapping: one qubit per spin orbital
##############################
answer_ex2b = {
'electrons': num_electron,
'MOs': num_MO,
'SOs': num_SO,
'qubits': num_qubits
}
print(answer_ex2b)
# Check your answer and submit using the following code
from qc_grader import grade_ex2b
grade_ex2b(answer_ex2b)
###Output
Submitting your answer for 2b. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
2. Electronic structure problem You can then create an ElectronicStructureProblem that produces the list of fermionic operators before mapping them to qubits (Pauli strings). This is the first step in defining your molecular system in its ground state. You can read more about solving for the ground state in [**this tutorial**](https://qiskit.org/documentation/nature/tutorials/03_ground_state_solvers.html).
###Code
from qiskit_nature.problems.second_quantization import ElectronicStructureProblem
##############################
# Provide your code here
es_problem = ElectronicStructureProblem(driver_reduced)
##############################
second_q_op = es_problem.second_q_ops()
print(second_q_op[0])
###Output
Fermionic Operator
register length=4, number terms=26
(0.01572205126528473+0j) * ( +_0 -_1 +_2 -_3 )
+ (-0.01572205126528473+0j) * ( +_0 -_1 -_2 +_3 )
+ (0.00027750088691888997+0j) * ( +_0 -_1 )
+ (0.0003149147870892302+0j) * ( +_0 -_1 +_3 -_3 )
+ ...
###Markdown
3. QubitConverter The `QubitConverter` allows you to define the fermion-to-qubit mapping that you will use in the simulation.
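The next cell uses the Jordan-Wigner mapper, which needs one qubit per spin orbital. As a hedged illustration (using the same classes imported in that cell), a parity mapping with two-qubit reduction could shrink the operator further; `es_problem` and `second_q_op` are the objects defined above.

```python
# Hedged sketch: parity mapping with two-qubit reduction (not the graded solution)
alt_converter = QubitConverter(ParityMapper(), two_qubit_reduction=True)
alt_qubit_op = alt_converter.convert(second_q_op[0], num_particles=es_problem.num_particles)
# With 4 spin orbitals, this yields a 2-qubit operator instead of a 4-qubit one.
```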
###Code
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.mappers.second_quantization import JordanWignerMapper, ParityMapper, BravyiKitaevMapper
##############################
# Provide your code here
qubit_converter = QubitConverter(JordanWignerMapper())
##############################
qubit_op = qubit_converter.convert(second_q_op[0])
print(qubit_op)
###Output
-0.45781773131305903 * IIII
- 0.009666607989543467 * ZIII
+ 0.12689900731767084 * IZII
+ 0.030293077447785 * ZZII
- 0.009666607989543479 * IIZI
+ 0.03732964036584735 * ZIZI
+ 0.034223590264106186 * IZZI
+ 0.12689900731767084 * IIIZ
+ 0.034223590264106186 * ZIIZ
+ 0.05698995686717464 * IZIZ
+ 0.030293077447785 * IIZZ
+ 0.00014809461815615455 * XXII
+ 0.00014809461815615455 * YYII
- 7.872869677230731e-05 * XXZI
- 7.872869677230731e-05 * YYZI
+ 6.938452207544002e-05 * XXIZ
+ 6.938452207544002e-05 * YYIZ
+ 0.00014809461815615455 * IIXX
- 7.872869677230731e-05 * ZIXX
+ 6.938452207544002e-05 * IZXX
+ 0.00014809461815615455 * IIYY
- 7.872869677230731e-05 * ZIYY
+ 6.938452207544002e-05 * IZYY
+ 0.003930512816321183 * XXXX
+ 0.003930512816321183 * YYXX
+ 0.003930512816321183 * XXYY
+ 0.003930512816321183 * YYYY
###Markdown
4. Initial state A good initial state in chemistry is the Hartree-Fock state. We can initialize it as follows:
###Code
from qiskit_nature.circuit.library import HartreeFock
##############################
# Provide your code here
#print(es_problem.grouped_property_transformed)
num_spin_orbitals = 4
num_particles = es_problem.num_particles
init_state = HartreeFock(num_spin_orbitals, num_particles, qubit_converter)
##############################
init_state.draw()
###Output
_____no_output_____
###Markdown
5. Ansatz One of the most important choices is the quantum circuit you use to approximate your ground state. The Qiskit circuit library contains many building blocks for making your own circuit; one example is used below.
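The solution below uses a hardware-efficient `TwoLocal` circuit. As a hedged illustration of the chemistry-inspired alternatives imported in the next cell, a UCCSD ansatz could be built roughly as follows; the keyword names follow the qiskit-nature 0.2-style constructor and may differ between versions.

```python
# Hedged sketch: chemistry-inspired UCCSD ansatz as an alternative to TwoLocal,
# reusing objects defined earlier in the notebook (qubit_converter, es_problem, init_state).
alt_ansatz = UCCSD(
    qubit_converter=qubit_converter,
    num_particles=es_problem.num_particles,
    num_spin_orbitals=num_spin_orbitals,
    initial_state=init_state,
)
```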
###Code
from qiskit.circuit.library import EfficientSU2, TwoLocal, NLocal, PauliTwoDesign
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
##############################
# Provide your code here
ansatz = TwoLocal(num_qubits=num_spin_orbitals,
                  rotation_blocks = ['h', 'rx'], entanglement_blocks = 'cz',
                  entanglement='full', reps=2, parameter_prefix = 'y')
##############################
ansatz.decompose().draw()
###Output
/opt/conda/lib/python3.8/site-packages/sympy/core/expr.py:3949: SymPyDeprecationWarning:
expr_free_symbols method has been deprecated since SymPy 1.9. See
https://github.com/sympy/sympy/issues/21494 for more info.
SymPyDeprecationWarning(feature="expr_free_symbols method",
###Markdown
Ground state energy calculation Calculation using NumPy For learning purposes, we can solve the problem exactly by diagonalizing the Hamiltonian matrix, so we know where to aim with VQE. Of course, the dimension of this matrix scales exponentially with the number of molecular orbitals, so you can try doing this for a larger molecule of your choice and see how slow it becomes. For very large systems you would run out of memory just trying to store their wavefunctions.
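As a rough sense of scale (using numbers already quoted above): the reduced problem has 2 molecular orbitals, i.e. 4 spin orbitals, so the Hamiltonian acts on a $2^4 = 16$-dimensional space, which is trivial to diagonalize classically. The original 430-orbital description would correspond to 860 spin orbitals and a $2^{860}$-dimensional space, far beyond any classical memory.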
###Code
from qiskit.algorithms import NumPyMinimumEigensolver
from qiskit_nature.algorithms import GroundStateEigensolver
##############################
# Provide your code here
numpy_solver = NumPyMinimumEigensolver()
numpy_ground_state_solver = GroundStateEigensolver(qubit_converter, numpy_solver)
numpy_results = numpy_ground_state_solver.solve(es_problem)
##############################
exact_energy = numpy_results.computed_energies[0]
print(f"Exact electronic energy: {exact_energy:.6f} Hartree\n")
print(numpy_results)
# Check your answer and submit using the following code
from qc_grader import grade_ex2c
grade_ex2c(numpy_results)
###Output
Submitting your answer for 2c. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
Calculation using VQE The next step is to use VQE to calculate this ground state energy; with that you will have solved one half of your electronic structure problem!
###Code
from qiskit.providers.aer import StatevectorSimulator, QasmSimulator
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
##############################
# Provide your code here
backend = StatevectorSimulator(precision='single')
optimizer = SPSA
##############################
from qiskit.algorithms import VQE
from qiskit_nature.algorithms import VQEUCCFactory, GroundStateEigensolver
#from jupyterplot import ProgressPlot
import numpy as np
error_threshold = 10 # mHartree
np.random.seed(5) # fix seed for reproducibility
initial_point = np.random.random(ansatz.num_parameters)
from qiskit.utils import QuantumInstance
# for live plotting
#pp = ProgressPlot(plot_names=['Energy'],
# line_names=['Runtime VQE', f'Target + {error_threshold}mH', 'Target'])
intermediate_info = {
'nfev': [],
'parameters': [],
'energy': [],
'stddev': []
}
#def callback(nfev, parameters, energy, stddev):
# intermediate_info['nfev'].append(nfev)
# intermediate_info['parameters'].append(parameters)
# intermediate_info['energy'].append(energy)
# intermediate_info['stddev'].append(stddev)
# pp.update([[energy, exact_energy+error_threshold/1000, exact_energy]])
##############################
# Provide your code here
quantum_instance = QuantumInstance(backend = backend)
vqe_solver = VQEUCCFactory(quantum_instance)
vqe_ground_state_solver = GroundStateEigensolver(qubit_converter, vqe_solver)
vqe_results = vqe_ground_state_solver.solve(es_problem)
##############################
print(vqe_results)
error = (vqe_results.computed_energies[0] - exact_energy) * 1000 # mHartree
print(f'Error is: {error:.3f} mHartree')
# Check your answer and submit using the following code
from qc_grader import grade_ex2d
grade_ex2d(vqe_results)
###Output
Submitting your answer for 2d. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
Excited state calculation Calculation using qEOM For the molecule of our interest we also need to compute the same quantity, but this time for the excited state of our molecular Hamiltonian. Since we have already defined the system, we now need to access the excitation energies using the quantum Equation of Motion (qEOM) algorithm, which does this by solving the following pseudo-eigenvalue problem

$$
\begin{pmatrix}
    \text{M} & \text{Q}\\
    \text{Q}^* & \text{M}^*
\end{pmatrix}
\begin{pmatrix}
    X_n\\
    Y_n
\end{pmatrix}
= E_{0n}
\begin{pmatrix}
    \text{V} & \text{W}\\
    -\text{W}^* & -\text{V}^*
\end{pmatrix}
\begin{pmatrix}
    X_n\\
    Y_n
\end{pmatrix}
$$

with

$$
M_{\mu_{\alpha}\nu_{\beta}} = \langle 0 | [(\hat{E}_{\mu_{\alpha}})^{\dagger}, \hat{H}, \hat{E}_{\nu_{\beta}}] | 0 \rangle, \qquad
Q_{\mu_{\alpha}\nu_{\beta}} = -\langle 0 | [(\hat{E}_{\mu_{\alpha}})^{\dagger}, \hat{H}, (\hat{E}_{\nu_{\beta}})^{\dagger}] | 0 \rangle,
$$

$$
V_{\mu_{\alpha}\nu_{\beta}} = \langle 0 | [(\hat{E}_{\mu_{\alpha}})^{\dagger}, \hat{E}_{\nu_{\beta}}] | 0 \rangle, \qquad
W_{\mu_{\alpha}\nu_{\beta}} = -\langle 0 | [(\hat{E}_{\mu_{\alpha}})^{\dagger}, (\hat{E}_{\nu_{\beta}})^{\dagger}] | 0 \rangle,
$$

where each corresponding matrix element must be measured on our quantum computer with respect to the corresponding ground state. To learn more, you can read up about excited state calculation with [**this tutorial**](https://qiskit.org/documentation/nature/tutorials/04_excited_states_solvers.html), and about qEOM itself from the [**corresponding paper by Ollitrault et al., 2019**](https://arxiv.org/abs/1910.12890).
###Code
from qiskit_nature.algorithms import QEOM
##############################
# Provide your code here
gsc = GroundStateEigensolver(qubit_converter, vqe_solver)
qeom_excited_state_solver = qeom_excited_states_calculation = QEOM(gsc, 'sd')
qeom_results = qeom_excited_state_solver.solve(es_problem)
##############################
print(qeom_results)
# Check your answer and submit using the following code
from qc_grader import grade_ex2e
grade_ex2e(qeom_results)
###Output
Submitting your answer for 2e. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
Finally, you just need to calculate the band gap or energy gap (which is the minimum amount of energy required by an electron to break free of its ground state into its excited state) by computing the difference of the two sets of energies that you have calculated.
###Code
bandgap = qeom_results.computed_energies[1] - qeom_results.computed_energies[0]
bandgap # in Hartree
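# Optional reference (assumption: unit conversion only, not part of the exercise):
# 1 Hartree = 27.2114 eV, so the gap in eV would be bandgap * 27.2114.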
###Output
_____no_output_____
###Markdown
Running VQE on the cloud using Qiskit Runtime Qiskit Runtime is a new architecture offered by IBM Quantum that streamlines computations requiring many iterations. These experiments will execute significantly faster within this improved hybrid quantum/classical process. Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit Runtime program, is a piece of Python code that takes certain inputs, performs quantum and possibly classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters. To run the VQE using Qiskit Runtime, we only have to make a few changes to the local VQE run, mainly replacing the VQE class with the VQEProgram class. Both follow the same MinimumEigensolver interface and thus share the compute_minimum_eigenvalue method to execute the algorithm and return the same type of result object; only the signature of the initializer differs slightly. We start by choosing the provider with access to the Qiskit Runtime service and the backend to execute the circuits on. For more information about Qiskit Runtime, please refer to the [**VQEProgram**](https://qiskit.org/documentation/partners/qiskit_runtime/tutorials/vqe.htmlRuntime-VQE:-VQEProgram) and [**Leveraging Qiskit Runtime**](https://qiskit.org/documentation/nature/tutorials/07_leveraging_qiskit_runtime.html) tutorials.
###Code
from qc_grader.util import get_challenge_provider
provider = get_challenge_provider()
if provider:
backend = provider.get_backend('ibmq_qasm_simulator')
from qiskit_nature.runtime import VQEProgram
error_threshold = 10 # mHartree
# for live plotting
#pp = ProgressPlot(plot_names=['Energy'],
# line_names=['Runtime VQE', f'Target + {error_threshold}mH', 'Target'])
intermediate_info = {
'nfev': [],
'parameters': [],
'energy': [],
'stddev': []
}
def callback(nfev, parameters, energy, stddev):
intermediate_info['nfev'].append(nfev)
intermediate_info['parameters'].append(parameters)
intermediate_info['energy'].append(energy)
intermediate_info['stddev'].append(stddev)
#def callback(nfev, parameters, energy, stddev):
# intermediate_info['nfev'].append(nfev)
# intermediate_info['parameters'].append(parameters)
# intermediate_info['energy'].append(energy)
# intermediate_info['stddev'].append(stddev)
# pp.update([[energy,exact_energy+error_threshold/1000, exact_energy]])
##############################
# Provide your code here
optimizer = {
'name': 'QN-SPSA', # leverage the Quantum Natural SPSA
# 'name': 'SPSA', # set to ordinary SPSA
'maxiter': 100,
}
measurement_error_mitigation = True
runtime_vqe = VQEProgram(ansatz=ansatz,
optimizer=optimizer,
initial_point=initial_point,
provider=provider,
backend=backend,
shots=1024,
measurement_error_mitigation=measurement_error_mitigation,
callback=callback)
##############################
###Output
_____no_output_____
###Markdown
**Challenge 2f grading** The grading for this exercise is slightly different from the previous exercises.

1. You will first need to use `prepare_ex2f` to submit a runtime job to IBM Quantum (to run on a simulator), using `runtime_vqe (VQEProgram)`, `qubit_converter (QubitConverter)`, `es_problem (ElectronicStructureProblem)`. Depending on the queue, the job can take up to a few minutes to complete. Under the hood, `prepare_ex2f` does the following:

```python
runtime_vqe_groundstate_solver = GroundStateEigensolver(qubit_converter, runtime_vqe)
runtime_vqe_result = runtime_vqe_groundstate_solver.solve(es_problem)
```

2. After the job has completed, you can use `grade_ex2f` to check the answer and submit.
###Code
# Submit a runtime job using the following code
from qc_grader import prepare_ex2f
runtime_job = prepare_ex2f(runtime_vqe, qubit_converter, es_problem)
# Check your answer and submit using the following code
from qc_grader import grade_ex2f
grade_ex2f(runtime_job)
print(runtime_job.result().get("eigenvalue"))
###Output
(-0.7639760092349235+0j)
###Markdown
Congratulations! You have submitted your first Qiskit Runtime program and passed the exercise. But the fun is not over! We have reserved a dedicated quantum system for the quantum challenge. As a bonus exercise (not graded), you can try your hand at submitting a VQE runtime job to a real quantum system!

**Running VQE on a real quantum system (Optional)**

We have reserved a dedicated quantum system [`ibm_perth`](https://quantum-computing.ibm.com/services?services=systems&system=ibm_perth) for this challenge. Please follow the steps below to submit a runtime job on the real quantum system.

1. Update the backend selection to `ibm_perth` and pass it to `runtime_vqe` again:
```python
backend = provider.get_backend('ibm_perth')
runtime_vqe = VQEProgram(...
                         backend=backend,
                         ...)
```
2. Set the `real_device` flag in `prepare_ex2f` to `True`.
3. Run `prepare_ex2f` to submit a runtime job to `ibm_perth`.

Note: Qiskit Runtime speeds up VQE by up to 5 times. However, each runtime job can still take 30 ~ 60 minutes of quantum processor time. Therefore, **the queue time for completing a job can be hours or even days**, depending on how many participants are submitting jobs. To ensure a pleasant experience for all participants, please only submit a job to the real quantum system after trying with these settings using the simulator:

1. Consider using `ParityMapper` and set `two_qubit_reduction=True` to reduce the number of qubits to 2 and make the VQE program converge to the ground state energy faster (with a lower number of iterations).
2. Limit the optimizer option `maxiter` to 100 or less. Use the simulator runs to find an optimal low number of iterations.
3. Verify your runtime program is correct by passing `grade_ex2f` with the simulator as backend.
4. Limit your jobs to only 1 job per participant to allow more participants to try runtime on a real quantum system.

Don't worry if your job takes too long to execute or can't be executed before the challenge ends. This is an optional exercise. You can still pass all challenge exercises and get a digital badge without running a job on the real quantum system.
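Before switching to the real backend, a hedged sketch of the simulator settings suggested above might look like this (the exact values are illustrative, not the ones used for the submitted job):

```python
# Hedged sketch of the suggested pre-hardware settings (illustrative only)
reduced_converter = QubitConverter(ParityMapper(), two_qubit_reduction=True)

optimizer = {
    'name': 'SPSA',
    'maxiter': 100,   # keep the iteration count low, as suggested above
}

# The ansatz would then need to be redefined on the reduced 2-qubit operator
# before rebuilding runtime_vqe and calling prepare_ex2f(...) with the simulator backend.
```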
###Code
backend = provider.get_backend('ibm_perth')
runtime_vqe = VQEProgram(ansatz=ansatz,
optimizer=optimizer,
initial_point=initial_point,
provider=provider,
backend=backend,
shots=1024,
measurement_error_mitigation=measurement_error_mitigation,
callback=callback)
# Please change backend to ibm_perth before running the following code
runtime_job_real_device = prepare_ex2f(runtime_vqe, qubit_converter, es_problem, real_device=True)
print(runtime_job_real_device.result().get("eigenvalue"))
###Output
runtime_job._start_websocket_client:WARNING:2021-11-05 03:41:09,448: An error occurred while streaming results from the server for job c62a6fa0kmh6rm73bpr0:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/qiskit/providers/ibmq/runtime/runtime_job.py", line 328, in _start_websocket_client
self._ws_client.job_results()
File "/opt/conda/lib/python3.8/site-packages/qiskit/providers/ibmq/api/clients/runtime_ws.py", line 74, in job_results
self.stream(url=url, retries=max_retries, backoff_factor=backoff_factor)
File "/opt/conda/lib/python3.8/site-packages/qiskit/providers/ibmq/api/clients/base.py", line 211, in stream
raise WebsocketError(error_message)
qiskit.providers.ibmq.api.exceptions.WebsocketError: 'Max retries exceeded: Failed to establish a websocket connection. Error: Traceback (most recent call last):\n File "/opt/conda/lib/python3.8/site-packages/websocket/_app.py", line 369, in run_forever\n dispatcher.read(self.sock.sock, read, check)\n File "/opt/conda/lib/python3.8/site-packages/websocket/_app.py", line 70, in read\n if not read_callback():\n File "/opt/conda/lib/python3.8/site-packages/websocket/_app.py", line 335, in read\n op_code, frame = self.sock.recv_data_frame(True)\n File "/opt/conda/lib/python3.8/site-packages/websocket/_core.py", line 396, in recv_data_frame\n frame = self.recv_frame()\n File "/opt/conda/lib/python3.8/site-packages/websocket/_core.py", line 435, in recv_frame\n return self.frame_buffer.recv_frame()\n File "/opt/conda/lib/python3.8/site-packages/websocket/_abnf.py", line 337, in recv_frame\n self.recv_header()\n File "/opt/conda/lib/python3.8/site-packages/websocket/_abnf.py", line 293, in recv_header\n header = self.recv_strict(2)\n File "/opt/conda/lib/python3.8/site-packages/websocket/_abnf.py", line 372, in recv_strict\n bytes_ = self.recv(min(16384, shortage))\n File "/opt/conda/lib/python3.8/site-packages/websocket/_core.py", line 519, in _recv\n return recv(self.sock, bufsize)\n File "/opt/conda/lib/python3.8/site-packages/websocket/_socket.py", line 125, in recv\n raise WebSocketConnectionClosedException(\nwebsocket._exceptions.WebSocketConnectionClosedException: Connection to remote host was lost.\n'
|
hcds-final-project.ipynb | ###Markdown
Data sources
+ The RAW_us_confirmed_cases.csv file from the Kaggle repository of Johns Hopkins University COVID-19 data: https://www.kaggle.com/antgoldbloom/covid19-data-from-john-hopkins-university?select=RAW_us_confirmed_cases.csv
+ The CDC dataset of masking mandates by county: https://data.cdc.gov/Policy-Surveillance/U-S-State-and-Territorial-Public-Mask-Mandates-Fro/62d6-pm5i
+ The New York Times mask compliance survey data: https://github.com/nytimes/covid-19-data/tree/master/mask-use
+ CDC's COVID Data Tracker, which provides COVID-19 vaccination data in the United States: https://www.cdc.gov/coronavirus/2019-ncov/vaccines/distributing/reporting-counties.html
###Code
# load US confirmed cases data
df_us_confirmed_case = pd.read_csv('RAW_us_confirmed_cases.csv')
df_us_confirmed_case
# load US marks mandates data
df_us_mask_mandate = pd.read_csv('U.S._State_and_Territorial_Public_Mask_Mandates_From_April_10__2020_through_August_15__2021_by_County_by_Day.csv')
df_us_mask_mandate
# Load mark use survey data
df_mask_use_by_county = pd.read_csv('mask-use-by-county.csv')
df_mask_use_by_county
###Output
_____no_output_____
###Markdown
Data processing
+ Clean the data and drop unneeded columns
+ Melt the confirmed case data so each row represents the confirmed cases for one day
+ Standardize the FIPS column across the three datasets
+ Filter down to Palm Beach, FL data only (FIPS == '12099')
###Code
state = 'Florida'
state_abbr = 'FL'
county = 'Palm Beach'
FIPS = 12099
# clean up data
# drop unneeded columns
df_us_confirmed_case_dropped = df_us_confirmed_case.drop(columns=['UID', 'iso2', 'iso3', 'code3', 'Country_Region', 'Lat', 'Long_', 'Combined_Key'])
df_us_confirmed_case_dropped=df_us_confirmed_case_dropped.fillna(0)
df_us_confirmed_case_dropped['FIPS'] = df_us_confirmed_case_dropped.FIPS.apply(lambda x: "{:05d}".format(int(float(x))))
df_us_confirmed_case_dropped
# Melt the confirmed case data so each row represents the confirmed cases for one day
id_vars = [
'Province_State',
'Admin2',
'FIPS',
]
df_us_confirmed_case_transformed =pd.melt(df_us_confirmed_case_dropped, id_vars = id_vars, var_name ='date', value_name = "cases")
df_us_confirmed_case_transformed['date'] = pd.to_datetime(df_us_confirmed_case_transformed['date'])
df_us_confirmed_case_transformed
# Filter to only palm beach data
df_palm_beach_confirmed_case_transformed = df_us_confirmed_case_transformed[(df_us_confirmed_case_transformed['Admin2'] == county) & (df_us_confirmed_case_transformed['Province_State'] == state)].reset_index(drop=True)
df_palm_beach_confirmed_case_transformed
# create FIPS column for easier join
df_us_mask_mandate['FIPS_State'] = df_us_mask_mandate.FIPS_State.apply(lambda x: "{:02d}".format(int(x)))
df_us_mask_mandate['FIPS_County'] = df_us_mask_mandate.FIPS_County.apply(lambda x: "{:03d}".format(int(x)))
df_us_mask_mandate['FIPS'] = df_us_mask_mandate['FIPS_State'] + df_us_mask_mandate['FIPS_County']
df_us_mask_mandate
# drop unneeded columns
df_us_mask_mandate_dropped = df_us_mask_mandate.drop(columns=['FIPS_State', 'FIPS_County', 'order_code', 'Source_of_Action', 'URL', 'Citation'])
df_us_mask_mandate_dropped
# Filter to get palm beach mandate data
df_palm_beach_mask_mandate = df_us_mask_mandate_dropped[df_us_mask_mandate['County_Name'] == 'Palm Beach County'].reset_index(drop =True)
df_palm_beach_mask_mandate
# Palm Beach shows all mandate data as NaN, indicating that no mask mandate was in place
df_palm_beach_mask_mandate.Face_Masks_Required_in_Public.unique()
# Filter to get Palm Beach mask usage survey data
df_mask_use_palm_beach = df_mask_use_by_county[df_mask_use_by_county['COUNTYFP'] == FIPS].reset_index(drop =True)
df_mask_use_palm_beach
# Transform the mask usage data for better visualization
df_mask_use_palm_beach_transformed = df_mask_use_palm_beach.drop(columns=['COUNTYFP'])
df_mask_use_palm_beach_transformed =pd.melt(df_mask_use_palm_beach_transformed, var_name ='Response', value_name = "Proportion")
df_mask_use_palm_beach_transformed
# Load vaccine data
df_vaccine = pd.read_csv('COVID-19_Vaccinations_in_the_United_States_County.csv')
df_vaccine
df_vaccine_palm_beach = df_vaccine[df_vaccine['FIPS'] == '12099'].reset_index(drop =True)
df_vaccine_palm_beach['Date'] = pd.to_datetime(df_vaccine_palm_beach['Date'])
df_vaccine_palm_beach
# Replace NA with zeros
df_vaccine_palm_beach=df_vaccine_palm_beach.fillna(0)
df_vaccine_palm_beach
###Output
_____no_output_____
###Markdown
Data Visualization
+ Estimated prevalence of mask wearing in Palm Beach County, FL
+ Time series of accumulated COVID cases, Palm Beach County, FL
+ Daily new COVID cases & 7-day rolling average, Palm Beach County, FL
+ 7-day rolling average infection rate (daily new cases / population), Palm Beach County, FL
+ 7-day rolling average infection rate diff, Palm Beach County, FL
###Code
# Visualize the esitmated prevalence of mask wearing in Palm Beach County
fig, ax = plt.subplots(figsize=(15,10))
ax.bar(df_mask_use_palm_beach_transformed['Response'], df_mask_use_palm_beach_transformed['Proportion'], color ='C0')
ax.set_title("Estimated Prevalence of Mask Wearing in Palm Beach County, FL")
ax.set_ylabel("Proportion")
plt.show()
# accumulated covid cases
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['cases'], color='C0')
ax.set_title("Accumulate Covid Cases, Palm Beach Country, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Number of Covid Cases")
plt.show()
# daily new covid cases
df_palm_beach_confirmed_case_transformed['new_cases'] = df_palm_beach_confirmed_case_transformed['cases'].diff()
df_palm_beach_confirmed_case_transformed['new_cases_moving_average_7_days'] = df_palm_beach_confirmed_case_transformed['new_cases'].rolling(window=7).mean().round()
df_palm_beach_confirmed_case_transformed
# daily new covid cases
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['new_cases'], color='C0')
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['new_cases_moving_average_7_days'], color='C1')
line0 = lines.Line2D([0], [0], label='Daily New COVID Cases', color='C0')
line1 = lines.Line2D([0], [0], label='7 days rolling average New COVID Cases', color='C1')
plt.legend(handles=[line0,line1])
ax.set_title("Daily New Covid Cases & 7 days rolling average, Palm Beach Country, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Number of Covid Cases")
plt.show()
# infection rate = daily new case / population
# Palm Beach Population is 1492191
population = 1492191
df_palm_beach_confirmed_case_transformed['daily_infection_rate'] = df_palm_beach_confirmed_case_transformed['new_cases'].apply(lambda x: x * 1.0 / population)
df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'] = df_palm_beach_confirmed_case_transformed['new_cases_moving_average_7_days'].apply(lambda x: x * 1.0 / population)
df_palm_beach_confirmed_case_transformed
# daily new covid cases
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'], color='C1')
line1 = lines.Line2D([0], [0], label='7 days rolling average infection rate', color='C1')
plt.legend(handles=[line1])
ax.set_title("7 days rolling average infection rate, Palm Beach Country, FL (Daily new cases / Population)")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate")
plt.show()
# Create chart of daily infection rate diff
df_palm_beach_confirmed_case_transformed['daily_infection_rate_diff'] =df_palm_beach_confirmed_case_transformed['daily_infection_rate'].diff()
df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days_diff'] =df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'].diff()
df_palm_beach_confirmed_case_transformed
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days_diff'], color='C1')
line1 = lines.Line2D([0], [0], label='7 days rolling average infection rate Diff', color='C1')
plt.legend(handles=[line1])
ax.set_title("7 days rolling average infection rate diff, Palm Beach Country, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate Diff")
plt.show()
###Output
_____no_output_____
###Markdown
Data Analysis
+ As we can see, Palm Beach's infection rate does not change much, especially in the 7-day rolling average visualization
+ Since Palm Beach County doesn't have a mask mandate policy, try to find another county with similar mask-wearing data but with a mask mandate policy
+ Use the mask-wearing survey data to look for another county with a similar mask-wearing % (a possible search is sketched below)
+ Found a similar county (Spotsylvania County, VA) which has a similar mask-wearing %, especially "ALWAYS == 0.785"
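One possible way to search for such a county programmatically is sketched below; the 0.005 tolerance is an arbitrary assumption, and the dataframes are the ones already loaded above.

```python
# Hedged sketch: counties whose 'ALWAYS' share is close to Palm Beach's (~0.785)
target_always = df_mask_use_palm_beach['ALWAYS'].iloc[0]
similar_counties = df_mask_use_by_county[
    (df_mask_use_by_county['ALWAYS'] - target_always).abs() < 0.005
]
similar_counties.head()
```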
###Code
# Spotsylvania County, VA (FIPS_other = 51177)
FIPS_other = 51177
df_mask_use_other = df_mask_use_by_county[df_mask_use_by_county['COUNTYFP'] == FIPS_other].reset_index(drop =True)
df_mask_use_other
# Repeat the same data processing as Palm Beach, FL
df_spotsylvania_mask_mandate = df_us_mask_mandate_dropped[df_us_mask_mandate['FIPS'] == "{:05d}".format(FIPS_other)].reset_index(drop =True)
df_spotsylvania_mask_mandate
df_spotsylvania_mask_mandate.Face_Masks_Required_in_Public.unique()
df_spotsylvania_mask_mandate_yes = df_spotsylvania_mask_mandate[df_spotsylvania_mask_mandate['Face_Masks_Required_in_Public'] == 'Yes'].reset_index(drop =True)
df_spotsylvania_mask_mandate_yes
# Get Spotsylvania confirmed case data
df_spotsylvania_confirmed_case_transformed = df_us_confirmed_case_transformed[(df_us_confirmed_case_transformed['FIPS'] == "{:05d}".format(FIPS_other))].reset_index(drop=True)
df_spotsylvania_confirmed_case_transformed
# Spotsylvania County's population 136215
Population_other = 136215
# Get Spotsylvania new case and daily infection rate 7 day moving average metrics
df_spotsylvania_confirmed_case_transformed['new_cases'] = df_spotsylvania_confirmed_case_transformed['cases'].diff()
df_spotsylvania_confirmed_case_transformed['new_cases_moving_average_7_days'] = df_spotsylvania_confirmed_case_transformed['new_cases'].rolling(window=7).mean().round()
df_spotsylvania_confirmed_case_transformed['daily_infection_rate'] = df_spotsylvania_confirmed_case_transformed['new_cases'].apply(lambda x: x * 1.0 / Population_other)
df_spotsylvania_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'] = df_spotsylvania_confirmed_case_transformed['new_cases_moving_average_7_days'].apply(lambda x: x * 1.0 / Population_other)
df_spotsylvania_confirmed_case_transformed['daily_infection_rate_diff'] =df_spotsylvania_confirmed_case_transformed['daily_infection_rate'].diff()
df_spotsylvania_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days_diff'] =df_spotsylvania_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'].diff()
df_spotsylvania_confirmed_case_transformed
df_palm_beach_confirmed_case_transformed.columns
# Join the Palm beach data and Spotsylvania data for plot chart to compare
joine_df=df_palm_beach_confirmed_case_transformed.merge(df_spotsylvania_confirmed_case_transformed, left_on='date', right_on='date',
suffixes=('_palm_beach', '_spotsylvania'))
joine_df.columns
# Chart for 7 days rolling average infection rate diff between Palm beach and Spotsylvania
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_diff_palm_beach'], color='C0')
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_diff_spotsylvania'], color='C1')
line0 = lines.Line2D([0], [0], label='Change in 7 days rolling average infection rate Palm Beach,FL', color='C0')
line1 = lines.Line2D([0], [0], label='Change in 7 days rolling average infection rate Spotsylvania, VA', color='C1')
span = ax.axvspan(date2num(datetime(2020,5,29)), date2num(datetime(2021,5,14)),color="C2", label = 'mask mandate in Spotsylvania County')
plt.legend(handles=[
line0,
line1,
span])
ax.set_title("Change in 7 days rolling average infection rate diff, Palm Beach, FL vs Spotsylvania, VA")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate Diff")
plt.show()
# Chart of 7 days roliing average infection rate between Palm Beach and Spotsylavania
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_palm_beach'], color='C0')
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_spotsylvania'], color='C1')
line0 = lines.Line2D([0], [0], label='7 days rolling average infection rate Palm Beach', color='C0')
line1 = lines.Line2D([0], [0], label='7 days rolling average infection rate Spotsylvania', color='C1')
span = ax.axvspan(date2num(datetime(2020,5,29)), date2num(datetime(2021,5,14)),color="C2", label = 'mask mandate in Spotsylvania County')
plt.legend(handles=[
line0,
line1,
span])
ax.set_title("7 days rolling average infection rate, Palm Beach, FL vs Spotsylvania, VA")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate")
plt.show()
df_join_vaccine_data =df_palm_beach_confirmed_case_transformed.merge(df_vaccine_palm_beach, how="left", left_on='date', right_on='Date',
suffixes=('', '_vaccine'))
df_join_vaccine_data=df_join_vaccine_data.fillna(0)
df_join_vaccine_data
# Chart for 7 days moving average infection rate and vaccine rate at Palm Beach
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_join_vaccine_data['date'], df_join_vaccine_data['daily_infection_rate_new_cases_moving_average_7_days'], color='C0')
ax.set_title("7 days rolling average infection rate and Vaccine rate at Palm Beach, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate", color = 'C0')
ax.tick_params(axis='y', labelcolor='C0')
ax2 = ax.twinx() # instantiate a second axes that shares the same x-axis
ax2.plot(df_join_vaccine_data['date'], df_join_vaccine_data['Series_Complete_Pop_Pct'], color='C1')
ax2.set_ylabel("Vaccine Rate", color = 'C1')
ax2.tick_params(axis='y', labelcolor='C1')
line0 = lines.Line2D([0], [0], label='7 days rolling average infection rate Palm Beach', color='C0')
line1 = lines.Line2D([0], [0], label='Vaccine rate Palm Beach', color='C1')
plt.legend(handles=[
line0,
line1])
plt.show()
###Output
_____no_output_____
###Markdown
Research Questions: Based on the data visualization "7 days rolling average infection rate and Vaccine rate at Palm Beach, FL", we can see that when the COVID-19 vaccine became available in 2021-01, the infection rate went down until the Delta variant came into the picture around 2021-07.

NULL Hypothesis: There is no correlation between the vaccine rate and the 7-day rolling average daily infection rate.

Considering the Delta variant impact:
+ Jul 2021: the DELTA variant started https://www.newsweek.com/first-us-covid-delta-variant-cases-how-did-it-mutate-1617871
+ Other impact factors: summer break, nationwide reopening, etc.

Therefore, I split the data into 3 time periods to find the correlation between infection rate and vaccine rate.
+ Before 2021-01: before the vaccine started
+ 2021-01 to 2021-06: vaccine rate increases and infection rate decreases
+ After 2021-06: the vaccine didn't help a lot against the Delta variant
###Code
# check correlation before vaccine available
before_vacacine = df_join_vaccine_data.loc[(df_join_vaccine_data['date'] > '2020-1-21') & (df_join_vaccine_data['date'] < '2020-12-31')]
filterd_df_before_vacacine = before_vacacine[['daily_infection_rate_new_cases_moving_average_7_days','Series_Complete_Pop_Pct']]
filterd_df_before_vacacine
# Find correlation for 7 day moving average infection rate and vaccine rate before Vaccine started
correlations_before_vacacine = filterd_df_before_vacacine.corr()
correlations_before_vacacine
# Use heatmap to visualize
sns.heatmap(correlations_before_vacacine)
plt.show()
# as expected, there is no correlation
# Find correlation for 7 day rolling average infection rate and vaccine rate after the vaccine became available and before the DELTA spike
after_vacacine = df_join_vaccine_data.loc[(df_join_vaccine_data['date'] > '2021-1-1') & (df_join_vaccine_data['date'] < '2021-6-30')]
filterd_df_after_vacacine = after_vacacine[['daily_infection_rate_new_cases_moving_average_7_days','Series_Complete_Pop_Pct']]
filterd_df_after_vacacine
correlations_after_vacacine = filterd_df_after_vacacine.corr()
correlations_after_vacacine
sns.heatmap(correlations_after_vacacine)
plt.show()
# we can see there's very strong negative correlation between Infection Rate and Vaccine Rate
# check correlation after DELTA variant 2021-07-01
after_delta = df_join_vaccine_data.loc[(df_join_vaccine_data['date'] > '2021-7-1') & (df_join_vaccine_data['date'] < '2022-1-1')]
filterd_df_after_delta = after_delta[['daily_infection_rate_new_cases_moving_average_7_days','Series_Complete_Pop_Pct']]
filterd_df_after_delta
# check correlation after delta
correlations_after_delta = filterd_df_after_delta.corr()
correlations_after_delta
sns.heatmap(correlations_after_delta)
plt.show()
# we can see a very slight negative correlation between Infection Rate and Vaccine Rate after the Delta variant
# Use Linear regression to find the correlation bewteen daily infection rate 7 days moving average and vaccine rate based on
# data between Vaccine available and before DETLA started
# The feature is the vaccine rate, the predicted value is the daily infection rate of new case moving average 7 days
after_vac = df_join_vaccine_data.loc[(df_join_vaccine_data['date'] > '2021-1-1') & (df_join_vaccine_data['date'] < '2021-6-30')]
vaccine_rate = after_vac['Series_Complete_Pop_Pct']
# Transform the 1d array so it become a single feature input
vaccine_rate = vaccine_rate.to_numpy().T.reshape(-1, 1)
infection_rate = after_vac['daily_infection_rate_new_cases_moving_average_7_days']
# Convert the infection rate to percentage and make it an 1D array
infection_rate = infection_rate.to_numpy().T * 100
# Train a linear regression model using the vaccine rate and infection rate and get the model's coefficient
model = LinearRegression().fit(vaccine_rate, infection_rate)
model.coef_
# negative coefficient means that if the vaccination rate increases, the infection rate decreases
# Our NULL Hypothesis: There is no correlation between vaccine rate and infection rate
# Use Ordinary Least Squares by calling sm.OLS to get the p_value
x = vaccine_rate
y = infection_rate
x2 = sm.add_constant(x)
est = sm.OLS(y, x2)
model = est.fit()
print(model.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.851
Model: OLS Adj. R-squared: 0.850
Method: Least Squares F-statistic: 1009.
Date: Sun, 12 Dec 2021 Prob (F-statistic): 4.97e-75
Time: 21:56:25 Log-Likelihood: 662.16
No. Observations: 179 AIC: -1320.
Df Residuals: 177 BIC: -1314.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0451 0.001 59.901 0.000 0.044 0.047
x1 -0.0009 2.78e-05 -31.767 0.000 -0.001 -0.001
==============================================================================
Omnibus: 14.528 Durbin-Watson: 0.088
Prob(Omnibus): 0.001 Jarque-Bera (JB): 16.479
Skew: 0.606 Prob(JB): 0.000264
Kurtosis: 3.859 Cond. No. 45.2
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
COVID-19 demographic disparities Human Centered Data Science - Final Project

Project Motivation

The global health crisis of 2020, COVID-19, is spreading human suffering, killing people, impacting businesses and changing people's lives forever. It isn't only a disease; it is a human, economic and social crisis. The coronavirus disease (COVID-19) is attacking societies at their core. Though the COVID-19 pandemic affects all segments of the population, it is observed that there is significant demographic disparity in COVID-19 cases and their impact. COVID-19 seems particularly detrimental to members of those social groups in the most vulnerable situations, and continues to affect populations including people living in poverty, older persons, persons with disabilities, youth, and indigenous peoples. Early evidence indicates that the health and economic impacts of the virus are being borne disproportionately by poor people. As part of research conducted by the APM Research Lab, and as per a Commonwealth Fund analysis, it is highlighted that the "color of coronavirus" is disproportionately Black and Brown, indicating massive disparities in healthcare outcomes amongst the different races and ethnicities in the United States. Disparities in COVID-19 deaths for people of color persist across age groups, and people of color experience more deaths among younger people relative to White individuals. I am personally deeply saddened by the COVID-19 outbreak itself; however, it is more disturbing to see that certain segments of society have to suffer more than usual, adding to their existing difficulties. As a student of Human Centered Data Science, I am intrigued to study the COVID-19 cases and their impact on various segments of the population and analyze whether there are significant demographic disparities in this pandemic. This was a revealing analysis project to study how the COVID-19 impact varies across segments of the population defined by ethnicity, age group, income group and geographic location. I am interested in analyzing COVID-19 case data by applying Human Centered Data Science practices so as to come up with better answers about any significant disparities in COVID-19 impact. This analysis will be useful to raise awareness in society as well as governmental agencies so that appropriate actions can be taken to minimize the COVID-19 impact on all segments of society.

Dataset

There are various sources of data available on the internet to perform this study. I found one reliable source of data on the 'Centers for Disease Control and Prevention' website [cdc.gov](https://www.cdc.gov/). This data is maintained authoritatively by the CDC, is updated frequently, is publicly available, and undergoes stringent data quality assurance routines. The COVID-19 case surveillance system database includes patient-level data reported by U.S. states and autonomous reporting entities, including New York City and the District of Columbia, as well as U.S. territories and states.
These deidentified data include demographic characteristics, exposure history, disease severity indicators and outcomes.

Dataset Links: [COVID-19 Case Surveillance data](https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data/vbim-akqf/data) and [COVID-19 deaths and populations](https://data.cdc.gov/NCHS/Distribution-of-COVID-19-deaths-and-populations-by/jwta-jxbg/data). Terms of Use for these datasets are documented in detail at [COVID-19 Case Surveillance Public Use Data](https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data/vbim-akqf/data) and [Public Domain U.S. Government](https://www.usa.gov/government-works). The Case Surveillance Task Force and Surveillance Review and Response Group (SRRG) within CDC's COVID-19 Response provide stewardship for datasets that support the public health community's access to COVID-19 data while protecting patient privacy.

Research Questions: This project focuses its analysis on COVID-19 case surveillance data to understand demographic disparities in COVID-19 cases and their varying degree of impact on different segments of society.
- Are people from specific age groups more prone to COVID-19 infection or death than other age groups?
- Is the COVID-19 mortality rate more prevalent in specific race and ethnicity (combined) segments than others?

Related Work

With the widespread devastating impacts of COVID-19, there is a significant amount of research being conducted throughout the world. The preliminary analysis available on the official CDC website ([cdc.gov](https://www.cdc.gov/nchs/nvss/vsrr/covid19/health_disparities.htm)) shows that the COVID-19 death ratio is significantly higher in the Non-Hispanic White population than others. The latest research "[THE COLOR OF CORONAVIRUS](https://www.apmresearchlab.org/covid/deaths-by-race:~:text=The%20COVID%2D19%20death%20rate,per%20100%2C000%2C%20as%20shown%20below.): COVID-19 DEATHS BY RACE AND ETHNICITY IN THE U.S." published by the APM Research Lab reveals that Black and Indigenous Americans continue to suffer the greatest loss of life, with both groups now experiencing a COVID-19 death toll exceeding 1 in 1,000 nationally. In the [Morbidity and Mortality Weekly Report (MMWR)](https://www.cdc.gov/mmwr/volumes/69/wr/mm6942e1.htm), the analysis of 114,411 COVID-19–associated deaths reported to the National Vital Statistics System during May–August 2020 found that 51.3% of decedents were non-Hispanic White, 24.2% were Hispanic or Latino (Hispanic), and 18.7% were non-Hispanic Black. The percentage of Hispanic decedents increased from 16.3% in May to 26.4% in August. Several other studies have been published on this subject and have helped me to perform better research work on this project.

Methodology

For this COVID-19 demographic disparities analysis, given the large quantitative data and multiple demographic dimensions, I found it more interesting to perform visual exploratory data analysis (EDA) using the rich visualization packages available in the Python programming language. I chose visual exploratory data analysis as it gives a more objective view of the data in the most intuitive way and is easier to understand by anyone else reproducing it. To maintain the privacy of individuals in the study population, the data is anonymized. Given the human centered aspect of the analysis, I slice and dice the analysis along various human dimensions such as age group, race and ethnicity, etc.
Through this multi-dimensional analysis, by applying human centered data analysis principles, I attempted to understand and address any bias in the data used for this study. All these analysis methodologies, research from related work and feedback from peers helped me to answer my research questions.

Data Acquisition

First we will import the Python modules required for our analysis.
###Code
import urllib.request
import requests
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import rcParams
import numpy as np
import pandas as pd
import os
from functools import reduce
from datetime import datetime
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Let's first acquire and ingest the data from the CDC website. We will define a download_data utility function which can be reused.
###Code
# Utility Function to download data from website
def download_data(url,filename):
response = urllib.request.urlretrieve(url, filename)
return response
###Output
_____no_output_____
###Markdown
Here we download the data from the CDC website and store it locally in the data folder. To avoid downloading the large dataset again, we can either reuse this local copy or download it fresh. As the data source is updated weekly, I highly recommend downloading the latest data in the future.
###Code
# Download case survelliance and mortality data
# Disable below lines when performing analysis on locally downloaded data as it is time consuming
#download_surveillance = download_data('https://data.cdc.gov/api/views/vbim-akqf/rows.csv?accessType=DOWNLOAD',
# 'data/COVID_19_Case_Surveillance_Public_Use_Data.csv')
download_mortality = download_data('https://data.cdc.gov/api/views/jwta-jxbg/rows.csv?accessType=DOWNLOAD',
'data/COVID_19_deaths_population.csv')
###Output
_____no_output_____
###Markdown
Now we will read data from csv files and store it in pandas dataframe.
###Code
# Pandas read_csv method, first row contains headers
dfSurv = pd.read_csv('data/COVID_19_Case_Surveillance_Public_Use_Data.csv', sep=',', header=0)
dfSurv.head()
###Output
_____no_output_____
###Markdown
Data Analysis As discussed in the methodology section, we are primarily using visual exploratory data analysis to derive insights from the dataset and find answers to our research questions. Let's start by analyzing COVID-19 cases by age group.
###Code
cases_by_age_group = dfSurv['age_group'].value_counts()
plt.figure(figsize=(16,7))
sns.barplot(cases_by_age_group.index, cases_by_age_group.values,
order=cases_by_age_group.index, alpha=0.9)
plt.xticks(rotation=15)
plt.title('COVID-19 cases grouped by Age-Group', fontsize=14)
plt.ylabel('Number of cases', fontsize=12)
plt.xlabel('Age-Group', fontsize=12)
plt.savefig('plots/COVID-19 cases grouped by Age-Group.png')
plt.show()
###Output
_____no_output_____
###Markdown
As observed in this visualization, people in the age groups '20-29 Years', '30-39 Years' and '40-49 Years' are the top three groups contracting COVID-19 infection. This is probably because these age groups have to go out more often than others and possibly follow social distancing guidelines less strictly, given their lower chances of serious health complications. Now let's visualize the COVID-19 deaths by age group.
###Code
dfdeath = dfSurv[dfSurv.death_yn=='Yes']
deaths_by_age_group = dfdeath['age_group'].value_counts(normalize=True)
plt.figure(figsize=(16,7))
sns.barplot(deaths_by_age_group.index, deaths_by_age_group.values*100,
order=deaths_by_age_group.index, alpha=0.9)
plt.xticks(rotation=15)
plt.title('COVID-19 death rate grouped by Age-Group', fontsize=14)
plt.ylabel('Percentage of COVID-19 deaths (%)', fontsize=12)
plt.xlabel('Age-Group', fontsize=12)
plt.savefig('plots/COVID-19 death rate grouped by Age-Group.png')
plt.show()
###Output
_____no_output_____
###Markdown
As observed in the visual above, the age groups '80+ Years', '70-79 Years' and '60-69 Years' are the top 3 age groups with the highest COVID-19 mortality rate. This most probably happens due to co-morbidities and lower immunity in these age groups. Now let's analyze mortalities by race and ethnicity.
###Code
dfdeath = dfSurv[dfSurv.death_yn=='Yes']
deaths_by_age_group = dfdeath['Race and ethnicity (combined)'].value_counts(normalize=True)
plt.figure(figsize=(16,7))
sns.barplot(deaths_by_age_group.index, deaths_by_age_group.values*100,
order=deaths_by_age_group.index, alpha=0.9)
plt.xticks(rotation=15)
plt.title('COVID-19 deaths grouped by Race and ethnicity (combined)', fontsize=14)
plt.ylabel('Percentage of COVID-19 deaths (%)', fontsize=12)
plt.xlabel('Race and ethnicity (combined)', fontsize=12)
plt.savefig('plots/COVID-19 deaths grouped by Race and ethnicity.png')
plt.show()
###Output
_____no_output_____
###Markdown
This visualization shows that the 'Non-Hispanic White' group has the highest mortality compared to other race and ethnicity groups. However, please note the non-Hispanic White group also has the highest population in the United States. Hence it is better to visualize the data alongside the distribution of population. For this analysis we will use the COVID-19 deaths and populations data maintained by the CDC. To get a more accurate understanding we will use the weighted distribution of population and age-standardized COVID-19 death rates.
###Code
# Read data and show first 5 rows
dfMort = pd.read_csv('data/COVID_19_deaths_population.csv', sep=',', header=0)
dfMort.head()
###Output
_____no_output_____
###Markdown
Now we will explore distribution of COVID-19 deaths across all race and ethnicity groups as well as subgroup it by age-groups.
###Code
# Filter out aggregated age unadjusted and age standardized data, keep granular age-group data
dfMortUnAdjusted = dfMort[((dfMort['AgeGroup']!= 'All ages, standardized')
& (dfMort['AgeGroup']!= 'All ages, unadjusted')
& (dfMort['State'] != 'United States'))]
plt.figure(figsize=(20,30))
ax = sns.violinplot(x="Distribution of COVID-19 deaths (%)", y="Race/Hispanic origin", data=dfMortUnAdjusted,
hue='AgeGroup', legend_out=False)
plt.title('Distribution of COVID-19 death percentage by Race and Ethnicity and Age Group', fontsize=14)
plt.ylabel('Race and Ethnicity', fontsize=12)
plt.xlabel('Distribution of COVID-19 deaths (%)', fontsize=12)
plt.legend(loc='upper right')
plt.savefig('plots/Distribution of COVID-19 death percentage by Race and Ethnicity and Age Group.png')
plt.show()
###Output
_____no_output_____
###Markdown
As we observed from this plot, supplemented by the earlier analysis, it is evident that COVID-19 affects age groups differently. We will adjust mortality rates for differences in the age distribution of populations, a common and important tool that health researchers use to compare diseases that affect age groups differently. For ease of comparison, we will plot the age-unadjusted COVID-19 death rate followed by the age-adjusted COVID-19 death rate, both in conjunction with the weighted population distribution (%). Let's first perform the data filtering and transformation required to slice the data for the age-unadjusted COVID-19 death rate.
###Code
# Filter out data, keep national level age-standardized data
df_age_unadjusted = dfMort[(dfMort['AgeGroup']== 'All ages, unadjusted') & (dfMort['State'] == 'United States')]
# Subset required columns only
df_age_unadjusted = df_age_unadjusted[['Race/Hispanic origin', 'Distribution of COVID-19 deaths (%)',
'Unweighted distribution of population (%)']]
# Melt up dataframe to make it compatible for plotting multiple bars chart
df = pd.melt(df_age_unadjusted, id_vars="Race/Hispanic origin",
var_name="Distribution", value_name="Percentage (%)")
###Output
_____no_output_____
###Markdown
We will now plot Age unadjusted COVID-19 death percentage and Unweighted population distribution by Race and Ethnicity.
###Code
# Use Seaborn catplot, set x, y, hue, height and aspect etc.
sns.catplot(x='Race/Hispanic origin', y='Percentage (%)', hue='Distribution', data=df,
kind='bar', height=7, aspect=16/7, legend_out=False)
plt.xticks(rotation=7)
plt.title('Age unadjusted COVID-19 death percentage and Unweighted population distribution by Race and Ethnicity', fontsize=14)
plt.ylabel('Percentage (%)', fontsize=12)
plt.xlabel('Race and Ethnicity', fontsize=12)
plt.legend(loc='upper center')
plt.savefig('plots/Age unadjusted COVID-19 death percentage and Unweighted population distribution by Race and Ethnicity.png')
plt.show()
###Output
_____no_output_____
###Markdown
Let's now perform data filtering, and transformation required to slice data for age adjusted COVID-19 death rate.
###Code
# Filter out data, keep national level age-standardized data
df_age_standardized = dfMort[(dfMort['AgeGroup']== 'All ages, standardized') & (dfMort['State'] == 'United States')]
# Subset required columns only
df_age_standardized = df_age_standardized[['Race/Hispanic origin', 'Distribution of COVID-19 deaths (%)',
'Weighted distribution of population (%)']]
# Melt up dataframe to make it compatible for plotting multiple bars chart
df = pd.melt(df_age_standardized, id_vars="Race/Hispanic origin",
var_name="Distribution", value_name="Percentage (%)")
###Output
_____no_output_____
###Markdown
We will now plot Age adjusted COVID-19 death percentage and Weighted population distribution by Race and Ethnicity.
###Code
# Use Seaborn catplot, set x, y, hue, height and aspect etc.
sns.catplot(x='Race/Hispanic origin', y='Percentage (%)', hue='Distribution', data=df,
kind='bar', height=7, aspect=16/7, legend_out=False)
plt.xticks(rotation=7)
plt.title('Age-adjusted COVID-19 death percentage and Weighted population distribution by Race and Ethnicity', fontsize=14)
plt.ylabel('Percentage (%)', fontsize=12)
plt.xlabel('Race and Ethnicity', fontsize=12)
plt.legend(loc='upper center')
plt.savefig('plots/Age-adjusted COVID-19 death percentage and Weighted population distribution by Race and Ethnicity.png')
plt.show()
###Output
_____no_output_____ |
notebooks/m04_v01_store_sales_prediction.ipynb | ###Markdown
0.0. IMPORTS
###Code
import json
import math
# import pylab
# import random
import pickle
import requests
import datetime
import warnings
import inflection
import numpy as np
import pandas as pd
import seaborn as sns
import xgboost as xgb
from scipy import stats as ss
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler, LabelEncoder, RobustScaler
from boruta import BorutaPy
from matplotlib import pyplot as plt
from matplotlib import gridspec
from IPython.display import Image
from IPython.core.display import HTML
from IPython.core.interactiveshell import InteractiveShell
%pylab inline
%matplotlib inline
plt.style.use( 'bmh' )
plt.rcParams['figure.figsize'] = [25, 12]
plt.rcParams['font.size'] = 24
display( HTML( '<style>.container { width:100% !important; }</style>') )
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option( 'display.expand_frame_repr', False )
sns.set();
###Output
_____no_output_____
###Markdown
0.1 Helper Functions
###Code
def cramer_v(x, y):
    # Bias-corrected Cramér's V: strength of association between two categorical variables
    cm = pd.crosstab( x, y ).to_numpy()          # contingency matrix
    n = cm.sum()                                 # total number of observations
    r, k = cm.shape                              # number of rows and columns
    chi2 = ss.chi2_contingency( cm )[0]          # chi-squared statistic
    chi2corr = max(0, chi2 - (k-1)*(r-1)/(n-1))  # bias-corrected chi-squared
    kcorr = k - (k-1)**2/(n-1)                   # bias-corrected number of columns
    rcorr = r - (r-1)**2/(n-1)                   # bias-corrected number of rows
    return np.sqrt( (chi2corr/n) / (min(kcorr-1, rcorr-1)) )
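# Example usage (illustrative): association between two categorical columns,
# e.g. cramer_v( df1['state_holiday'], df1['store_type'] ) once df1 is built below.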
def jupyter_settings():
%matplotlib inline
%pylab inline
plt.style.use( 'bmh' )
plt.rcParams['figure.figsize'] = [25, 12]
plt.rcParams['font.size'] = 24
display( HTML( '<style>.container { width:100% !important; }</style>'))
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option( 'display.expand_frame_repr', False)
sns.set()
jupyter_settings()
###Output
_____no_output_____
###Markdown
0.2 Loading data
###Code
df_sales_raw = pd.read_csv('data/train.csv', low_memory=False)
df_store_raw = pd.read_csv('data/store.csv', low_memory=False)
# merge
df_raw = pd.merge( df_sales_raw, df_store_raw, how='left', on='Store')
df_raw.sample()
###Output
_____no_output_____
###Markdown
 1.0. STEP 01 - DATA DESCRIPTION
###Code
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
1.1. Rename Columns
###Code
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday',
'StoreType', 'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear',
'Promo2', 'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval']
snakecase = lambda x: inflection.underscore( x )
cols_new = list (map(snakecase, cols_old))
# rename
df1.columns = cols_new
###Output
_____no_output_____
###Markdown
1.2. Data Dimensions
###Code
print ( 'Number of Rows: {}'.format( df1.shape[0]))
print ( 'Number of Cols: {}'.format( df1.shape[1]))
###Output
_____no_output_____
###Markdown
1.3. Data Types
###Code
df1['date'] = pd.to_datetime(df1['date'])
df1.dtypes
###Output
_____no_output_____
###Markdown
1.4. Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.5. Fillout NA
###Code
df1['competition_distance'].max()
# competition_distance
df1['competition_distance'] = df1['competition_distance'].apply ( lambda x: 200000.0 if math.isnan( x ) else x )
# competition_open_since_month
df1['competition_open_since_month'] = df1.apply( lambda x: x['date'].month if math.isnan( x['competition_open_since_month'] ) else x['competition_open_since_month'], axis=1 )
# competition_open_since_year
df1['competition_open_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan( x['competition_open_since_year'] ) else x['competition_open_since_year'], axis=1 )
# promo2_since_week
df1['promo2_since_week'] = df1.apply( lambda x: x['date'].week if math.isnan( x['promo2_since_week'] ) else x['promo2_since_week'], axis=1 )
# promo2_since_year
df1['promo2_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan( x['promo2_since_year'] ) else x['promo2_since_year'], axis=1 )
# promo_interval
month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'}
df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval','month_map']].apply( lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split( ',') else 0, axis=1)
df1.dtypes
df1.sample(5).T
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.6. Change Types
###Code
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype( 'int64' )
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype( 'int64' )
df1['promo2_since_week'] = df1['promo2_since_week'].astype( 'int64' )
df1['promo2_since_year'] = df1['promo2_since_year'].astype( 'int64' )
df1.dtypes
###Output
_____no_output_____
###Markdown
1.7. Descriptive Statistical
###Code
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] )
###Output
_____no_output_____
###Markdown
1.7.1 Numerical Attributes
###Code
# Central Tendency - mean, median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataFrame( num_attributes.apply( max ) ).T
d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T  # range
d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T
# concatenate
m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
m.columns = ( ['attributes','min','max','range','mean','median','std','skew','kurtosis'] )
m
sns.distplot( df1['competition_distance'] );
###Output
_____no_output_____
###Markdown
1.7.2. Categorical Attributes
###Code
cat_attributes.apply( lambda x: x.unique().shape[0] )
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]
plt.subplot(1,3,1)
sns.boxplot ( x='state_holiday', y='sales', data=aux1)
plt.subplot(1,3,2)
sns.boxplot ( x='store_type', y='sales', data=aux1)
plt.subplot(1,3,3)
sns.boxplot ( x='assortment', y='sales', data=aux1);
###Output
_____no_output_____
###Markdown
 2.0. STEP 02 - FEATURE ENGINEERING
###Code
df2 = df1.copy()
###Output
_____no_output_____
###Markdown
 2.1. Hypothesis Mind Map
###Code
Image('img/MindMapHypothesis.png')
###Output
_____no_output_____
###Markdown
 2.2. Hypothesis Creation 2.2.1. Store Hypotheses **1.** Stores with more employees should sell more.**2.** Stores with larger stock capacity should sell more.**3.** Larger stores should sell more.**4.** Smaller stores should sell less.**5.** Stores with a larger assortment should sell more.**6.** Stores with closer competitors should sell less.**7.** Stores with longer-established competitors should sell more. 2.2.2. Product Hypotheses **1.** Stores that invest more in marketing should sell more.**2.** Stores with greater product exposure should sell more.**3.** Stores with lower-priced products should sell more.**4.** Stores with more aggressive promotions (bigger discounts) should sell more.**5.** Stores with promotions active for longer should sell more.**6.** Stores with more promotion days should sell more.**7.** Stores with more consecutive promotions should sell more. 2.2.3. Time Hypotheses **1** Stores open during the Christmas holiday should sell more.**2** Stores should sell more over the years.**3** Stores should sell more in the second half of the year.**4** Stores should sell more after the 10th of each month.**5** Stores should sell less on weekends.**6** Stores should sell less during school holidays. 2.3. Final Hypothesis List **1.** Stores with a larger assortment should sell more.**2.** Stores with closer competitors should sell less.**3.** Stores with longer-established competitors should sell more. **4.** Stores with promotions active for longer should sell more.**5.** Stores with more promotion days should sell more.**7.** Stores with more consecutive promotions should sell more. **8** Stores open during the Christmas holiday should sell more.**9** Stores should sell more over the years.**10** Stores should sell more in the second half of the year.**11** Stores should sell more after the 10th of each month.**12** Stores should sell less on weekends.**13** Stores should sell less during school holidays. 2.4. Feature Engineering
###Code
# year
df2['year'] = df2['date'].dt.year
# month
df2['month'] = df2['date'].dt.month
# day
df2['day'] = df2['date'].dt.day
# week of year
df2['week_of_year'] = df2['date'].dt.weekofyear
# year week
df2['year_week'] = df2['date'].dt.strftime( '%Y-%W' )
# competition since
df2['competition_since'] = df2.apply( lambda x: datetime.datetime( year=x['competition_open_since_year'], month=x['competition_open_since_month'],day=1 ), axis=1 )
df2['competition_time_month'] = ( ( df2['date'] - df2['competition_since'] )/30 ).apply( lambda x: x.days ).astype( int )
# promo since
df2['promo_since'] = df2['promo2_since_year'].astype( str ) + '-' + df2['promo2_since_week'].astype( str )
df2['promo_since'] = df2['promo_since'].apply( lambda x: datetime.datetime.strptime( x + '-1', '%Y-%W-%w' ) - datetime.timedelta( days=7 ) )
df2['promo_time_week'] = ( ( df2['date'] - df2['promo_since'] )/7 ).apply( lambda x: x.days ).astype( int )
# assortment
df2['assortment'] = df2['assortment'].apply( lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended' )
# state holiday
df2['state_holiday'] = df2['state_holiday'].apply( lambda x: 'public_holiday' if x == 'a' else 'easter_holiday' if x == 'b' else 'christmas' if x == 'c' else 'regular_day' )
df2.head().T
###Output
_____no_output_____
###Markdown
 3.0. STEP 03 - VARIABLE FILTERING
###Code
df3 = df2.copy()
df3.head()
###Output
_____no_output_____
###Markdown
 3.1. Row Filtering
###Code
df3 = df3[(df3['open'] != 0) & (df3['sales'] > 0)]
###Output
_____no_output_____
###Markdown
 3.2. Column Selection
###Code
cols_drop = ['customers', 'open', 'promo_interval', 'month_map']
df3 = df3.drop( cols_drop, axis=1 )
df3.columns
###Output
_____no_output_____
###Markdown
 4.0. STEP 04 - EXPLORATORY DATA ANALYSIS (EDA)
###Code
df4 = df3.copy()
###Output
_____no_output_____
###Markdown
 4.1. Univariate Analysis 4.1.1. Response Variable
###Code
sns.distplot(df4['sales'], kde=False)
###Output
_____no_output_____
###Markdown
4.1.2. Numerical Variable
###Code
num_attributes.hist( bins=25);
###Output
_____no_output_____
###Markdown
4.1.3. Categorical Variable
###Code
df4['state_holiday'].drop_duplicates()
# state_holiday
plt.subplot(3,2,1)
a = df4[df4['state_holiday'] != 'regular_day']
sns.countplot(a['state_holiday'])
plt.subplot(3,2,2)
sns.kdeplot( df4[df4['state_holiday'] == 'public_holiday']['sales'], label='public_holiday', shade=True)
sns.kdeplot( df4[df4['state_holiday'] == 'easter_holiday']['sales'], label='easter_holiday', shade=True)
sns.kdeplot( df4[df4['state_holiday'] == 'christmas']['sales'], label='christmas', shade=True)
# store_type
plt.subplot(3,2,3)
sns.countplot(df4['store_type'])
plt.subplot(3,2,4)
sns.kdeplot( df4[df4['store_type'] == 'a']['sales'], label='a', shade=True)
sns.kdeplot( df4[df4['store_type'] == 'b']['sales'], label='b', shade=True)
sns.kdeplot( df4[df4['store_type'] == 'c']['sales'], label='c', shade=True)
sns.kdeplot( df4[df4['store_type'] == 'd']['sales'], label='d', shade=True)
# assortment
plt.subplot(3,2,5)
sns.countplot(df4['assortment'])
plt.subplot(3,2,6)
sns.kdeplot( df4[df4['assortment'] == 'extended']['sales'], label='extended', shade=True)
sns.kdeplot( df4[df4['assortment'] == 'basic']['sales'], label='basic', shade=True)
sns.kdeplot( df4[df4['assortment'] == 'extra']['sales'], label='extra', shade=True);
###Output
_____no_output_____
###Markdown
 4.2. Bivariate Analysis **H1.** Stores with a larger assortment should sell more.**FALSE** Stores with a LARGER ASSORTMENT sell LESS.
###Code
aux1 = df4[['assortment', 'sales']].groupby( 'assortment' ).sum().reset_index()
sns.barplot( x='assortment', y='sales', data=aux1 );
aux2 = df4[['year_week', 'assortment', 'sales']].groupby( ['year_week','assortment'] ).sum().reset_index()
aux2.pivot( index='year_week', columns='assortment', values='sales' ).plot()
aux3 = aux2[aux2['assortment']== 'extra']
aux3.pivot( index='year_week', columns='assortment', values='sales' ).plot();
df4.head()
###Output
_____no_output_____
###Markdown
 **H2.** Stores with closer competitors should sell less.**FALSE** Stores with CLOSER COMPETITORS sell MORE.
###Code
aux1 = df4[[ 'competition_distance', 'sales']].groupby('competition_distance').sum().reset_index()
plt.subplot(1, 3, 1)
sns.scatterplot (x ='competition_distance', y='sales', data=aux1);
plt.subplot(1, 3, 2)
bins = list(np.arange(0, 20000, 1000))
aux1['competition_distance_binned'] = pd.cut( aux1['competition_distance'], bins=bins )
aux2 = aux1[[ 'competition_distance_binned', 'sales']].groupby('competition_distance_binned').sum().reset_index()
sns.barplot( x='competition_distance_binned', y='sales', data=aux2);
plt.xticks( rotation=90)
plt.subplot(1, 3, 3)
x = sns.heatmap( aux1.corr(method='pearson'), annot=True);
bottom, top = x.get_ylim()
x.set_ylim( bottom+0.5, top-0.5 );
aux1.head()
###Output
_____no_output_____
###Markdown
 **H3.** Stores with longer-established competitors should sell more.**FALSE** Stores with LONGER-ESTABLISHED COMPETITORS sell LESS.
###Code
plt.subplot(1, 3, 1)
aux1 = df4[['competition_time_month', 'sales']].groupby( 'competition_time_month').sum().reset_index()
aux2 = aux1[(aux1['competition_time_month'] < 120) & (aux1['competition_time_month'] != 0)]
sns.barplot( x='competition_time_month', y='sales', data=aux2 );
plt.xticks( rotation=90);
plt.subplot(1, 3, 2)
sns.regplot( x='competition_time_month', y='sales', data=aux2 );
plt.subplot(1, 3, 3)
x = sns.heatmap( aux1.corr( method='pearson'), annot=True)
bottom, top = x.get_ylim()
x.set_ylim( bottom+0.5, top-0.5);
###Output
_____no_output_____
###Markdown
 **H4.** Stores with promotions active for longer should sell more.**FALSE** Stores with PROMOTIONS ACTIVE FOR LONGER do NOT sell MORE.
###Code
aux1 = df4[['promo_time_week', 'sales']].groupby( 'promo_time_week' ).sum().reset_index()
grid = GridSpec ( 2, 3)
plt.subplot( grid[0,0] )
aux2 = aux1[aux1['promo_time_week'] > 0] # extended promo
sns.barplot( x='promo_time_week', y='sales', data=aux2 );
plt.xticks ( rotation=90 );
plt.subplot( grid[0,1] )
sns.regplot( x='promo_time_week', y='sales', data=aux2 );
plt.subplot( grid[1,0] )
aux3 = aux1[aux1['promo_time_week'] < 0] # regular promo
sns.barplot( x='promo_time_week', y='sales', data=aux3 );
plt.xticks ( rotation=90 );
plt.subplot(grid[1,1] )
sns.regplot( x='promo_time_week', y='sales', data=aux3 );
plt.subplot( grid[:,2] )
sns.heatmap ( aux1.corr ( method='pearson'), annot=True);
###Output
_____no_output_____
###Markdown
 **H5.** Stores with more promotion days should sell more. **H7.** Stores with more consecutive promotions should sell more.**FALSE** Stores with MORE CONSECUTIVE PROMOTIONS sell LESS.
###Code
df4[['promo','promo2', 'sales']].groupby(['promo', 'promo2']).sum().reset_index().sort_values(['sales'])
aux1 = df4[(df4['promo'] == 1) & (df4['promo2'] == 1)] [['year_week', 'sales']].groupby( 'year_week' ).sum().reset_index()
ax = aux1.plot()
aux2 = df4[(df4['promo'] == 1) & (df4['promo2'] == 0)] [['year_week', 'sales']].groupby( 'year_week' ).sum().reset_index()
aux2.plot( ax=ax )
ax.legend( labels=['Traditional & Extended', 'Extended']);
###Output
_____no_output_____
###Markdown
 H8. Stores open during the Christmas holiday should sell more.**FALSE** Stores open DURING THE CHRISTMAS HOLIDAY sell LESS.
###Code
aux = df4[df4['state_holiday'] != 'regular_day']
plt.subplot(1,2,1)
aux1 = aux[['state_holiday', 'sales']].groupby ( 'state_holiday' ).sum().reset_index()
sns.barplot( x='state_holiday', y='sales', data=aux1 );
plt.subplot(1,2,2)
aux2 = aux[['year', 'state_holiday', 'sales']].groupby ( ['year','state_holiday'] ).sum().reset_index()
sns.barplot( x='year', y='sales', hue='state_holiday', data=aux2 );
###Output
_____no_output_____
###Markdown
 H9. Stores should sell more over the years.**FALSE** Stores SELL LESS OVER THE YEARS.
###Code
aux1 = df4[['sales','year']].groupby('year').sum().reset_index()
plt.subplot( 1,3,1 )
sns.barplot( x='year', y='sales', data=aux1);
plt.subplot( 1,3,2 )
sns.regplot( x='year', y='sales', data=aux1);
plt.subplot( 1,3,3 )
sns.heatmap( aux1.corr( method='pearson'), annot=True);
###Output
_____no_output_____
###Markdown
 H10. Stores should sell more in the second half of the year.**FALSE** Stores SELL LESS in the SECOND HALF OF THE YEAR.
###Code
aux1 = df4[['sales','month']].groupby('month').sum().reset_index()
plt.subplot( 1,3,1 )
sns.barplot( x='month', y='sales', data=aux1);
plt.subplot( 1,3,2 )
sns.regplot( x='month', y='sales', data=aux1);
plt.subplot( 1,3,3 )
sns.heatmap( aux1.corr( method='pearson'), annot=True);
###Output
_____no_output_____
###Markdown
 H11. Stores should sell more after the 10th of each month.**TRUE** Stores SELL MORE after the 10TH OF EACH MONTH.
###Code
aux1 = df4[['sales','day']].groupby('day').sum().reset_index()
plt.subplot( 2,2,1 )
sns.barplot( x='day', y='sales', data=aux1);
plt.subplot( 2,2,2 )
sns.regplot( x='day', y='sales', data=aux1);
plt.subplot( 2,2,3 )
sns.heatmap( aux1.corr( method='pearson'), annot=True);
aux1['before_after'] = aux1['day'].apply (lambda x: 'before_10_days' if x <= 10 else 'after_10_days')
aux2 = aux1[['before_after', 'sales']].groupby( 'before_after' ).sum().reset_index()
plt.subplot( 2,2,4 )
sns.barplot( x='before_after', y='sales', data=aux2 );
###Output
_____no_output_____
###Markdown
 H12. Stores should sell less on weekends.**TRUE** Stores SELL LESS on WEEKENDS.
###Code
aux1 = df4[['sales','day_of_week']].groupby('day_of_week').sum().reset_index()
plt.subplot( 1,3,1 )
sns.barplot( x='day_of_week', y='sales', data=aux1);
plt.subplot( 1,3,2 )
sns.regplot( x='day_of_week', y='sales', data=aux1);
plt.subplot( 1,3,3 )
sns.heatmap( aux1.corr( method='pearson'), annot=True);
###Output
_____no_output_____
###Markdown
 H13. Stores should sell less during school holidays.**TRUE** Stores SELL LESS during SCHOOL HOLIDAYS, except in AUGUST.
###Code
aux1 = df4[['sales','school_holiday']].groupby('school_holiday').sum().reset_index()
plt.subplot( 2,1,1 )
sns.barplot( x='school_holiday', y='sales', data=aux1);
plt.subplot( 2,1,2 )
aux2 = df4[['month','sales','school_holiday']].groupby(['month','school_holiday']).sum().reset_index()
sns.barplot( x='month', y='sales', hue='school_holiday', data=aux2);
###Output
_____no_output_____
###Markdown
 4.2.1. Hypothesis Summary
###Code
from tabulate import tabulate
tab = [['Hypothesis', 'Conclusion', 'Relevance'],
       ['H1', 'False', 'Low'],
       ['H2', 'False', 'Medium'],
       ['H3', 'False', 'Low'],
       ['H4', 'False', 'Low'],
       ['H5', '-', '-'],
       ['H7', 'False', 'Low'],
       ['H8', 'False', 'Medium'],
       ['H9', 'False', 'High'],
       ['H10', 'False', 'High'],
       ['H11', 'True', 'High'],
       ['H12', 'True', 'High'],
       ['H13', 'True', 'Low'],
      ]
print( tabulate(tab, headers='firstrow' ))
###Output
_____no_output_____
###Markdown
 4.3. Multivariate Analysis 4.3.1. Numerical Attributes
###Code
correlation = num_attributes.corr( method='pearson')
sns.heatmap( correlation, annot=True);
###Output
_____no_output_____
###Markdown
 4.3.2. Categorical Attributes
###Code
# only categorical data
a = df4.select_dtypes( include='object')
# Calculate cramer V
a1 = cramer_v(a['state_holiday'], a['state_holiday'])
a2 = cramer_v(a['state_holiday'], a['store_type'])
a3 = cramer_v(a['state_holiday'], a['assortment'])
a4 = cramer_v(a['store_type'], a['state_holiday'])
a5 = cramer_v(a['store_type'], a['store_type'])
a6 = cramer_v(a['store_type'], a['assortment'])
a7 = cramer_v(a['assortment'], a['state_holiday'])
a8 = cramer_v(a['assortment'], a['store_type'])
a9 = cramer_v(a['assortment'], a['assortment'])
# Final dataset
d = pd.DataFrame( {'state_holiday': [a1, a2, a3],
'store_type': [a4, a5, a6],
'assortment': [a7, a8, a9] })
d = d.set_index( d.columns )
sns.heatmap(d, annot=True);
###Output
_____no_output_____ |
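###Markdown
 As a side note, the pairwise block above can be generated with a small loop instead of nine manual calls. The sketch below is only an illustration of that refactoring; it assumes the `cramer_v` helper and the categorical frame `a` defined above.
###Code
# Build the Cramér's V matrix with a loop -- an illustrative sketch equivalent to the
# nine manual calls above (it reuses `cramer_v` and the categorical frame `a`).
cat_cols = ['state_holiday', 'store_type', 'assortment']
d_loop = pd.DataFrame({c2: [cramer_v(a[c1], a[c2]) for c1 in cat_cols] for c2 in cat_cols},
                      index=cat_cols)
sns.heatmap(d_loop, annot=True);
###Output
_____no_output_____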
Reinforcement Learning Summer School 2019 (Lille, France)/practical_rl/practical_rl.ipynb | ###Markdown
Run this cell to set your notebook up (only mandatory if rlss2019-docker image is not used)
###Code
!git clone https://github.com/yfletberliac/rlss2019-hands-on.git > /dev/null 2>&1
###Output
_____no_output_____
###Markdown
Reinforcement Learning - Practical Session 1 ReviewA Markov Decision Process (MDP) is defined as tuple $(S, A, P, r, \gamma)$ where:* $S$ is the state space* $A$ is the action space * $P$ represents the transition probabilities, $P(s,a,s')$ is the probability of arriving at state $s'$ by taking action $a$ in state $s$* $r$ is the reward function such that $r(s,a,s')$ is the reward obtained by taking action $a$ in state $s$ and arriving at $s'$* $\gamma$ is the discount factorA deterministic policy $\pi$ is a mapping from $S$ to $A$: $\pi(s)$ is the action to be taken at state $s$.The goal of an agent is to find the policy $\pi$ that maximizes the expected sum of discounted rewards by following $\pi$. The value of $\pi$ is defined as$$V_\pi(s) = E\left[ \sum_{t=0}^\infty \gamma^t r(S_t, A_t, S_{t+1}) | S_0 = s \right]$$$V_\pi(s)$ and the optimal value function, defined as $V^*(s) = \max_\pi V_\pi(s)$, can be shown to satisfy the Bellman equations:$$V_\pi(s) = \sum_{s' \in S} P(s,\pi(s),s')[r(s,\pi(s),s') + \gamma V_\pi(s')]$$$$V^*(s) = \max_{a\in A} \sum_{s' \in S} P(s,a,s')[r(s,a,s') + \gamma V^*(s')]$$It is sometimes better to work with Q functions:$$Q_\pi(s, a) = \sum_{s' \in S} P(s,a,s')[r(s,a,s') + \gamma Q_\pi(s', \pi(s')]$$$$Q^*(s, a) = \sum_{s' \in S} P(s,a,s')[r(s,a,s') + \gamma \max_{a'} Q^*(s', a')]$$such that $V_\pi(s) = Q_\pi(s, \pi(s))$ and $V^*(s) = \max_a Q^*(s, a)$. Using value iteration to compute an optimal policyIf the reward function and the transition probabilities are known (and the state and action spaces are not very large), we can use dynamic programming methods to compute $V^*(s)$. Value iteration is one way to do that. Value iteration to compute $V^*(s)$$$T^* Q(s,a) = \sum_{s'}P(s'|s,a)[ r(s, a, s') + \gamma \max_{a'} Q(s', a')] \\$$* For any $Q_0$, let $Q_n = T^* Q_{n-1}$. * We have $\lim_{n\to\infty}Q_n = Q^*$ and $Q^* = T^* Q^*$ Finding the optimal policy from $V^\pi(s)$The optimal policy $\pi^*$ can be computed as$$\pi^*(s) \in \arg\max_{a\in A} Q^*(s, a) = \arg\max_{a\in A} \sum_{s' \in S} P(s,a,s')[r(s,a,s') + \gamma V^*(s')]$$ Q-Learning and SARSA When the reward function and the transition probabilities are *unknown*, we cannot use dynamic programming to find the optimal value function. Q-Learning and SARSA are stochastic approximation algorithms that allow us to estimate the value function by using only samples from the environment. Q-learningThe Q-Learning algorithm allows us to estimate the optimal Q function using only trajectories from the MDP obtained by following some exploration policy. Q-learning with $\varepsilon$-greedy exploration does the following update at time $t$:1. In state $s_t$, take action $a_t$ such that $a_t$ is random with probability $\varepsilon$ and $a_t \in \arg\max_a \hat{Q}_t(s_t,a) $ with probability $1-\varepsilon$;2. Observe $s_{t+1}$ and reward $r_t$;3. Compute $\delta_t = r_t + \gamma \max_a \hat{Q}_t(s_{t+1}, a) - \hat{Q}_t(s_t, a_t)$;4. Update $\hat{Q}_{t+1}(s, a) = \hat{Q}_t(s, a) + \alpha_t(s,a)\delta_t\mathbb{1}\{s=s_t, a=a_t\} $ SARSASARSA is similar to Q-learning, but it is an *on-policy* algorithm: it follows a (stochastic) policy $\pi_Q$ and updates its estimate towards the value of this policy. 
One possible choice is:$$\pi_Q(a|s) = \frac{ \exp(\tau^{-1}Q(s,a)) }{\sum_{a'}\exp(\tau^{-1}Q(s,a')) }$$where $\tau$ is a "temperature" parameter: when $\tau$ approaches 0, $\pi_Q(a|s)$ approaches the greedy (deterministic) policy $a \in \arg\max_{a'}Q(s,a')$.At each time $t$, SARSA keeps an estimate $\hat{Q}_t$ of the true Q function and uses $\pi_{\hat{Q}_t}(a|s)$ to choose the action $a_t$. If $\tau \to 0$ with a proper rate as $t \to \infty$, $\hat{Q}_t$ converges to $Q$ and $\pi_{\hat{Q}_t}(a|s)$ converges to the optimal policy $\pi^*$. The SARSA update at time $t$ is done as follows:1. In state $s_t$, take action $a_t \sim \pi_{\hat{Q}_t}(a|s_t)$ ;2. Observe $s_{t+1}$ and reward $r_t$;3. Sample the next action $a_{t+1} \sim \pi_{\hat{Q}_t}(a|s_{t+1})$;4. Compute $\delta_t = r_t + \gamma \hat{Q}_t(s_{t+1}, a_{t+1}) - \hat{Q}_t(s_t, a_t)$5. Update $\hat{Q}_{t+1}(s, a) = \hat{Q}_t(s, a) + \alpha_t(s,a)\delta_t\mathbb{1}\{s=s_t, a=a_t\}$ GoalsYour goal is to implement Value Iteration, Q-Learning and SARSA for the [Frozen Lake](https://gym.openai.com/envs/FrozenLake-v0/) environment.* In exercise 1, you will implement the Bellman operator $T^*$ and verify its contraction property.* In exercise 2, you will implement value iteration.* In exercises 3 and 4, you will implement Q-Learning and SARSA.
###Code
import sys
sys.path.insert(0, './rlss2019-hands-on/utils')
# If using the Docker image, replace by:
# sys.path.insert(0, '../utils')
import numpy as np
from scipy.special import softmax # for SARSA
import matplotlib.pyplot as plt
from frozen_lake import FrozenLake
from test_env import ToyEnv1
###Output
_____no_output_____
###Markdown
FrozenLake environment (You can use ToyEnv1 to debug your algorithms)
###Code
# Creating an instance of FrozenLake
# --- If deterministic=False, transitions are stochastic. Try both cases!
#env = FrozenLake(gamma=0.95, deterministic=False, data_path="./rlss2019-hands-on/data")
# Small environment for debugging
env = ToyEnv1(gamma=0.95)
# Useful attributes
print("Set of states:", env.states)
print("Set of actions:", env.actions)
print("Number of states: ", env.Ns)
print("Number of actions: ", env.Na)
print("P has shape: ", env.P.shape) # P[s, a, s'] = env.P[s, a, s']
print("discount factor: ", env.gamma)
print("")
# Usefult methods
state = env.reset() # get initial state
print("initial state: ", state)
print("reward at (s=1, a=3,s'=2): ", env.reward_func(1,3,2))
print("")
# A random policy
policy = np.random.randint(env.Na, size = (env.Ns,))
print("random policy = ", policy)
# Interacting with the environment
print("(s, a, s', r):")
for time in range(4):
action = policy[state]
next_state, reward, done, info = env.step(action)
print(state, action, next_state, reward)
if done:
break
state = next_state
print("")
# Visualizing the environment
try:
env.render()
except:
pass # render not available
###Output
Set of states: [0, 1, 2]
Set of actions: [0, 1]
Number of states: 3
Number of actions: 2
P has shape: (3, 2, 3)
discount factor: 0.95
initial state: 0
reward at (s=1, a=3,s'=2): 1.0
random policy = [1 0 1]
(s, a, s', r):
0 1 1 0.0
1 0 2 1.0
2 1 2 1.0
2 1 2 1.0
###Markdown
 Exercise 1: Bellman operator1. Write a function that takes an environment and a state-action value function $Q$ as input and returns the Bellman optimality operator applied to $Q$, $T^* Q$ and the greedy policy with respect to $Q$.2. Let $Q_1$ and $Q_2$ be state-action value functions. Verify the contraction property: $\Vert T^* Q_1 - T^* Q_2\Vert \leq \gamma ||Q_1 - Q_2||$, where $||Q|| = \max_{s,a} |Q(s,a)|$.
###Code
# --------------
# Your answer to 1.
# --------------
def bellman_operator(Q, env):
TQ = 0
greedy_policy = []
###
# To fill
###
return TQ, greedy_policy
# --------------
# Your answer to 2.
# --------------
print("Contraction of Bellman operator")
###Output
Contraction of Bellman operator
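###Markdown
 If you get stuck, below is one possible (unofficial) sketch of the operator described in the review; it assumes the `env.P[s, a, s']`, `env.reward_func(s, a, s')` and `env.gamma` interface demonstrated earlier. Treat it as a hint, not as the intended solution.
###Code
# A minimal NumPy sketch of T*Q and the greedy policy -- an assumed implementation,
# not the official solution. It relies on env.P, env.reward_func and env.gamma.
def bellman_operator_sketch(Q, env):
    TQ = np.zeros((env.Ns, env.Na))
    for s in env.states:
        for a in env.actions:
            TQ[s, a] = sum(env.P[s, a, sn] * (env.reward_func(s, a, sn) + env.gamma * Q[sn].max())
                           for sn in env.states)
    greedy_policy = np.argmax(TQ, axis=1)
    return TQ, greedy_policy
###Output
_____no_output_____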
###Markdown
Exercise 2: Value iteration1. (Optimal Value function) Write a function that takes as input an initial state-action value function `Q0` and an environment `env` and returns a vector `Q` such that $||T^* Q - Q ||_\infty \leq \varepsilon $ and the greedy policy with respect to $Q$.2. Test the convergence of the function you implemented.
###Code
# --------------
# Your answer to 1.
# --------------
def value_iteration(Q0, env, epsilon=1e-5):
"""
Finding the optimal value function. To be done!
"""
TQ = 0
greedy_policy = []
return TQ, greedy_policy
# --------------
# Your answer to 2.
# --------------
###Output
_____no_output_____
###Markdown
Exercise 3: Q-Learning Q-learningThe Q-Learning algorithm allows us to estimate the optimal Q function using only trajectories from the MDP obtained by following some exploration policy. Q-learning with $\varepsilon$-greedy exploration does the following update at time $t$:1. In state $s_t$, take action $a_t$ such that $a_t$ is random with probability $\varepsilon$ and $a_t \in \arg\max_a \hat{Q}_t(s_t,a) $ with probability $1-\varepsilon$ (**act function**);2. Observe $s_{t+1}$ and reward $r_t$ (**step in the environment**);3. Compute $\delta_t = r_t + \gamma \max_a \hat{Q}_t(s_{t+1}, a) - \hat{Q}_t(s_t, a_t)$ (**to be done in .optimize()**) ;4. Update $\hat{Q}_{t+1}(s, a) = \hat{Q}_t(s, a) + \alpha_t(s,a)\delta_t\mathbb{1}\{s=s_t, a=a_t\}$ (**in optimize too**)Implement Q-learning and test its convergence.
###Code
#-------------------------------
# Q-Learning implementation
# ------------------------------
class QLearning:
"""
Implements Q-learning algorithm with epsilon-greedy exploration
"""
def __init__(self, env, gamma, learning, epsilon): # You can add more argument to your init (lr decay, eps decay)
pass
    def act(self, state, greedy=False, **kwargs): # You don't have to use this template for your algorithm, those are just hints
"""
Takes a state as input and outputs an action (acting greedily or not with respect to the q function)
"""
pass
    def optimize(self, state, action_taken, next_state, reward, **kwargs):
"""
Takes (s, a, s', r) as input and optimize the Q function
"""
pass
# ---------------------------
# Convergence of Q-Learning
# ---------------------------
# Number of Q learning iterations
n_steps = int(1e5)
#n_steps = 10
Q0 = np.zeros((env.Ns, env.Na))
# You can use Q_opt from value iteration to check the correctness of q learning
Q_opt, pi_opt = value_iteration(Q0, env, epsilon=1e-6)
# ^ and the optimal policy too
###Output
_____no_output_____
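###Markdown
 As a hint, here is a compact, self-contained sketch of the epsilon-greedy Q-learning loop described above, written in a functional style rather than with the class template. The constant learning rate and epsilon are assumptions; treat it as an illustration, not as the intended solution.
###Code
# A minimal tabular Q-learning sketch -- an assumed implementation, not the official solution.
# It uses the env.reset()/env.step(action) interface shown at the top of the notebook.
def q_learning_sketch(env, n_steps=int(1e5), epsilon=0.1, lr=0.1):
    Q = np.zeros((env.Ns, env.Na))
    state = env.reset()
    for t in range(n_steps):
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(env.Na)
        else:
            action = np.argmax(Q[state])
        next_state, reward, done, info = env.step(action)
        # temporal-difference update towards r + gamma * max_a Q(s', a)
        delta = reward + env.gamma * Q[next_state].max() - Q[state, action]
        Q[state, action] += lr * delta
        state = env.reset() if done else next_state
    return Q, np.argmax(Q, axis=1)
###Output
_____no_output_____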
###Markdown
Exercise 4: SARSASARSA is similar to Q-learning, but it is an *on-policy* algorithm: it follows a (stochastic) policy $\pi_Q$ and updates its estimate towards the value of this policy. One possible choice is:$$\pi_Q(a|s) = \frac{ \exp(\tau^{-1}Q(s,a)) }{\sum_{a'}\exp(\tau^{-1}Q(s,a')) }$$where $\tau$ is a "temperature" parameter: when $\tau$ approaches 0, $\pi_Q(a|s)$ approaches the greedy (deterministic) policy $a \in \arg\max_{a'}Q(s,a')$.At each time $t$, SARSA keeps an estimate $\hat{Q}_t$ of the true Q function and uses $\pi_{\hat{Q}_t}(a|s)$ to choose the action $a_t$. If $\tau \to 0$ with a proper rate as $t \to \infty$, $\hat{Q}_t$ converges to $Q$ and $\pi_{\hat{Q}_t}(a|s)$ converges to the optimal policy $\pi^*$. The SARSA update at time $t$ is done as follows:1. In state $s_t$, take action $a_t \sim \pi_{\hat{Q}_t}(a|s_t)$ ;2. Observe $s_{t+1}$ and reward $r_t$;3. Sample the next action $a_{t+1} \sim \pi_{\hat{Q}_t}(a|s_{t+1})$;4. Compute $\delta_t = r_t + \gamma \hat{Q}_t(s_{t+1}, a_{t+1}) - \hat{Q}_t(s_t, a_t)$5. Update $\hat{Q}_{t+1}(s, a) = \hat{Q}_t(s, a) + \alpha_t(s,a)\delta_t\mathbb{1}\{s=s_t, a=a_t\}$
###Code
#-------------------------------
# SARSA implementation
# ------------------------------
class Sarsa:
"""
Implements SARSA algorithm.
"""
def __init__(self, env, gamma, learning_rate=None, tau=1.0): # Again, those are suggestions, you can add more arguments
pass
    def act(self):
        pass
    def optimize(self):
        pass
# ---------------------------
# Convergence of SARSA
# ---------------------------
# Create SARSA object
sarsa = Sarsa(env, gamma=env.gamma)
# Again, you can use Q_opt and pi_opt from value_iteration to check sarsa's convergence.
###Output
_____no_output_____
###Markdown
 How do those two algorithms behave? Do both of them find the optimal policy? Trying other algorithms Policy iterationPolicy iteration is another algorithm to find an optimal policy when the MDP is known:$$\pi_{n} \gets \mathrm{greedy}(V_{\pi_{n-1}}) \\V_{\pi_n} \gets \mbox{policy-evaluation}(\pi_n)$$For any arbitrary $\pi_0$, $\pi_n$ converges to $\pi^*$.Implement policy iteration and compare it to value iteration. Stochastic algorithms for policy evaluationGiven a policy $\pi$, implement different stochastic algorithms to estimate its value $V_\pi$. Monte Carlo estimation$$V_\pi(s) = E\left[ \sum_{t=0}^\infty \gamma^t r(S_t, A_t, S_{t+1}) | S_0 = s \right] \approx \frac{1}{N} \sum_{i=1}^N \sum_{t=0}^{T} \gamma^t r(s_t^i, a_t^i, s_{t+1}^i) $$ TD(0): Given a trajectory $ (x_t, x_{t+1}, r_t)_{t\geq 0} $ , the $t$-th step of TD(0) performs the following calculations:$ \delta_t = r_t + \gamma \hat{V}_t(x_{t+1}) - \hat{V}_t(x_t)$$ \hat{V}_{t+1}(x) = \hat{V}_t(x) + \alpha_t(x)\delta_t\mathbb{1}\{x=x_t\} $ where $\alpha_t(x_t)$ is the step size and $\delta_t$ is called the *temporal difference*. TD($\lambda$):Given a trajectory $ (x_t, x_{t+1}, r_t)_{t\geq 0} $, the $t$-th step of TD($\lambda$) performs the following calculations:$ \delta_t = r_t + \gamma \hat{V}_t(x_{t+1}) - \hat{V}_t(x_t)$$ z_{t+1}(x) = \mathbb{1}\{x=x_t\} + \gamma \lambda z_t(x) $ $ \hat{V}_{t+1}(x) = \hat{V}_t(x) + \alpha_t(x)\delta_t z_{t+1}(x) $ $ z_0(x) = 0 $for all states $x$.
###Code
###Output
_____no_output_____ |
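###Markdown
 As a starting point for the policy-evaluation exercises above, here is a minimal TD(0) sketch for a fixed deterministic policy (an array of length `env.Ns`, like the random policy created earlier); the constant step size is an assumption and this is only an illustration, not the intended solution.
###Code
# A minimal TD(0) policy-evaluation sketch -- an assumed implementation, not the official one.
def td0_sketch(env, policy, n_steps=int(1e5), lr=0.05):
    V = np.zeros(env.Ns)
    state = env.reset()
    for t in range(n_steps):
        next_state, reward, done, info = env.step(policy[state])
        # temporal difference towards r + gamma * V(s')
        delta = reward + env.gamma * V[next_state] - V[state]
        V[state] += lr * delta
        state = env.reset() if done else next_state
    return V
###Output
_____no_output_____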
TF-IDF.ipynb | ###Markdown
Term Frequency Inverse Document Frequency (tf-idf)* Rescale features by how informative we expect them to be* Give weight to any term that appears often in a particular document, but not in many documents* TfidfVectorizer takes the text data and does both the bag-of-words feature extraction and tf-idf transformation
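 With scikit-learn's default settings (`smooth_idf=True`, `sublinear_tf=False`, `norm='l2'`), each entry is computed roughly as `tf(t, d) * (1 + ln((1 + n) / (1 + df(t))))`, where `n` is the number of documents and `df(t)` is the number of documents containing term `t`, and each row is then L2-normalized. This is a useful mental model for the numbers below, assuming the default hyperparameters are kept.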
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
df = pd.DataFrame({
"words": ["Sola runs", "Sola is a dog", "Sola chews toys"]
})
# TfidfVectorizer takes hyperparameters such as:
# min_df=0.1
# max_df=0.75
# max_features=50
vectorizer = TfidfVectorizer()
# X here is a sparse matrix with one row per document and one column per
# unique word in the corpus (all the words in all the documents)
X = vectorizer.fit_transform(df["words"])
print("Sparse:\n\n", X)
print("\nDense:\n\n", X.todense())
###Output
Sparse:
(0, 3) 0.8610369959439764
(0, 4) 0.5085423203783267
(1, 1) 0.652490884512534
(1, 2) 0.652490884512534
(1, 4) 0.3853716274664007
(2, 5) 0.652490884512534
(2, 0) 0.652490884512534
(2, 4) 0.3853716274664007
Dense:
[[0. 0. 0. 0.861037 0.50854232 0. ]
[0. 0.65249088 0.65249088 0. 0.38537163 0. ]
[0.65249088 0. 0. 0. 0.38537163 0.65249088]]
###Markdown
We can see which columns represent which words:
###Code
vectorizer.get_feature_names()
# We can see, for example, that the first row (representing "Sola runs") has values in the "runs" and "Sola" columns.
# Together these let us construct a data frame if we want:
pd.DataFrame(X.todense(), columns=vectorizer.get_feature_names())
###Output
_____no_output_____
###Markdown
Accessing the vocabularyThe attribute vocabulary_ outputs a dictionary in which all ngrams are the dictionary keys and the respective values are the column positions of each ngram (feature) in the tfidf matrix. [](https://stackoverflow.com/a/54338182/156835)
###Code
vectorizer.vocabulary_
# We can swap the keys and values to map the column number to the word:
vocab = {v:k for k, v in vectorizer.vocabulary_.items()}
vocab
###Output
_____no_output_____
###Markdown
Accessing document data
###Code
print(X)
# For any given document, we can access the word indices and tf-idt weights:
print(X[0].indices)
# And the data:
print(X[0].data)
# We can zip these together and convert them into a dict:
dict(zip(X[0].indices, X[0].data))
###Output
_____no_output_____
###Markdown
 Filtering a tf-idf vectorThe problem with the tf-idf vector is that it may contain thousands of words depending on the size of the corpus.We don't want to train a model using all of those features (the words). Instead, we can identify the top_n most important terms in each document and then filter the tf-idf vector so it only contains the columns for terms that are in the top_n for any of the documents.This code is inspired by DataCamp's Selecting Features for Modeling course:
###Code
# This function returns the indeces of the top_n terms in a given row.
def return_indeces_of_top_terms_in_document(vocab, original_vocab, vector, vector_index, top_n):
zipped = dict(zip(vector[vector_index].indices, vector[vector_index].data))
zipped_series = pd.Series({vocab[i]:zipped[i] for i in vector[vector_index].indices})
zipped_index = zipped_series.sort_values(ascending=False)[:top_n].index
return [original_vocab[i] for i in zipped_index]
# For example:
return_indeces_of_top_terms_in_document(vocab, vectorizer.vocabulary_, X, 0, 1)
###Output
_____no_output_____
###Markdown
It returns 3 because in the first row (index 0), we're looking at the top 1 weight, which is 0.861037, which is in the fourth column (index 3).If instead we looked at the top 2 terms:
###Code
return_indeces_of_top_terms_in_document(vocab, vectorizer.vocabulary_, X, 0, 2)
###Output
_____no_output_____
###Markdown
Because the two terms (Sola and runs) are in the fourth and fifth columns (index 3 and 4).The second column has a tie in weights between dog and is (both have a weight of 0.652491) but sort_values winds up putting dog first for some reason, so this returns index 2:
###Code
return_indeces_of_top_terms_in_document(vocab, vectorizer.vocabulary_, X, 1, 1)
###Output
_____no_output_____
###Markdown
Using this, we can iterate over all of the rows (documents) and determine the top_n words. return_top_term_indeces returns their indeces.
###Code
def return_top_term_indeces(vocab, original_vocab, vector, top_n):
filter_list = []
for i in range(0, vector.shape[0]):
filtered = return_indeces_of_top_terms_in_document(vocab, original_vocab, vector, i, top_n)
filter_list.extend(filtered)
return set(filter_list)
###Output
_____no_output_____
###Markdown
For example, if we specify top_n is 1, then it iterates over each row and grabs the index for the highest weighted term:
###Code
indeces = return_top_term_indeces(vocab, vectorizer.vocabulary_, X, 1)
indeces
###Output
_____no_output_____
###Markdown
This returns a set of {0, 2, 3} because the in row 1 the highest weighted term is at index 3 (Sola), row 2 at index 2 (dog), row 3 at index 0 (chews).Then we can filter the original vector down to just these important terms:
###Code
X_filtered = X[:, list(indeces)]
print(X_filtered.todense())
###Output
[[0. 0. 0.861037 ]
[0. 0.65249088 0. ]
[0.65249088 0. 0. ]]
###Markdown
Putting it all together:
###Code
def return_indeces_of_top_terms_in_document(vocab, original_vocab, vector, vector_index, top_n):
zipped = dict(zip(vector[vector_index].indices, vector[vector_index].data))
zipped_series = pd.Series({vocab[i]:zipped[i] for i in vector[vector_index].indices})
zipped_index = zipped_series.sort_values(ascending=False)[:top_n].index
return [original_vocab[i] for i in zipped_index]
def return_top_term_indeces(vocab, original_vocab, vector, top_n):
filter_list = []
for i in range(0, vector.shape[0]):
filtered = return_indeces_of_top_terms_in_document(vocab, original_vocab, vector, i, top_n)
filter_list.extend(filtered)
return set(filter_list)
def select_highest_weighted_terms(vectorizer, X, top_n):
vocab = {v:k for k, v in vectorizer.vocabulary_.items()}
indeces = return_top_term_indeces(vocab, vectorizer.vocabulary_, X, 1)
return X[:, list(indeces)]
df = pd.DataFrame({
"words": ["Sola runs", "Sola is a dog", "Sola chews toys"]
})
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(df["words"])
print("Data Frame:\n{}\n".format(pd.DataFrame(X.todense(), columns=vectorizer.get_feature_names())))
print("Values:\n{}\n".format(X.todense()))
X_filtered = select_highest_weighted_terms(vectorizer, X, 1)
print("Filtered values:\n{}\n".format(X_filtered.todense()))
###Output
Data Frame:
chews dog is runs sola toys
0 0.000000 0.000000 0.000000 0.861037 0.508542 0.000000
1 0.000000 0.652491 0.652491 0.000000 0.385372 0.000000
2 0.652491 0.000000 0.000000 0.000000 0.385372 0.652491
Values:
[[0. 0. 0. 0.861037 0.50854232 0. ]
[0. 0.65249088 0.65249088 0. 0.38537163 0. ]
[0.65249088 0. 0. 0. 0.38537163 0.65249088]]
Filtered values:
[[0. 0. 0.861037 ]
[0. 0.65249088 0. ]
[0.65249088 0. 0. ]]
###Markdown
Basic TF-IDF Implementation Author: Yifan Wang
###Code
import numpy as np
from collections import Counter
from nltk.corpus import stopwords
# TFIDF Algorithm
class TFIDF(object):
def __init__ (self, stopwords):
self.stopwords = stopwords
def tokenizer(self,data):
'''tokenize text into list of words and Remove SW'''
data = [x.lower().split() for x in data]
# Remove Stopwords:
clean_data = []
for doc in data:
clean_data.append([w for w in doc if w not in self.stopwords])
self.data = clean_data
# print(self.data)
def tfidf_word2id(self):
# Word to Index
new_data = []
word2id = {}
counter = 1
for doc in self.data:
new_doc = []
for tok in doc:
if tok not in word2id:
word2id[tok] = str(counter)
counter += 1
new_doc.append(word2id[tok])
new_data.append(new_doc)
self.word2id = word2id
self.data = new_data
def term_freq(self):
tf = []
for doc in self.data:
doc_count = Counter(doc)
doc_tf = { x: doc_count[x]/len(doc) for x in doc_count}
tf.append(doc_tf)
self.tf = tf
def inv_doc_freq(self):
idf = {}
idxs = list(set([j for i in self.data for j in i]))
N = len(self.data)
for idx in idxs:
nd = len([doc for doc in self.data if idx in doc])
idf[idx] = np.log10(1+ (N/nd))
self.idf = idf
def tfidf_raw(self):
results = []
for doc in self.tf:
result = {}
for idx in doc:
result[idx] = doc[idx] * self.idf[idx]
results.append(result)
self.tfidf_raw_results = results
def tfidf_id2word(self):
tfidf_results = []
self.id2word = {v: k for k, v in self.word2id.items()}
for doc in self.tfidf_raw_results:
res = {}
for idx in doc:
# print(idx)
# print(self.id2word[idx])
# print(doc[idx])
# print('---')
res[self.id2word[idx]] = doc[idx]
tfidf_results.append(res)
return tfidf_results
def fit(self,X):
self.tokenizer(X)
self.tfidf_word2id()
self.term_freq()
self.inv_doc_freq()
self.tfidf_raw()
# From: http://www.home-speech-home.com/speech-therapy-sentences.html
# Also made some change to the text for better results
data = [
'My mom drove me to school after she talk with me',
'I found a gold coin on the gold school after school today',
'The church was white and brown and look like a church',
'Your mom is so nice she gave me a ride home today',
'Are you going to have a blue birthday cake for your next birthday',
'My mom made a milkshake with frozen bananas and chocolate sauce',
'I got my haircut today and they did it way too short',
'Your sister is my best friend because she always shares her treats with me',
'The gum was stuck under the desk',
'The flowers smelled beautiful and made the room so happy',
'The dog chased the cat around the block'
]
###Output
_____no_output_____
###Markdown
TFIDF:
###Code
# Stopwords:
sw = list(set(stopwords.words('english')))
tfidf = TFIDF(stopwords=sw)
tfidf.fit(data)
res = tfidf.tfidf_id2word()
###Output
_____no_output_____
###Markdown
 Now let's get the top 3 keywords in each document for validation purposes:
###Code
for i in range(len(res)):
print('Doc {}: \n'.format(i))
print("{}\n".format(sorted(res[i].items(),key = lambda x: x[1],reverse=True)[:3]))
print('======================================================================================================')
###Output
Doc 0:
[('drove', 0.2697953115119062), ('talk', 0.2697953115119062), ('school', 0.2032283391607139)]
======================================================================================================
Doc 1:
[('gold', 0.30833749887074996), ('school', 0.23226095904081587), ('found', 0.15416874943537498)]
======================================================================================================
Doc 2:
[('church', 0.35972708201587494), ('white', 0.17986354100793747), ('brown', 0.17986354100793747)]
======================================================================================================
Doc 3:
[('nice', 0.17986354100793747), ('gave', 0.17986354100793747), ('ride', 0.17986354100793747)]
======================================================================================================
Doc 4:
[('birthday', 0.35972708201587494), ('going', 0.17986354100793747), ('blue', 0.17986354100793747)]
======================================================================================================
Doc 5:
[('milkshake', 0.15416874943537498), ('frozen', 0.15416874943537498), ('bananas', 0.15416874943537498)]
======================================================================================================
Doc 6:
[('got', 0.215836249209525), ('haircut', 0.215836249209525), ('way', 0.215836249209525)]
======================================================================================================
Doc 7:
[('sister', 0.17986354100793747), ('best', 0.17986354100793747), ('friend', 0.17986354100793747)]
======================================================================================================
Doc 8:
[('gum', 0.35972708201587494), ('stuck', 0.35972708201587494), ('desk', 0.35972708201587494)]
======================================================================================================
Doc 9:
[('flowers', 0.17986354100793747), ('smelled', 0.17986354100793747), ('beautiful', 0.17986354100793747)]
======================================================================================================
Doc 10:
[('dog', 0.215836249209525), ('chased', 0.215836249209525), ('cat', 0.215836249209525)]
======================================================================================================
###Markdown
NLTK Neural Network
###Code
X_train = data_set[['cosine', 'cosineClean', 'fuzzRatio','fuzzPartial','fuzzTokenSort',
'fuzzTokenSet', 'SMratio','SMratioClean', 'jaccard']]
y_train = data_set[['is_duplicate_y']]
X_test = test_data[['cosine', 'cosineClean', 'fuzzRatio','fuzzPartial','fuzzTokenSort',
'fuzzTokenSet', 'SMratio','SMratioClean', 'jaccard']]
y_test = []
X_train = X_train.fillna(0)
X_test = X_test.fillna(0)
predictions = neuralNetwork.createNeuralNetwork(X_train, X_test, y_train, y_test, (45,30,15,10,5), 10000, kind = 'classification')
predictions
submission = pd.concat([test_data.iloc[:, 0], pd.DataFrame(predictions)], axis=1, sort=False)
submission.head()
submission.to_csv('result/submission.csv', index = False)
###Output
_____no_output_____
###Markdown
Tensorflow Neural Network
###Code
X_train = data_set[['cosine', 'cosineClean', 'fuzzRatio','fuzzPartial','fuzzTokenSort',
'fuzzTokenSet', 'SMratio','SMratioClean', 'jaccard']]
y_train = data_set[['is_duplicate_y']]
X_test = test_data[['cosine', 'cosineClean', 'fuzzRatio','fuzzPartial','fuzzTokenSort',
'fuzzTokenSet', 'SMratio','SMratioClean', 'jaccard']]
y_test = []
X = data_set[['cosine', 'cosineClean', 'fuzzRatio','fuzzPartial','fuzzTokenSort',
'fuzzTokenSet', 'SMratio','SMratioClean', 'jaccard']].values
y = data_set[['is_duplicate_y']].values
X_train = X_train.fillna(0)
X_test = X_test.fillna(0)
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
#Initializing Neural Network
classifier = Sequential()
import tensorflow as tf
# Adding the input layer and the first hidden layer
classifier.add(Dense(output_dim = 15, init = 'uniform', activation = 'relu', input_dim = 9))
# Adding the second hidden layer
classifier.add(Dense(output_dim = 20, init = 'uniform', activation = 'relu'))
# Adding the second hidden layer
classifier.add(Dense(output_dim = 10, init = 'uniform', activation = 'relu'))
# Adding the second hidden layer
classifier.add(Dense(output_dim = 5, init = 'uniform', activation = 'relu'))
# Adding the output layer
classifier.add(Dense(output_dim = 1, init = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.fit(X, y, batch_size = 10, nb_epoch = 20)
classification = classifier.predict(X_test.values)
outcome = np.array([1 if x > 0.30 else 0 for x in classification ])
submission = pd.concat([test_data.iloc[:, 0], pd.DataFrame(outcome)], axis=1, sort=False)
submission.to_csv('result/submission.csv', index = False)
###Output
_____no_output_____
###Markdown
TF-IDF (Term frequency - Inverse document frequency)
###Code
import nltk
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
paragraph="""I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
Why? Because we respect the freedom of others.That is why my
first vision is that of freedom. I believe that India got its first vision of
this in 1857, when we started the War of Independence. It is this freedom that
we must protect and nurture and build on. If we are not free, no one will respect us.
My second vision for India’s development. For fifty years we have been a developing nation.
It is time we see ourselves as a developed nation. We are among the top 5 nations of the world
in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.
Our achievements are being globally recognised today. Yet we lack the self-confidence to
see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect?
I have a third vision. India must stand up to the world. Because I believe that unless India
stands up to the world, no one will respect us. Only strength respects strength. We must be
strong not only as a military power but also as an economic power. Both must go hand-in-hand.
My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of
space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.
I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.
I see four milestones in my career"""
stemmer=PorterStemmer()
lemmatizer=WordNetLemmatizer()
corpus=[]
sentences=nltk.sent_tokenize(paragraph)
for i in range(len(sentences)):
review=re.sub('[^a-zA-Z]',' ',sentences[i])
review=review.lower()
review=review.split()
review=[lemmatizer.lemmatize(word) for word in review if word not in set(stopwords.words('english'))]
review=' '.join(review)
corpus.append(review)
from sklearn.feature_extraction.text import TfidfVectorizer
cv=TfidfVectorizer()
x=cv.fit_transform(corpus).toarray()
import pandas as pd
pd.set_option('display.max_columns',100)
pd.DataFrame(x)
###Output
_____no_output_____ |
DistantSupervision.ipynb | ###Markdown
 $\LaTeX$ definition block$\newcommand{\sign}{\operatorname{sign}}$ Distant SupervisionThis notebook has a few cute experiments to explore distant supervision. SetupIn the distant supervision setting, we receive data $((x_i^a, x_i^b), y_i)$. Assuming the data is linearly separable with hyperplane $w^*$, we are also given that $y_i = \sign[\max(w^{*\top} x_i^{a}, w^{*\top} x_i^{b})]$. Thus, if $y_i = -1$, we know that both $x^a_i$ and $x^b_i$ are in the negative class, but if $y_i = 1$, we only know that at least one of $x^a_i$ or $x^b_i$ is positive. Data generationConsider a random (unit) weight vector $w^*$.We generate elements $\{x_a, x_b\}$, order them by $w^{*\top} x$ (so that $x_a$ has the larger margin) and set $y_i$ to be $\sign[\max(w^{*\top} x_a, w^{*\top} x_b)]$.
###Code
# Imports and a small helper assumed by this notebook (the original setup cell is not shown)
import numpy as np
import matplotlib.pyplot as plt

def normalize(v):
    # rescale a vector to unit norm (assumption: this is what the missing helper did)
    return v / np.linalg.norm(v)

# Constants
D = 2
N = 100
K = 2
w = np.random.randn(D)
w = normalize(w)
theta = np.arctan2(w[0], w[1])
X = np.random.randn(N,D,K)
y = np.zeros(N)
for i in range(N):
m = w.dot(X[i])
X[i] = X[i][:,np.argsort(-m)]
y[i] = np.sign(max(m))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), c='black')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
# Split data for convenience
X_ = np.concatenate((X[y<0,:,0],X[y<0,:,1]))
Xa = X[y>0,:,0]
Xb = X[y>0,:,1]
Xp = np.concatenate((Xa, Xb))
Na, N_, Np = y[y>0].shape[0], 2*y[y<0].shape[0], 2*y[y>0].shape[0]
Nb = Na
Na, N_, Np
###Output
_____no_output_____
###Markdown
Method
###Code
from cvxpy import *
###Output
_____no_output_____
###Markdown
Exact SVM solutionAssuming we know what the positive points are.
###Code
# Beta is the coefficients, and e is the slack.
beta = Variable(D)
ea, e_ = Variable(Na), Variable(N_)
loss = 0.5 * norm(beta, 2) ** 2 + 1*sum_entries(e_)/N_ + 1* sum_entries(ea)/Na
constr = [mul_elemwise(-1,X_*beta) > 1 - e_, mul_elemwise(1,Xa*beta) > 1 - ea]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
###Output
_____no_output_____
###Markdown
NaiveAssumes that all positive data is the same.
###Code
# Beta is the coefficients, and e is the slack.
beta = Variable(D)
ep, e_ = Variable(Np), Variable(N_)
loss = 0.5 * norm(beta, 2) ** 2 + 1*sum_entries(e_)/N_ + 1* sum_entries(ep)/Np
constr = [mul_elemwise(-1,X_*beta) > 1 - e_, mul_elemwise(1,Xp*beta) > 1 - ep]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
###Output
_____no_output_____
###Markdown
As expected, the naive approach does really poorly. Proposed methodIncorporate responsibilities for the points.$\begin{align*}\min{}& \frac{1}{2} \|\beta\|^2 + C \xi \\\textrm{subject to}& -1(\beta^\top x_{-}) \ge 1 - \xi \\& +1(\beta^\top x_{a}) \ge y_a - \xi \\& +1(\beta^\top x_{b}) \ge y_b - \xi \\& y_a > 0, y_b > 0, y_a + y_b \ge 1\end{align*}$
###Code
# Beta is the coefficients, and e is the slack.
Da, = y[y>0].shape
beta = Variable(D)
e = Variable()
loss = 0.5 * norm(beta, 2) ** 2 + 1 * e
constr = [mul_elemwise(-1,X_*beta) > 1 - e,
Xa*beta + Xb*beta > 1 - e,
]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
###Output
_____no_output_____
###Markdown
Proposed 2
###Code
# Beta is the coefficients, and e is the slack.
Da, = y[y>0].shape
beta = Variable(D)
e = Variable()
C = 1.
C_ = 1
loss = 0.5 * norm(beta, 2) ** 2 + C * e + C_ * sum_entries(pos(Xa*beta) + pos(Xb*beta) - 1) / Na
constr = [mul_elemwise(-1,X_*beta) > 1 - e,
]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
###Output
_____no_output_____ |
notebooks/deep_explainer/Keras LSTM for IMDB Sentiment Classification.ipynb | ###Markdown
 Keras LSTM for IMDB Sentiment ClassificationThis is a simple example of how to explain a Keras LSTM model using DeepExplainer.
###Code
# This model training code is directly from:
# https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py
'''Trains an LSTM model on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
# Notes
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
maxlen = 80 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
_____no_output_____
###Markdown
Explain the model with DeepExplainer and visualize the first prediction
###Code
import shap
# we use the first 100 training examples as our background dataset to integrate over
explainer = shap.DeepExplainer(model, x_train[:100])
# explain the first 10 predictions
# explaining each prediction requires 2 * background dataset size runs
shap_values = explainer.shap_values(x_test[:10])
# init the JS visualization code
shap.initjs()
# transform the indexes to words
import numpy as np
words = imdb.get_word_index()
num2word = {}
for w in words.keys():
num2word[words[w]] = w
x_test_words = np.stack([np.array(list(map(lambda x: num2word.get(x, "NONE"), x_test[i]))) for i in range(10)])
# plot the explanation of the first prediction
# Note the model is "multi-output" because it is rank-2 but only has one column
shap.force_plot(explainer.expected_value[0], shap_values[0][0], x_test_words[0])
###Output
_____no_output_____
###Markdown
Keras LSTM for IMDB Sentiment Classification. This is a simple example of how to explain a Keras LSTM model using DeepExplainer.
###Code
# This model training code is directly from:
# https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py
'''Trains an LSTM model on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
# Notes
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
maxlen = 80 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
Build model...
Train...
Train on 25000 samples, validate on 25000 samples
Epoch 1/3
25000/25000 [==============================] - 124s 5ms/step - loss: 0.4573 - acc: 0.7839 - val_loss: 0.4480 - val_acc: 0.7851
Epoch 2/3
25000/25000 [==============================] - 118s 5ms/step - loss: 0.3116 - acc: 0.8736 - val_loss: 0.3868 - val_acc: 0.8273
Epoch 3/3
25000/25000 [==============================] - 118s 5ms/step - loss: 0.2237 - acc: 0.9120 - val_loss: 0.4113 - val_acc: 0.8308
25000/25000 [==============================] - 18s 711us/step
Test score: 0.4113251119995117
Test accuracy: 0.83076
###Markdown
Explain the model with DeepExplainer and visualize the first prediction
###Code
import shap
# we use the first 100 training examples as our background dataset to integrate over
explainer = shap.DeepExplainer(model, x_train[:100])
# explain the first 10 predictions
# explaining each prediction requires 2 * background dataset size runs
shap_values = explainer.shap_values(x_test[:10])
# init the JS visualization code
shap.initjs()
# transform the indexes to words
import numpy as np
words = imdb.get_word_index()
num2word = {}
for w in words.keys():
num2word[words[w]] = w
x_test_words = np.stack([np.array(list(map(lambda x: num2word.get(x, "NONE"), x_test[i]))) for i in range(10)])
# plot the explanation of the first prediction
# Note the model is "multi-output" because it is rank-2 but only has one column
shap.force_plot(explainer.expected_value[0], shap_values[0][0], x_test_words[0])
###Output
_____no_output_____ |
Colab_happy_or_sad_classification.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Below is code with a link to a happy or sad dataset which contains 80 images, 40 happy and 40 sad. Create a convolutional neural network that trains to 100% accuracy on these images and cancels training upon hitting a training accuracy of >0.999. Hint: it will work best with 3 convolutional layers.
###Code
import tensorflow as tf
import os
import zipfile
DESIRED_ACCURACY = 0.999
!wget --no-check-certificate \
"https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip" \
-O "/tmp/happy-or-sad.zip"
zip_ref = zipfile.ZipFile("/tmp/happy-or-sad.zip", 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>DESIRED_ACCURACY):
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
# This Code Block should Define and Compile the Model
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 150x150 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron.
tf.keras.layers.Dense(1, activation='sigmoid')])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255)
train_generator = train_datagen.flow_from_directory('/tmp/h-or-s/', # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=128,
class_mode='binary')
# Expected output: 'Found 80 images belonging to 2 classes'
# This code block should call model.fit and train for
# a number of epochs.
history = model.fit(train_generator,
steps_per_epoch=1,
epochs=15,
verbose=1,
callbacks=[callbacks])
# Expected output: "Reached 99.9% accuracy so cancelling training!""
###Output
Epoch 1/15
1/1 [==============================] - 2s 2s/step - loss: 3.4775 - accuracy: 0.5000
Epoch 2/15
1/1 [==============================] - 2s 2s/step - loss: 0.6636 - accuracy: 0.9000
Epoch 3/15
1/1 [==============================] - 2s 2s/step - loss: 1.1081 - accuracy: 0.5000
Epoch 4/15
1/1 [==============================] - 2s 2s/step - loss: 0.6933 - accuracy: 0.5000
Epoch 5/15
1/1 [==============================] - 2s 2s/step - loss: 0.6067 - accuracy: 0.5000
Epoch 6/15
1/1 [==============================] - 2s 2s/step - loss: 0.4850 - accuracy: 0.8875
Epoch 7/15
1/1 [==============================] - 2s 2s/step - loss: 0.5438 - accuracy: 0.6625
Epoch 8/15
1/1 [==============================] - 2s 2s/step - loss: 0.5295 - accuracy: 0.7500
Epoch 9/15
1/1 [==============================] - 2s 2s/step - loss: 0.3637 - accuracy: 0.7500
Epoch 10/15
1/1 [==============================] - 2s 2s/step - loss: 0.2643 - accuracy: 0.9125
Epoch 11/15
1/1 [==============================] - 2s 2s/step - loss: 0.1984 - accuracy: 1.0000
Reached 99.9% accuracy so cancelling training!
|
task1.ipynb | ###Markdown
###Code
import torch
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
config = {}
config['load_data'] = True
config['training_size'] = 50000  # CIFAR-10 has 50,000 training images
config['test_size'] = 1000
config['validation_size'] = 1000
config['training_shuffle'] = True
config['test_shuffle'] = False
config['validation_shuffle'] = True
config['num_of_classes'] = 10
config['k'] = 1
config['device'] = 'gpu'
config['algorithm'] = 'K-nearest Neighbors'
config['is_test'] = False
config['best_k'] = -1
config['best_accuracy'] = 0
config['axis_k'] = []
config['axis_accuracy'] = []
def load_data(config):
# Do not need validation set since using knn
CIFAR10_training_set = datasets.CIFAR10('data', train=True, download=True,
transform=transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
]))
CIFAR10_test = datasets.CIFAR10('data', train=False, download=True,
transform=transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
]))
CIFAR10_test_set = torch.utils.data.Subset(CIFAR10_test, range(0, config['test_size']))
CIFAR10_validation_set = torch.utils.data.Subset(CIFAR10_test, range(config['test_size'], config['test_size']+config['validation_size']))
training_dataloader = torch.utils.data.DataLoader(CIFAR10_training_set, batch_size=config['training_size'], shuffle=config['training_shuffle'])
test_dataloader = torch.utils.data.DataLoader(CIFAR10_test_set, batch_size=config['test_size'], shuffle=config['test_shuffle'])
validation_dataloader = torch.utils.data.DataLoader(CIFAR10_validation_set, batch_size=config['validation_size'], shuffle=config['validation_shuffle'])
return training_dataloader, test_dataloader, validation_dataloader
def knn(x_train, y_train, x_test, k, num_of_classes, device):
y_test = np.zeros((x_test.shape[0],))
# using tensor for hardware acceleration by using GPU support
tensor_x_train = x_train.to(device).float()
tensor_x_test = x_test.to(device).float()
tensor_y_train = y_train.to(device)
tensor_one_hot = torch.zeros(k, num_of_classes, device=device)
for i in range(x_test.shape[0]):
# calculate l2 norm
tensor_x_distance = torch.norm(tensor_x_train - tensor_x_test[i, :], dim=1)
# find top k samples' indices
_, tensor_x_indices = torch.topk(tensor_x_distance, k, largest=False)
# get class
tensor_y_class = torch.gather(tensor_y_train, 0, tensor_x_indices)
tensor_y_class = tensor_y_class.reshape((k, 1))
# get one-hot representation
tensor_one_hot.zero_() # in-place initialization to speed up
one_hot = tensor_one_hot.scatter_(1, tensor_y_class, 1)
sum_one_hot = torch.sum(one_hot, 0)
y_test[i] = torch.argmax(sum_one_hot)
y_test = torch.from_numpy(y_test)
return y_test
def run(config):
if config['load_data']:
config['load_data'] = False
config['training_dataloader'], config['test_dataloader'], config['validation_dataloader'] = load_data(config)
training_dataloader, test_dataloader, validation_dataloader = config['training_dataloader'], config['test_dataloader'], config['validation_dataloader']
x_train, y_train = None, None
for _, (data, target) in enumerate(training_dataloader):
x_train, y_train = data, target
break
x_test, y_test = None, None
for _, (data, target) in enumerate(test_dataloader):
x_test, y_test = data, target
break
x_validation, y_validation = None, None
for _, (data, target) in enumerate(validation_dataloader):
x_validation, y_validation = data, target
break
x_train = x_train.reshape((x_train.shape[0], -1))
x_test = x_test.reshape((x_test.shape[0], -1))
x_validation = x_validation.reshape((x_validation.shape[0], -1))
if config['is_test']:
print('Test: K: {}\t'.format(config['best_k']), end='')
predicted_y_test = knn(x_train, y_train, x_test, config['best_k'], config['num_of_classes'], config['device'])
true_labels = y_test
else:
print('Validation: K: {}\t'.format(config['k']), end='')
predicted_y_test = knn(x_train, y_train, x_validation, config['k'], config['num_of_classes'], config['device'])
true_labels = y_validation # compare validation predictions against the validation labels
correct = (true_labels == predicted_y_test).numpy().astype(np.int32).sum()
incorrect = len(true_labels) - correct
accuracy = float(correct) / len(true_labels)
print('Correct Predict: {}/{} total \tAccuracy: {:5f}'.format(correct, len(true_labels), accuracy))
if accuracy > config['best_accuracy'] and not config['is_test']:
config['best_accuracy'] = accuracy
config['best_k'] = config['k']
return accuracy
print('Training set size: {}x{}.'.format(config['training_size'], 1024))
print('Validation set size: {}x{}.'.format(config['validation_size'], 1024))
print('Test set size: {}x{}.'.format(config['test_size'], 1024))
print('Using algorithm: {}.'.format(config['algorithm']))
if config['device'] != 'cpu' and torch.cuda.is_available():
config['device'] = torch.device('cuda')
print('Using GPU: {}.'.format(torch.cuda.get_device_name(0)))
else:
config['device'] = torch.device('cpu')
print('Using CPU.')
print("Running...")
for i in range(0, 100):
config['k'] = i+1
config['axis_accuracy'].append(run(config))
config['is_test'] = True
_ = run(config)
# The learning curve of the validation accuracy with different selection of k
plt.figure(figsize=(10, 10))
plt.title("Learning Curve", fontsize=14)
plt.xlabel("K", fontsize=12)
plt.ylabel("Validation Accuracy", fontsize=12)
plt.plot(np.array(range(1, 101)), config['axis_accuracy'], label="Validation Accuracy")
plt.legend()
plt.savefig('acc.jpg')
###Output
_____no_output_____
###Markdown
###Code
#Write the code to print a literal string saying "Hello World" (#1)
print ('Hello World')
#Store your name in a variable, and then use it to print the string “Hello {{your name}}!” using a comma in the print function (#2a)
name='Hailah'
print ("Hello , " , name , '!')
# Store your name in a variable, and then use it to print the string “Hello {{your name}}!” using a + in the print function (#2b)
name='Hailah'
print ("Hello , " + name + '!')
# Store your favorite number in a variable, and then use it to print the string “Hello {{num}}!” using a comma in the print function (#3a)
number=5
print('Hello ' , number)
# Store your favorite number in a variable, and then use it to print the string “Hello {{num}}!” using a + in the print function (#3b)
number=5
print('Hello ' + str(number))
# Store 2 of your favorite foods in variables, and then use them to print the string “I love to eat {{food_one}} and {{food_two}}.” with the format method
food_one='pizza'
food_two='fruit'
print("I love to eat", f'{food_one}' , ' and' , f'{food_two}' ,".")
# Store 2 of your favorite foods in variables, and then use them to print the string “I love to eat {{food_one}} and {{food_two}}.” with f-strings
food_one='pizza'
food_two='fruit'
print("I love to eat %s , and , %s " % (food_one , food_two))
###Output
_____no_output_____
###Markdown
Lab 3. Exploring methods for working with matrices and vectors using the NumPy library
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Matrix transposition
###Code
trans_arr = np.matrix('2 4 5; 0 9 8')
print(trans_arr)
trans = trans_arr.transpose()
print(trans)
###Output
[[2 0]
[4 9]
[5 8]]
###Markdown
Property 1. A matrix transposed twice equals the original matrix:
###Code
trans_a = np.matrix('2 4 5; 1 0 2; 2 8 5')
print(trans_a)
print((trans_a.T).T)
###Output
[[2 4 5]
[1 0 2]
[2 8 5]]
###Markdown
Property 2. The transpose of a sum of matrices equals the sum of the transposed matrices:
###Code
trans_a = np.matrix('2 4 5; 8 9 10')
trans_b = np.matrix('7 8 9; 5 6 1')
trans_l = (trans_a + trans_b).T
print(trans_l)
trans_r = trans_a.T + trans_b.T
print(trans_r)
###Output
[[ 9 13]
[12 15]
[14 11]]
###Markdown
Property 3. The transpose of a product of matrices equals the product of the transposed matrices taken in reverse order:
###Code
trans_a = np.matrix('2 6; 9 1')
trans_b = np.matrix('5 8; 1 4')
trans_l = (trans_a.dot(trans_b)).T
print(trans_l)
trans_r = (trans_b.T).dot(trans_a.T)
print(trans_r)
###Output
[[16 46]
[40 76]]
###Markdown
Property 4. The transpose of a matrix multiplied by a number equals that number multiplied by the transposed matrix:
###Code
trans_a = np.matrix('2 9 7; 1 3 8')
trans_k = 4
trans_l = (trans_k * trans_a).T
print(trans_l)
trans_r = trans_k * (trans_a.T)
print(trans_r)
###Output
[[ 8 4]
[36 12]
[28 32]]
###Markdown
Operations on matrices. Multiplying a matrix by a number
###Code
A = np.matrix('1 1 1; 2 2 2')
B = 8 * A
print(B)
###Output
[[ 8 8 8]
[16 16 16]]
###Markdown
Property 1. The product of one and any given matrix equals the given matrix:
###Code
A = np.matrix('1 1; 2 2')
print(A)
B = A *1
print(B)
###Output
[[1 1]
[2 2]]
###Markdown
Property 2. Multiplying a matrix by a sum of numbers equals the sum of the products of the matrix by each of those numbers:
###Code
mult_a = np.matrix("1 2 3; 4 5 6; 7 8 9")
mult_p = 2
mult_q = 3
mult_l = (mult_p + mult_q) * mult_a
print(mult_l)
mult_r = mult_p * mult_a + mult_q * mult_a
print(mult_r)
###Output
[[ 5 10 15]
[20 25 30]
[35 40 45]]
###Markdown
Property 3. Multiplying a matrix by the product of two numbers equals multiplying the matrix by one of the numbers and then by the other:
###Code
mult_l = (mult_p * mult_q) * mult_a
print(mult_l)
mult_r = mult_p * (mult_q * mult_a)
print(mult_r)
###Output
[[ 6 12 18]
[24 30 36]
[42 48 54]]
###Markdown
Property 4. Multiplying a sum of matrices by a number equals the sum of the products of each matrix by that number:
###Code
mult_a = np.matrix('2 2; 4 7')
mult_b = np.matrix('1 6; 9 8')
mult_k = 3
mult_l = mult_k * (mult_a + mult_b)
print(mult_l)
mult_r = mult_k * mult_a + mult_k * mult_b
print(mult_r)
###Output
[[ 9 24]
[39 45]]
###Markdown
Matrix addition
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('1 6; 9 8')
sum_c = sum_a + sum_b
print(sum_c)
###Output
[[ 3 8]
[13 15]]
###Markdown
Property 1. Commutativity of addition. The sum does not change when the matrices are swapped:
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('1 6; 9 8')
sum_l = sum_a + sum_b
print(sum_l)
sum_q = sum_b + sum_a
print(sum_q)
###Output
[[ 3 8]
[13 15]]
###Markdown
Property 2. Associativity of addition. The result of adding three or more matrices does not depend on the order in which the additions are performed:
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('1 6; 9 8')
sum_c = np.matrix('3 8; 4 9')
sum_l = sum_a + (sum_b + sum_c)
print(sum_l)
sum_q = (sum_a + sum_b) + sum_c
print(sum_q)
###Output
[[ 6 16]
[17 24]]
###Markdown
Property 3. For any matrix there exists an opposite matrix such that their sum is the zero matrix:
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('0 0; 0 0')
sum_l = sum_a + (-1) * sum_a
print(sum_l)
print(sum_b)
###Output
[[0 0]
[0 0]]
###Markdown
Matrix multiplication
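For reference, the entry-wise definition being exercised below is $(AB)_{ij} = \sum_{k} a_{ik} b_{kj}$, which is what NumPy's `dot` computes for the matrices in the next cell.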
###Code
sum_a = np.matrix('2 2 2; 5 4 7')
sum_b = np.matrix('6 9; 8 2; 4 5')
sum_l = sum_a.dot(sum_b)
print(sum_l)
###Output
[[36 32]
[90 88]]
###Markdown
Property 1. Associativity of multiplication. The result of multiplying matrices does not depend on the order in which the operation is performed:
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('6 9; 8 2')
sum_c = np.matrix('2 3; 4 5')
sum_l = sum_a.dot(sum_b.dot(sum_c))
print(sum_l)
sum_q = (sum_a.dot(sum_b)).dot(sum_c)
print(sum_q)
###Output
[[144 194]
[360 490]]
###Markdown
Property 2. Distributivity of multiplication. The product of a matrix and a sum of matrices equals the sum of the products:
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('6 9; 8 2')
sum_c = np.matrix('2 3; 4 5')
sum_l = sum_a.dot(sum_b + sum_c)
print(sum_l)
sum_r = sum_a.dot(sum_b) + sum_a.dot(sum_c)
print(sum_r)
###Output
[[ 40 38]
[116 97]]
###Markdown
Property 3. Matrix multiplication is not commutative in general: for matrices, the product is not independent of the order of the factors:
###Code
sum_a = np.matrix('2 2; 4 7')
sum_b = np.matrix('6 9; 8 2')
sum_l = sum_a.dot(sum_b)
print(sum_l)
sum_q = sum_b.dot(sum_a)
print(sum_q)
###Output
[[48 75]
[24 30]]
###Markdown
Determinant of a matrix
###Code
lin_a = np.matrix('4 -1 -2; 19 4 -1; 8 3 1')
print(lin_a)
np.linalg.det(lin_a)
###Output
_____no_output_____
###Markdown
Property 1. The determinant of a matrix does not change under transposition:
###Code
lin_a = np.matrix('4 -1 -2; 19 4 -1; 8 3 1')
print(lin_a)
print(lin_a.T)
lin_l = round(np.linalg.det(lin_a), 3)
print(lin_l)
lin_q = round(np.linalg.det(lin_a.T), 3)
print(lin_q)
###Output
5.0
###Markdown
Property 2. If a matrix has a row or column consisting of zeros, its determinant equals zero:
###Code
lin_a = np.matrix('4 -1 -2; 0 0 0; 8 3 1')
print(lin_a)
print(np.linalg.det(lin_a))
###Output
0.0
###Markdown
Property 3. Swapping two rows of a matrix flips the sign of its determinant:
###Code
lin_a = np.matrix('4 -1 -2; 19 4 -1; 8 3 1')
lin_b = np.matrix('19 4 -1; 4 -1 -2; 8 3 1')
print(round(np.linalg.det(lin_a)))
print(round(np.linalg.det(lin_b)))
###Output
-5.0
###Markdown
Property 4. If a matrix has two identical rows, its determinant equals zero:
###Code
lin_a = np.matrix('4 -1 -2; 4 -1 -2; 8 3 1')
print(lin_a)
print(np.linalg.det(lin_a))
###Output
0.0
###Markdown
Property 5. If all elements of a row or column of a matrix are multiplied by a number, the determinant is multiplied by that number:
###Code
lin_a = np.matrix('4 -1 -2; 7 8 4; 8 3 1')
lin_k = 2
lin_b = lin_a.copy()
lin_b[1, :] = lin_k * lin_b[1, :]
print(lin_b)
lin_l = round(np.linalg.det(lin_a), 3)
print(lin_l)
lin_q = round(np.linalg.det(lin_b), 3)
print(lin_q)
###Output
90.0
###Markdown
Inverse matrix
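Recall that the inverse matrix $A^{-1}$ is defined by $A A^{-1} = A^{-1} A = E$, where $E$ is the identity matrix; `np.linalg.inv` below computes it numerically.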
###Code
rev_a = np.matrix('2 2; 4 7')
print(np.linalg.inv(lin_a))
###Output
[[-0.08888889 -0.11111111 0.26666667]
[ 0.55555556 0.44444444 -0.66666667]
[-0.95555556 -0.44444444 0.86666667]]
###Markdown
Property 1. The inverse of the inverse matrix is the original matrix:
###Code
rev_a = np.matrix('2 2; 4 7')
print( np.linalg.inv(np.linalg.inv(rev_a)))
###Output
[[2. 2.]
[4. 7.]]
###Markdown
Property 2. The inverse of a transposed matrix equals the transpose of the inverse matrix:
###Code
rev_a = np.matrix('2 2; 4 7')
rev_l = np.linalg.inv(rev_a.T)
print(rev_l)
rev_q = (np.linalg.inv(rev_a)).T
print(rev_q)
###Output
[[ 1.16666667 -0.66666667]
[-0.33333333 0.33333333]]
###Markdown
Property 3. The inverse of a product of matrices equals the product of the inverse matrices (in reverse order):
###Code
rev_a = np.matrix('2 2; 4 7')
rev_b = np.matrix('6 9; 8 2')
print(np.linalg.inv(rev_a.dot(rev_b)))
print(np.linalg.inv(rev_b).dot(np.linalg.inv(rev_a))
)
###Output
[[-0.13888889 0.06111111]
[ 0.22222222 -0.07777778]]
###Markdown
Matrix method. Consider the system of linear equations: x - 2y + 3z - t = 6; 2x + 3y - 4z + 4t = -7; 3x + y - 2z - 2t = 9; x - 3y + 7z + 6t = -7
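In matrix form the system reads $K\mathbf{x} = \mathbf{l}$, so the solution is $\mathbf{x} = K^{-1}\mathbf{l}$ (provided $\det K \neq 0$); the cell below computes exactly this with `np.linalg.inv` and cross-checks it with `np.linalg.solve`.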
###Code
mat_k = np.array([[1, -2, 3, -1], [2, 3, -4, 4], [3, 1, -2, -2], [1, -3, 7, 6]])
print(mat_k)
mat_d = round(np.linalg.det(mat_k), 3)
print(mat_d)
mat_l = np.array([[6], [-7], [9], [-7]])
print(mat_l)
mat_k_inv = np.linalg.inv(mat_k)
print(mat_k_inv @ mat_l)
print(np.linalg.solve(mat_k, mat_l))
###Output
[[ 2.]
[-1.]
[ 0.]
[-2.]]
###Markdown
Cramer's rule. Find the solution of the same system of linear equations using Cramer's rule: x - 2y + 3z - t = 6; 2x + 3y - 4z + 4t = -7; 3x + y - 2z - 2t = 9; x - 3y + 7z + 6t = -7
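As a reminder of the formula being applied: Cramer's rule gives $x_i = \det K_i / \det K$, where $K_i$ is the coefficient matrix $K$ with its $i$-th column replaced by the right-hand side $\mathbf{l}$; each $K_i$ is built below with `np.copy` and a column assignment.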
###Code
kram_k = np.array([[1, -2, 3, -1], [2, 3, -4, 4], [3, 1, -2, -2], [1, -3, 7, 6]])
kram_l = np.array([[6], [-7], [9], [-7]])
print(kram_k)
kram_d = round(np.linalg.det(kram_k), 3)
print(kram_d)
kram_temp = np.copy(kram_k)
kram_temp[:, 0] = kram_l[:, 0]
print(kram_temp)
kram_d_1 = np.linalg.det(kram_temp)
print(kram_d_1)
kram_temp = np.copy(kram_k)
kram_temp[:, 1] = kram_l[:, 0]
kram_d_2 = np.linalg.det(kram_temp)
print(kram_d_2)
kram_temp = np.copy(kram_k)
kram_temp[:, 2] = kram_l[:, 0]
kram_d_3 = np.linalg.det(kram_temp)
print(kram_d_3)
kram_temp = np.copy(kram_k)
kram_temp[:, 3] = kram_l[:, 0]
kram_d_4 = np.linalg.det(kram_temp)
print(kram_d_4)
print(f"x = {kram_d_1/kram_d} \ny = {kram_d_2/kram_d} \nz = {kram_d_3/kram_d} \nt = {kram_d_4/kram_d}" )
###Output
x = 1.999999999999999
y = -0.9999999999999998
z = 0.0
t = -2.000000000000002
###Markdown
Entrance assignment for the ML team. Anton Ryabushev, May 2019. Problem statement: implement, as efficiently as possible, a hierarchical clustering algorithm with single-link merging for points located on a line. Introduction: two algorithms were implemented, a naive one and a fast one; a description of each algorithm is given below. Common part: Node is a tree vertex that maintains the following invariant: [l, r) is the half-open interval of the data reachable from this vertex.
###Code
%matplotlib inline
from scipy.cluster import hierarchy
import matplotlib.pyplot as plt
import numpy as np
import random
class Node:
_counter = 0
def __init__(self, l, r, childs=None):
self._l, self._r, self._childs = l, r, childs
self._id = Node._counter
Node._counter += 1
def __str__(self, deep=0):
prefix = "|--" * deep
# result = "{}{}, l: {}, r: {}".format(prefix, self._id, self._l, self._r)
result = "{}{}".format(prefix, self._id, self._l, self._r)
if self._childs:
for child in self._childs:
result += '\n' + child.__str__(deep + 1)
return result
def __eq__(self, other):
result = self._l == other._l and self._r == other._r
if self.leaf() or other.leaf():
return result and self.leaf() == other.leaf()
for first, second in zip(self._childs, other._childs):
result = result and first == second
return result
def leaf(self):
return self._childs is None
def get_deltas(value):
deltas = [right - left for left, right in zip(value[:-1], value[1:])]
return deltas
###Output
_____no_output_____
###Markdown
The naive algorithm. The tree is built recursively. Find the largest distance between adjacent elements (there may be several equal ones); the merge across these largest gaps will clearly happen last. Split the vertices into groups according to which side of these gaps each vertex falls on, build a tree for each group, and attach the resulting subtrees under one common parent. The algorithm is similar to quicksort; in the worst case it takes $O(n^2)$.
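A worked worst case: for geometrically spaced points such as 1, 2, 4, 8, 16 the largest gap always isolates just the last point, so every level of recursion removes a single element; the recursion depth is about $n$ and the work spent scanning the remaining gaps sums to $O(n^2)$, exactly like quicksort with an unlucky pivot.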
###Code
def recursion_build(deltas, start_index):
if not deltas:
return Node(start_index, start_index + 1)
max_delta = max(deltas)
childs = list()
new_deltas = list()
start = 0
for index, delta in enumerate(deltas):
if delta == max_delta:
childs.append(recursion_build(new_deltas, start_index + start))
new_deltas = list()
start = index + 1
else:
new_deltas.append(delta)
childs.append(recursion_build(new_deltas, start_index + start))
return Node(start_index, start_index + len(deltas) + 1, childs)
def build_tree_simple(value):
deltas = get_deltas(value)
return recursion_build(deltas, 0)
###Output
_____no_output_____
###Markdown
The smart algorithm. We build the tree bottom-up, starting from the smallest distances. After processing each distance we maintain the following invariant: for every cluster, the current_nodes entries of its boundary points refer to that cluster. For example, before the algorithm starts, each element of current_nodes points to its own cluster $[i, i + 1)$; after the algorithm finishes, cells 0 and n-1 refer to the node responsible for the cluster $[0, n)$, i.e. for all the data. For a fixed distance d we walk from lower-index vertices to higher-index ones whose next neighbour is at distance d. While they are adjacent we append them to a list; when the next vertex in the traversal is not adjacent, we create a new cluster. When creating a cluster we update current_nodes for its boundaries, thereby preserving the invariant. The complexity is $O(n \log n)$.
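A quick accounting of that bound: building the `delta_to_indexes` dictionary in the code below is $O(n)$, sorting its at most $n-1$ distinct keys is $O(n \log n)$, and every index is appended to exactly one group and wrapped into a node a constant number of times, so the merging pass is $O(n)$ overall; the sort dominates.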
###Code
def check_neighborhood_of_vertices(first, second, current_nodes):
return current_nodes[first]._r == current_nodes[second]._l
def union_group_to_new_node(group, current_nodes):
childs = list(map(lambda index: current_nodes[index], group))
l, r = childs[0]._l, childs[-1]._r
new_node = Node(l, r, childs)
current_nodes[l] = new_node
current_nodes[r - 1] = new_node
def build_tree_smart(value):
deltas = get_deltas(value)
current_nodes = [Node(index, index + 1) for index in range(len(value))]
delta_to_indexes = dict()
for index, delta in enumerate(deltas):
if delta in delta_to_indexes:
delta_to_indexes[delta].append(index)
else:
delta_to_indexes[delta] = [index]
for delta, indexes in sorted(delta_to_indexes.items()):
group = list()
for index in indexes:
if group and not check_neighborhood_of_vertices(group[-1], index, current_nodes):
group.append(group[-1] + 1)
union_group_to_new_node(group, current_nodes)
group.clear()
group.append(index)
if group:
group.append(group[-1] + 1)
union_group_to_new_node(group, current_nodes)
return current_nodes[0]
###Output
_____no_output_____
###Markdown
Test
###Code
def test(value):
assert value.sort() != value
first = build_tree_simple(value)
second = build_tree_smart(value)
assert first == second
def test_all_eq():
values = list(range(1, 100, 5))
test(values)
def some_tests():
test([2 ** i for i in range(10)])
test([0, 1, 3, 4])
test_all_eq()
def random_test(num_of_element, max_int):
values = sorted([random.randrange(max_int) for i in range(num_of_element)])
test(values)
def n_random_test(num_of_test=100, num_of_element=100, max_int=100):
for i in range(num_of_test):
random_test(num_of_element, max_int)
###Output
_____no_output_____
###Markdown
Run
###Code
random.seed(42)
some_tests()
n_random_test(10000, 100, 10)
###Output
_____no_output_____
###Markdown
Some examples. The indices into the original array are printed, not the data values themselves.
###Code
Node._counter = 0
print(build_tree_smart([1, 1, 1]))
Node._counter = 0
print(build_tree_smart([1, 2, 4]))
Node._counter = 0
print(build_tree_smart([1, 3, 4]))
###Output
4
|--0
|--3
|--|--1
|--|--2
###Markdown
Bad cases. Clearly, when all the distances are equal, there is no clustering at all.
###Code
Node._counter = 0
print(build_tree_smart(range(1, 20, 4)))
###Output
5
|--0
|--1
|--2
|--3
|--4
###Markdown
Consider a point that has many points to its left and a single point to its right that is slightly closer. Since there are more points on the left, we would like to say that the point most likely belongs to the left cluster.
###Code
Node._counter = 0
print(build_tree_smart([0, 0, 0, 0, 0, 0, 500, 999]))
###Output
10
|--8
|--|--0
|--|--1
|--|--2
|--|--3
|--|--4
|--|--5
|--9
|--|--6
|--|--7
###Markdown
Consider a point that has many points close to one another at distance d on its left, and several consecutive points at distance d - eps on its right. We end up merging the point with the ones on its right, up to the rightmost one, which is noticeably farther from our point than the points on the left.
###Code
Node._counter = 0
print(build_tree_smart([0, 0, 0, 0, 0, 0, 2, 3, 4, 5]))
###Output
12
|--10
|--|--0
|--|--1
|--|--2
|--|--3
|--|--4
|--|--5
|--11
|--|--6
|--|--7
|--|--8
|--|--9
###Markdown
Data Exploration. The following libraries were used for the first part of this work: NumPy, Pandas, Matplotlib. First part: the files `X.csv` and `Y.csv` were loaded and the beginning of each file is displayed.
###Code
import pandas as pd
import matplotlib.pyplot as pp
import seaborn as sns
import json
x_column_names = ['TimeStamp',
'all_..idle',
'X..memused',
'proc.s',
'cswch.s',
'file.nr',
'sum_intr.s',
'ldavg.1',
'tcpsck',
'pgfree.s']
x_column_types = ['int64',
'float64',
'float64',
'float64',
'float64',
'float64',
'float64',
'float64',
'float64',
'float64']
x_data = pd.read_csv('./data/X.csv')
x_data = x_data.astype(dict(zip(x_column_names, x_column_types)))
x_data.head()
y_column_names = ['TimeStamp', 'DispFrames']
y_column_types = ['int64', 'float64']
y_output = pd.read_csv('./data/Y.csv')
y_output = y_output.astype(dict(zip(y_column_names, y_column_types)))
y_output.head()
###Output
_____no_output_____
###Markdown
Exercises. The proposed activities are carried out in the cells that follow. 1. Compute the following statistics for each component of X and Y: mean, maximum, minimum, 25th percentile, 90th percentile, and standard deviation.
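As a quick cross-check of the helper functions below, pandas can report most of these statistics in a single call (a sketch, using the `x_data` DataFrame loaded above):
###Code
# Optional sanity check: describe() returns count, mean, std, min, max and the requested percentiles
x_data.describe(percentiles=[0.25, 0.9])
###Output
_____no_output_____
###Markdown
The helper functions below compute the same quantities explicitly for each column of X and Y.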
###Code
x_metrics = dict()
y_metrics = dict()
def mean(data, column):
return float(data[column].mean())
def maximum(data, column):
return float(data[column].max())
def minimum(data, column):
return float(data[column].min())
def percentile(data, column, percentile):
return float(data[column].quantile(percentile))
def standard_deviation(data, column):
return float(data[column].std())
for column in x_column_names:
column_metrics = dict()
column_metrics['mean'] = mean(x_data, column)
column_metrics['maximum'] = maximum(x_data, column)
column_metrics['minimum'] = minimum(x_data, column)
column_metrics['25th_percentile'] = percentile(x_data, column, 0.25)
column_metrics['90th_percentile'] = percentile(x_data, column, 0.9)
column_metrics['standard_deviation'] = standard_deviation(x_data, column)
x_metrics[column] = column_metrics
for column in y_column_names:
column_metrics = dict()
column_metrics['mean'] = mean(y_output, column)
column_metrics['maximum'] = maximum(y_output, column)
column_metrics['minimum'] = minimum(y_output, column)
column_metrics['25th_percentile'] = percentile(y_output, column, 0.25)
column_metrics['90th_percentile'] = percentile(y_output, column, 0.9)
column_metrics['standard_deviation'] = standard_deviation(y_output, column)
y_metrics[column] = column_metrics
print("X METRICS:")
print(json.dumps(x_metrics, indent = 4))
print("\nY METRICS:")
print(json.dumps(y_metrics, indent = 4))
###Output
X METRICS:
{
"TimeStamp": {
"mean": 1409266578.5,
"maximum": 1409268378.0,
"minimum": 1409268378.0,
"25th_percentile": 1409265678.75,
"90th_percentile": 1409268018.1,
"standard_deviation": 1039.3748120865735
},
"all_..idle": {
"mean": 9.064980555555556,
"maximum": 69.54,
"minimum": 69.54,
"25th_percentile": 0.0,
"90th_percentile": 38.621,
"standard_deviation": 16.122822271521386
},
"X..memused": {
"mean": 89.13751666666667,
"maximum": 97.84,
"minimum": 97.84,
"25th_percentile": 82.965,
"90th_percentile": 96.77,
"standard_deviation": 8.183661998198009
},
"proc.s": {
"mean": 7.683302777777778,
"maximum": 48.0,
"minimum": 48.0,
"25th_percentile": 0.0,
"90th_percentile": 20.0,
"standard_deviation": 8.532605535161235
},
"cswch.s": {
"mean": 54045.87402222222,
"maximum": 83880.0,
"minimum": 83880.0,
"25th_percentile": 31302.0,
"90th_percentile": 72135.1,
"standard_deviation": 19497.81154016128
},
"file.nr": {
"mean": 2656.3333333333335,
"maximum": 2976.0,
"minimum": 2976.0,
"25th_percentile": 2496.0,
"90th_percentile": 2880.0,
"standard_deviation": 196.1107477828365
},
"sum_intr.s": {
"mean": 19978.04074722222,
"maximum": 35536.0,
"minimum": 35536.0,
"25th_percentile": 16678.0,
"90th_percentile": 28228.399999999998,
"standard_deviation": 4797.271325195323
},
"ldavg.1": {
"mean": 75.87577222222221,
"maximum": 147.47,
"minimum": 147.47,
"25th_percentile": 28.2,
"90th_percentile": 127.993,
"standard_deviation": 43.862444516579934
},
"tcpsck": {
"mean": 48.9975,
"maximum": 87.0,
"minimum": 87.0,
"25th_percentile": 34.0,
"90th_percentile": 71.0,
"standard_deviation": 15.871155449389754
},
"pgfree.s": {
"mean": 72872.15456944444,
"maximum": 145874.0,
"minimum": 145874.0,
"25th_percentile": 61601.75,
"90th_percentile": 97532.5,
"standard_deviation": 19504.32117533753
}
}
Y METRICS:
{
"TimeStamp": {
"mean": 1409266578.5,
"maximum": 1409268378.0,
"minimum": 1409268378.0,
"25th_percentile": 1409265678.75,
"90th_percentile": 1409268018.1,
"standard_deviation": 1039.3748120865735
},
"DispFrames": {
"mean": 18.818394444386,
"maximum": 30.390000104899997,
"minimum": 30.390000104899997,
"25th_percentile": 13.3900001049,
"90th_percentile": 24.609999895100003,
"standard_deviation": 5.219756238205702
}
}
###Markdown
2. Compute the following quantities: a) the number of observations with memory usage larger than 80%; b) the average number of used TCP sockets for observations with more than 18000 interrupts/sec; c) the minimum memory utilization for observations with CPU idle time lower than 20%.
###Code
# The number of observations with memory usage larger than 80%;
# The column `X..memused` holds information regarding memory usage
memory_usage_count = x_data[x_data['X..memused'] > 80.0]['X..memused'].count()
print('Memory usage above 80% count:', memory_usage_count)
# The average number of used TCP sockets for observations with more than 18000 interrupts/sec;
# First we filter the using the column `sum intr.s` and then do the average using the column `tcpsck`
avg_tcp = x_data[x_data['sum_intr.s'] > 18000]['tcpsck'].mean()
print('Average of TCP sockets:', avg_tcp)
# The minimum memory utilization for observations with CPU idle time lower than 20%.
# First we filter with the column `all_..idle` and select the minimum from the column `X..memused`.
minimum_mem = x_data[x_data['all_..idle'] < 0.2]['X..memused'].min()
print('Minimum memory utilization:', minimum_mem)
###Output
Minimum memory utilization: 73.03
###Markdown
3. Produce the following plots: a) Time series of percentage of idle CPU and of used memory (both in a single plot); b) Density plots, histograms, and box plots of idle CPU and of used memory.
###Code
pp.figure(figsize = (18, 5))
timestamps = x_data['TimeStamp']
pp.plot(timestamps, x_data['all_..idle'], 'r')
pp.plot(timestamps, x_data['X..memused'], 'b')
pp.legend(['CPU', 'Memoria'])
pp.show()
# Density plots CPU and memory.
x_data['all_..idle'].plot(kind='density', legend=True)
x_data['X..memused'].plot(kind='density', legend=True)
# Histogram for CPU and memory.
x_data['all_..idle'].plot(kind='hist', legend=True)
x_data['X..memused'].plot(kind='hist', legend=True)
# Boxplot for CPU and memory.
fig, axes = pp.subplots(1, 2, figsize=(18, 5))
x_data['all_..idle'].plot(kind='box', by='type', ax=axes.flatten()[0])
x_data['X..memused'].plot(kind='box', by='type', ax=axes.flatten()[1])
pp.tight_layout()
pp.show()
###Output
_____no_output_____
###Markdown
Network Data Science with NetworkX and Python. Load graphs from Excel spreadsheet files.
###Code
from random import sample
import networkx as nx
import pandas as pd
link = ('https://github.com/dnllvrvz/Social-Network-Dataset/raw/master/Social%20Network%20Dataset.xlsx')
network_data = pd.read_excel(link, sheet_name = ['Elements', 'Connections'])
elements_data = network_data['Elements']
connections_data = network_data['Connections']
connections_data.head(10)
edge_cols = ['Type', 'Weight', 'When']
graph = nx.convert_matrix.from_pandas_edgelist(connections_data,
source = 'From',
target = 'To',
edge_attr = edge_cols)
sampled_edges = sample(list(graph.edges), 10)  # materialize the edge view so random.sample works on newer Python versions
graph.edges[sampled_edges[0]]
elements_data.head(10)
node_dict = elements_data.set_index('Label').to_dict(orient = 'index')
nx.set_node_attributes(graph, node_dict)
sampled_nodes = sample(list(graph.nodes), 10)
graph.nodes[sampled_nodes[0]]
###Output
_____no_output_____
###Markdown
QOSF Mentorship II Task 1  I've chosen to work in Qiskit and to attempt as much of this from scratch as possible.
###Code
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit import Aer, execute
from qiskit.quantum_info import Statevector, random_statevector
from qiskit.visualization import *
import numpy as np
import random
from qiskit_tools import array_to_latex
import matplotlib.pyplot as plt
import time
qubits = 4
###Output
_____no_output_____
###Markdown

###Code
def generate_even_block(theta : ParameterVector):
width = len(theta)
qc = QuantumCircuit(width)
for i, t in enumerate(theta.params):
qc.rz(t, i)
for i in range(width):
for j in range(i+1, width):
qc.cz(i, j)
return qc
theta = ParameterVector(f"$\\theta_i$", qubits)
qc = generate_even_block(theta)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown

###Code
def generate_odd_block(theta : ParameterVector):
width = len(theta)
qc = QuantumCircuit(width)
for i, t in enumerate(theta.params):
qc.rx(t, i)
return qc
qc = generate_odd_block(theta)
qc.draw(output='mpl')
def get_random_parameters_in_range(parameters, start, stop):
assigned_params = {}
for p in sorted(parameters, key=lambda p: p.name):
assigned_params[p] = random.uniform(start, stop)
return assigned_params
get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
###Output
_____no_output_____
###Markdown
Now, let's put it all together for the full state preparation circuit (still parameterized though):
###Code
def generate_state_preparation_circuit(width, L):
qc = QuantumCircuit(width)
i = 0
for layer in range(L):
i = i + 1
theta = ParameterVector(f"$\\theta_{i}$", width)
qc += generate_odd_block(theta)
i = i + 1
theta = ParameterVector(f"$\\theta_{i}$", width)
qc += generate_even_block(theta)
return qc
qc = generate_state_preparation_circuit(4, 1)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown

###Code
def norm(sv):
return np.sum(sv.conjugate().data * sv.data).real
phi = Statevector([1, 0]).tensor(Statevector([1, 0])).tensor(Statevector([1, 0])).tensor(Statevector([1, 0]))
print(f'|phi> = {np.around(phi.data, 1)}\n||phi|| = {norm(phi)}\n')
display(plot_bloch_multivector(phi.data))
psi = Statevector([0, 1]).tensor(Statevector([0, 1])).tensor(Statevector([0, 1])).tensor(Statevector([0, 1]))
print(f'|psi> = {np.around(psi.data, 1)}\n||psi|| = {norm(psi)}\n')
display(plot_bloch_multivector(psi.data))
print(f'||psi - phi|| = {norm(psi - phi)}')
###Output
|phi> = [1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
||phi|| = 1.0
###Markdown
For gradient descent, let's try using the [parameter shift approach](https://pennylane.ai/qml/glossary/parameter_shift.html) from Schuld et al.
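For gates generated by Pauli operators (the RX/RZ rotations used here), the rule states that the derivative with respect to a parameter can be obtained from two evaluations at shifted values: $\frac{\partial f}{\partial \theta_i} = \frac{1}{2}\left[f\left(\theta_i + \tfrac{\pi}{2}\right) - f\left(\theta_i - \tfrac{\pi}{2}\right)\right]$. The function below applies the same $\pm\pi/2$ shift to our state-distance loss.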
###Code
def gradient_param_shift(target, qc, params):
params_gradient = {}
qubit_count = qc.num_qubits
for p, v in params.items():
# we work with a copy, so that we can differentiate w.r.t. a single param at a time
params_shift = params.copy()
# for pauli operators we can apply +/- pi/2
v_plus = v + np.pi/2
params_shift[p] = v_plus
qc_bound = qc.assign_parameters(params_shift, inplace=False)
psi_plus = Statevector.from_instruction(qc_bound)
v_neg = v - np.pi/2
params_shift[p] = v_neg
qc_bound = qc.assign_parameters(params_shift, inplace=False)
psi_neg = Statevector.from_instruction(qc_bound)
# print(psi_plus - psi_neg)
params_gradient[p] = 0.5 * (norm(psi_plus - target) - norm(psi_neg - target))
return params_gradient
###Output
_____no_output_____
###Markdown
Let's try training (briefly) our circuit parameters to approximate a random state $|\phi\rangle$ and compare different learning rates.
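Concretely, each training step below does plain gradient descent on the loss $L(\theta) = \lVert\, |\psi(\theta)\rangle - |\phi\rangle \,\rVert^2$, updating every parameter as $\theta_i \leftarrow \theta_i - \eta\, \partial L/\partial \theta_i$, with $\eta$ the learning rate.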
###Code
def train(target, qc, params, iterations, learning_rate, print_modulus=None):
if print_modulus is None:
print_modulus = max(iterations / 10, 10) # print every so often
psi = None
for i in range(iterations):
qc_bound = qc.assign_parameters(params, inplace=False)
psi = Statevector.from_instruction(qc_bound)
params_gradient = gradient_param_shift(target, qc, params)
if i % print_modulus == 0 or i == iterations - 1:
print("Iteration %4d => Loss: %.10f" % (i, norm(psi - target)))
# Update the parameters by descending along the gradient
for p, v in params.items():
params[p] = v - params_gradient[p] * learning_rate
return (params, psi, norm(psi - target))
phi = random_statevector(2 ** qubits, seed=19)
print(f'|phi> = {np.around(phi.data, 1)}')
layers = range(1, 4 + 1)
results = {}
iterations = 10
for lr in [0.1, 0.01]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using {iterations} iterations and {lr} learning rate:")
qc = generate_state_preparation_circuit(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
start_time = time.time()
(final_params, psi, loss) = train(phi, qc, random_params, iterations, lr)
end_time = time.time()
diff_time = end_time - start_time
print(f'Completed in {np.around(diff_time, 1)} seconds')
losses.append(loss)
results[lr] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for lr, losses in results.items():
plt.plot(layers, losses, label=f"lr:{lr}")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 2), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
|phi> = [-0.2+0.1j -0.1+0.j -0.2-0.3j 0.4-0.2j 0.3+0.1j 0.1+0.1j 0. -0.1j
-0.2+0.1j 0.1-0.3j -0.1-0.j -0.1+0.j 0.3-0.1j -0.2+0.2j -0.1-0.2j
-0.2-0.j 0.2+0.1j]
Training with 1 layer(s) using 10 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.8873503548
Iteration 9 => Loss: 1.5879384736
Completed in 0.7 seconds
Training with 2 layer(s) using 10 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.4285553549
Iteration 9 => Loss: 1.7483062657
Completed in 2.6 seconds
Training with 3 layer(s) using 10 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.9808481460
Iteration 9 => Loss: 1.0160272721
Completed in 5.8 seconds
Training with 4 layer(s) using 10 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.0548129683
Iteration 9 => Loss: 1.2705427502
Completed in 10.2 seconds
Training with 1 layer(s) using 10 iterations and 0.01 learning rate:
Iteration 0 => Loss: 1.8873503548
Iteration 9 => Loss: 1.8563101375
Completed in 0.7 seconds
Training with 2 layer(s) using 10 iterations and 0.01 learning rate:
Iteration 0 => Loss: 2.4285553549
Iteration 9 => Loss: 2.3815842439
Completed in 2.7 seconds
Training with 3 layer(s) using 10 iterations and 0.01 learning rate:
Iteration 0 => Loss: 1.9808481460
Iteration 9 => Loss: 1.8534175146
Completed in 5.8 seconds
Training with 4 layer(s) using 10 iterations and 0.01 learning rate:
Iteration 0 => Loss: 2.0548129683
Iteration 9 => Loss: 1.9465854558
Completed in 10.0 seconds
###Markdown
Let's stick with a learning rate of 0.1 and try with more layers and iterations, but let's stop if we hit a [barren plateau](https://pennylane.ai/qml/demos/tutorial_barren_plateaus.html).
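The plateau check in the sketch below is a simple variance threshold on the latest gradient: training stops early once $\mathrm{Var}_i\!\left[\partial L/\partial \theta_i\right] < 10^{-4}$, i.e. once essentially every component of the gradient has collapsed towards zero.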
###Code
def train_stop_on_barren(target, qc, params, iterations, learning_rate, print_modulus=None):
if print_modulus is None:
print_modulus = max(iterations / 10, 10) # print every so often
psi = None
for i in range(iterations):
qc_bound = qc.assign_parameters(params, inplace=False)
psi = Statevector.from_instruction(qc_bound)
params_gradient = gradient_param_shift(target, qc, params)
params_gradient_list = list(params_gradient.values())
params_gradient_variance = np.var(params_gradient_list)
force_print = False
stop_training = False
if params_gradient_variance < 0.0001:
print("Reached barren plateau!")
force_print = True
stop_training = True
if i % print_modulus == 0 or i == iterations - 1 or force_print:
print("Iteration %4d => Loss: %.10f" % (i, norm(psi - target)))
for p, v in params.items():
params[p] = v - params_gradient[p] * learning_rate
if stop_training:
break
return (params, psi, norm(psi - target))
layers = range(1, 8 + 1)
results = {}
iterations = 1000
for lr in [0.1]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using {iterations} iterations and {lr} learning rate:")
qc = generate_state_preparation_circuit(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
start_time = time.time()
(final_params, psi, loss) = train_stop_on_barren(phi, qc, random_params, iterations, lr, print_modulus=100)
end_time = time.time()
diff_time = end_time - start_time
print(f'Completed in {np.around(diff_time, 1)} seconds')
losses.append(loss)
results[lr] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for lr, losses in results.items():
plt.plot(layers, losses, label=f"lr:{lr}")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 3), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
Training with 1 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.8873503548
Iteration 100 => Loss: 0.6165497798
Iteration 200 => Loss: 0.5269796594
Reached barren plateau!
Iteration 222 => Loss: 0.5244657265
Completed in 15.7 seconds
Training with 2 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.4285553549
Iteration 100 => Loss: 0.5932438200
Iteration 200 => Loss: 0.2427542444
Reached barren plateau!
Iteration 266 => Loss: 0.2024701544
Completed in 69.3 seconds
Training with 3 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.9808481460
Iteration 100 => Loss: 0.1668145409
Iteration 200 => Loss: 0.1301038545
Iteration 300 => Loss: 0.1038422246
Reached barren plateau!
Iteration 322 => Loss: 0.0988313033
Completed in 185.6 seconds
Training with 4 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.0548129683
Iteration 100 => Loss: 0.2818047994
Iteration 200 => Loss: 0.1416207885
Iteration 300 => Loss: 0.0713924145
Reached barren plateau!
Iteration 311 => Loss: 0.0681323848
Completed in 318.6 seconds
Training with 5 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.3943248296
Iteration 100 => Loss: 0.0497953347
Reached barren plateau!
Iteration 105 => Loss: 0.0479611377
Completed in 169.7 seconds
Training with 6 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.7324230803
Iteration 100 => Loss: 0.0287417249
Reached barren plateau!
Iteration 104 => Loss: 0.0270180591
Completed in 237.4 seconds
Training with 7 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.3203635195
Reached barren plateau!
Iteration 90 => Loss: 0.0103421423
Completed in 276.8 seconds
Training with 8 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.0509456223
Reached barren plateau!
Iteration 51 => Loss: 0.0113199884
Completed in 209.0 seconds
###Markdown
It seems that using 5 layers offers a good trade-off between loss minimization (0.048) and training time (169.7 s). Let's continue with some more experimentation here.
###Code
def generate_state_preparation_circuit_swap_blocks(width, L):
qc = QuantumCircuit(width)
i = 0
for layer in range(L):
i = i + 1
theta = ParameterVector(f"$\\theta_{i}$", width)
qc += generate_even_block(theta)
i = i + 1
theta = ParameterVector(f"$\\theta_{i}$", width)
qc += generate_odd_block(theta)
return qc
layers = [5]
results = {}
iterations = 1000
for lr in [0.1]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using {iterations} iterations and {lr} learning rate:")
qc = generate_state_preparation_circuit_swap_blocks(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
start_time = time.time()
(final_params, psi, loss) = train_stop_on_barren(phi, qc, random_params, iterations, lr, print_modulus=100)
end_time = time.time()
diff_time = end_time - start_time
print(f'Completed in {np.around(diff_time, 1)} seconds')
losses.append(loss)
results[lr] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for lr, losses in results.items():
plt.plot(layers, losses, label=f"lr:{lr}")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 3), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
Training with 5 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.7717165056
Iteration 100 => Loss: 0.1712390128
Reached barren plateau!
Iteration 180 => Loss: 0.0577946159
Completed in 321.5 seconds
###Markdown
Swapping the odd and even blocks doesn't seem to produce any significantly better results. Let's try swapping the RX and RZ gates and replacing the CZ gates with CX.
###Code
def generate_even_block_swapped(theta : ParameterVector):
width = len(theta)
qc = QuantumCircuit(width)
for i, t in enumerate(theta.params):
qc.rx(t, i)
for i in range(width):
for j in range(i+1, width):
qc.cx(i, j)
return qc
def generate_odd_block_swapped(theta : ParameterVector):
width = len(theta)
qc = QuantumCircuit(width)
for i, t in enumerate(theta.params):
qc.rz(t, i)
return qc
qc = generate_odd_block_swapped(theta)
display(qc.draw(output='mpl'))
theta = ParameterVector(f"$\\theta_i$", qubits)
qc = generate_even_block_swapped(theta)
display(qc.draw(output='mpl'))
def generate_state_preparation_circuit_swap_gates(width, L):
qc = QuantumCircuit(width)
i = 0
for layer in range(L):
i = i + 1
theta = ParameterVector(f"$\\theta_{i}$", width)
qc += generate_odd_block_swapped(theta)
i = i + 1
theta = ParameterVector(f"$\\theta_{i}$", width)
qc += generate_even_block_swapped(theta)
return qc
layers = [5]
results = {}
iterations = 1000
for lr in [0.1]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using {iterations} iterations and {lr} learning rate:")
qc = generate_state_preparation_circuit_swap_gates(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
start_time = time.time()
(final_params, psi, loss) = train_stop_on_barren(phi, qc, random_params, iterations, lr, print_modulus=100)
end_time = time.time()
diff_time = end_time - start_time
print(f'Completed in {np.around(diff_time, 1)} seconds')
losses.append(loss)
results[lr] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for lr, losses in results.items():
plt.plot(layers, losses, label=f"lr:{lr}")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 3), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
Training with 5 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.9099370108
Iteration 100 => Loss: 0.1070025468
Reached barren plateau!
Iteration 185 => Loss: 0.0302486369
Completed in 194.4 seconds
###Markdown
We get a slightly better result (0.030) than the original circuit (0.048) at 5 layers. Let's kick off one last run with no early termination.
###Code
layers = [5]
results = {}
iterations = 1000
for lr in [0.1]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using {iterations} iterations and {lr} learning rate:")
qc = generate_state_preparation_circuit_swap_gates(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
start_time = time.time()
(final_params, psi, loss) = train(phi, qc, random_params, iterations, lr, print_modulus=100)
end_time = time.time()
diff_time = end_time - start_time
print(f'Completed in {np.around(diff_time, 1)} seconds')
losses.append(loss)
results[lr] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for lr, losses in results.items():
plt.plot(layers, losses, label=f"lr:{lr}")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 3), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
Training with 5 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 2.9099370108
Iteration 100 => Loss: 0.1070025468
Iteration 200 => Loss: 0.0256738288
Iteration 300 => Loss: 0.0105187354
Iteration 400 => Loss: 0.0055351198
Iteration 500 => Loss: 0.0034053032
Iteration 600 => Loss: 0.0023066777
Iteration 700 => Loss: 0.0016349857
Iteration 800 => Loss: 0.0011770015
Iteration 900 => Loss: 0.0008502112
Iteration 999 => Loss: 0.0006163949
Completed in 1050.1 seconds
###Markdown
We've been able to approximate $|\phi\rangle$ with a loss of only 0.0006. Seems pretty good to me! As a sanity check I went back and re-ran all of the previous layer counts for 1000 iterations and found that the same level of loss was unreachable with fixed iterations. They all seem to level out, so I'm doubtful that more iterations (e.g. 10,000) would reach a lower loss. I've included my final sanity check (i.e. 4 layers) below.
###Code
layers = [4]
results = {}
iterations = 1000
for lr in [0.1]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using {iterations} iterations and {lr} learning rate:")
qc = generate_state_preparation_circuit_swap_gates(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
start_time = time.time()
(final_params, psi, loss) = train(phi, qc, random_params, iterations, lr, print_modulus=100)
end_time = time.time()
diff_time = end_time - start_time
print(f'Completed in {np.around(diff_time, 1)} seconds')
losses.append(loss)
results[lr] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for lr, losses in results.items():
plt.plot(layers, losses, label=f"lr:{lr}")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 3), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
Training with 4 layer(s) using 1000 iterations and 0.1 learning rate:
Iteration 0 => Loss: 1.2175263919
Iteration 100 => Loss: 0.0841417513
Iteration 200 => Loss: 0.0631526267
Iteration 300 => Loss: 0.0559889741
Iteration 400 => Loss: 0.0518815533
Iteration 500 => Loss: 0.0481919984
Iteration 600 => Loss: 0.0440684823
Iteration 700 => Loss: 0.0395870271
Iteration 800 => Loss: 0.0353570701
Iteration 900 => Loss: 0.0318849204
Iteration 999 => Loss: 0.0292695200
Completed in 696.2 seconds
###Markdown
As with all experimentation I think it is good to share failures, too. I tried to adapt this [tutorial on local cost functions](https://pennylane.ai/qml/demos/tutorial_local_cost_functions.html), which is based on recent (Feb 2020) research by Cerezo et al. I didn't see better results, but it's possible I have made an error somewhere in my adaptation, since the original tutorial involved measurements on the individual qubits.
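For reference, here is my rough summary of the idea (a paraphrase, not the exact operators from the tutorial): a global cost compares the full states at once, e.g. $C_{\text{global}}(\theta) = 1 - |\langle\phi|\psi(\theta)\rangle|^2$, and for such costs the gradients are shown to vanish exponentially in the number of qubits even for shallow circuits. A local cost instead averages terms that each involve only one (or a few) qubits, e.g. $C_{\text{local}}(\theta) = 1 - \frac{1}{n}\sum_{j=1}^{n}\langle\psi(\theta)|\,O_j\,|\psi(\theta)\rangle$ with each $O_j$ acting non-trivially on qubit $j$ alone, and such costs remain trainable for shallow circuits. My `norm_with_locality` below only mimics this spirit by restricting which statevector amplitudes enter the distance, which is another reason to treat the result as a sketch rather than a faithful reproduction.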
###Code
def norm_with_locality(sv, locality):
    # keep only the statevector amplitudes whose index fits within the first `locality` qubits
    max_qubit_element = 2 ** locality
    filtered_sv = [e for i, e in enumerate(sv.data) if i <= max_qubit_element]
    filtered_sv_conjugate = [e for i, e in enumerate(sv.conjugate().data) if i <= max_qubit_element]
    return np.sum(np.dot(filtered_sv_conjugate, filtered_sv)).real
def global_loss(sv):
    # full-locality norm, i.e. the usual squared distance over the whole statevector
    return norm_with_locality(sv, sv.num_qubits)
def local_loss(sv):
    # average of the restricted norms over all localities from 1 to n
    return np.average([norm_with_locality(sv, l) for l in range(1, sv.num_qubits + 1)])
def gradient_with_locality(target, qc, params, locality):
params_gradient = {}
index = 0
qubit_count = qc.num_qubits
for p, v in params.items():
if index % qubit_count <= locality:
# we work with a copy, so that we can differentiate w.r.t. a single param at a time
params_shift = params.copy()
# for pauli operators we can apply +/- pi/2
v_plus = v + np.pi/2
params_shift[p] = v_plus
qc_bound = qc.assign_parameters(params_shift, inplace=False)
psi_plus = Statevector.from_instruction(qc_bound)
v_neg = v - np.pi/2
params_shift[p] = v_neg
qc_bound = qc.assign_parameters(params_shift, inplace=False)
psi_neg = Statevector.from_instruction(qc_bound)
# print(psi_plus - psi_neg)
params_gradient[p] = 0.5 * (norm_with_locality(psi_plus - target, locality) - norm_with_locality(psi_neg - target, locality))
else:
params_gradient[p] = 0
index = index + 1
return params_gradient
def train_with_locality(target, qc, params, iterations, learning_rate, locality_start, print_modulus=None):
if print_modulus is None:
print_modulus = max(iterations / 10, 10)
psi = None
qubit_count = qc.num_qubits
locality = locality_start
for i in range(iterations):
qc_bound = qc.assign_parameters(params, inplace=False)
psi = Statevector.from_instruction(qc_bound)
params_gradient = gradient_with_locality(target, qc, params, locality)
params_gradient_list = [g for i, g in enumerate(params_gradient.values()) if i % qubit_count <= locality]
params_gradient_variance = np.var(params_gradient_list)
force_print = False
stop_training = False
if params_gradient_variance < 0.0001:
print("Reached barren plateau!")
force_print = True
if locality < qubit_count:
locality = locality + 1
print(f"Increasing locality to {locality}")
else:
stop_training = True
if i % print_modulus == 0 or i == iterations - 1 or force_print:
if locality == qubit_count:
print("Iteration %4d => Loss: %.10f" % (i, global_loss(psi - target)))
else:
print("Iteration %4d => Loss: %.10f / Local Loss: %.10f" % (i, global_loss(psi - target), local_loss(psi - target)))
for p, v in params.items():
params[p] = v - params_gradient[p] * learning_rate
if stop_training:
break
return (params, psi, global_loss(psi - target))
layers = [5]
results = {}
iterations = 1000
for locality in [1]:
for lr in [0.1]:
losses = []
for l in layers:
print(f"\n\nTraining with {l} layer(s) using start locality {locality}, {iterations} iterations, and {lr} learning rate:")
qc = generate_state_preparation_circuit_swap_gates(qubits, l)
random.seed(42)
random_params = get_random_parameters_in_range(qc.parameters, 0, 2 * np.pi)
(final_params, psi, loss) = train_with_locality(phi, qc, random_params, iterations, lr, locality, print_modulus=100)
losses.append(loss)
results[(locality, lr)] = losses.copy()
plt.title(f'State preparation ({iterations} iterations per run)')
for hp, losses in results.items():
plt.plot(layers, losses, label=f"loc: {hp[0]} lr:{hp[1]} ")
for i in range(len(losses)): plt.annotate(np.around(losses[i], 3), (layers[i], losses[i]))
plt.xlabel("Layers")
plt.ylabel("Loss")
plt.xticks(layers)
plt.legend()
plt.show()
###Output
Training with 5 layer(s) using start locality 1, 1000 iterations, and 0.1 learning rate:
Iteration 0 => Loss: 2.9099370108 / Local Loss: 1.6452365641
Iteration 100 => Loss: 1.5982849586 / Local Loss: 0.7298137325
Reached barren plateau!
Increasing locality to 2
Iteration 142 => Loss: 1.5501093578 / Local Loss: 0.7063870730
Iteration 200 => Loss: 1.0437776831 / Local Loss: 0.3580343115
Iteration 300 => Loss: 0.7729732560 / Local Loss: 0.2615960169
Reached barren plateau!
Increasing locality to 3
Iteration 364 => Loss: 0.7580306685 / Local Loss: 0.2519630838
Iteration 400 => Loss: 0.5719225175 / Local Loss: 0.1490998272
Reached barren plateau!
Increasing locality to 4
Iteration 462 => Loss: 0.4998596194
Iteration 500 => Loss: 0.1047100837
Reached barren plateau!
Iteration 596 => Loss: 0.0247024292
###Markdown
If this is the best result:
`best_model = UNet(in_channels=in_channels, n_classes=n_classes, depth=depth, wf=wf, padding=padding, batch_norm=batch_norm, up_mode=up_mode)`
`Output.test_out(best_model, save_path, test_path, result_path, pad)`
###Code
torch.cuda.empty_cache()
###Output
_____no_output_____
###Markdown
Second project, task 1. Use this data to determine which pairs of products users most often buy together. Essentially, you need to find purchase patterns, which will make it possible to optimize product placement in the store for the customers' convenience and to increase revenue. Write Python code that produces the required table and report the 5 most common patterns.
###Code
products = pd.read_csv('https://stepik.org/media/attachments/lesson/409319/test1_completed.csv', sep=',')
products.head(10)
from itertools import combinations
def combs(goods):
    # all unordered pairs of products within one purchase
    return pd.Series(list(combinations(goods, 2)))
group = products.groupby('id')['Товар'].apply(combs).value_counts()
result = group.to_frame().reset_index()
result
result['1_Товар'] = result["index"].apply(lambda x: x[0])  # first product of the pair
result['2_Товар'] = result["index"].apply(lambda x: x[1])  # second product of the pair
result['Встречаемость'] = result["Товар"]  # how often the pair occurs
result = result.drop(['index', 'Товар'], axis=1)
result
result.head(5)
###Output
_____no_output_____
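###Markdown
One subtlety worth noting (my own addition): `combinations` preserves the order in which items appear within each purchase, so in principle the same pair could be counted separately as (A, B) and (B, A). Sorting the items of each purchase first makes the pair key canonical; a minimal sketch:
###Code
def combs_sorted(goods):
    # sort the items of a purchase so each unordered pair maps to a single key
    return pd.Series(list(combinations(sorted(goods), 2)))

top_pairs = (products.groupby('id')['Товар']
             .apply(combs_sorted)
             .value_counts()
             .head(5))
top_pairs
###Output
_____no_output_____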
###Markdown
**Input data**: two arrays (lists) x and u that define a curve.
**Expected result**: the value x0 at which the curve's continuity must be broken in order to keep u = u(x) single-valued.
**Solution method**: find x0 from the assumption that the areas cut off on either side of the vertical line are equal.
###Code
x = [1.49066127e-06, 1.00024454e-02, 2.00039718e-02, 3.00063867e-02, 4.00101677e-02, 5.00160261e-02, 6.00250086e-02, 7.00386374e-02, 8.00590993e-02, 9.00894983e-02, 1.00134185e-01, 1.10199182e-01, 1.20292721e-01, 1.30425906e-01, 1.40613524e-01, 1.50874996e-01, 1.61235486e-01, 1.71727136e-01, 1.82390409e-01, 1.93275480e-01, 2.04443599e-01, 2.15968314e-01, 2.27936438e-01, 2.40448564e-01, 2.53618982e-01, 2.67574773e-01, 2.82453905e-01, 2.98402141e-01, 3.15568647e-01, 3.34100210e-01, 3.54134113e-01, 3.75789783e-01, 3.99159480e-01, 4.24298431e-01, 4.51214920e-01, 4.79860987e-01, 5.10124440e-01, 5.41822943e-01, 5.74700902e-01, 6.08429771e-01, 6.42612264e-01, 6.76790724e-01, 7.10459615e-01, 7.43081815e-01, 7.74108085e-01, 8.02998761e-01, 8.29246539e-01, 8.52398993e-01, 8.72079469e-01, 8.88004992e-01, 9.00000000e-01, 9.08004992e-01, 9.12079469e-01, 9.12398993e-01, 9.09246539e-01, 9.02998761e-01, 8.94108085e-01, 8.83081815e-01, 8.70459615e-01, 8.56790724e-01, 8.42612264e-01, 8.28429771e-01, 8.14700902e-01, 8.01822943e-01, 7.90124440e-01, 7.79860987e-01, 7.71214920e-01, 7.64298431e-01, 7.59159480e-01, 7.55789783e-01, 7.54134113e-01, 7.54100210e-01, 7.55568647e-01, 7.58402141e-01, 7.62453905e-01, 7.67574773e-01, 7.73618982e-01, 7.80448564e-01, 7.87936438e-01, 7.95968314e-01, 8.04443599e-01, 8.13275480e-01, 8.22390409e-01, 8.31727136e-01, 8.41235486e-01, 8.50874996e-01, 8.60613524e-01, 8.70425906e-01, 8.80292721e-01, 8.90199182e-01, 9.00134185e-01, 9.10089498e-01, 9.20059099e-01, 9.30038637e-01, 9.40025009e-01, 9.50016026e-01, 9.60010168e-01, 9.70006387e-01, 9.80003972e-01, 9.90002445e-01]
u = [3.72665317e-06, 6.11356797e-06, 9.92950431e-06, 1.59667839e-05, 2.54193465e-05, 4.00652974e-05, 6.25215038e-05, 9.65934137e-05, 1.47748360e-04, 2.23745794e-04, 3.35462628e-04, 4.97955422e-04, 7.31802419e-04, 1.06476624e-03, 1.53381068e-03, 2.18749112e-03, 3.08871541e-03, 4.31784001e-03, 5.97602290e-03, 8.18870101e-03, 1.11089965e-02, 1.49207861e-02, 1.98410947e-02, 2.61214099e-02, 3.40474547e-02, 4.39369336e-02, 5.61347628e-02, 7.10053537e-02, 8.89216175e-02, 1.10250525e-01, 1.35335283e-01, 1.64474457e-01, 1.97898699e-01, 2.35746077e-01, 2.78037300e-01, 3.24652467e-01, 3.75311099e-01, 4.29557358e-01, 4.86752256e-01, 5.46074427e-01, 6.06530660e-01, 6.66976811e-01, 7.26149037e-01, 7.82704538e-01, 8.35270211e-01, 8.82496903e-01, 9.23116346e-01, 9.55997482e-01, 9.80198673e-01, 9.95012479e-01, 1.00000000e+00, 9.95012479e-01, 9.80198673e-01, 9.55997482e-01, 9.23116346e-01, 8.82496903e-01, 8.35270211e-01, 7.82704538e-01, 7.26149037e-01, 6.66976811e-01, 6.06530660e-01, 5.46074427e-01, 4.86752256e-01, 4.29557358e-01, 3.75311099e-01, 3.24652467e-01, 2.78037300e-01, 2.35746077e-01, 1.97898699e-01, 1.64474457e-01, 1.35335283e-01, 1.10250525e-01, 8.89216175e-02, 7.10053537e-02, 5.61347628e-02, 4.39369336e-02, 3.40474547e-02, 2.61214099e-02, 1.98410947e-02, 1.49207861e-02, 1.11089965e-02, 8.18870101e-03, 5.97602290e-03, 4.31784001e-03, 3.08871541e-03, 2.18749112e-03, 1.53381068e-03, 1.06476624e-03, 7.31802419e-04, 4.97955422e-04, 3.35462628e-04, 2.23745794e-04, 1.47748360e-04, 9.65934137e-05, 6.25215038e-05, 4.00652974e-05, 2.54193465e-05, 1.59667839e-05, 9.92950431e-06, 6.11356797e-06]
import matplotlib.pyplot as plt
plt.plot(x, u) # plot u=u(x)
x_start = 7.54100210e-01 # x[71]
x_end = 9.12398993e-01 # x[53]
plt.axvspan(x_start, x_end, facecolor='r', alpha=0.3) # ambiguous region
plt.show()
###Output
_____no_output_____
###Markdown
The region of ambiguity is bounded on the right by x_end, at which the x values begin to decrease, and on the left by x_start, at which the x values begin to increase again. Inside this region there is a point x0 such that the areas cut off to its left and to its right are equal. As an example, consider three arbitrary cut points.
###Code
_, axs = plt.subplots(1, 3, figsize=(15,5))
for ax in axs:
ax.plot(x, u)
# if one sets x0 = 0.785
axs[0].axvline(x=0.785, color='r', linestyle='dashed')
axs[0].fill(x[45:78], u[45:78], color='r', alpha = 0.5)
# if one sets x0 = 0.82
axs[1].axvline(x=0.82, color='r', linestyle='dashed')
axs[1].fill(x[46:82], u[46:82], color='r', alpha = 0.5)
# if one sets x0 = 0.870
axs[2].axvline(x=0.87, color='r', linestyle='dashed')
axs[2].fill(x[48:87], u[48:87], color='r', alpha = 0.5)
plt.show()
###Output
_____no_output_____
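###Markdown
Below is a minimal sketch (my own addition, not part of the original task code) of how x0 can actually be located. One convenient way to express the equal-area condition numerically: the integral $\int u\,dx$ along the multivalued curve is conserved, so the single-valued shock profile (upper branch to the left of x0, lower branch to the right) must carry the same integral; we scan candidate cut positions and keep the best match.
###Code
import numpy as np

x_arr, u_arr = np.array(x), np.array(u)
i_top, i_bot = 53, 71  # fold indices used above: x_end = x[53], x_start = x[71]

# upper branch (x increasing up to the right fold) and lower branch (x increasing after the left fold)
x_up, u_up = x_arr[:i_top + 1], u_arr[:i_top + 1]
x_lo, u_lo = x_arr[i_bot:], u_arr[i_bot:]

# integral of u dx along the (multivalued) curve -- the conserved quantity
total_mass = np.trapz(u_arr, x_arr)

def shock_mass(x0_candidate, n=2000):
    # integral of the single-valued profile: upper branch left of the cut, lower branch right of it
    xs = np.linspace(x_arr.min(), x_arr.max(), n)
    us = np.where(xs < x0_candidate,
                  np.interp(xs, x_up, u_up),
                  np.interp(xs, x_lo, u_lo))
    return np.trapz(us, xs)

candidates = np.linspace(x_start, x_end, 400)
x0_best = candidates[np.argmin([abs(shock_mass(c) - total_mass) for c in candidates])]
print("estimated x0 =", np.around(x0_best, 4))

plt.plot(x, u)
plt.axvline(x=x0_best, color='r', linestyle='dashed')
plt.show()
###Output
_____no_output_____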
###Markdown
**Explanation of where such a problem arises.** When solving the partial differential equation $u_t + u u_x = 0$ (the inviscid Burgers equation), the characteristic solution $x = \xi + u_0(\xi)\,t$, $u = u_0(\xi)$ eventually becomes multivalued, and the ambiguity in the solution has to be removed by inserting a discontinuity.
###Code
import numpy as np
import matplotlib.pyplot as plt
x0 = np.arange(0, 1, 0.01) # uniform split on x axis
u0 = np.exp(-(x0-0.5)**2/(2*0.1**2)) # solution for t=0
x1 = x0 + u0*0.4 # solution for t=0.4: each point moves along its characteristic x = x0 + u0*t
u1 = u0 # u is conserved along characteristics
plt.plot(x0, u0, label='u(t=0, x)')
plt.plot(x1, u1, label='u(t=0.4, x)')
plt.legend()
plt.show()
u1
###Output
_____no_output_____
###Markdown
Libraries Installations
###Code
!apt-get install python3-pip python-dev
!pip3 install pafy
!pip3 install -U vidgear
!pip install yt_dlp
!pip install requests
!pip install PyJWT
###Output
_____no_output_____
###Markdown
Drive Mount
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd ./gdrive
%cd ./MyDrive/task-1
###Output
_____no_output_____
###Markdown
Run main.py
###Code
!python3 main.py
###Output
_____no_output_____
###Markdown
###Code
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
import matplotlib.pyplot as plt # to make plots and show images
import numpy # numerical python for ND array
import pandas as pd
###Output
_____no_output_____
###Markdown
sample set 1
###Code
# sample 42: plastic green
path1 = '/content/drive/MyDrive/ASI/Lectures+Exercises/Lecture 2/csv files/42 plastic green.Sample.Raw.csv'
sam42 = pd.read_csv(path1)
# sample 43: another plastic green
path2 = '/content/drive/MyDrive/ASI/Lectures+Exercises/Lecture 2/csv files/43 plastic green.Sample.Raw.csv'
sam43 = pd.read_csv(path2)
#sample 32: plastic green spoon
path3= '/content/drive/MyDrive/ASI/Lectures+Exercises/Lecture 2/csv files/32 plastic spoon green.Sample.Raw.csv'
sam32 = pd.read_csv(path3)
# pick the wavelengths and reflectance
waves = sam32['nm']
re42 = sam42[' %R']
re43 = sam43[' %R']
re32 = sam32[' %R']
#plot setting
plt.rcParams['figure.dpi'] = 200
plt.yticks(range(0, 110, 10))
plt.xticks(range(200, 2500, 200))
plt.grid(True, linestyle='-')
plt.tick_params(axis='both', which='major', labelsize=10)
plt.xlabel('Wavelength, nm', fontsize=10)
plt.ylabel('Reflectance [0-100]%', fontsize=10)
# plot the curves
plt.plot(waves, re42, 'r', label="sample 42")
plt.plot(waves, re43, 'g', label="sample 43")
plt.plot(waves, re32, 'b',label="sample 32")
plt.legend(loc="upper right")
# sample 18: thin paper green
path4 = '/content/drive/MyDrive/ASI/Lectures+Exercises/Lecture 2/csv files/18 thin paper green.Sample.Raw.csv'
sam18 = pd.read_csv(path4)
# sample 19: another thin paper green
path5 = '/content/drive/MyDrive/ASI/Lectures+Exercises/Lecture 2/csv files/19 thin paper green.Sample.Raw.csv'
sam19 = pd.read_csv(path5)
# pick the wavelengths and reflectance
waves = sam18['nm']
re18 = sam18[' %R']
re19 = sam19[' %R']
#plot setting
plt.rcParams['figure.dpi'] = 200
plt.yticks(range(0, 110, 10))
plt.xticks(range(200, 2500, 200))
plt.grid(True, linestyle='-')
plt.tick_params(axis='both', which='major', labelsize=10)
plt.xlabel('Wavelength, nm', fontsize=10)
plt.ylabel('Reflectance [0-100]%', fontsize=10)
# plot the curves
plt.plot(waves, re18, 'r', label="sample 18")
plt.plot(waves, re19, 'g', label="sample 19")
plt.legend(loc="upper right")
plt.plot(waves, re19 - re18, 'b', label="sample 19 - sample 18")
plt.legend(loc="upper right")
plt.yticks(range(0, 30, 10))
plt.xticks(range(200, 2500, 200))
plt.grid(True, linestyle='-')
plt.tick_params(axis='both', which='major', labelsize=10)
plt.xlabel('Wavelength, nm', fontsize=10)
plt.ylabel('Reflectance difference [0-100]%', fontsize=10)
plt.show()
###Output
_____no_output_____
###Markdown
Answer to task 1. Usage: ```final_quantum_circuit = qram(vector=[vector]).main()```
###Code
class qram(object):
"""Returns a quantum circuit for the targets solutions
Note:
This is a 2 qubit implementation, it won't work on 2 ^ N qubits
Target Solutions:
They are the indices of binary numbers where two
adjacent bits will always have different values.
Args:
vector (list[int]): Vector of Numbers
Attributes:
        main: Returns a quantum circuit preparing the superposition of the target indices
(Private) __oracle: Stores the Quantum Circuit in self.circ
Raises:
        AttributeError: If the length of the vector is greater than 4 or less than 1.
        TypeError: If the type of vector is not list or if the elements are not integers or floats.
"""
def __init__(self, vector: list):
"""
Initializes:
self.vector list[int]: Vector Of Numbers
self.binary_vector list[str]: Binary instance of self.vector
self.indices list[int]: Indices of the targets
self.zfill int: Length of the longest binary number
self.circ QuantumCircuit: The Final Quantum Circuit
"""
        # error handling
        if len(vector) > 4 or len(vector) < 1:
            raise AttributeError('Length of vector should be between 1 and 4')
if type(vector) != list:
raise TypeError('The type of vector should be list')
for i in vector:
try:
vector[vector.index(i)] = int(i)
except TypeError:
raise TypeError('The element of vector should be integer')
self.vector = vector
self.binary_vector = []
self.indices = []
self.zfill: int = None
self.circ = None
    def main(self) -> 'QuantumCircuit':
"""
Return:
Quantum circuit: if target indices exist
"No targets found": if no target indices found
"""
temp = []
# Find the zfill
for num in self.vector:
temp.append(str(bin(num)).replace("0b", ""))
self.zfill = len(max(temp, key=len))
# convert to binary
for num in self.vector:
binary = bin(num)[2:].zfill(self.zfill)
self.binary_vector.append(binary)
# find target indices
if self.zfill == 4:
for num in self.binary_vector:
                if num[0] == num[2] and num[1] == num[3] and num[0] != num[1]:  # adjacent bits must differ (rules out 0000 and 1111)
self.indices.append(self.binary_vector.index(num))
elif self.zfill == 3:
for num in self.binary_vector:
if num[0] == num[2] and num[0] != num[1]:
self.indices.append(self.binary_vector.index(num))
elif self.zfill == 2:
for num in self.binary_vector:
if num[0] != num[1]:
self.indices.append(self.binary_vector.index(num))
elif self.zfill == 1:
self.indices.append(0)
if len(self.indices) == 0:
print("No target indices found")
return None
self.__oracle()
return self.circ
def __oracle(self) -> None:
"""
Stores the Quantum Circuit in self.circ
"""
        circ = QuantumCircuit(2, 2) # initialize a 2-qubit quantum circuit with 2 classical bits
indices = self.indices
# Add the required gates according for the indices
if len(indices) == 4:
circ.h(0)
circ.h(1)
elif indices == [0]:
pass
elif indices == [0, 1] or indices == [1, 0]:
circ.h(0)
elif indices == [0, 2] or indices == [2, 0]:
circ.h(1)
elif indices == [0, 3] or indices == [3, 0]:
circ.h(0)
circ.cx(0, 1)
elif indices == [1, 2] or indices == [2, 1]:
circ.h(1)
circ.cx(1, 0)
circ.x(0)
elif indices == [1, 3] or indices == [3, 1]:
circ.x(0)
circ.h(1)
elif indices == [2, 3] or indices == [3, 2]:
circ.h(0)
circ.x(1)
self.circ = circ
return
###Output
_____no_output_____
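###Markdown
As a quick classical sanity check (my own addition, plain Python only), the target indices can also be computed directly from the docstring's criterion that every pair of adjacent bits differs; this is handy for verifying the measurement histograms below.
###Code
def classical_target_indices(vector):
    # follows the docstring definition: every pair of adjacent bits differs
    width = max(len(bin(n)) - 2 for n in vector)
    targets = []
    for idx, n in enumerate(vector):
        bits = bin(n)[2:].zfill(width)
        if all(bits[i] != bits[i + 1] for i in range(len(bits) - 1)):
            targets.append(idx)
    return targets

# e.g. for [1, 5, 7, 10] the padded binaries are 0001, 0101, 0111, 1010 -> target indices [1, 3]
print(classical_target_indices([1, 5, 7, 10]))
print(classical_target_indices([1, 5, 4, 2]))
###Output
_____no_output_____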
###Markdown
Test 1: `vector = [1,5,7,10]`
###Code
test1 = qram(vector=[1,5,7,10]).main()
test1.measure_all() # measure all the qubits
plot_histogram(execute(test1, bk, shots=8000).result().get_counts(test1)) # get result and plot the graph
###Output
_____no_output_____
###Markdown
**The histogram is right-indexed!** (Qiskit orders measurement bitstrings with qubit 0 as the rightmost bit.) Test 2: `vector = [1,5,4,2]`
###Code
test2 = qram(vector=[1,5,4,2]).main()
test2.measure_all() # measure all the qubits
plot_histogram(execute(test2, bk, shots=8000).result().get_counts(test2)) # get result and plot the graph
# the histogram is right indexed!
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
from google.colab import drive
drive.mount('/content/drive')
cd /content/drive/MyDrive/COSRMAL_CHALLENGE/CORSMAL-Challenge-2022-Squids
!pip install torchinfo
import scipy
import librosa
import pandas as pd
import os
import numpy as np
from tqdm.notebook import tqdm
import scipy.io.wavfile
import time
import IPython
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.dataset import Subset
import json
from torchinfo import summary
from utils import AudioProcessing, audioPreprocessing, audioPreprocessing_t1, voting
from models import Net, effnetv2_xl, MobileNetV3_Large, CNN_LSTM, mbv2_ca
from dataset import MyLSTMDataset
from helper import train_lstm, evaluate_audio
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
gt = pd.read_csv('files/train.csv')
gt.head()
# efficient = '/content/drive/MyDrive/COSRMAL_CHALLENGE/audios/efficient/XL-97.14.pth'
# base_path = '/content/drive/MyDrive/COSRMAL_CHALLENGE/'
# audio_folder = '/content/drive/MyDrive/COSRMAL_CHALLENGE/train/audio'
# T2_mid_dir = os.path.join(base_path, 'T2_mid')
# T2_pred_dir = os.path.join(base_path, 'T2_pred')
# os.makedirs(T2_mid_dir,exist_ok=True)
# os.makedirs(T2_pred_dir,exist_ok=True)
# model = effnetv2_xl()
# model.load_state_dict(torch.load(efficient))
# model.to(device)
# model.eval()
# audioPreprocessing_t1(audio_folder, gt,T2_mid_dir, T2_pred_dir, model, device)
# mobileNet = '/content/drive/MyDrive/COSRMAL_CHALLENGE/task2/mobile95.46.pth'
# base_path = '/content/drive/MyDrive/COSRMAL_CHALLENGE/'
# audio_folder = '/content/drive/MyDrive/COSRMAL_CHALLENGE/train/audio'
# T2_mid_dir = os.path.join(base_path, 'T2_mid')
# T2_pred_dir = os.path.join(base_path, 'T2_pred')
# os.makedirs(T2_mid_dir,exist_ok=True)
# os.makedirs(T2_pred_dir,exist_ok=True)
# model = MobileNetV3_Large(input_channel=8,num_classes=4)
# model.load_state_dict(torch.load(mobileNet))
# model.to(device)
# model.eval()
# audioPreprocessing_t1(audio_folder, gt,T2_mid_dir, T2_pred_dir, model, device)
mobileNet = '/content/drive/MyDrive/COSRMAL_CHALLENGE/task2/mobileCA/mobile-ca96.35.pth'
base_path = '/content/drive/MyDrive/COSRMAL_CHALLENGE/task1/mobileCA/features'
# audio_folder = '/content/drive/MyDrive/COSRMAL_CHALLENGE/train/audio'
# T2_mid_dir = os.path.join(base_path, 'T2_mid')
# T2_pred_dir = os.path.join(base_path, 'T2_pred')
# os.makedirs(T2_mid_dir,exist_ok=True)
# os.makedirs(T2_pred_dir,exist_ok=True)
# model = mbv2_ca(in_c=8, num_classes=4)
# model.load_state_dict(torch.load(mobileNet))
# model.to(device)
# model.eval()
# audioPreprocessing_t1(audio_folder, gt,T2_mid_dir, T2_pred_dir, model, device)
###Output
_____no_output_____
###Markdown
Train
###Code
myDataSet = MyLSTMDataset(base_path, gt['filling_level'].to_numpy())
###Output
953
/content/drive/MyDrive/COSRMAL_CHALLENGE/task1/mobileCA/features
###Markdown
CNN_LSTM
###Code
bs = 16
train_split = 0.8
lr = 1e-4
epochs = 200
n_samples = len(myDataSet)
assert n_samples == 684, "684"
mobile_save = '/content/drive/MyDrive/COSRMAL_CHALLENGE/task1'
model = CNN_LSTM(input_size=960).to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-5)
best_loss = float('inf')
best_acc = 0
num_train = 584
num_val = n_samples - num_train
train_set, val_set = torch.utils.data.random_split(myDataSet, [num_train, num_val])
assert len(train_set) == num_train, "Same"
assert len(val_set) == num_val, "Same"
train_loader = DataLoader(train_set,
batch_size=bs,
shuffle=True)
val_loader = DataLoader(val_set,
batch_size=bs,
shuffle=False)
for epoch in range(epochs):
#start_time = time.time()
loss_train, correct_train = train_lstm(model, train_loader, optimizer, device)
loss_val, correct_val = evaluate_audio(model, val_loader, device, criterion = nn.CrossEntropyLoss())
#elapsed_time = time.time() - start_time
print("{}/{} train loss:{:.4f} train acc:{:.2f}% val loss:{:.4f} val acc:{:.2f}%".format(
epoch+1,epochs, loss_train, 100 * correct_train/num_train,
loss_val, 100 * correct_val/num_val))
torch.save(model.state_dict(), os.path.join(mobile_save,
'mobile{:.2f}.pth'.format(100 * correct_val/num_val)))
bs = 16
train_split = 0.8
lr = 1e-4
epochs = 200
n_samples = len(myDataSet)
assert n_samples == 684, "684"
mobile_save = '/content/drive/MyDrive/COSRMAL_CHALLENGE/task1/mobileCA'
model = CNN_LSTM(input_size=1280).to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-5)
best_loss = float('inf')
best_acc = 0
num_train = 584
num_val = n_samples - num_train
train_set, val_set = torch.utils.data.random_split(myDataSet, [num_train, num_val])
assert len(train_set) == num_train, "Same"
assert len(val_set) == num_val, "Same"
train_loader = DataLoader(train_set,
batch_size=bs,
shuffle=True)
val_loader = DataLoader(val_set,
batch_size=bs,
shuffle=False)
for epoch in range(epochs):
#start_time = time.time()
loss_train, correct_train = train_lstm(model, train_loader, optimizer, device)
loss_val, correct_val = evaluate_audio(model, val_loader, device, criterion = nn.CrossEntropyLoss())
#elapsed_time = time.time() - start_time
print("{}/{} train loss:{:.4f} train acc:{:.2f}% val loss:{:.4f} val acc:{:.2f}%".format(
epoch+1,epochs, loss_train, 100 * correct_train/num_train,
loss_val, 100 * correct_val/num_val))
torch.save(model.state_dict(), os.path.join(mobile_save,
'mobile{:.2f}.pth'.format(100 * correct_val/num_val)))
bs = 16
train_split = 0.8
lr = 1e-3
epochs = 200
n_samples = len(myDataSet)
assert n_samples == 684, "684"
model = CNN_LSTM().to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-5)
best_loss = float('inf')
best_acc = 0
num_train = 584
num_val = n_samples - num_train
train_set, val_set = torch.utils.data.random_split(myDataSet, [num_train, num_val])
assert len(train_set) == num_train, "Same"
assert len(val_set) == num_val, "Same"
train_loader = DataLoader(train_set,
batch_size=bs,
shuffle=True)
val_loader = DataLoader(val_set,
batch_size=bs,
shuffle=False)
for epoch in range(epochs):
#start_time = time.time()
loss_train, correct_train = train_lstm(model, train_loader, optimizer, device)
loss_val, correct_val = evaluate_audio(model, val_loader, criterion = nn.CrossEntropyLoss())
#elapsed_time = time.time() - start_time
print("Epoch {}/{} train loss:{:.4f} train acc:{:.2f}% ".format(epoch+1,epochs, loss_train, 100 * correct_train/num_train))
print("Epoch {}/{} val loss:{:.4f} val acc:{:.2f}% ".format(epoch+1,epochs, loss_val, 100 * correct_val/num_val))
if correct_val > best_acc:
best_acc = correct_val
best_train = correct_train
torch.save(model, os.path.join(base_path, 'audios', "best_lstm.pth"))
if correct_val == best_acc and best_train < correct_train:
best_acc = correct_val
best_train = correct_train
torch.save(model, os.path.join(base_path, 'audios', "best_lstm.pth"))
###Output
Epoch 1/200 train loss:1.0452 train acc:46.40%
Epoch 1/200 val loss:0.9191 val acc:41.00%
Epoch 2/200 train loss:0.9919 train acc:50.00%
Epoch 2/200 val loss:0.9260 val acc:61.00%
Epoch 3/200 train loss:0.9881 train acc:49.14%
Epoch 3/200 val loss:0.8794 val acc:58.00%
Epoch 4/200 train loss:0.9932 train acc:51.54%
Epoch 4/200 val loss:0.8525 val acc:66.00%
Epoch 5/200 train loss:0.9659 train acc:49.66%
Epoch 5/200 val loss:0.8356 val acc:60.00%
Epoch 6/200 train loss:0.8339 train acc:46.92%
Epoch 6/200 val loss:0.7376 val acc:67.00%
Epoch 7/200 train loss:0.7594 train acc:64.21%
Epoch 7/200 val loss:0.6884 val acc:63.00%
Epoch 8/200 train loss:0.6278 train acc:68.32%
Epoch 8/200 val loss:0.5749 val acc:76.00%
Epoch 9/200 train loss:0.6022 train acc:63.36%
Epoch 9/200 val loss:0.5800 val acc:73.00%
Epoch 10/200 train loss:0.5727 train acc:64.38%
Epoch 10/200 val loss:0.5843 val acc:73.00%
Epoch 11/200 train loss:0.5532 train acc:64.73%
Epoch 11/200 val loss:0.5212 val acc:79.00%
Epoch 12/200 train loss:0.5567 train acc:68.15%
Epoch 12/200 val loss:0.5119 val acc:77.00%
Epoch 13/200 train loss:0.6193 train acc:69.52%
Epoch 13/200 val loss:0.5090 val acc:73.00%
Epoch 14/200 train loss:0.5685 train acc:67.12%
Epoch 14/200 val loss:0.4821 val acc:82.00%
Epoch 15/200 train loss:0.5788 train acc:69.01%
Epoch 15/200 val loss:0.4596 val acc:82.00%
Epoch 16/200 train loss:0.5301 train acc:72.77%
Epoch 16/200 val loss:0.4868 val acc:80.00%
Epoch 17/200 train loss:0.5325 train acc:71.40%
Epoch 17/200 val loss:0.5664 val acc:76.00%
Epoch 18/200 train loss:0.7241 train acc:66.95%
Epoch 18/200 val loss:0.4606 val acc:83.00%
Epoch 19/200 train loss:0.5301 train acc:74.14%
Epoch 19/200 val loss:0.5904 val acc:71.00%
Epoch 20/200 train loss:0.5568 train acc:67.98%
Epoch 20/200 val loss:0.4613 val acc:77.00%
Epoch 21/200 train loss:0.5063 train acc:73.29%
Epoch 21/200 val loss:0.4319 val acc:84.00%
Epoch 22/200 train loss:0.4951 train acc:74.66%
Epoch 22/200 val loss:0.4204 val acc:83.00%
Epoch 23/200 train loss:0.5233 train acc:72.43%
Epoch 23/200 val loss:0.4530 val acc:82.00%
Epoch 24/200 train loss:0.5259 train acc:72.09%
Epoch 24/200 val loss:0.4015 val acc:85.00%
Epoch 25/200 train loss:0.4880 train acc:75.51%
Epoch 25/200 val loss:0.4143 val acc:81.00%
Epoch 26/200 train loss:0.5055 train acc:74.66%
Epoch 26/200 val loss:0.4188 val acc:82.00%
Epoch 27/200 train loss:0.5026 train acc:73.29%
Epoch 27/200 val loss:0.4743 val acc:78.00%
Epoch 28/200 train loss:0.4919 train acc:73.80%
Epoch 28/200 val loss:0.4223 val acc:81.00%
Epoch 29/200 train loss:0.4849 train acc:75.68%
Epoch 29/200 val loss:0.4181 val acc:81.00%
Epoch 30/200 train loss:0.4740 train acc:77.05%
Epoch 30/200 val loss:0.3862 val acc:84.00%
Epoch 31/200 train loss:0.4718 train acc:78.42%
Epoch 31/200 val loss:0.5214 val acc:78.00%
Epoch 32/200 train loss:0.4738 train acc:76.03%
Epoch 32/200 val loss:0.4062 val acc:83.00%
Epoch 33/200 train loss:0.5294 train acc:69.86%
Epoch 33/200 val loss:0.5364 val acc:77.00%
Epoch 34/200 train loss:0.5266 train acc:67.47%
Epoch 34/200 val loss:0.4965 val acc:82.00%
Epoch 35/200 train loss:0.5080 train acc:75.00%
Epoch 35/200 val loss:0.4071 val acc:83.00%
Epoch 36/200 train loss:0.4882 train acc:73.97%
Epoch 36/200 val loss:0.4265 val acc:82.00%
Epoch 37/200 train loss:0.4802 train acc:76.37%
Epoch 37/200 val loss:0.3897 val acc:83.00%
Epoch 38/200 train loss:0.4808 train acc:76.54%
Epoch 38/200 val loss:0.4083 val acc:85.00%
Epoch 39/200 train loss:0.4715 train acc:77.91%
Epoch 39/200 val loss:0.4095 val acc:83.00%
Epoch 40/200 train loss:0.4924 train acc:75.86%
Epoch 40/200 val loss:0.4475 val acc:84.00%
Epoch 41/200 train loss:0.4766 train acc:77.74%
Epoch 41/200 val loss:0.4118 val acc:84.00%
Epoch 42/200 train loss:0.4658 train acc:76.20%
Epoch 42/200 val loss:0.3890 val acc:84.00%
Epoch 43/200 train loss:0.4723 train acc:77.05%
Epoch 43/200 val loss:0.4217 val acc:83.00%
Epoch 44/200 train loss:0.4911 train acc:75.51%
Epoch 44/200 val loss:0.4037 val acc:82.00%
Epoch 45/200 train loss:0.5193 train acc:75.17%
Epoch 45/200 val loss:0.4416 val acc:80.00%
Epoch 46/200 train loss:0.4663 train acc:77.23%
Epoch 46/200 val loss:0.3994 val acc:84.00%
Epoch 47/200 train loss:0.4781 train acc:77.05%
Epoch 47/200 val loss:0.4708 val acc:79.00%
Epoch 48/200 train loss:0.4753 train acc:77.40%
Epoch 48/200 val loss:0.3636 val acc:85.00%
Epoch 49/200 train loss:0.4653 train acc:78.08%
Epoch 49/200 val loss:0.4754 val acc:79.00%
Epoch 50/200 train loss:0.4867 train acc:73.97%
Epoch 50/200 val loss:0.3872 val acc:83.00%
Epoch 51/200 train loss:0.4595 train acc:75.51%
Epoch 51/200 val loss:0.4055 val acc:84.00%
Epoch 52/200 train loss:0.5846 train acc:76.03%
Epoch 52/200 val loss:0.3820 val acc:85.00%
Epoch 53/200 train loss:0.4816 train acc:75.17%
Epoch 53/200 val loss:0.3818 val acc:84.00%
Epoch 54/200 train loss:0.4825 train acc:76.20%
Epoch 54/200 val loss:0.3934 val acc:83.00%
Epoch 55/200 train loss:0.4670 train acc:75.86%
Epoch 55/200 val loss:0.3889 val acc:84.00%
Epoch 56/200 train loss:0.4632 train acc:77.40%
Epoch 56/200 val loss:0.3679 val acc:83.00%
Epoch 57/200 train loss:0.4521 train acc:77.57%
Epoch 57/200 val loss:0.3982 val acc:84.00%
Epoch 58/200 train loss:0.4751 train acc:75.34%
Epoch 58/200 val loss:0.4301 val acc:82.00%
Epoch 59/200 train loss:0.4645 train acc:76.88%
Epoch 59/200 val loss:0.4182 val acc:83.00%
Epoch 60/200 train loss:0.4560 train acc:77.57%
Epoch 60/200 val loss:0.4400 val acc:79.00%
Epoch 61/200 train loss:0.4636 train acc:76.71%
Epoch 61/200 val loss:0.4153 val acc:79.00%
Epoch 62/200 train loss:0.4521 train acc:76.54%
Epoch 62/200 val loss:0.3782 val acc:84.00%
Epoch 63/200 train loss:0.4486 train acc:75.00%
Epoch 63/200 val loss:0.4264 val acc:82.00%
Epoch 64/200 train loss:0.4900 train acc:75.00%
Epoch 64/200 val loss:0.4095 val acc:84.00%
Epoch 65/200 train loss:0.4560 train acc:75.00%
Epoch 65/200 val loss:0.4033 val acc:81.00%
Epoch 66/200 train loss:0.4450 train acc:76.88%
Epoch 66/200 val loss:0.5046 val acc:69.00%
Epoch 67/200 train loss:0.4656 train acc:73.80%
Epoch 67/200 val loss:0.4271 val acc:74.00%
Epoch 68/200 train loss:0.4700 train acc:77.57%
Epoch 68/200 val loss:0.4241 val acc:82.00%
Epoch 69/200 train loss:0.4558 train acc:74.66%
Epoch 69/200 val loss:0.4260 val acc:81.00%
Epoch 70/200 train loss:0.4387 train acc:77.05%
Epoch 70/200 val loss:0.3688 val acc:83.00%
Epoch 71/200 train loss:0.4412 train acc:76.88%
Epoch 71/200 val loss:0.3941 val acc:83.00%
Epoch 72/200 train loss:0.4494 train acc:77.91%
Epoch 72/200 val loss:0.3820 val acc:84.00%
Epoch 73/200 train loss:0.4583 train acc:76.71%
Epoch 73/200 val loss:0.4171 val acc:84.00%
Epoch 74/200 train loss:0.4353 train acc:77.23%
Epoch 74/200 val loss:0.4072 val acc:82.00%
Epoch 75/200 train loss:0.4434 train acc:74.83%
Epoch 75/200 val loss:0.3791 val acc:85.00%
Epoch 76/200 train loss:0.4263 train acc:79.28%
Epoch 76/200 val loss:0.3854 val acc:85.00%
Epoch 77/200 train loss:0.4504 train acc:78.60%
Epoch 77/200 val loss:0.3753 val acc:83.00%
Epoch 78/200 train loss:0.4385 train acc:78.08%
Epoch 78/200 val loss:0.3766 val acc:81.00%
Epoch 79/200 train loss:0.4366 train acc:77.57%
Epoch 79/200 val loss:0.4121 val acc:82.00%
Epoch 80/200 train loss:0.4283 train acc:75.68%
Epoch 80/200 val loss:0.4372 val acc:76.00%
Epoch 81/200 train loss:0.4316 train acc:76.88%
Epoch 81/200 val loss:0.4022 val acc:76.00%
Epoch 82/200 train loss:0.4384 train acc:79.11%
Epoch 82/200 val loss:0.3794 val acc:84.00%
Epoch 83/200 train loss:0.4423 train acc:78.08%
Epoch 83/200 val loss:0.4440 val acc:75.00%
Epoch 84/200 train loss:0.4415 train acc:76.54%
Epoch 84/200 val loss:0.4100 val acc:78.00%
Epoch 85/200 train loss:0.4042 train acc:79.79%
Epoch 85/200 val loss:0.3768 val acc:78.00%
Epoch 86/200 train loss:0.4262 train acc:76.88%
Epoch 86/200 val loss:0.4091 val acc:85.00%
Epoch 87/200 train loss:0.4079 train acc:78.08%
Epoch 87/200 val loss:0.3722 val acc:81.00%
Epoch 88/200 train loss:0.4442 train acc:77.05%
Epoch 88/200 val loss:0.3669 val acc:84.00%
Epoch 89/200 train loss:0.4184 train acc:77.23%
Epoch 89/200 val loss:0.4212 val acc:74.00%
Epoch 90/200 train loss:0.4138 train acc:79.45%
Epoch 90/200 val loss:0.3825 val acc:83.00%
Epoch 91/200 train loss:0.4570 train acc:77.23%
Epoch 91/200 val loss:0.4042 val acc:80.00%
Epoch 92/200 train loss:0.4189 train acc:77.05%
Epoch 92/200 val loss:0.4384 val acc:74.00%
Epoch 93/200 train loss:0.4368 train acc:78.94%
Epoch 93/200 val loss:0.3942 val acc:83.00%
Epoch 94/200 train loss:0.4326 train acc:77.40%
Epoch 94/200 val loss:0.4073 val acc:76.00%
Epoch 95/200 train loss:0.4243 train acc:77.40%
Epoch 95/200 val loss:0.4085 val acc:82.00%
Epoch 96/200 train loss:0.4212 train acc:74.83%
Epoch 96/200 val loss:0.3921 val acc:84.00%
Epoch 97/200 train loss:0.4268 train acc:79.11%
Epoch 97/200 val loss:0.4070 val acc:77.00%
Epoch 98/200 train loss:0.4203 train acc:78.08%
Epoch 98/200 val loss:0.3987 val acc:83.00%
Epoch 99/200 train loss:0.4100 train acc:78.94%
Epoch 99/200 val loss:0.3871 val acc:84.00%
Epoch 100/200 train loss:0.4115 train acc:77.74%
Epoch 100/200 val loss:0.3815 val acc:80.00%
Epoch 101/200 train loss:0.4066 train acc:78.42%
Epoch 101/200 val loss:0.3753 val acc:78.00%
Epoch 102/200 train loss:0.4200 train acc:78.77%
Epoch 102/200 val loss:0.4096 val acc:77.00%
Epoch 103/200 train loss:0.4004 train acc:79.28%
Epoch 103/200 val loss:0.4291 val acc:84.00%
Epoch 104/200 train loss:0.4067 train acc:79.62%
Epoch 104/200 val loss:0.4248 val acc:78.00%
Epoch 105/200 train loss:1.0200 train acc:64.21%
Epoch 105/200 val loss:0.6086 val acc:67.00%
Epoch 106/200 train loss:0.4993 train acc:76.54%
Epoch 106/200 val loss:0.4131 val acc:74.00%
Epoch 107/200 train loss:0.4678 train acc:79.28%
Epoch 107/200 val loss:0.4410 val acc:79.00%
Epoch 108/200 train loss:0.4297 train acc:79.28%
Epoch 108/200 val loss:0.3936 val acc:82.00%
Epoch 109/200 train loss:0.4104 train acc:78.42%
Epoch 109/200 val loss:0.4124 val acc:79.00%
Epoch 110/200 train loss:0.4132 train acc:78.42%
Epoch 110/200 val loss:0.4037 val acc:76.00%
Epoch 111/200 train loss:0.4039 train acc:76.71%
Epoch 111/200 val loss:0.4967 val acc:70.00%
Epoch 112/200 train loss:0.4036 train acc:79.45%
Epoch 112/200 val loss:0.4118 val acc:81.00%
Epoch 113/200 train loss:0.3983 train acc:79.11%
Epoch 113/200 val loss:0.4080 val acc:82.00%
Epoch 114/200 train loss:0.4154 train acc:79.28%
Epoch 114/200 val loss:0.4196 val acc:75.00%
Epoch 115/200 train loss:0.4024 train acc:77.05%
Epoch 115/200 val loss:0.4461 val acc:81.00%
Epoch 116/200 train loss:0.4026 train acc:79.62%
Epoch 116/200 val loss:0.4574 val acc:80.00%
Epoch 117/200 train loss:0.4223 train acc:77.23%
Epoch 117/200 val loss:0.4845 val acc:73.00%
Epoch 118/200 train loss:0.4322 train acc:78.77%
Epoch 118/200 val loss:0.4149 val acc:74.00%
Epoch 119/200 train loss:0.3971 train acc:76.88%
Epoch 119/200 val loss:0.3857 val acc:80.00%
Epoch 120/200 train loss:0.3942 train acc:79.45%
Epoch 120/200 val loss:0.4086 val acc:77.00%
Epoch 121/200 train loss:0.4094 train acc:78.42%
Epoch 121/200 val loss:0.4114 val acc:76.00%
Epoch 122/200 train loss:0.4041 train acc:80.99%
Epoch 122/200 val loss:0.3737 val acc:84.00%
Epoch 123/200 train loss:0.3972 train acc:79.79%
Epoch 123/200 val loss:0.4684 val acc:75.00%
Epoch 124/200 train loss:0.4012 train acc:78.08%
Epoch 124/200 val loss:0.3905 val acc:84.00%
Epoch 125/200 train loss:0.3937 train acc:79.79%
Epoch 125/200 val loss:0.3964 val acc:77.00%
Epoch 126/200 train loss:0.3844 train acc:80.82%
Epoch 126/200 val loss:0.4238 val acc:82.00%
Epoch 127/200 train loss:0.3982 train acc:79.11%
Epoch 127/200 val loss:0.4118 val acc:81.00%
Epoch 128/200 train loss:0.3961 train acc:78.94%
Epoch 128/200 val loss:0.4426 val acc:77.00%
Epoch 129/200 train loss:0.3777 train acc:80.31%
Epoch 129/200 val loss:0.4080 val acc:83.00%
Epoch 130/200 train loss:0.3959 train acc:78.77%
Epoch 130/200 val loss:0.4372 val acc:76.00%
Epoch 131/200 train loss:0.4068 train acc:77.40%
Epoch 131/200 val loss:0.5373 val acc:80.00%
Epoch 132/200 train loss:0.3914 train acc:78.42%
Epoch 132/200 val loss:0.4320 val acc:76.00%
Epoch 133/200 train loss:0.4272 train acc:79.62%
Epoch 133/200 val loss:0.4212 val acc:82.00%
Epoch 134/200 train loss:0.3872 train acc:79.62%
Epoch 134/200 val loss:0.4069 val acc:79.00%
Epoch 135/200 train loss:0.3842 train acc:80.14%
Epoch 135/200 val loss:0.4498 val acc:75.00%
Epoch 136/200 train loss:0.3776 train acc:79.28%
Epoch 136/200 val loss:0.4412 val acc:75.00%
Epoch 137/200 train loss:0.3673 train acc:79.62%
Epoch 137/200 val loss:0.4530 val acc:79.00%
Epoch 138/200 train loss:0.3916 train acc:80.65%
Epoch 138/200 val loss:0.4307 val acc:73.00%
Epoch 139/200 train loss:0.4117 train acc:78.08%
Epoch 139/200 val loss:0.4063 val acc:81.00%
Epoch 140/200 train loss:0.3847 train acc:78.77%
Epoch 140/200 val loss:0.4344 val acc:77.00%
Epoch 141/200 train loss:0.3903 train acc:79.62%
Epoch 141/200 val loss:0.4077 val acc:80.00%
Epoch 142/200 train loss:0.3777 train acc:80.82%
Epoch 142/200 val loss:0.4321 val acc:79.00%
Epoch 143/200 train loss:0.3832 train acc:79.97%
Epoch 143/200 val loss:0.4237 val acc:80.00%
Epoch 144/200 train loss:0.3782 train acc:81.68%
Epoch 144/200 val loss:0.4225 val acc:78.00%
Epoch 145/200 train loss:0.3872 train acc:80.82%
Epoch 145/200 val loss:0.4553 val acc:77.00%
Epoch 146/200 train loss:0.4592 train acc:77.74%
Epoch 146/200 val loss:0.4181 val acc:76.00%
Epoch 147/200 train loss:0.4444 train acc:78.94%
Epoch 147/200 val loss:0.4092 val acc:83.00%
Epoch 148/200 train loss:0.4062 train acc:77.57%
Epoch 148/200 val loss:0.5230 val acc:83.00%
Epoch 149/200 train loss:0.4945 train acc:77.40%
Epoch 149/200 val loss:0.4714 val acc:80.00%
Epoch 150/200 train loss:0.3705 train acc:79.79%
Epoch 150/200 val loss:0.4558 val acc:80.00%
Epoch 151/200 train loss:0.3795 train acc:79.28%
Epoch 151/200 val loss:0.4454 val acc:80.00%
Epoch 152/200 train loss:0.3904 train acc:80.48%
Epoch 152/200 val loss:0.4220 val acc:82.00%
Epoch 153/200 train loss:0.3765 train acc:79.45%
Epoch 153/200 val loss:0.4235 val acc:81.00%
Epoch 154/200 train loss:0.3857 train acc:80.99%
Epoch 154/200 val loss:0.4682 val acc:80.00%
Epoch 155/200 train loss:0.3827 train acc:78.60%
Epoch 155/200 val loss:0.4398 val acc:76.00%
Epoch 156/200 train loss:0.3732 train acc:81.16%
Epoch 156/200 val loss:0.4452 val acc:81.00%
Epoch 157/200 train loss:0.3726 train acc:80.99%
Epoch 157/200 val loss:0.5526 val acc:78.00%
Epoch 158/200 train loss:0.4066 train acc:78.94%
Epoch 158/200 val loss:0.4201 val acc:81.00%
Epoch 159/200 train loss:0.3689 train acc:80.99%
Epoch 159/200 val loss:0.4502 val acc:80.00%
Epoch 160/200 train loss:0.3662 train acc:81.34%
Epoch 160/200 val loss:0.4579 val acc:81.00%
Epoch 161/200 train loss:0.3940 train acc:78.77%
Epoch 161/200 val loss:0.4217 val acc:81.00%
Epoch 162/200 train loss:0.3683 train acc:80.48%
Epoch 162/200 val loss:0.5645 val acc:76.00%
Epoch 163/200 train loss:0.3702 train acc:79.79%
Epoch 163/200 val loss:0.4542 val acc:79.00%
Epoch 164/200 train loss:0.3923 train acc:81.85%
Epoch 164/200 val loss:0.4696 val acc:79.00%
Epoch 165/200 train loss:0.3843 train acc:80.99%
Epoch 165/200 val loss:0.4289 val acc:77.00%
Epoch 166/200 train loss:0.3731 train acc:81.85%
Epoch 166/200 val loss:0.4731 val acc:74.00%
Epoch 167/200 train loss:0.3814 train acc:81.16%
Epoch 167/200 val loss:0.4946 val acc:74.00%
Epoch 168/200 train loss:0.3683 train acc:80.14%
Epoch 168/200 val loss:0.5083 val acc:76.00%
Epoch 169/200 train loss:0.3490 train acc:80.31%
Epoch 169/200 val loss:0.4773 val acc:78.00%
Epoch 170/200 train loss:0.3764 train acc:81.85%
Epoch 170/200 val loss:0.4473 val acc:80.00%
Epoch 171/200 train loss:0.3543 train acc:83.22%
Epoch 171/200 val loss:0.4676 val acc:79.00%
Epoch 172/200 train loss:0.3566 train acc:81.16%
Epoch 172/200 val loss:0.4939 val acc:79.00%
Epoch 173/200 train loss:0.3927 train acc:80.65%
Epoch 173/200 val loss:0.5303 val acc:75.00%
Epoch 174/200 train loss:0.3751 train acc:81.34%
Epoch 174/200 val loss:0.4447 val acc:78.00%
Epoch 175/200 train loss:0.4228 train acc:80.14%
Epoch 175/200 val loss:0.4698 val acc:79.00%
Epoch 176/200 train loss:0.3568 train acc:82.19%
Epoch 176/200 val loss:0.4884 val acc:76.00%
Epoch 177/200 train loss:0.3331 train acc:83.39%
Epoch 177/200 val loss:0.4780 val acc:72.00%
Epoch 178/200 train loss:0.3957 train acc:78.94%
Epoch 178/200 val loss:0.4663 val acc:75.00%
Epoch 179/200 train loss:0.3640 train acc:82.88%
Epoch 179/200 val loss:0.5002 val acc:76.00%
Epoch 180/200 train loss:0.4877 train acc:79.62%
Epoch 180/200 val loss:0.5176 val acc:77.00%
Epoch 181/200 train loss:0.3544 train acc:81.51%
Epoch 181/200 val loss:0.4707 val acc:79.00%
Epoch 182/200 train loss:0.4005 train acc:79.11%
Epoch 182/200 val loss:0.4876 val acc:75.00%
Epoch 183/200 train loss:0.3705 train acc:81.68%
Epoch 183/200 val loss:0.4510 val acc:80.00%
Epoch 184/200 train loss:0.3684 train acc:80.82%
Epoch 184/200 val loss:0.4537 val acc:76.00%
Epoch 185/200 train loss:0.3374 train acc:80.31%
Epoch 185/200 val loss:0.4709 val acc:78.00%
Epoch 186/200 train loss:0.3516 train acc:82.88%
Epoch 186/200 val loss:0.4539 val acc:78.00%
Epoch 187/200 train loss:0.3409 train acc:80.65%
Epoch 187/200 val loss:0.4553 val acc:79.00%
Epoch 188/200 train loss:0.3755 train acc:79.79%
Epoch 188/200 val loss:0.4692 val acc:76.00%
Epoch 189/200 train loss:0.3446 train acc:83.05%
Epoch 189/200 val loss:0.4621 val acc:81.00%
Epoch 190/200 train loss:0.3631 train acc:82.36%
Epoch 190/200 val loss:0.4843 val acc:75.00%
Epoch 191/200 train loss:0.5464 train acc:78.94%
Epoch 191/200 val loss:0.7409 val acc:65.00%
Epoch 192/200 train loss:0.5506 train acc:75.68%
Epoch 192/200 val loss:0.5583 val acc:74.00%
Epoch 193/200 train loss:0.4343 train acc:79.28%
Epoch 193/200 val loss:0.4304 val acc:82.00%
Epoch 194/200 train loss:0.4166 train acc:80.31%
Epoch 194/200 val loss:0.4282 val acc:82.00%
Epoch 195/200 train loss:0.4600 train acc:79.28%
Epoch 195/200 val loss:0.4395 val acc:81.00%
Epoch 196/200 train loss:0.4371 train acc:79.97%
Epoch 196/200 val loss:0.4598 val acc:82.00%
Epoch 197/200 train loss:0.4089 train acc:80.14%
Epoch 197/200 val loss:0.4961 val acc:82.00%
Epoch 198/200 train loss:0.3895 train acc:80.31%
Epoch 198/200 val loss:0.5032 val acc:75.00%
Epoch 199/200 train loss:0.4025 train acc:82.53%
Epoch 199/200 val loss:0.4481 val acc:77.00%
Epoch 200/200 train loss:0.3751 train acc:81.16%
Epoch 200/200 val loss:0.5028 val acc:75.00%
###Markdown
CNN_LSTM_ATT
###Code
bs = 16
train_split = 0.8
lr = 1e-4
epochs = 200
n_samples = len(myDataSet)
assert n_samples == 684, "684"
model = CNN_LSTM_att().to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-5)
best_loss = float('inf')
best_acc = 0
num_train = 584
num_val = n_samples - num_train
train_set, val_set = torch.utils.data.random_split(myDataSet, [num_train, num_val])
assert len(train_set) == num_train, "Same"
assert len(val_set) == num_val, "Same"
train_loader = DataLoader(train_set,
batch_size=bs,
shuffle=True)
val_loader = DataLoader(val_set,
batch_size=bs,
shuffle=False)
for epoch in range(epochs):
#start_time = time.time()
loss_train, correct_train = train_lstm(model, train_loader, optimizer)
loss_val, correct_val = evaluate_audio(model, val_loader, criterion = nn.CrossEntropyLoss())
#elapsed_time = time.time() - start_time
print("Epoch {}/{} train loss:{:.4f} train acc:{:.2f}% ".format(epoch+1,epochs, loss_train, 100 * correct_train/num_train))
print("Epoch {}/{} val loss:{:.4f} val acc:{:.2f}% ".format(epoch+1,epochs, loss_val, 100 * correct_val/num_val))
# if loss_val < best_loss:
# best_loss = loss_val
# torch.save(model, os.path.join(base_path, 'audios', "best_loss.pth"))
if correct_val > best_acc:
best_acc = correct_val
best_train = correct_train
torch.save(model, os.path.join(base_path, 'audios', "best_lstm_att.pth"))
if correct_val == best_acc and best_train < correct_train:
best_acc = correct_val
best_train = correct_train
torch.save(model, os.path.join(base_path, 'audios', "best_lstm_att.pth"))
###Output
Epoch 1/200 train loss:1.0357 train acc:42.64%
Epoch 1/200 val loss:0.9610 val acc:35.00%
Epoch 2/200 train loss:1.0011 train acc:51.54%
Epoch 2/200 val loss:1.0445 val acc:59.00%
Epoch 3/200 train loss:0.9815 train acc:48.97%
Epoch 3/200 val loss:1.0195 val acc:54.00%
Epoch 4/200 train loss:0.9543 train acc:55.65%
Epoch 4/200 val loss:0.9913 val acc:57.00%
Epoch 5/200 train loss:0.9740 train acc:49.83%
Epoch 5/200 val loss:0.9485 val acc:58.00%
Epoch 6/200 train loss:0.8934 train acc:51.88%
Epoch 6/200 val loss:0.7982 val acc:51.00%
Epoch 7/200 train loss:0.7878 train acc:55.14%
Epoch 7/200 val loss:0.6595 val acc:74.00%
Epoch 8/200 train loss:0.7241 train acc:61.47%
Epoch 8/200 val loss:0.6126 val acc:70.00%
Epoch 9/200 train loss:0.5849 train acc:67.64%
Epoch 9/200 val loss:0.6113 val acc:68.00%
Epoch 10/200 train loss:0.6306 train acc:64.04%
Epoch 10/200 val loss:0.5440 val acc:67.00%
Epoch 11/200 train loss:0.5608 train acc:66.95%
Epoch 11/200 val loss:0.5514 val acc:69.00%
Epoch 12/200 train loss:0.6027 train acc:70.03%
Epoch 12/200 val loss:0.5475 val acc:74.00%
Epoch 13/200 train loss:0.5789 train acc:71.58%
Epoch 13/200 val loss:1.0092 val acc:67.00%
Epoch 14/200 train loss:0.6116 train acc:71.58%
Epoch 14/200 val loss:0.5433 val acc:77.00%
Epoch 15/200 train loss:0.5146 train acc:72.95%
Epoch 15/200 val loss:0.5346 val acc:74.00%
Epoch 16/200 train loss:0.5385 train acc:74.49%
Epoch 16/200 val loss:0.6215 val acc:67.00%
Epoch 17/200 train loss:0.5090 train acc:73.63%
Epoch 17/200 val loss:0.4798 val acc:79.00%
Epoch 18/200 train loss:0.5655 train acc:71.92%
Epoch 18/200 val loss:0.5562 val acc:74.00%
Epoch 19/200 train loss:0.5142 train acc:75.86%
Epoch 19/200 val loss:0.4892 val acc:77.00%
Epoch 20/200 train loss:0.4851 train acc:75.86%
Epoch 20/200 val loss:0.4684 val acc:80.00%
Epoch 21/200 train loss:0.4910 train acc:76.88%
Epoch 21/200 val loss:0.5252 val acc:80.00%
Epoch 22/200 train loss:0.6357 train acc:72.95%
Epoch 22/200 val loss:0.6516 val acc:75.00%
Epoch 23/200 train loss:0.5177 train acc:75.17%
Epoch 23/200 val loss:0.5195 val acc:77.00%
Epoch 24/200 train loss:0.4878 train acc:75.34%
Epoch 24/200 val loss:0.5689 val acc:78.00%
Epoch 25/200 train loss:0.5075 train acc:75.00%
Epoch 25/200 val loss:0.4646 val acc:80.00%
Epoch 26/200 train loss:0.4784 train acc:75.86%
Epoch 26/200 val loss:0.5448 val acc:75.00%
Epoch 27/200 train loss:0.5304 train acc:75.00%
Epoch 27/200 val loss:0.5188 val acc:76.00%
Epoch 28/200 train loss:0.6708 train acc:73.80%
Epoch 28/200 val loss:0.5257 val acc:75.00%
Epoch 29/200 train loss:0.4734 train acc:76.88%
Epoch 29/200 val loss:0.5097 val acc:78.00%
Epoch 30/200 train loss:0.5007 train acc:76.37%
Epoch 30/200 val loss:0.5007 val acc:76.00%
Epoch 31/200 train loss:0.5017 train acc:74.32%
Epoch 31/200 val loss:0.5342 val acc:76.00%
Epoch 32/200 train loss:0.5546 train acc:74.49%
Epoch 32/200 val loss:0.5107 val acc:78.00%
Epoch 33/200 train loss:0.4724 train acc:77.05%
Epoch 33/200 val loss:0.4777 val acc:80.00%
Epoch 34/200 train loss:0.4738 train acc:77.57%
Epoch 34/200 val loss:0.4638 val acc:80.00%
Epoch 35/200 train loss:0.4711 train acc:77.40%
Epoch 35/200 val loss:0.4547 val acc:80.00%
Epoch 36/200 train loss:0.4793 train acc:76.88%
Epoch 36/200 val loss:0.4861 val acc:80.00%
Epoch 37/200 train loss:0.4850 train acc:78.08%
Epoch 37/200 val loss:0.4929 val acc:79.00%
Epoch 38/200 train loss:0.4584 train acc:79.11%
Epoch 38/200 val loss:0.5500 val acc:78.00%
Epoch 39/200 train loss:0.4696 train acc:77.91%
Epoch 39/200 val loss:0.4926 val acc:78.00%
Epoch 40/200 train loss:0.4467 train acc:78.25%
Epoch 40/200 val loss:0.4602 val acc:80.00%
Epoch 41/200 train loss:0.4744 train acc:77.74%
Epoch 41/200 val loss:0.4789 val acc:81.00%
Epoch 42/200 train loss:0.4761 train acc:77.74%
Epoch 42/200 val loss:0.5066 val acc:81.00%
Epoch 43/200 train loss:0.4583 train acc:78.08%
Epoch 43/200 val loss:0.5316 val acc:74.00%
Epoch 44/200 train loss:0.4609 train acc:77.05%
Epoch 44/200 val loss:0.5254 val acc:75.00%
Epoch 45/200 train loss:0.4770 train acc:78.25%
Epoch 45/200 val loss:0.6002 val acc:73.00%
Epoch 46/200 train loss:0.6384 train acc:72.43%
Epoch 46/200 val loss:0.4632 val acc:79.00%
Epoch 47/200 train loss:0.4804 train acc:76.71%
Epoch 47/200 val loss:0.5266 val acc:71.00%
Epoch 48/200 train loss:0.4772 train acc:77.40%
Epoch 48/200 val loss:0.4808 val acc:80.00%
Epoch 49/200 train loss:0.4523 train acc:77.74%
Epoch 49/200 val loss:0.4468 val acc:80.00%
Epoch 50/200 train loss:0.4422 train acc:78.08%
Epoch 50/200 val loss:0.4936 val acc:77.00%
Epoch 51/200 train loss:0.4566 train acc:77.57%
Epoch 51/200 val loss:0.4441 val acc:76.00%
Epoch 52/200 train loss:0.4372 train acc:76.54%
Epoch 52/200 val loss:0.4367 val acc:80.00%
Epoch 53/200 train loss:0.4362 train acc:78.42%
Epoch 53/200 val loss:0.3910 val acc:81.00%
Epoch 54/200 train loss:0.4783 train acc:74.32%
Epoch 54/200 val loss:0.5390 val acc:78.00%
Epoch 55/200 train loss:0.4787 train acc:77.05%
Epoch 55/200 val loss:0.4706 val acc:80.00%
Epoch 56/200 train loss:0.4475 train acc:78.08%
Epoch 56/200 val loss:0.4654 val acc:80.00%
Epoch 57/200 train loss:0.4224 train acc:78.25%
Epoch 57/200 val loss:0.5192 val acc:77.00%
Epoch 58/200 train loss:0.4461 train acc:78.08%
Epoch 58/200 val loss:0.4639 val acc:77.00%
Epoch 59/200 train loss:0.4760 train acc:77.23%
Epoch 59/200 val loss:0.4799 val acc:78.00%
Epoch 60/200 train loss:0.4366 train acc:78.42%
Epoch 60/200 val loss:0.4922 val acc:79.00%
Epoch 61/200 train loss:0.4479 train acc:78.08%
Epoch 61/200 val loss:0.4815 val acc:80.00%
Epoch 62/200 train loss:0.4617 train acc:78.08%
Epoch 62/200 val loss:0.4964 val acc:80.00%
Epoch 63/200 train loss:0.4366 train acc:77.74%
Epoch 63/200 val loss:0.4223 val acc:80.00%
Epoch 64/200 train loss:0.4814 train acc:77.40%
Epoch 64/200 val loss:0.4830 val acc:78.00%
Epoch 65/200 train loss:0.4370 train acc:77.40%
Epoch 65/200 val loss:0.5089 val acc:75.00%
Epoch 66/200 train loss:0.4281 train acc:77.23%
Epoch 66/200 val loss:0.4999 val acc:75.00%
Epoch 67/200 train loss:0.4563 train acc:75.51%
Epoch 67/200 val loss:0.4302 val acc:79.00%
Epoch 68/200 train loss:0.4297 train acc:79.11%
Epoch 68/200 val loss:0.4697 val acc:76.00%
Epoch 69/200 train loss:0.4387 train acc:77.91%
Epoch 69/200 val loss:0.4666 val acc:78.00%
Epoch 70/200 train loss:0.4568 train acc:76.88%
Epoch 70/200 val loss:0.4923 val acc:75.00%
Epoch 71/200 train loss:0.4540 train acc:78.60%
Epoch 71/200 val loss:0.4708 val acc:75.00%
Epoch 72/200 train loss:0.4330 train acc:79.11%
Epoch 72/200 val loss:0.5386 val acc:74.00%
Epoch 73/200 train loss:0.4425 train acc:77.74%
Epoch 73/200 val loss:0.4384 val acc:77.00%
Epoch 74/200 train loss:0.4317 train acc:77.57%
Epoch 74/200 val loss:0.5594 val acc:77.00%
Epoch 75/200 train loss:0.4466 train acc:77.40%
Epoch 75/200 val loss:0.4756 val acc:74.00%
Epoch 76/200 train loss:0.4466 train acc:78.60%
Epoch 76/200 val loss:0.4292 val acc:78.00%
Epoch 77/200 train loss:0.4846 train acc:75.86%
Epoch 77/200 val loss:0.4438 val acc:78.00%
Epoch 78/200 train loss:0.4227 train acc:80.31%
Epoch 78/200 val loss:0.4737 val acc:73.00%
Epoch 79/200 train loss:0.4351 train acc:76.88%
Epoch 79/200 val loss:0.5000 val acc:75.00%
Epoch 80/200 train loss:0.4734 train acc:74.83%
Epoch 80/200 val loss:0.4302 val acc:79.00%
Epoch 81/200 train loss:0.4223 train acc:78.42%
Epoch 81/200 val loss:0.4120 val acc:81.00%
Epoch 82/200 train loss:0.4715 train acc:76.37%
Epoch 82/200 val loss:0.5360 val acc:76.00%
Epoch 83/200 train loss:0.4652 train acc:76.71%
Epoch 83/200 val loss:0.5750 val acc:76.00%
Epoch 84/200 train loss:0.4325 train acc:78.25%
Epoch 84/200 val loss:0.4707 val acc:79.00%
Epoch 85/200 train loss:0.4341 train acc:78.60%
Epoch 85/200 val loss:0.5879 val acc:72.00%
Epoch 86/200 train loss:0.4445 train acc:77.74%
Epoch 86/200 val loss:0.5187 val acc:75.00%
Epoch 87/200 train loss:0.4353 train acc:76.37%
Epoch 87/200 val loss:0.4891 val acc:77.00%
Epoch 88/200 train loss:0.4421 train acc:77.74%
Epoch 88/200 val loss:0.4659 val acc:79.00%
Epoch 89/200 train loss:0.4351 train acc:76.88%
Epoch 89/200 val loss:0.4702 val acc:81.00%
Epoch 90/200 train loss:0.4202 train acc:78.77%
Epoch 90/200 val loss:0.4060 val acc:81.00%
Epoch 91/200 train loss:0.4209 train acc:78.25%
Epoch 91/200 val loss:0.4336 val acc:81.00%
Epoch 92/200 train loss:0.4586 train acc:77.40%
Epoch 92/200 val loss:0.5333 val acc:75.00%
Epoch 93/200 train loss:0.4226 train acc:77.05%
Epoch 93/200 val loss:0.4198 val acc:76.00%
Epoch 94/200 train loss:0.4346 train acc:76.71%
Epoch 94/200 val loss:0.4780 val acc:75.00%
Epoch 95/200 train loss:0.4376 train acc:77.91%
Epoch 95/200 val loss:0.4287 val acc:81.00%
Epoch 96/200 train loss:0.4408 train acc:77.23%
Epoch 96/200 val loss:0.4859 val acc:70.00%
Epoch 97/200 train loss:0.4446 train acc:76.88%
Epoch 97/200 val loss:0.4396 val acc:78.00%
Epoch 98/200 train loss:0.4285 train acc:78.08%
Epoch 98/200 val loss:0.4433 val acc:80.00%
Epoch 99/200 train loss:0.4239 train acc:78.42%
Epoch 99/200 val loss:0.4580 val acc:79.00%
Epoch 100/200 train loss:0.4099 train acc:79.45%
Epoch 100/200 val loss:0.4012 val acc:77.00%
Epoch 101/200 train loss:0.4174 train acc:75.86%
Epoch 101/200 val loss:0.4360 val acc:80.00%
Epoch 102/200 train loss:0.4414 train acc:75.86%
Epoch 102/200 val loss:0.3925 val acc:80.00%
Epoch 103/200 train loss:0.4346 train acc:78.60%
Epoch 103/200 val loss:0.4794 val acc:77.00%
Epoch 104/200 train loss:0.5470 train acc:76.88%
Epoch 104/200 val loss:0.4505 val acc:79.00%
Epoch 105/200 train loss:0.4484 train acc:78.60%
Epoch 105/200 val loss:0.4077 val acc:82.00%
Epoch 106/200 train loss:0.4415 train acc:78.25%
Epoch 106/200 val loss:0.4301 val acc:81.00%
Epoch 107/200 train loss:0.4322 train acc:79.45%
Epoch 107/200 val loss:0.4936 val acc:81.00%
Epoch 108/200 train loss:0.4157 train acc:79.79%
Epoch 108/200 val loss:0.4325 val acc:79.00%
Epoch 109/200 train loss:0.4209 train acc:78.77%
Epoch 109/200 val loss:0.4266 val acc:81.00%
Epoch 110/200 train loss:0.4349 train acc:78.25%
Epoch 110/200 val loss:0.4528 val acc:81.00%
Epoch 111/200 train loss:0.4037 train acc:79.11%
Epoch 111/200 val loss:0.4236 val acc:81.00%
Epoch 112/200 train loss:0.4202 train acc:78.60%
Epoch 112/200 val loss:0.4323 val acc:79.00%
Epoch 113/200 train loss:0.4267 train acc:78.77%
Epoch 113/200 val loss:0.4519 val acc:82.00%
Epoch 114/200 train loss:0.4156 train acc:79.97%
Epoch 114/200 val loss:0.4701 val acc:77.00%
Epoch 115/200 train loss:0.4236 train acc:79.97%
Epoch 115/200 val loss:0.4400 val acc:76.00%
Epoch 116/200 train loss:0.4167 train acc:80.31%
Epoch 116/200 val loss:0.4473 val acc:75.00%
Epoch 117/200 train loss:0.4432 train acc:78.25%
Epoch 117/200 val loss:0.4606 val acc:79.00%
Epoch 118/200 train loss:0.4250 train acc:78.25%
Epoch 118/200 val loss:0.4788 val acc:81.00%
Epoch 119/200 train loss:0.4239 train acc:76.20%
Epoch 119/200 val loss:0.4051 val acc:82.00%
Epoch 120/200 train loss:0.4083 train acc:79.45%
Epoch 120/200 val loss:0.4730 val acc:81.00%
Epoch 121/200 train loss:0.4066 train acc:78.94%
Epoch 121/200 val loss:0.4083 val acc:82.00%
Epoch 122/200 train loss:0.4288 train acc:78.08%
Epoch 122/200 val loss:0.4481 val acc:83.00%
Epoch 123/200 train loss:0.4103 train acc:79.45%
Epoch 123/200 val loss:0.4618 val acc:81.00%
Epoch 124/200 train loss:0.4246 train acc:78.25%
Epoch 124/200 val loss:0.4888 val acc:78.00%
Epoch 125/200 train loss:0.4014 train acc:78.42%
Epoch 125/200 val loss:0.4640 val acc:80.00%
Epoch 126/200 train loss:0.4041 train acc:79.79%
Epoch 126/200 val loss:0.4351 val acc:81.00%
Epoch 127/200 train loss:0.4269 train acc:78.08%
Epoch 127/200 val loss:0.5049 val acc:82.00%
Epoch 128/200 train loss:0.4378 train acc:79.45%
Epoch 128/200 val loss:0.3897 val acc:82.00%
Epoch 129/200 train loss:0.4282 train acc:77.91%
Epoch 129/200 val loss:0.5019 val acc:77.00%
Epoch 130/200 train loss:0.4317 train acc:76.88%
Epoch 130/200 val loss:0.4413 val acc:76.00%
Epoch 131/200 train loss:0.4054 train acc:76.37%
Epoch 131/200 val loss:0.4373 val acc:77.00%
Epoch 132/200 train loss:0.3993 train acc:78.94%
Epoch 132/200 val loss:0.4398 val acc:82.00%
Epoch 133/200 train loss:0.4263 train acc:78.25%
Epoch 133/200 val loss:0.4691 val acc:80.00%
Epoch 134/200 train loss:0.4146 train acc:78.60%
Epoch 134/200 val loss:0.4976 val acc:81.00%
Epoch 135/200 train loss:0.4004 train acc:78.77%
Epoch 135/200 val loss:0.4594 val acc:75.00%
Epoch 136/200 train loss:0.4215 train acc:77.91%
Epoch 136/200 val loss:0.5251 val acc:82.00%
Epoch 137/200 train loss:0.4113 train acc:78.77%
Epoch 137/200 val loss:0.5056 val acc:74.00%
Epoch 138/200 train loss:0.4129 train acc:79.28%
Epoch 138/200 val loss:0.4763 val acc:83.00%
Epoch 139/200 train loss:0.4116 train acc:78.60%
Epoch 139/200 val loss:0.4755 val acc:79.00%
Epoch 140/200 train loss:0.4154 train acc:79.45%
Epoch 140/200 val loss:0.5041 val acc:82.00%
Epoch 141/200 train loss:0.4110 train acc:78.42%
Epoch 141/200 val loss:0.4631 val acc:75.00%
Epoch 142/200 train loss:0.4199 train acc:79.62%
Epoch 142/200 val loss:0.4167 val acc:80.00%
Epoch 143/200 train loss:0.4020 train acc:79.79%
Epoch 143/200 val loss:0.3964 val acc:81.00%
Epoch 144/200 train loss:0.4401 train acc:78.25%
Epoch 144/200 val loss:0.4429 val acc:76.00%
Epoch 145/200 train loss:0.4192 train acc:79.28%
Epoch 145/200 val loss:0.4701 val acc:84.00%
Epoch 146/200 train loss:0.4040 train acc:78.77%
Epoch 146/200 val loss:0.4471 val acc:80.00%
Epoch 147/200 train loss:0.4088 train acc:77.74%
Epoch 147/200 val loss:0.4276 val acc:77.00%
Epoch 148/200 train loss:0.4030 train acc:78.77%
Epoch 148/200 val loss:0.4120 val acc:82.00%
Epoch 149/200 train loss:0.3919 train acc:79.28%
Epoch 149/200 val loss:0.3888 val acc:76.00%
Epoch 150/200 train loss:0.4096 train acc:77.23%
Epoch 150/200 val loss:0.5591 val acc:77.00%
Epoch 151/200 train loss:0.4342 train acc:76.71%
Epoch 151/200 val loss:0.4789 val acc:74.00%
Epoch 152/200 train loss:0.4532 train acc:76.03%
Epoch 152/200 val loss:0.4644 val acc:78.00%
Epoch 153/200 train loss:0.4895 train acc:77.74%
Epoch 153/200 val loss:0.4874 val acc:82.00%
Epoch 154/200 train loss:0.4225 train acc:79.79%
Epoch 154/200 val loss:0.5124 val acc:75.00%
Epoch 155/200 train loss:0.4232 train acc:78.94%
Epoch 155/200 val loss:0.4914 val acc:75.00%
Epoch 156/200 train loss:0.3944 train acc:80.14%
Epoch 156/200 val loss:0.5182 val acc:75.00%
Epoch 157/200 train loss:0.4088 train acc:77.57%
Epoch 157/200 val loss:0.4257 val acc:79.00%
Epoch 158/200 train loss:0.3878 train acc:81.51%
Epoch 158/200 val loss:0.4862 val acc:75.00%
Epoch 159/200 train loss:0.4024 train acc:79.79%
Epoch 159/200 val loss:0.4690 val acc:77.00%
Epoch 160/200 train loss:0.4208 train acc:80.99%
Epoch 160/200 val loss:0.4499 val acc:75.00%
Epoch 161/200 train loss:0.4136 train acc:78.94%
Epoch 161/200 val loss:0.5237 val acc:72.00%
Epoch 162/200 train loss:0.4107 train acc:80.31%
Epoch 162/200 val loss:0.4644 val acc:82.00%
Epoch 163/200 train loss:0.4047 train acc:79.79%
Epoch 163/200 val loss:0.4238 val acc:81.00%
Epoch 164/200 train loss:0.4011 train acc:78.42%
Epoch 164/200 val loss:0.4265 val acc:78.00%
Epoch 165/200 train loss:0.4153 train acc:78.42%
Epoch 165/200 val loss:0.4551 val acc:80.00%
Epoch 166/200 train loss:0.3889 train acc:79.28%
Epoch 166/200 val loss:0.4392 val acc:82.00%
Epoch 167/200 train loss:0.3809 train acc:79.62%
Epoch 167/200 val loss:0.5259 val acc:78.00%
Epoch 168/200 train loss:0.3983 train acc:81.34%
Epoch 168/200 val loss:0.5575 val acc:76.00%
Epoch 169/200 train loss:0.4247 train acc:78.25%
Epoch 169/200 val loss:0.4994 val acc:77.00%
Epoch 170/200 train loss:0.3950 train acc:79.79%
Epoch 170/200 val loss:0.4105 val acc:85.00%
Epoch 171/200 train loss:0.4155 train acc:79.79%
Epoch 171/200 val loss:0.4873 val acc:74.00%
Epoch 172/200 train loss:0.3906 train acc:78.77%
Epoch 172/200 val loss:0.4593 val acc:78.00%
Epoch 173/200 train loss:0.3994 train acc:79.62%
Epoch 173/200 val loss:0.4577 val acc:75.00%
Epoch 174/200 train loss:0.3960 train acc:79.79%
Epoch 174/200 val loss:0.4773 val acc:81.00%
Epoch 175/200 train loss:0.4056 train acc:79.45%
Epoch 175/200 val loss:0.5036 val acc:73.00%
Epoch 176/200 train loss:0.3979 train acc:79.11%
Epoch 176/200 val loss:0.4658 val acc:81.00%
Epoch 177/200 train loss:0.3829 train acc:80.14%
Epoch 177/200 val loss:0.4764 val acc:78.00%
Epoch 178/200 train loss:0.3925 train acc:80.14%
Epoch 178/200 val loss:0.3779 val acc:83.00%
Epoch 179/200 train loss:0.4046 train acc:78.08%
Epoch 179/200 val loss:0.4647 val acc:77.00%
Epoch 180/200 train loss:0.3855 train acc:80.14%
Epoch 180/200 val loss:0.4908 val acc:74.00%
Epoch 181/200 train loss:0.4033 train acc:80.31%
Epoch 181/200 val loss:0.5103 val acc:76.00%
Epoch 182/200 train loss:0.3832 train acc:81.34%
Epoch 182/200 val loss:0.3875 val acc:78.00%
Epoch 183/200 train loss:0.4102 train acc:78.60%
Epoch 183/200 val loss:0.3637 val acc:81.00%
Epoch 184/200 train loss:0.4050 train acc:79.79%
Epoch 184/200 val loss:0.4240 val acc:81.00%
Epoch 185/200 train loss:0.4073 train acc:79.79%
Epoch 185/200 val loss:0.4387 val acc:85.00%
Epoch 186/200 train loss:0.3947 train acc:80.31%
Epoch 186/200 val loss:0.4475 val acc:75.00%
Epoch 187/200 train loss:0.3829 train acc:79.79%
Epoch 187/200 val loss:0.4087 val acc:83.00%
Epoch 188/200 train loss:0.4081 train acc:78.77%
Epoch 188/200 val loss:0.4693 val acc:81.00%
Epoch 189/200 train loss:0.3813 train acc:81.68%
Epoch 189/200 val loss:0.4628 val acc:81.00%
Epoch 190/200 train loss:0.3803 train acc:80.99%
Epoch 190/200 val loss:0.3929 val acc:81.00%
Epoch 191/200 train loss:0.3818 train acc:79.28%
Epoch 191/200 val loss:0.3600 val acc:81.00%
Epoch 192/200 train loss:0.3717 train acc:79.97%
Epoch 192/200 val loss:0.3957 val acc:79.00%
Epoch 193/200 train loss:0.3847 train acc:79.45%
Epoch 193/200 val loss:0.3966 val acc:76.00%
Epoch 194/200 train loss:0.3781 train acc:80.31%
Epoch 194/200 val loss:0.4524 val acc:77.00%
Epoch 195/200 train loss:0.3905 train acc:80.65%
Epoch 195/200 val loss:0.5291 val acc:78.00%
Epoch 196/200 train loss:0.3900 train acc:80.65%
Epoch 196/200 val loss:0.4838 val acc:75.00%
Epoch 197/200 train loss:0.3931 train acc:80.31%
Epoch 197/200 val loss:0.5394 val acc:79.00%
Epoch 198/200 train loss:0.3734 train acc:78.42%
Epoch 198/200 val loss:0.4500 val acc:77.00%
Epoch 199/200 train loss:0.3833 train acc:80.48%
Epoch 199/200 val loss:0.4629 val acc:77.00%
Epoch 200/200 train loss:0.3785 train acc:81.85%
Epoch 200/200 val loss:0.4438 val acc:74.00%
###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
from dask.distributed import Client
!pip install -U pandas-profiling
transactions = pd.read_csv(
'/content/drive/My Drive/N26/transactions.csv'
)
transactions
users = pd.read_csv(
'/content/drive/My Drive/N26/users.csv')
users
transaction_category_id = transactions['transaction_category_id']
transaction_category_id
num_users = transactions.user_id.value_counts()
num_users
sum_amount = transactions.agg({'transaction_amount':['sum']})
sum_amount
transactions = transactions.set_index('user_id')
transactions.loc[~transactions.is_blocked, :]
users.loc[~users.is_active, :]
users[users.is_active]
# Join each transaction (transactions is indexed by user_id) with its user record;
# the original call was missing the left-side key specification.
tr_df = pd.merge(transactions, users, left_index=True, right_on='user_id')
###Output
_____no_output_____ |
notebooks/python-tf-idf.ipynb | ###Markdown
[hrs/python-tf-idf](https://github.com/hrs/python-tf-idf) An extremely simple Python library to perform TF-IDF document comparison.
###Code
#!/usr/bin/env python
"""
The simplest TF-IDF library imaginable.
Add your documents as two-element lists `[docname,
[list_of_words_in_the_document]]` with `addDocument(docname, list_of_words)`.
Get a list of all the `[docname, similarity_score]` pairs relative to a
document by calling `similarities([list_of_words])`.
See the README for a usage example.
"""
import sys
import os
class TfIdf:
def __init__(self):
self.weighted = False
self.documents = []
self.corpus_dict = {}
def add_document(self, doc_name, list_of_words):
# building a dictionary
doc_dict = {}
for w in list_of_words:
doc_dict[w] = doc_dict.get(w, 0.) + 1.0
self.corpus_dict[w] = self.corpus_dict.get(w, 0.0) + 1.0
# normalizing the dictionary
length = float(len(list_of_words))
for k in doc_dict:
doc_dict[k] = doc_dict[k] / length
# add the normalized document to the corpus
self.documents.append([doc_name, doc_dict])
def similarities(self, list_of_words):
"""
Returns a list of all the [docname, similarity_score] pairs
relative to a list of words.
"""
# building the query dictionary
query_dict = {}
for w in list_of_words:
query_dict[w] = query_dict.get(w, 0.0) + 1.0
# normalizing the query
length = float(len(list_of_words))
for k in query_dict:
query_dict[k] = query_dict[k] / length
# computing the list of similarities
sims = []
for doc in self.documents:
score = 0.0
doc_dict = doc[1]
for k in query_dict:
if k in doc_dict:
score += (query_dict[k] / self.corpus_dict[k]) + (
doc_dict[k] / self.corpus_dict[k])
sims.append([doc[0], score])
return sims
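# Illustrative usage (added sketch; the original library points to its README for
# examples). The document names and word lists below are made up.
table = TfIdf()
table.add_document("doc_a", ["the", "cat", "sat", "on", "the", "mat"])
table.add_document("doc_b", ["the", "dog", "chased", "the", "cat"])
doc_scores = table.similarities(["cat", "mat"])  # -> [["doc_a", score_a], ["doc_b", score_b]]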
###Output
_____no_output_____ |
Algorithm/BlankBoard/Python/Array&Matrix.ipynb | ###Markdown
[Rotate Array](https://leetcode.com/problems/rotate-array/). Given an array and an integer $k$, rotate the array to the right by $k$ positions. Idea: for a right rotation, split the array at the $k$-th element from the end, reverse the front part and the back part separately, then reverse the whole array.
###Code
def rotate(nums, k: int) -> None:
def reverse(start, end):
while start < end:
nums[start], nums[end] = nums[end], nums[start]
start += 1
end -= 1
n = len(nums)
k = k % n
reverse(0, n-k-1)
reverse(n-k, n-1)
reverse(0, n-1)
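# Illustrative check (added): rotating a sample list right by 3 positions in place.
sample = [1, 2, 3, 4, 5, 6, 7]
rotate(sample, 3)  # sample becomes [5, 6, 7, 1, 2, 3, 4]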
###Output
_____no_output_____
###Markdown
[Delete Columns to Make Sorted](https://leetcode.com/problems/delete-columns-to-make-sorted/). Given several strings of equal length, stacked to form a matrix, how many columns must be deleted so that every remaining column is in non-decreasing character order? Idea: traverse the matrix column by column from left to right; whenever a column contains a descending pair of characters, add $1$ to the result. Python can compare characters directly.
###Code
def minDeletionSize(A: List[str]) -> int:
rows, cols = len(A), len(A[0])
res = 0
for col in range(cols):
for row in range(rows-1):
if A[row+1][col] < A[row][col]:
res += 1
break
return res
###Output
_____no_output_____
###Markdown
[Rotate Image](https://leetcode.com/problems/rotate-image/). Given a matrix, rotate it 90 degrees clockwise in place. Idea: first reverse every column (by reversing the row order), then mirror the matrix along its main diagonal (transpose).
###Code
def rotate(matrix) -> None:
def reverse(start, end):
while start < end:
matrix[start], matrix[end] = matrix[end], matrix[start]
start += 1
end -= 1
rows, cols = len(matrix), len(matrix[0])
reverse(0, rows-1)
for row in range(rows):
for col in range(row):
matrix[row][col], matrix[col][row] = matrix[col][row], matrix[row][col]
###Output
_____no_output_____
###Markdown
[Product of Array Except Self](https://leetcode.com/problems/product-of-array-except-self/). Given an array $A=[a_{0},a_{1},...,a_{n-1}]$, compute the array $B=[b_{0},b_{1},...,b_{n-1}]$ where $b_{i}=a_{0}{\times}...{\times}a_{i-1}{\times}a_{i+1}{\times}...{\times}a_{n-1}$, i.e. each entry equals the product of all elements of $A$ except the one at that position. Idea: two passes. The first pass goes front to back and accumulates the product of all elements before each position; the second pass goes back to front and multiplies in the product of all elements after each position.
###Code
def productExceptSelf(nums):
n = len(nums)
prod = [1]*n
base = 1
for i in range(1, n):
base *= nums[i-1]
prod[i] *= base
base = 1
for i in range(n-2, -1, -1):
base *= nums[i+1]
prod[i] *= base
return prod
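# Illustrative check (added): each entry is the product of all the other entries.
example_products = productExceptSelf([1, 2, 3, 4])  # -> [24, 12, 8, 6]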
###Output
_____no_output_____
###Markdown
[Find Pivot Index](https://leetcode.com/problems/find-pivot-index/). Given an array, find its pivot index; return -1 if none exists, and the leftmost one if there are several. An index is a pivot index if the sum of all elements to its left equals the sum of all elements to its right. Idea: first compute the total sum $sum$ of the array, then scan linearly while maintaining the running left sum; the right sum at each position is $$right\_sum=sum-left\_sum-num$$
###Code
def pivotIndex(nums) -> int:
total_sum = sum(nums)
left_sum = 0
for i, num in enumerate(nums):
if total_sum-left_sum-num == left_sum:
return i
left_sum += num
return -1
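# Illustrative check (added): index 3 is the pivot of [1, 7, 3, 6, 5, 6],
# since 1 + 7 + 3 == 5 + 6 == 11.
example_pivot = pivotIndex([1, 7, 3, 6, 5, 6])  # -> 3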
###Output
_____no_output_____
###Markdown
[Largest Number At Least Twice of Others](https://leetcode.com/problems/largest-number-at-least-twice-of-others/). Given an integer array, return the index of the dominant element, defined as the element that is at least twice as large as every other element. Idea: two passes. The first pass records the maximum element and its position; the second pass checks that the maximum is at least twice every other number, returning -1 otherwise.
###Code
def dominantIndex(nums) -> int:
n = len(nums)
max_idx, max_val = -1, -0x80000000
for idx, num in enumerate(nums):
if num > max_val:
max_idx, max_val = idx, num
for idx, num in enumerate(nums):
if idx != max_idx and 2*num > max_val:
return -1
return max_idx
###Output
_____no_output_____
###Markdown
[Range Sum Query - Immutable](https://leetcode.com/problems/range-sum-query-immutable/). Design a class that reads in an array and can answer queries for the sum over any range $[i,j]$. Idea: maintain a prefix-sum array ```sum``` where ```sum[i]``` is the sum over $[0,i]$. To cover the case $i=0$, prepend a $0$ before the ```num``` array.
###Code
class NumArray:
def __init__(self, nums):
n = len(nums)
self._sum = [0]*(n+1)
for i in range(1, n+1):
self._sum[i] = self._sum[i-1]+nums[i-1]
def sumRange(self, i: int, j: int) -> int:
return self._sum[j+1]-self._sum[i]
###Output
_____no_output_____
###Markdown
[Range Sum Query - Mutable](https://leetcode.com/problems/range-sum-query-mutable/). Design a class that reads in an array, can answer queries for the sum over any range $[i,j]$, and also provides an ```update(i,val)``` method to modify the array. Idea (this approach times out on LeetCode): changing the value at position $i$ affects the prefix sums at $i$ and every later position, so each update has to refresh the ```sum``` array from $i$ onward.
###Code
class NumArray:
def __init__(self, nums):
self._n = len(nums)
self._nums = nums
self._sum = [0]*(self._n+1)
for i in range(1, self._n+1):
self._sum[i] = self._sum[i-1]+nums[i-1]
def update(self, i: int, val: int) -> None:
diff = val-self._nums[i]
self._nums[i] = val
for idx in range(i+1, self._n+1):
self._sum[idx] += diff
def sumRange(self, i: int, j: int) -> int:
return self._sum[j+1]-self._sum[i]
###Output
_____no_output_____
###Markdown
[Range Sum Query 2D - Immutable](https://leetcode.com/problems/range-sum-query-2d-immutable/). Design a class that reads in an integer matrix and can answer queries for the sum over any rectangular region, specified by its upper-left and lower-right corners. Idea: maintain a 2D prefix-sum matrix ```sum``` where ```sum[i][j]``` is the sum of the region from $(0,0)$ to $(i,j)$.
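To make the query step explicit (added note; the indexing matches the code below, which pads the prefix matrix with an extra leading row and column of zeros), the sum over the rectangle with corners $(row1, col1)$ and $(row2, col2)$ follows by inclusion-exclusion: $$\texttt{sum}[r_2+1][c_2+1] - \texttt{sum}[r_1][c_2+1] - \texttt{sum}[r_2+1][c_1] + \texttt{sum}[r_1][c_1]$$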
###Code
class NumMatrix:
def __init__(self, matrix):
if not matrix or not matrix[0]:
return
rows, cols = len(matrix), len(matrix[0])
self._sum = [[0]*(cols+1) for _ in range(rows+1)]
for row in range(1, rows+1):
for col in range(1, cols+1):
self._sum[row][col] = self._sum[row - 1][col] + self._sum[row][col-1] \
- self._sum[row - 1][col-1] + matrix[row-1][col-1]
def sumRegion(self, row1: int, col1: int, row2: int, col2: int) -> int:
return self._sum[row2+1][col2+1]-self._sum[row2+1][col1]-self._sum[row1][col2+1] \
+ self._sum[row1][col1]
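# Illustrative usage (added): build the prefix-sum matrix for a tiny grid and
# query the rectangle covering the whole 2x2 grid.
example_grid = [[1, 2], [3, 4]]
nm = NumMatrix(example_grid)
example_total = nm.sumRegion(0, 0, 1, 1)  # -> 10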
###Output
_____no_output_____
###Markdown
[Majority Element](https://leetcode.com/problems/majority-element/). An element that appears more than half the length of the array is called the majority element; given an array that contains a majority element, find it. Idea (Boyer-Moore voting): scan the array linearly while keeping a candidate and a counter. When the current value equals the candidate, increment the counter, otherwise decrement it; when the counter reaches $0$, replace the candidate with the current value.
###Code
def majorityElement(nums) -> int:
res = None
cnt = 0
for num in nums:
if num == res:
cnt += 1
else:
if cnt == 0:
res = num
cnt += 1
else:
cnt -= 1
return res
###Output
_____no_output_____
###Markdown
[Majority Element II](https://leetcode.com/problems/majority-element-ii/). Given an array, find all values that appear more than $1/3$ of the array length. Idea: at most two elements can appear more than $n/3$ times, so keep two candidates and use the same counting scheme; the first pass produces the two candidates, and a second pass keeps only those that actually appear more than $n/3$ times.
###Code
def majorityElement(nums):
# 1-pass
major_1 = major_2 = None
cnt_1 = cnt_2 = 0
for num in nums:
if num == major_1:
cnt_1 += 1
elif num == major_2:
cnt_2 += 1
else:
if cnt_1 == 0:
major_1 = num
cnt_1 += 1
elif cnt_2 == 0:
major_2 = num
cnt_2 += 1
else:
cnt_1 -= 1
cnt_2 -= 1
# 2-pass
cnt_1 = cnt_2 = 0
for num in nums:
if num == major_1:
cnt_1 += 1
elif num == major_2:
cnt_2 += 1
else:
continue
res = list()
thresh = len(nums)//3
if cnt_1 > thresh:
res.append(major_1)
if cnt_2 > thresh:
res.append(major_2)
return res
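# Illustrative check (added): only 1 appears more than len(nums) // 3 times here.
example_majors = majorityElement([3, 2, 3, 1, 1, 1, 2, 1])  # -> [1]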
###Output
_____no_output_____
###Markdown
[Maximum Average Subarray I](https://leetcode.com/problems/maximum-average-subarray-i/). Given an array and a sliding window of size $k$, find the maximum average over all window positions. Idea: this is equivalent to finding the maximum window sum.
###Code
def findMaxAverage(nums, k: int) -> float:
max_sum = cur_sum = sum(nums[:k])
n = len(nums)
for i in range(1, n-k+1):
cur_sum = cur_sum-nums[i-1]+nums[i+k-1]
max_sum = max(max_sum, cur_sum)
return max_sum/k
###Output
_____no_output_____
###Markdown
[Toeplitz Matrix](https://leetcode.com/problems/toeplitz-matrix/). Given a matrix, determine whether every top-left-to-bottom-right diagonal has all equal elements. Idea: scan the matrix linearly and compare each element with its upper-left neighbour; return False as soon as a mismatch is found.
###Code
def isToeplitzMatrix(matrix) -> bool:
rows, cols = len(matrix), len(matrix[0])
for row in range(1, rows):
for col in range(1, cols):
if matrix[row][col] != matrix[row-1][col-1]:
return False
return True
###Output
_____no_output_____
###Markdown
[Monotonic Array](https://leetcode.com/problems/monotonic-array/). Determine whether an array is monotonic. Idea: keep two flags $I$ and $D$, where $I$ marks a non-decreasing step and $D$ a non-increasing step. If both flags become $True$, the array is not monotonic.
###Code
def isMonotonic(A) -> bool:
    n = len(A)
    i = d = False
    for idx in range(n - 1):
        if A[idx] < A[idx + 1]:
            i = True
        if A[idx] > A[idx + 1]:
            d = True
        if i and d:
            return False
    return True
###Output
_____no_output_____ |
docs/python_MM_LSTM_StockPriceForecast.ipynb | ###Markdown
Money Management: Stock Price Forecasting Using Long Short Term Memory (LSTM)LSTM is a tweaked version of the Recurrent Neural Network which forgets or remembers certain information over a long period of time. In this notebook, I will use LSTM to forecast a stock price.Stock price today is probably dependent on:- The trend it has been following from the previous day.- The price it was traded at on the previous day.- Some other factors that may affect stock price today.Generalizing the intuition from above to the following:- The previous cell state (i.e. the information that was present in the memory after the previous time step).- The previous hidden state (i.e. this is the same as the output of the previous cell).- The input at the current time step (i.e. the new information that is being fed in at that moment).In this notebook, we cover- Part 1 - Data Preprocessing- Part 2 - Construct RNN Architecture- Part 3 - Predictions and Performance Visualization Source: see [Chapter 11](https://github.com/PacktPublishing/Hands-on-Python-for-Finance/tree/master/Chapter%2011) of **Hands on Python for Finance** Recurrent Neural Network (a sequential model)Given data $X$ and $Y$, we want to feed information forward through the time stamps. Then we form some belief and we make some initial predictions. We investigate our beliefs by looking at the loss function of the initial guesses and the real value. We update our model according to the error we observed. Architecture: Feed-forwardConsider data with time stamps$$X_{\langle 1 \rangle} \rightarrow X_{\langle 2 \rangle} \rightarrow \dots \rightarrow X_{\langle T \rangle}$$and the feed-forward architecture passes information through exactly as follows:$$\text{Information in:} \rightarrow\begin{matrix}Y_{\langle 1 \rangle}, \hat{Y}_{\langle 1 \rangle} & Y_{\langle 2 \rangle}, \hat{Y}_{\langle 2 \rangle} & & Y_{\langle T \rangle}, \hat{Y}_{\langle T \rangle} \\\uparrow & \uparrow & & \uparrow \\X_{\langle 1 \rangle} \rightarrow & X_{\langle 2 \rangle} \rightarrow & \dots \rightarrow & X_{\langle T \rangle} \\\uparrow & \uparrow & & \uparrow \\w_{\langle 1 \rangle}, b_{0, \langle 1 \rangle} & w_{\langle 2 \rangle}, b_{0, \langle 2 \rangle} & & w_{\langle T \rangle}, b_{0, \langle T \rangle} \\\end{matrix}\rightarrow\text{Form beliefs about } Y_{\langle T \rangle}$$while the educated guesses $\hat{Y}_{\langle T \rangle}$ are our beliefs about the real $Y$ at time stamp $T$. Architecture: Feed-backwardLet us clearly define our loss function to make sure we have a proper grip of our mistakes.
$$\mathcal{L} = \sum_t (\hat{y}_{\langle t \rangle} - y_t)^2$$and we can compute the gradient $$\triangledown = \frac{\partial \mathcal{L}}{\partial a}$$and then, with respect to the parameters $w$ and $b$,$$\frac{\partial \triangledown}{\partial w}, \frac{\partial \triangledown}{\partial b}$$and now, knowing where we make our mistakes with respect to our parameters, we can go backward$$\text{Information in:} \leftarrow\underbrace{\begin{matrix}Y_{\langle 1 \rangle}, \hat{Y}_{\langle 1 \rangle} & Y_{\langle 2 \rangle}, \hat{Y}_{\langle 2 \rangle} & & Y_{\langle T \rangle}, \hat{Y}_{\langle T \rangle} \\\uparrow & \uparrow & & \uparrow \\X_{\langle 1 \rangle} \leftarrow & X_{\langle 2 \rangle} \leftarrow & \dots \leftarrow & X_{\langle T \rangle} \\\uparrow & \uparrow & & \uparrow \\w'_{\langle 1 \rangle}, b'_{0, \langle 1 \rangle} & w'_{\langle 2 \rangle}, b'_{0, \langle 2 \rangle} & & w'_{\langle T \rangle}, b'_{0, \langle T \rangle} \\\end{matrix}}_{\text{Update: } w, b \text{ with } w', b'}\leftarrow\text{Total Loss: } \mathcal{L} (\hat{y}, y)$$and the *update* action in the above architecture depends on the optimizer specified in the algorithm. Part 1 - Data Preprocessing
###Code
# Part 1 - Data Preprocessing
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Import Data
#!pip install yfinance
import yfinance as yf
ticker = "AAPL"
dataset_train = yf.download(ticker)
dataset_train.tail()
# Preview
print(dataset_train.head())
print(dataset_train.tail())
# Select the price series at column index 1 of the downloaded data
# (note: with yfinance's usual column order this is 'High' rather than 'Open')
training_set = dataset_train.iloc[:, 1:2].values
print(training_set)
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
training_set_scaled
X_train = []
y_train = []
# Creating a data structure with 100 timesteps and 1 output, using only the first 1258 records of the series
for i in range(100, 1258):
X_train.append(training_set_scaled[i-100:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train.shape
y_train.shape
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
X_train.shape
###Output
_____no_output_____
###Markdown
Part 2 - Building RNN
###Code
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
# Initialize RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
regressor.summary()
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 20, batch_size = 64)
# Comment:
# Originally, the batch_size was set to 32 in the ipynb provided by the authors.
# I changed it to 64 because I believe a batch size of 32 is not enough for this series.
# My intuition is confirmed: if you use 32, you will observe a larger test error.
###Output
Epoch 1/20
19/19 [==============================] - 2s 84ms/step - loss: 8.8448e-05
Epoch 2/20
19/19 [==============================] - 2s 80ms/step - loss: 4.6150e-06
Epoch 3/20
19/19 [==============================] - 1s 78ms/step - loss: 9.7256e-07: 0s -
Epoch 4/20
19/19 [==============================] - 1s 79ms/step - loss: 2.7579e-07
Epoch 5/20
19/19 [==============================] - 2s 79ms/step - loss: 1.3181e-07
Epoch 6/20
19/19 [==============================] - 2s 81ms/step - loss: 1.0795e-07
Epoch 7/20
19/19 [==============================] - 2s 80ms/step - loss: 1.0045e-07
Epoch 8/20
19/19 [==============================] - 2s 80ms/step - loss: 1.0203e-07
Epoch 9/20
19/19 [==============================] - 2s 80ms/step - loss: 9.7578e-08
Epoch 10/20
19/19 [==============================] - 2s 80ms/step - loss: 1.0536e-07
Epoch 11/20
19/19 [==============================] - 2s 80ms/step - loss: 1.0512e-07
Epoch 12/20
19/19 [==============================] - 2s 80ms/step - loss: 1.0910e-07
Epoch 13/20
19/19 [==============================] - 2s 79ms/step - loss: 1.0883e-07
Epoch 14/20
19/19 [==============================] - 2s 80ms/step - loss: 9.8330e-08
Epoch 15/20
19/19 [==============================] - 2s 80ms/step - loss: 9.9448e-08
Epoch 16/20
19/19 [==============================] - 2s 81ms/step - loss: 1.0404e-07
Epoch 17/20
19/19 [==============================] - 2s 80ms/step - loss: 9.9763e-08
Epoch 18/20
19/19 [==============================] - 2s 79ms/step - loss: 9.9254e-08
Epoch 19/20
19/19 [==============================] - 2s 79ms/step - loss: 9.7141e-08
Epoch 20/20
19/19 [==============================] - 2s 81ms/step - loss: 1.0639e-07
###Markdown
Part 3 - Making the predictions and visualising the results
###Code
dataset_train.iloc[10000::,1]
# Part 3 - Making the predictions and visualising the results
# Getting the real stock price of later years
dataset_test = dataset_train.iloc[10000::,:]
dataset_test.tail()
real_stock_price = dataset_test.iloc[:, 1:2].values
# Getting the predicted stock price of 2017
dataset_total = pd.concat((dataset_train['Open'], dataset_test['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 100:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
inputs.shape
for i in range(100, 330):
X_test.append(inputs[i-100:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
print(X_train.shape)
print(X_test.shape)
predicted_stock_price[:6]
predicted_stock_price = abs(predicted_stock_price) * 190
predicted_stock_price[:6]
# Visualising the results
plt.figure(figsize=(8,5))
plt.plot(real_stock_price, color = 'red', label = 'Real ' + ticker + ' Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted ' + ticker + ' Stock Price')
plt.title(ticker + ' Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel(ticker + ' Stock Price')
plt.legend()
plt.show()
###Output
_____no_output_____ |
unit7/lr-fat.ipynb | ###Markdown
Brozek index predictionThis example goes over linear regression and Bayesian $R^2$.Adapted from [unit 7: fat1.odc](https://raw.githubusercontent.com/areding/6420-pymc/main/original_examples/Codes4Unit7/fat2d.odc), [unit 7: fat2d.odc](https://raw.githubusercontent.com/areding/6420-pymc/main/original_examples/Codes4Unit7/fat1.odc) and [fatmulti.odc](https://raw.githubusercontent.com/areding/6420-pymc/main/original_examples/Codes4Unit7/fatmulti.odc).Data can be found [here](https://raw.githubusercontent.com/areding/6420-pymc/main/data/fat.tsv).Associated lecture videos:Unit 7 Lesson 11 Unit 7 Lesson 13 Problem statementPercentage of body fat, age, weight, height, and ten body circumference measurements (e.g., abdomen) were recorded for 252 men. Percentage of body fat is estimated through an underwater weighing technique.The data set has 252 observations and 15 variables. Brozek index (Brozek et al., 1963) is obtained by the underwater weighing while other 14 anthropometric variables are obtained using scales and a measuring tape.- y = Brozek index- X1 = 1 (intercept)- X2 = age- X3 = weight- X4 = height- X5 = adipose- X6 = neck - X7 = chest- X8 = abdomen- X9 = hip- X10 = thigh- X11 = knee - X12 = ankle- X13 = bicep- X14 = forearm- X15 = wristThese anthropometric variables are less intrusive but also less reliable in assessing the body fat index.Set a linear regression to predict the Brozek index from these body measurements. Single predictor (X8)This is a recreation of fat1.odc.
###Code
# Imports assumed by this notebook; the original import cell is not included in this excerpt.
import arviz as az
import numpy as np
import pandas as pd
import pymc as pm
import aesara.tensor as at
from aesara.tensor import dot

data = pd.read_csv("../data/fat.tsv", sep="\t")
y = data["y"].to_numpy(copy=True)
X = data["X8"].to_numpy(copy=True)
# p will be the number of predictors + intercept (1 + 1 in this case)
n, p = X.shape[0], 2
with pm.Model() as m:
tau = pm.Gamma("tau", 0.001, 0.001)
beta0 = pm.Normal("beta0_intercept", 0, tau=0.001)
beta1 = pm.Normal("beta1_abdomen", 0, tau=0.001)
variance = pm.Deterministic("variance", 1 / tau)
mu = beta0 + beta1 * X
likelihood = pm.Normal("likelihood", mu=mu, tau=tau, observed=y)
# Bayesian R2 from fat1.odc
sse = (n - p) * variance
cy = y - y.mean()
sst = dot(cy, cy)
br2 = pm.Deterministic("br2", 1 - sse / sst)
trace = pm.sample(2000)
ppc = pm.sample_posterior_predictive(trace)
az.summary(trace, hdi_prob=0.95)
###Output
_____no_output_____
###Markdown
This matches the results from the U7 L11 video. Another way is to calculate the $R^2$ using a posterior predictive check (keeping in mind that there is no standard "Bayesian $R^2$"); the results will be slightly different:
###Code
# get the mean y_pred across all chains
y_pred = np.array(ppc.posterior_predictive.likelihood.mean(axis=(0, 1)))
az.r2_score(y, y_pred)
###Output
_____no_output_____
###Markdown
In this case they agree, but that won't always be true. Multiple linear regression with all predictorsBased on fat2d.odc or fatmulti.odc (they appear to be identical).
###Code
data = pd.read_csv("../data/fat.tsv", sep="\t")
y = data["y"].to_numpy(copy=True)
X = data.iloc[:, 1:].to_numpy(copy=True)
# add intercept
X_aug = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)
n, p = X_aug.shape
# Zellner's g
g = p**2
n, p, g
X_aug.shape
new_matrix = np.zeros((15, 15))
for i in range(15):
for j in range(15):
new_matrix[i, j] = np.inner(X_aug[:, i], X_aug[:, j])
new_matrix.shape
np.equal(np.dot(X_aug.T, X_aug), new_matrix)
np.allclose(np.dot(X_aug.T, X_aug), new_matrix)
np.dot(X_aug.T, X_aug).shape
mu_beta = np.zeros(p)
with pm.Model() as m2d:
tau = pm.Gamma("tau", 0.01, 0.01)
variance = pm.Deterministic("variance", 1 / tau)
tau_matrix = at.fill(at.zeros((15, 15)), tau)
tau_beta = tau_matrix / g * dot(X_aug.T, X_aug)
beta = pm.MvNormal("beta", mu_beta, tau=tau_beta)
mu = dot(X_aug, beta)
pm.Normal("likelihood", mu=mu, tau=tau, observed=y)
# Bayesian R2 from fat2d.odc
sse = (n - p) * variance
cy = y - y.mean()
sst = dot(cy, cy)
br2 = pm.Deterministic("br2", 1 - sse / sst)
br2_adj = pm.Deterministic("br2_adj", 1 - (n - 1) * variance / sst)
trace = pm.sample(1000)
ppc = pm.sample_posterior_predictive(trace)
az.summary(trace, hdi_prob=0.95)
y_pred = np.array(ppc.posterior_predictive.likelihood.mean(axis=(0, 1)))
az.r2_score(y, y_pred)
###Output
_____no_output_____
###Markdown
Reading on g-priors: https://arxiv.org/abs/1702.01201 https://towardsdatascience.com/linear-regression-model-selection-through-zellners-g-prior-da5f74635a03 https://en.wikipedia.org/wiki/G-prior Original paper: Zellner, A. (1986). "On Assessing Prior Distributions and Bayesian Regression Analysis with g Prior Distributions". In Goel, P.; Zellner, A. (eds.). Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti. Studies in Bayesian Econometrics and Statistics. Vol. 6. New York: Elsevier. pp. 233–243. ISBN 978-0-444-87712-3.
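For reference, the prior used above can be written out explicitly (added note; this is the standard form of Zellner's g-prior and matches the precision matrix `tau_beta` built in the model, with $g = p^2$ in this notebook):$$\beta \mid \sigma^2 \sim \mathcal{N}\left(\mu_\beta,\; g\,\sigma^2 \left(X^\top X\right)^{-1}\right), \qquad \text{i.e. the prior precision is } \frac{1}{g\,\sigma^2} X^\top X.$$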
###Code
%watermark --iversions -v
###Output
Python implementation: CPython
Python version : 3.10.4
IPython version : 8.4.0
aesara: 2.6.6
arviz : 0.12.1
numpy : 1.22.4
pandas: 1.4.2
pymc : 4.0.0
|
research_notebooks/2) Style Transfer Generator (with L-BFGS).ipynb | ###Markdown
[How to add data from google drive to collab file](https://towardsdatascience.com/3-ways-to-load-csv-files-into-colab-7c14fcbdcb92)[How to prevent google collab from disconnecting ](https://medium.com/@shivamrawat_756/how-to-prevent-google-colab-from-disconnecting-717b88a128c0)
###Code
# Imports assumed by this notebook; the original import cell is not included in this
# excerpt (imwrite could equally come from cv2 instead of imageio).
import os
import time
import numpy as np
import cv2
from google.colab import drive
from google.colab.patches import cv2_imshow
from scipy.optimize import fmin_l_bfgs_b
from imageio import imwrite
from keras import backend as K
from keras.applications import vgg19
from keras.preprocessing.image import load_img, img_to_array

drive.mount('/content/drive')
target_imgs_list = os.listdir('/content/drive/My Drive/datasets/autoencoder_dataset')
target_names = [name.split('.')[0] for name in target_imgs_list]
target_abs_pathes = [os.path.join('/content/drive/My Drive/datasets/autoencoder_dataset', name +'.jpg') for name in target_names]
target_dict = dict(zip(target_names, target_abs_pathes ))
target_dict
key = 'picture'
target_image_path = target_dict[key]
style_reference_image_path = '/content/drive/My Drive/colab_notebooks/Keras_Fast_Style_Transfer/img/night.jpg'
try:
os.mkdir('/content/drive/My Drive/colab_notebooks/Keras_Fast_Style_Transfer/img_gen/' + key)
result_directory_path = '/content/drive/My Drive/colab_notebooks/Keras_Fast_Style_Transfer/img_gen/' + key + '/'
except:
result_directory_path = '/content/drive/My Drive/colab_notebooks/Keras_Fast_Style_Transfer/img_gen/' + key+ '/'
## uncomment to check images
## you can use 'load_img(file_path)' instead of cv2
img = cv2.imread(target_image_path)
cv2_imshow(img)
# img = cv2.imread(style_reference_image_path)
# cv2_imshow(img)
# Scale input image to VGG19 size
width, height = load_img(target_image_path).size
img_height = 400
img_width = int(width * img_height / height)
# Helper functions
def preprocess_image(image_path):
img = load_img(image_path, target_size = (img_height, img_width))
img = img_to_array(img)
img = np.expand_dims(img, axis = 0)
img = vgg19.preprocess_input(img)
return img
def deprocess_image(x):
    x[:, :, 0] += 103.939 # Zero-centering: add back the mean ImageNet pixel value per channel.
    x[:, :, 1] += 116.779 # This reverses the transformation applied by vgg19.preprocess_input.
x[:, :, 2] += 123.68
x = x[:, :, ::-1] #BGR -> RGB
x = np.clip(x, 0, 255).astype('uint8')
return x
###Output
_____no_output_____
###Markdown
Set up the VGG19 network. It takes as input a batch of three images:* the style reference image* the target image* a placeholder where the generated result will be placed.The style reference and target images are defined as constants.
###Code
target_image = K.constant(preprocess_image(target_image_path))
style_reference_image = K.constant(preprocess_image(style_reference_image_path))
combination_image = K.placeholder((1, img_height, img_width, 3))
input_tensor = K.concatenate([target_image, style_reference_image, combination_image], axis = 0)
model = vgg19.VGG19(input_tensor = input_tensor,
weights = 'imagenet',
include_top = False)
###Output
_____no_output_____
###Markdown
Define the loss function. It is made up of the content loss, the style loss, and the total variation loss.
###Code
def content_loss(base, combination):
'''
    Content loss function.
'''
return K.sum(K.square(combination - base))
def gram_matrix(x):
'''
    Helper function. Computes the Gram matrix (the matrix of feature correlations).
http://pmpu.ru/vf4/dets/gram
'''
# K.permute_dimensions - Permutes axes in a tensor. https://www.tensorflow.org/api_docs/python/tf/keras/backend/permute_dimensions
# K.batch_flatten - Turn a nD tensor into a 2D tensor with same 0th dimension. https://www.tensorflow.org/api_docs/python/tf/keras/backend/batch_flatten
features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
gram = K.dot(features, K.transpose(features))
return gram
def style_loss(style, combination):
'''
    Style loss function.
'''
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_height * img_width
return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
def total_variation_loss(x):
'''
    Total variation loss: encourages spatial continuity in the generated image
    and avoids a mosaic-like, pixelated result.
    It can be interpreted as a regularization term on the loss.
'''
    a = K.square(x[:, :img_height - 1, :img_width - 1, :] - x[:, 1:, :img_width - 1, :]) # Shift by 1 pixel along height/width
b = K.square(x[:, :img_height - 1, :img_width - 1, :] - x[:, :img_height - 1, 1:, :])
return K.sum(K.pow(a + b, 1.25))
###Output
_____no_output_____
###Markdown
Only one upper layer, block5_conv2, is used to compute the content loss. The style loss uses lower layers from each convolutional block. The total variation loss is added at the end.
###Code
# Defining the total loss
output_dict = dict([(layer.name, layer.output) for layer in model.layers])
content_layer = 'block5_conv2'
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1']
total_variation_weight = 1e-4
style_weight = 1.
content_weight = 0.025 # The larger this value, the more the generated image resembles the target image
loss = K.variable(0.)
# content loss part
layer_features = output_dict[content_layer]
target_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(target_image_features, combination_features)
# style loss part
for layer_name in style_layers:
layer_features = output_dict[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(style_layers)) * sl
# variation loss part
loss += total_variation_weight * total_variation_loss(combination_image)
###Output
WARNING:tensorflow:Variable += will be deprecated. Use variable.assign_add if you want assignment to the variable value or 'x = x + y' if you want a new python Tensor object.
###Markdown
Setting up the gradient descent process. Gradient descent is performed with the L-BFGS algorithm, as in the original paper. The algorithm is implemented in the SciPy package, but there are two caveats:1) It requires the loss value and the gradients to be passed as two separate functions.2) It can only be applied to flat vectors.To work around this, we write a wrapper class that computes the loss and gradient values simultaneously, returns the loss value on the first call, and caches the gradient values for the following call.
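As a minimal illustration of those two constraints (added sketch; the quadratic toy function and the values below are made up and are not part of the original notebook), `fmin_l_bfgs_b` expects a scalar-loss callable and a separate flat-gradient callable:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def toy_loss(x):
    # the loss must be returned as a scalar
    return float(np.sum((x - 3.0) ** 2))

def toy_grads(x):
    # the gradient must be returned as a flat float64 vector
    return (2.0 * (x - 3.0)).astype('float64')

x0 = np.zeros(5)  # the optimized variable is a flat vector
x_opt, min_val, info = fmin_l_bfgs_b(toy_loss, x0, fprime=toy_grads, maxfun=20)
```

The `Evaluator` wrapper defined in the next cell plays the same role for the style-transfer loss: one forward pass computes both the loss and the gradients, and the two callables simply hand them to `fmin_l_bfgs_b` in the order it expects.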
###Code
grads = K.gradients(loss, combination_image)[0]
fetch_loss_and_grads = K.function([combination_image], [loss, grads])
class Evaluator(object):
def __init__(self):
self.loss_value = None
        self.grad_values = None
def loss(self, x):
assert self.loss_value is None
x = x.reshape((1, img_height, img_width, 3))
outs = fetch_loss_and_grads([x])
loss_value = outs[0]
grad_values = outs[1].flatten().astype('float64')
self.loss_value = loss_value
self.grad_values = grad_values
return self.loss_value
def grads(self, x):
assert self.loss_value is not None
grad_values = np.copy(self.grad_values)
self.loss_value = None
self.grad_values = None
return grad_values
evaluator = Evaluator()
###Output
_____no_output_____
###Markdown
Running the whole process
###Code
result_prefix = 'result'
iterations = 150
x = preprocess_image(target_image_path)
x = x.flatten()
for i in range(iterations):
print('Start of iteration', i)
start_time = time.time()
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x, fprime = evaluator.grads, maxfun = 20)
print('Current loss value:', min_val)
img = x.copy().reshape((img_height, img_width, 3))
img = deprocess_image(img)
fname = result_directory_path + result_prefix + '_at_iteration_%d.png' % i
if i == 0 or (i+1) % 10 == 0:
imwrite(fname, img)
print('Image saved as', fname)
end_time = time.time()
print('Iteration %d complited in %ds' % (i, end_time - start_time))
###Output
_____no_output_____ |
notebooks_solutions/lab09_hierarchical_clustering.ipynb | ###Markdown
Use libraries not seen in class for a better grade, provided you know what you're doing and can explain them. If you copy-paste code, keep the source reference in.
###Code
from os.path import join
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram
sns.set()
###Output
_____no_output_____
###Markdown
Import preprocessed data
###Code
df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))
df.head()
df.columns
# Splitting feature names into groups
non_metric_features = df.columns[df.columns.str.startswith('x')]
pc_features = df.columns[df.columns.str.startswith('PC')]
metric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]
###Output
_____no_output_____
###Markdown
Hierarchical ClusteringWhat is hierarchical clustering? How does it work? How does it relate to the distance matrix we discussed at the beginning of the course? ;) Different types of linkage How are they computed?**Ward linkage**: minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. The distance matrix Characteristics:- *bottom-up approach*: each observation starts in its own cluster, and clusters are successively merged together- *greedy/local algorithm*: at each iteration tries to minimize the distance of cluster merging- *no reallocation*: after an observation is assigned to a cluster, it can no longer change- *deterministic*: you always get the same answer when you run it- *scalability*: can become *very slow* for a large number of observations How to apply Hierarchical Clustering?**Note: Which types of variables should be used for clustering?**
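As a minimal, self-contained illustration of the ideas above (added here; the toy points are made up and SciPy is used instead of scikit-learn purely to show the distance matrix explicitly):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

X_toy = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])  # two obvious groups
dist_matrix = squareform(pdist(X_toy, metric="euclidean"))  # full pairwise distance matrix
Z = linkage(X_toy, method="ward")                 # agglomerative merges under Ward's criterion
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the hierarchy into 2 clusters
```

Each row of `Z` records one merge (the two clusters joined, the distance at which they were joined, and the size of the new cluster), which is exactly the information the dendrogram below visualizes.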
###Code
# Performing HC
hclust = AgglomerativeClustering(linkage='ward', affinity='euclidean', n_clusters=5)
hc_labels = hclust.fit_predict(df[metric_features])
hc_labels
# Characterizing the clusters
df_concat = pd.concat((df, pd.Series(hc_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Defining the linkage method to choose: **We need to understand that:**$$SS_{t} = SS_{w} + SS_{b}$$---$$SS_{t} = \sum\limits_{i = 1}^n {{{({x_i} - \overline x )}^2}}$$$$SS_{w} = \sum\limits_{k = 1}^K {\sum\limits_{i = 1}^{{n_k}} {{{({x_i} - {{\overline x }_k})}^2}} }$$$$SS_{b} = \sum\limits_{k = 1}^K {{n_k}{{({{\overline x }_k} - \overline x )}^2}}$$, where $n$ is the total number of observations, $x_i$ is the vector of the $i^{th}$ observation, $\overline x$ is the centroid of the data, $K$ is the number of clusters, $n_k$ is the number of observations in the $k^{th}$ cluster and $\overline x_k$ is the centroid of the $k^{th}$ cluster.
###Code
# Computing SST
X = df[metric_features].values
sst = np.sum(np.square(X - X.mean(axis=0)), axis=0)
# Computing SSW
ssw_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssw_iter.append(np.sum(np.square(X_k - X_k.mean(axis=0)), axis=0))
ssw = np.sum(ssw_iter, axis=0)
# Computing SSB
ssb_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssb_iter.append(X_k.shape[0] * np.square(X_k.mean(axis=0) - X.mean(axis=0)))
ssb = np.sum(ssb_iter, axis=0)
# Verifying the formula
np.round(sst) == np.round((ssw + ssb))
np.sum(ssb)/np.sum(sst) #R^2 (commonly seen as R2)
def get_r2_hc(df, link_method, max_nclus, min_nclus=1, dist="euclidean"):
"""This function computes the R2 for a set of cluster solutions given by the application of a hierarchical method.
The R2 is a measure of the homogenity of a cluster solution. It is based on SSt = SSw + SSb and R2 = SSb/SSt.
Parameters:
df (DataFrame): Dataset to apply clustering
link_method (str): either "ward", "complete", "average", "single"
max_nclus (int): maximum number of clusters to compare the methods
min_nclus (int): minimum number of clusters to compare the methods. Defaults to 1.
dist (str): distance to use to compute the clustering solution. Must be a valid distance. Defaults to "euclidean".
Returns:
ndarray: R2 values for the range of cluster solutions
"""
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss # return sum of sum of squares of each df variable
sst = get_ss(df) # get total sum of squares
r2 = [] # where we will store the R2 metrics for each cluster solution
for i in range(min_nclus, max_nclus+1): # iterate over desired ncluster range
cluster = AgglomerativeClustering(n_clusters=i, affinity=dist, linkage=link_method)
hclabels = cluster.fit_predict(df) #get cluster labels
df_concat = pd.concat((df, pd.Series(hclabels, name='labels')), axis=1) # concat df with labels
ssw_labels = df_concat.groupby(by='labels').apply(get_ss) # compute ssw for each cluster labels
ssb = sst - np.sum(ssw_labels) # remember: SST = SSW + SSB
r2.append(ssb / sst) # save the R2 of the given cluster solution
return np.array(r2)
# Prepare input
hc_methods = ["ward", "complete", "average", "single"]
# Call function defined above to obtain the R2 statistic for each hc_method
max_nclus = 10
r2_hc_methods = np.vstack(
[
get_r2_hc(df=df[metric_features], link_method=link, max_nclus=max_nclus)
for link in hc_methods
]
).T
r2_hc_methods = pd.DataFrame(r2_hc_methods, index=range(1, max_nclus + 1), columns=hc_methods)
sns.set()
# Plot data
fig = plt.figure(figsize=(11,5))
sns.lineplot(data=r2_hc_methods, linewidth=2.5, markers=["o"]*4)
# Finalize the plot
fig.suptitle("R2 plot for various hierarchical methods", fontsize=21)
plt.gca().invert_xaxis() # invert x axis
plt.legend(title="HC methods", title_fontsize=11)
plt.xticks(range(1, max_nclus + 1))
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R2 metric", fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Defining the number of clusters: Where is the **first big jump** on the Dendrogram?
###Code
# setting distance_threshold=0 and n_clusters=None ensures we compute the full tree
linkage = 'ward'
distance = 'euclidean'
hclust = AgglomerativeClustering(linkage=linkage, affinity=distance, distance_threshold=0, n_clusters=None)
hclust.fit_predict(df[metric_features])
# Adapted from:
# https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py
# create the counts of samples under each node (number of points being merged)
counts = np.zeros(hclust.children_.shape[0])
n_samples = len(hclust.labels_)
# hclust.children_ contains the observation ids that are being merged together
# At the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i
for i, merge in enumerate(hclust.children_):
# track the number of observations in the current cluster being formed
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
# If this is True, then we are merging an observation
current_count += 1 # leaf node
else:
# Otherwise, we are merging a previously formed cluster
current_count += counts[child_idx - n_samples]
counts[i] = current_count
# the hclust.children_ is used to indicate the two points/clusters being merged (dendrogram's u-joins)
# the hclust.distances_ indicates the distance between the two points/clusters (height of the u-joins)
# the counts indicate the number of points being merged (dendrogram's x-axis)
linkage_matrix = np.column_stack(
[hclust.children_, hclust.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
sns.set()
fig = plt.figure(figsize=(11,5))
# The Dendrogram parameters need to be tuned
y_threshold = 100
dendrogram(linkage_matrix, truncate_mode='level', p=5, color_threshold=y_threshold, above_threshold_color='k')
plt.hlines(y_threshold, 0, 1000, colors="r", linestyles="dashed")
plt.title(f'Hierarchical Clustering - {linkage.title()}\'s Dendrogram', fontsize=21)
plt.xlabel('Number of points in node (or index of point if no parenthesis)')
plt.ylabel(f'{distance.title()} Distance', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Final Hierarchical clustering solution
###Code
# 4 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc4lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=4)
hc4_labels = hclust.fit_predict(df[metric_features])
# Characterizing the 4 clusters
df_concat = pd.concat((df, pd.Series(hc4_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
# 5 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc5lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=5)
hc5_labels = hc5lust.fit_predict(df[metric_features])
# Characterizing the 5 clusters
df_concat = pd.concat((df, pd.Series(hc5_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Import preprocessed data
###Code
df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))
df.head()
df.columns
# Splitting feature names into groups
non_metric_features = df.columns[df.columns.str.startswith('x')]
pc_features = df.columns[df.columns.str.startswith('PC')]
metric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]
###Output
_____no_output_____
###Markdown
Hierarchical ClusteringWhat is hierarchical clustering? How does it work? How does it relate to the distance matrix we discussed at the beggining of the course? ;) Different types of linkage How are they computed?**Ward linkage**: minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. The distance matrix Characteristics:- *bottom up approach*: each observation starts in its own cluster, and clusters are successively merged together- *greedy/local algorithm*: at each iteration tries to minimize the distance of cluster merging- *no realocation*: after an observation is assigned to a cluster, it can no longer change- *deterministic*: you always get the same answer when you run it- *scalability*: can become *very slow* for a large number of observations How to apply Hierarchical Clustering?**Note: Which types of variables should be used for clustering?**
###Code
# Performing HC
hclust = AgglomerativeClustering(linkage='ward', affinity='euclidean', n_clusters=5)
hc_labels = hclust.fit_predict(df[metric_features])
hc_labels
# Characterizing the clusters
df_concat = pd.concat((df, pd.Series(hc_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Defining the linkage method to choose: **We need to understand that:**$$SS_{t} = SS_{w} + SS_{b}$$---$$SS_{t} = \sum\limits_{i = 1}^n {{{({x_i} - \overline x )}^2}}$$$$SS_{w} = \sum\limits_{k = 1}^K {\sum\limits_{i = 1}^{{n_k}} {{{({x_i} - {{\overline x }_k})}^2}} }$$$$SS_{b} = \sum\limits_{k = 1}^K {{n_k}{{({{\overline x }_k} - \overline x )}^2}}$$, where $n$ is the total number of observations, $x_i$ is the vector of the $i^{th}$ observation, $\overline x$ is the centroid of the data, $K$ is the number of clusters, $n_k$ is the number of observations in the $k^{th}$ cluster and $\overline x_k$ is the centroid of the $k^{th}$ cluster.
###Code
# Computing SST
X = df[metric_features].values
sst = np.sum(np.square(X - X.mean(axis=0)), axis=0)
# Computing SSW
ssw_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssw_iter.append(np.sum(np.square(X_k - X_k.mean(axis=0)), axis=0))
ssw = np.sum(ssw_iter, axis=0)
# Computing SSB
ssb_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssb_iter.append(X_k.shape[0] * np.square(X_k.mean(axis=0) - X.mean(axis=0)))
ssb = np.sum(ssb_iter, axis=0)
# Verifying the formula
np.round(sst) == np.round((ssw + ssb))
def get_r2_hc(df, link_method, max_nclus, min_nclus=1, dist="euclidean"):
"""This function computes the R2 for a set of cluster solutions given by the application of a hierarchical method.
The R2 is a measure of the homogenity of a cluster solution. It is based on SSt = SSw + SSb and R2 = SSb/SSt.
Parameters:
df (DataFrame): Dataset to apply clustering
link_method (str): either "ward", "complete", "average", "single"
max_nclus (int): maximum number of clusters to compare the methods
min_nclus (int): minimum number of clusters to compare the methods. Defaults to 1.
dist (str): distance to use to compute the clustering solution. Must be a valid distance. Defaults to "euclidean".
Returns:
ndarray: R2 values for the range of cluster solutions
"""
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss # return sum of sum of squares of each df variable
sst = get_ss(df) # get total sum of squares
r2 = [] # where we will store the R2 metrics for each cluster solution
for i in range(min_nclus, max_nclus+1): # iterate over desired ncluster range
cluster = AgglomerativeClustering(n_clusters=i, affinity=dist, linkage=link_method)
hclabels = cluster.fit_predict(df) #get cluster labels
df_concat = pd.concat((df, pd.Series(hclabels, name='labels')), axis=1) # concat df with labels
ssw_labels = df_concat.groupby(by='labels').apply(get_ss) # compute ssw for each cluster labels
ssb = sst - np.sum(ssw_labels) # remember: SST = SSW + SSB
r2.append(ssb / sst) # save the R2 of the given cluster solution
return np.array(r2)
# Prepare input
hc_methods = ["ward", "complete", "average", "single"]
# Call function defined above to obtain the R2 statistic for each hc_method
max_nclus = 10
r2_hc_methods = np.vstack(
[
get_r2_hc(df=df[metric_features], link_method=link, max_nclus=max_nclus)
for link in hc_methods
]
).T
r2_hc_methods = pd.DataFrame(r2_hc_methods, index=range(1, max_nclus + 1), columns=hc_methods)
sns.set()
# Plot data
fig = plt.figure(figsize=(11,5))
sns.lineplot(data=r2_hc_methods, linewidth=2.5, markers=["o"]*4)
# Finalize the plot
fig.suptitle("R2 plot for various hierarchical methods", fontsize=21)
plt.gca().invert_xaxis() # invert x axis
plt.legend(title="HC methods", title_fontsize=11)
plt.xticks(range(1, max_nclus + 1))
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R2 metric", fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Defining the number of clusters: Where is the **first big jump** on the Dendrogram?
###Code
# setting distance_threshold=0 and n_clusters=None ensures we compute the full tree
linkage = 'ward'
distance = 'euclidean'
hclust = AgglomerativeClustering(linkage=linkage, affinity=distance, distance_threshold=0, n_clusters=None)
hclust.fit_predict(df[metric_features])
# Adapted from:
# https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py
# create the counts of samples under each node (number of points being merged)
counts = np.zeros(hclust.children_.shape[0])
n_samples = len(hclust.labels_)
# hclust.children_ contains the observation ids that are being merged together
# At the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i
for i, merge in enumerate(hclust.children_):
# track the number of observations in the current cluster being formed
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
# If this is True, then we are merging an observation
current_count += 1 # leaf node
else:
# Otherwise, we are merging a previously formed cluster
current_count += counts[child_idx - n_samples]
counts[i] = current_count
# the hclust.children_ is used to indicate the two points/clusters being merged (dendrogram's u-joins)
# the hclust.distances_ indicates the distance between the two points/clusters (height of the u-joins)
# the counts indicate the number of points being merged (dendrogram's x-axis)
linkage_matrix = np.column_stack(
[hclust.children_, hclust.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
sns.set()
fig = plt.figure(figsize=(11,5))
# The Dendrogram parameters need to be tuned
y_threshold = 100
dendrogram(linkage_matrix, truncate_mode='level', p=5, color_threshold=y_threshold, above_threshold_color='k')
plt.hlines(y_threshold, 0, 1000, colors="r", linestyles="dashed")
plt.title(f'Hierarchical Clustering - {linkage.title()}\'s Dendrogram', fontsize=21)
plt.xlabel('Number of points in node (or index of point if no parenthesis)')
plt.ylabel(f'{distance.title()} Distance', fontsize=13)
plt.show()
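# The y_threshold above implies a number of flat clusters. A quick way to see it
# (a minimal sketch using SciPy's fcluster on the linkage matrix built above):
from scipy.cluster.hierarchy import fcluster
n_clusters_at_threshold = len(np.unique(fcluster(linkage_matrix, t=y_threshold, criterion='distance')))
print('clusters obtained when cutting at', y_threshold, ':', n_clusters_at_threshold)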
###Output
_____no_output_____
###Markdown
Final Hierarchical clustering solution
###Code
# 4 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc4lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=4)
hc4_labels = hc4lust.fit_predict(df[metric_features])
# Characterizing the 4 clusters
df_concat = pd.concat((df, pd.Series(hc4_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
# 5 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc5lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=5)
hc5_labels = hc5lust.fit_predict(df[metric_features])
# Characterizing the 5 clusters
df_concat = pd.concat((df, pd.Series(hc5_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
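# A quick, minimal way to compare the two solutions (assumes hc4_labels and
# hc5_labels from the cells above): look at how many observations fall in each cluster.
print(pd.Series(hc4_labels).value_counts().sort_index())
print(pd.Series(hc5_labels).value_counts().sort_index())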
###Output
_____no_output_____
###Markdown
Import preprocessed data
###Code
df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))
df.head()
df.columns
# Splitting feature names into groups
non_metric_features = df.columns[df.columns.str.startswith('x')]
pc_features = df.columns[df.columns.str.startswith('PC')]
metric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]
###Output
_____no_output_____
###Markdown
Hierarchical ClusteringWhat is hierarchical clustering? How does it work? How does it relate to the distance matrix we discussed at the beginning of the course? ;) Different types of linkage How are they computed?**Ward linkage**: minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. The distance matrix Characteristics:- *bottom-up approach*: each observation starts in its own cluster, and clusters are successively merged together- *greedy/local algorithm*: at each iteration it tries to minimize the distance of the cluster merge- *no reallocation*: after an observation is assigned to a cluster, it can no longer change- *deterministic*: you always get the same answer when you run it- *scalability*: can become *very slow* for a large number of observations How to apply Hierarchical Clustering?**Note: Which types of variables should be used for clustering?**
###Code
# Performing HC
hclust = AgglomerativeClustering(linkage='ward', affinity='euclidean', n_clusters=5)
hc_labels = hclust.fit_predict(df[metric_features])
hc_labels
# Characterizing the clusters
df_concat = pd.concat((df, pd.Series(hc_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Defining the linkage method to choose: **We need to understand that:**$$SS_{t} = SS_{w} + SS_{b}$$---$$SS_{t} = \sum\limits_{i = 1}^n {{{({x_i} - \overline x )}^2}}$$$$SS_{w} = \sum\limits_{k = 1}^K {\sum\limits_{i = 1}^{{n_k}} {{{({x_i} - {{\overline x }_k})}^2}} }$$$$SS_{b} = \sum\limits_{k = 1}^K {{n_k}{{({{\overline x }_k} - \overline x )}^2}}$$, where $n$ is the total number of observations, $x_i$ is the vector of the $i^{th}$ observation, $\overline x$ is the centroid of the data, $K$ is the number of clusters, $n_k$ is the number of observations in the $k^{th}$ cluster and $\overline x_k$ is the centroid of the $k^{th}$ cluster.
###Code
# Computing SST
X = df[metric_features].values
print(X)
sst = np.sum(np.square(X - X.mean(axis=0)), axis=0)
# Computing SSW
ssw_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssw_iter.append(np.sum(np.square(X_k - X_k.mean(axis=0)), axis=0))
ssw = np.sum(ssw_iter, axis=0)
# Computing SSB
ssb_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssb_iter.append(X_k.shape[0] * np.square(X_k.mean(axis=0) - X.mean(axis=0)))
ssb = np.sum(ssb_iter, axis=0)
# Verifying the formula
np.round(sst) == np.round((ssw + ssb))
def get_r2_hc(df, link_method, max_nclus, min_nclus=1, dist="euclidean"):
"""This function computes the R2 for a set of cluster solutions given by the application of a hierarchical method.
The R2 is a measure of the homogeneity of a cluster solution. It is based on SSt = SSw + SSb and R2 = SSb/SSt.
Parameters:
df (DataFrame): Dataset to apply clustering
link_method (str): either "ward", "complete", "average", "single"
max_nclus (int): maximum number of clusters to compare the methods
min_nclus (int): minimum number of clusters to compare the methods. Defaults to 1.
dist (str): distance to use to compute the clustering solution. Must be a valid distance. Defaults to "euclidean".
Returns:
ndarray: R2 values for the range of cluster solutions
"""
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss # return sum of sum of squares of each df variable
sst = get_ss(df) # get total sum of squares
r2 = [] # where we will store the R2 metrics for each cluster solution
for i in range(min_nclus, max_nclus+1): # iterate over desired ncluster range
cluster = AgglomerativeClustering(n_clusters=i, affinity=dist, linkage=link_method)
hclabels = cluster.fit_predict(df) #get cluster labels
df_concat = pd.concat((df, pd.Series(hclabels, name='labels')), axis=1) # concat df with labels
ssw_labels = df_concat.groupby(by='labels').apply(get_ss) # compute ssw for each cluster labels
ssb = sst - np.sum(ssw_labels) # remember: SST = SSW + SSB
r2.append(ssb / sst) # save the R2 of the given cluster solution
return np.array(r2)
# Prepare input
hc_methods = ["ward", "complete", "average", "single"]
# Call function defined above to obtain the R2 statistic for each hc_method
max_nclus = 10
r2_hc_methods = np.vstack(
[
get_r2_hc(df=df[metric_features], link_method=link, max_nclus=max_nclus)
for link in hc_methods
]
).T
r2_hc_methods = pd.DataFrame(r2_hc_methods, index=range(1, max_nclus + 1), columns=hc_methods)
sns.set()
# Plot data
fig = plt.figure(figsize=(11,5))
sns.lineplot(data=r2_hc_methods, linewidth=2.5, markers=["o"]*4)
# Finalize the plot
fig.suptitle("R2 plot for various hierarchical methods", fontsize=21)
plt.gca().invert_xaxis() # invert x axis
plt.legend(title="HC methods", title_fontsize=11)
plt.xticks(range(1, max_nclus + 1))
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R2 metric", fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Defining the number of clusters: Where is the **first big jump** on the Dendrogram?
###Code
# setting distance_threshold=0 and n_clusters=None ensures we compute the full tree
linkage = 'ward'
distance = 'euclidean'
hclust = AgglomerativeClustering(linkage=linkage, affinity=distance, distance_threshold=0, n_clusters=None)
hclust.fit_predict(df[metric_features])
# Adapted from:
# https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py
# create the counts of samples under each node (number of points being merged)
counts = np.zeros(hclust.children_.shape[0])
n_samples = len(hclust.labels_)
# hclust.children_ contains the observation ids that are being merged together
# At the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i
for i, merge in enumerate(hclust.children_):
# track the number of observations in the current cluster being formed
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
# If this is True, then we are merging an observation
current_count += 1 # leaf node
else:
# Otherwise, we are merging a previously formed cluster
current_count += counts[child_idx - n_samples]
counts[i] = current_count
# the hclust.children_ is used to indicate the two points/clusters being merged (dendrogram's u-joins)
# the hclust.distances_ indicates the distance between the two points/clusters (height of the u-joins)
# the counts indicate the number of points being merged (dendrogram's x-axis)
linkage_matrix = np.column_stack(
[hclust.children_, hclust.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
sns.set()
fig = plt.figure(figsize=(11,5))
# The Dendrogram parameters need to be tuned
y_threshold = 100
dendrogram(linkage_matrix, truncate_mode='level', p=5, color_threshold=y_threshold, above_threshold_color='k')
plt.hlines(y_threshold, 0, 1000, colors="r", linestyles="dashed")
plt.title(f'Hierarchical Clustering - {linkage.title()}\'s Dendrogram', fontsize=21)
plt.xlabel('Number of points in node (or index of point if no parenthesis)')
plt.ylabel(f'{distance.title()} Distance', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Final Hierarchical clustering solution
###Code
# 4 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc4lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=4)
hc4_labels = hc4lust.fit_predict(df[metric_features])
# Characterizing the 4 clusters
df_concat = pd.concat((df, pd.Series(hc4_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
# 5 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc5lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=5)
hc5_labels = hc5lust.fit_predict(df[metric_features])
# Characterizing the 5 clusters
df_concat = pd.concat((df, pd.Series(hc5_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Import preprocessed data
###Code
df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))
df.head()
df.columns
# Splitting feature names into groups
non_metric_features = df.columns[df.columns.str.startswith('x')]
pc_features = df.columns[df.columns.str.startswith('PC')]
metric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]
###Output
_____no_output_____
###Markdown
Hierarchical ClusteringWhat is hierarchical clustering? How does it work? How does it relate to the distance matrix we discussed at the beginning of the course? ;) Different types of linkage How are they computed?**Ward linkage**: minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. The distance matrix Characteristics:- *bottom-up approach*: each observation starts in its own cluster, and clusters are successively merged together- *greedy/local algorithm*: at each iteration it tries to minimize the distance of the cluster merge- *no reallocation*: after an observation is assigned to a cluster, it can no longer change- *deterministic*: you always get the same answer when you run it- *scalability*: can become *very slow* for a large number of observations How to apply Hierarchical Clustering?**Note: Which types of variables should be used for clustering?**
###Code
# Performing HC
hclust = AgglomerativeClustering(linkage='ward', affinity='euclidean', n_clusters=5)
hc_labels = hclust.fit_predict(df[metric_features])
hc_labels
# Characterizing the clusters
df_concat = pd.concat((df, pd.Series(hc_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Defining the linkage method to choose: **We need to understand that:**$$SS_{t} = SS_{w} + SS_{b}$$---$$SS_{t} = \sum\limits_{i = 1}^n {{{({x_i} - \overline x )}^2}}$$$$SS_{w} = \sum\limits_{k = 1}^K {\sum\limits_{i = 1}^{{n_k}} {{{({x_i} - {{\overline x }_k})}^2}} }$$$$SS_{b} = \sum\limits_{k = 1}^K {{n_k}{{({{\overline x }_k} - \overline x )}^2}}$$, where $n$ is the total number of observations, $x_i$ is the vector of the $i^{th}$ observation, $\overline x$ is the centroid of the data, $K$ is the number of clusters, $n_k$ is the number of observations in the $k^{th}$ cluster and $\overline x_k$ is the centroid of the $k^{th}$ cluster.
###Code
# Computing SST
X = df[metric_features].values
sst = np.sum(np.square(X - X.mean(axis=0)), axis=0)
# Computing SSW
ssw_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssw_iter.append(np.sum(np.square(X_k - X_k.mean(axis=0)), axis=0))
ssw = np.sum(ssw_iter, axis=0)
# Computing SSB
ssb_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssb_iter.append(X_k.shape[0] * np.square(X_k.mean(axis=0) - X.mean(axis=0)))
ssb = np.sum(ssb_iter, axis=0)
# Verifying the formula
np.round(sst) == np.round((ssw + ssb))
def get_r2_hc(df, link_method, max_nclus, min_nclus=1, dist="euclidean"):
"""This function computes the R2 for a set of cluster solutions given by the application of a hierarchical method.
The R2 is a measure of the homogeneity of a cluster solution. It is based on SSt = SSw + SSb and R2 = SSb/SSt.
Parameters:
df (DataFrame): Dataset to apply clustering
link_method (str): either "ward", "complete", "average", "single"
max_nclus (int): maximum number of clusters to compare the methods
min_nclus (int): minimum number of clusters to compare the methods. Defaults to 1.
dist (str): distance to use to compute the clustering solution. Must be a valid distance. Defaults to "euclidean".
Returns:
ndarray: R2 values for the range of cluster solutions
"""
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss # return sum of sum of squares of each df variable
sst = get_ss(df) # get total sum of squares
r2 = [] # where we will store the R2 metrics for each cluster solution
for i in range(min_nclus, max_nclus+1): # iterate over desired ncluster range
cluster = AgglomerativeClustering(n_clusters=i, affinity=dist, linkage=link_method)
hclabels = cluster.fit_predict(df) #get cluster labels
df_concat = pd.concat((df, pd.Series(hclabels, name='labels')), axis=1) # concat df with labels
ssw_labels = df_concat.groupby(by='labels').apply(get_ss) # compute ssw for each cluster labels
ssb = sst - np.sum(ssw_labels) # remember: SST = SSW + SSB
r2.append(ssb / sst) # save the R2 of the given cluster solution
return np.array(r2)
# Prepare input
hc_methods = ["ward", "complete", "average", "single"]
# Call function defined above to obtain the R2 statistic for each hc_method
max_nclus = 10
r2_hc_methods = np.vstack(
[
get_r2_hc(df=df[metric_features], link_method=link, max_nclus=max_nclus)
for link in hc_methods
]
).T
r2_hc_methods = pd.DataFrame(r2_hc_methods, index=range(1, max_nclus + 1), columns=hc_methods)
sns.set()
# Plot data
fig = plt.figure(figsize=(11,5))
sns.lineplot(data=r2_hc_methods, linewidth=2.5, markers=["o"]*4)
# Finalize the plot
fig.suptitle("R2 plot for various hierarchical methods", fontsize=21)
plt.gca().invert_xaxis() # invert x axis
plt.legend(title="HC methods", title_fontsize=11)
plt.xticks(range(1, max_nclus + 1))
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R2 metric", fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Defining the number of clusters: Where is the **first big jump** on the Dendrogram?
###Code
# setting distance_threshold=0 and n_clusters=None ensures we compute the full tree
linkage = 'ward'
distance = 'euclidean'
hclust = AgglomerativeClustering(linkage=linkage, affinity=distance, distance_threshold=0, n_clusters=None)
hclust.fit_predict(df[metric_features])
# Adapted from:
# https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py
# create the counts of samples under each node (number of points being merged)
counts = np.zeros(hclust.children_.shape[0])
n_samples = len(hclust.labels_)
# hclust.children_ contains the observation ids that are being merged together
# At the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i
for i, merge in enumerate(hclust.children_):
# track the number of observations in the current cluster being formed
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
# If this is True, then we are merging an observation
current_count += 1 # leaf node
else:
# Otherwise, we are merging a previously formed cluster
current_count += counts[child_idx - n_samples]
counts[i] = current_count
# the hclust.children_ is used to indicate the two points/clusters being merged (dendrogram's u-joins)
# the hclust.distances_ indicates the distance between the two points/clusters (height of the u-joins)
# the counts indicate the number of points being merged (dendrogram's x-axis)
linkage_matrix = np.column_stack(
[hclust.children_, hclust.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
sns.set()
fig = plt.figure(figsize=(11,5))
# The Dendrogram parameters need to be tuned
y_threshold = 100
dendrogram(linkage_matrix, truncate_mode='level', p=5, color_threshold=y_threshold, above_threshold_color='k')
plt.hlines(y_threshold, 0, 1000, colors="r", linestyles="dashed")
plt.title(f'Hierarchical Clustering - {linkage.title()}\'s Dendrogram', fontsize=21)
plt.xlabel('Number of points in node (or index of point if no parenthesis)')
plt.ylabel(f'{distance.title()} Distance', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Final Hierarchical clustering solution
###Code
# 4 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc4lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=4)
hc4_labels = hc4lust.fit_predict(df[metric_features])
# Characterizing the 4 clusters
df_concat = pd.concat((df, pd.Series(hc4_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
# 5 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc5lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=5)
hc5_labels = hc5lust.fit_predict(df[metric_features])
# Characterizing the 5 clusters
df_concat = pd.concat((df, pd.Series(hc5_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Import preprocessed data
###Code
df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))
df.head()
df.columns
# Splitting feature names into groups
non_metric_features = df.columns[df.columns.str.startswith('x')]
pc_features = df.columns[df.columns.str.startswith('PC')]
metric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]
###Output
_____no_output_____
###Markdown
Hierarchical ClusteringWhat is hierarchical clustering? How does it work? How does it relate to the distance matrix we discussed at the beginning of the course? ;) Different types of linkage How are they computed?**Ward linkage**: minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. The distance matrix Characteristics:- *bottom-up approach*: each observation starts in its own cluster, and clusters are successively merged together- *greedy/local algorithm*: at each iteration it tries to minimize the distance of the cluster merge- *no reallocation*: after an observation is assigned to a cluster, it can no longer change- *deterministic*: you always get the same answer when you run it- *scalability*: can become *very slow* for a large number of observations How to apply Hierarchical Clustering?**Note: Which types of variables should be used for clustering?**
###Code
# Performing HC
hclust = AgglomerativeClustering(linkage='ward', affinity='euclidean', n_clusters=5)
hc_labels = hclust.fit_predict(df[metric_features])
hc_labels
# Characterizing the clusters
df_concat = pd.concat((df, pd.Series(hc_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____
###Markdown
Defining the linkage method to choose: **We need to understand that:**$$SS_{t} = SS_{w} + SS_{b}$$---$$SS_{t} = \sum\limits_{i = 1}^n {{{({x_i} - \overline x )}^2}}$$$$SS_{w} = \sum\limits_{k = 1}^K {\sum\limits_{i = 1}^{{n_k}} {{{({x_i} - {{\overline x }_k})}^2}} }$$$$SS_{b} = \sum\limits_{k = 1}^K {{n_k}{{({{\overline x }_k} - \overline x )}^2}}$$, where $n$ is the total number of observations, $x_i$ is the vector of the $i^{th}$ observation, $\overline x$ is the centroid of the data, $K$ is the number of clusters, $n_k$ is the number of observations in the $k^{th}$ cluster and $\overline x_k$ is the centroid of the $k^{th}$ cluster.
###Code
# Computing SST
X = df[metric_features].values
sst = np.sum(np.square(X - X.mean(axis=0)), axis=0)
# Computing SSW
ssw_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssw_iter.append(np.sum(np.square(X_k - X_k.mean(axis=0)), axis=0))
ssw = np.sum(ssw_iter, axis=0)
# Computing SSB
ssb_iter = []
for i in np.unique(hc_labels):
X_k = X[hc_labels == i]
ssb_iter.append(X_k.shape[0] * np.square(X_k.mean(axis=0) - X.mean(axis=0)))
ssb = np.sum(ssb_iter, axis=0)
# Verifying the formula
np.round(sst) == np.round((ssw + ssb))
def get_r2_hc(df, link_method, max_nclus, min_nclus=1, dist="euclidean"):
"""This function computes the R2 for a set of cluster solutions given by the application of a hierarchical method.
The R2 is a measure of the homogeneity of a cluster solution. It is based on SSt = SSw + SSb and R2 = SSb/SSt.
Parameters:
df (DataFrame): Dataset to apply clustering
link_method (str): either "ward", "complete", "average", "single"
max_nclus (int): maximum number of clusters to compare the methods
min_nclus (int): minimum number of clusters to compare the methods. Defaults to 1.
dist (str): distance to use to compute the clustering solution. Must be a valid distance. Defaults to "euclidean".
Returns:
ndarray: R2 values for the range of cluster solutions
"""
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss # return sum of sum of squares of each df variable
sst = get_ss(df) # get total sum of squares
r2 = [] # where we will store the R2 metrics for each cluster solution
for i in range(min_nclus, max_nclus+1): # iterate over desired ncluster range
cluster = AgglomerativeClustering(n_clusters=i, affinity=dist, linkage=link_method)
hclabels = cluster.fit_predict(df) #get cluster labels
df_concat = pd.concat((df, pd.Series(hclabels, name='labels')), axis=1) # concat df with labels
ssw_labels = df_concat.groupby(by='labels').apply(get_ss) # compute ssw for each cluster labels
ssb = sst - np.sum(ssw_labels) # remember: SST = SSW + SSB
r2.append(ssb / sst) # save the R2 of the given cluster solution
return np.array(r2)
# Prepare input
hc_methods = ["ward", "complete", "average", "single"]
# Call function defined above to obtain the R2 statistic for each hc_method
max_nclus = 10
r2_hc_methods = np.vstack(
[
get_r2_hc(df=df[metric_features], link_method=link, max_nclus=max_nclus)
for link in hc_methods
]
).T
r2_hc_methods = pd.DataFrame(r2_hc_methods, index=range(1, max_nclus + 1), columns=hc_methods)
sns.set()
# Plot data
fig = plt.figure(figsize=(11,5))
sns.lineplot(data=r2_hc_methods, linewidth=2.5, markers=["o"]*4)
# Finalize the plot
fig.suptitle("R2 plot for various hierarchical methods", fontsize=21)
plt.gca().invert_xaxis() # invert x axis
plt.legend(title="HC methods", title_fontsize=11)
plt.xticks(range(1, max_nclus + 1))
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R2 metric", fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Defining the number of clusters: Where is the **first big jump** on the Dendrogram?
###Code
# setting distance_threshold=0 and n_clusters=None ensures we compute the full tree
linkage = 'ward'
distance = 'euclidean'
hclust = AgglomerativeClustering(linkage=linkage, affinity=distance, distance_threshold=0, n_clusters=None)
hclust.fit_predict(df[metric_features])
# Adapted from:
# https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py
# create the counts of samples under each node (number of points being merged)
counts = np.zeros(hclust.children_.shape[0])
n_samples = len(hclust.labels_)
# hclust.children_ contains the observation ids that are being merged together
# At the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i
for i, merge in enumerate(hclust.children_):
# track the number of observations in the current cluster being formed
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
# If this is True, then we are merging an observation
current_count += 1 # leaf node
else:
# Otherwise, we are merging a previously formed cluster
current_count += counts[child_idx - n_samples]
counts[i] = current_count
# the hclust.children_ is used to indicate the two points/clusters being merged (dendrogram's u-joins)
# the hclust.distances_ indicates the distance between the two points/clusters (height of the u-joins)
# the counts indicate the number of points being merged (dendrogram's x-axis)
linkage_matrix = np.column_stack(
[hclust.children_, hclust.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
sns.set()
fig = plt.figure(figsize=(11,5))
# The Dendrogram parameters need to be tuned
y_threshold = 100
dendrogram(linkage_matrix, truncate_mode='level', p=5, color_threshold=y_threshold, above_threshold_color='k')
plt.hlines(y_threshold, 0, 1000, colors="r", linestyles="dashed")
plt.title(f'Hierarchical Clustering - {linkage.title()}\'s Dendrogram', fontsize=21)
plt.xlabel('Number of points in node (or index of point if no parenthesis)')
plt.ylabel(f'{distance.title()} Distance', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
Final Hierarchical clustering solution
###Code
# 4 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc4lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=4)
hc4_labels = hc4lust.fit_predict(df[metric_features])
# Characterizing the 4 clusters
df_concat = pd.concat((df, pd.Series(hc4_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
# 5 cluster solution
linkage = 'ward'
distance = 'euclidean'
hc5lust = AgglomerativeClustering(linkage=linkage, affinity=distance, n_clusters=5)
hc5_labels = hc5lust.fit_predict(df[metric_features])
# Characterizing the 5 clusters
df_concat = pd.concat((df, pd.Series(hc5_labels, name='labels')), axis=1)
df_concat.groupby('labels').mean()
###Output
_____no_output_____ |
15_tipos_list_y_tuple.ipynb | ###Markdown
[](https://pythonista.io) Objects of type ```list``` and ```tuple```. Objects of type ```tuple``` and ```list``` are:* Collections. That is, they can contain objects, also called elements.* Ordered. That is, they preserve the order in which each element is entered.* Numerically indexable. That is, every element they contain can be accessed through a numeric index.The difference between these data types is that ```list``` objects are mutable, while ```tuple``` objects are immutable. Objects of type ```list```.Objects of type ```list``` are ordered collections of objects, regardless of the type of each one, which are numerically indexable and mutable.They are defined by enclosing in square brackets ```[``` ```]``` a sequence of objects separated by commas ```,```.The syntax is as follows:```[<object 1>, <object 2>, ..., <object n>]```Note that ```list``` objects are not equivalent to the matrices of other programming languages. **Examples:**
###Code
[1, 2, 3, 4, 5]
['gato', 'perro', True]
[['automóvil', 50, 'gasolina'], ['autobús', 300, 'diesel']]
[]
###Output
_____no_output_____
###Markdown
Modifying an element of a ```list``` object.The content of an element in a ```list``` object can be modified using the assignment operator ```=```. **Examples:**
###Code
lista = [1, 2, 3, 4, 5, 6, 7, 8]
lista[4] = "hola"
lista
lista[-3] = True
lista
lista[-9] = 0
[1, 2, 3, 4, 5, 6][3]
###Output
_____no_output_____
###Markdown
Removing an element from a ```list``` object.To remove an element from a ```list``` object, the ```del``` statement is used. The element identified by its position is removed and, if any exist, the indexes of the elements to the right of the removed element are shifted. The syntax is as follows:```del <list>[<index>]``` **Examples:**
###Code
lista = [1, 2, 3, 4, 5]
lista[1]
del lista[1]
lista
lista[1]
###Output
_____no_output_____
###Markdown
Removing a range of elements from ```list``` objects.A range of elements of a ```list``` object can be removed using the ```del``` statement with the following syntax.```del <list>[m:n]```Where: * ```m``` is the lower index of the range.* ```n``` is the upper index of the range.This removes the elements in the range going from ```m``` up to one before ```n```. **Examples:**
###Code
lista = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
del lista[-3:-1]
lista
###Output
_____no_output_____
###Markdown
Replacing a range of elements in ```list``` objects.One or more objects within a range of indexes of a ```list``` object can be replaced using the following syntax:```<list>[m:n] = <object 1>, <object 2>, ..., <object n>```Where: * ```m``` is the lower index of the range.* ```n``` is the upper index of the range.* If the number of replacement objects is smaller than the defined range, the objects left without a replacement are removed.* If the number of replacement objects is larger than the defined range, the additional objects are inserted and the rest are shifted to the right. **Examples:**
###Code
lista = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
lista[2:5] = "tres", "cuatro", "cinco"
lista
lista[5:]= "seis", 7.
lista
lista[5:6] = 6.0, "siete", "ocho", "nueve", "diez"
lista
###Output
_____no_output_____
###Markdown
Methods of ```list``` objects. The ```append()``` method.This method inserts the object given as an argument at the end of the ```list``` object.```<list>.append(<object>)``` **Example:**
###Code
lista = [1, 2, 3, 4, 5, 6]
lista.append("siete")
lista
###Output
_____no_output_____
###Markdown
The ```insert()``` method.This method inserts an object into the ```list``` object at the given index. The elements located from that index to the end of the object are shifted to the right.```<list>.insert(<index>, <object>)``` **Example:**
###Code
lista = [1, 2, 3, 4, 5, 6]
lista.insert(3, True)
lista
###Output
_____no_output_____
###Markdown
The ```remove()``` method.This method searches from left to right for the object given as an argument and removes the first matching element.If the object is not found, a ```ValueError``` exception is raised.```<list>.remove(<object>)``` **Examples:**
###Code
lista = [1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
lista.remove(3)
lista
lista.remove(5)
lista
lista.remove(5)
###Output
_____no_output_____
###Markdown
The ```reverse()``` method.This method reverses the order of the elements of the ```list``` object.```<list>.reverse()``` **Example:**
###Code
lista = [1, 2, 3, 4, 5]
lista.reverse()
lista
###Output
_____no_output_____
###Markdown
The ```sort()``` method.This method sorts, when possible, the elements the object contains. ```<list>.sort(reverse=<bool>)```* If the ```reverse=True``` argument is given, the sorting is done in descending order. * The default value is ```reverse=False```.* If the elements cannot be compared with each other, a ```TypeError``` exception is raised. **Examples:**
###Code
lista = [15, True, 33, False, 12.35]
lista.sort()
lista
lista.sort(reverse=True)
lista
[12, True, 'perro', None].sort()
###Output
_____no_output_____
###Markdown
The ```pop()``` method.This method returns and removes the element at the index given as an argument.```<list>.pop(<index>)```* If no index is given, the method returns and removes the element at the right end of the ```list``` object.* If the ```list``` object is empty, an ```IndexError``` exception is raised. **Examples:**
###Code
lista = [15, True, 33, False, 12]
lista.pop()
lista
lista.pop(1)
lista
lista.pop(2)
lista
lista.pop()
lista
lista.pop()
lista
lista.pop()
###Output
_____no_output_____
###Markdown
The ```extend()``` method.This method appends each element of the collection given as an argument to the end of the ```list``` object.```<list>.extend(<collection>)``` **Examples:**
###Code
lista = [1, 2, 3]
lista.extend(('cuatro', 'cinco'))
lista
lista.extend({'seis', 'siete', 'ocho', 8})
lista
###Output
_____no_output_____
###Markdown
* Unlike the ```extend()``` method, the ```append()``` method adds the given collection as a single element.
###Code
lista.append([True, False, None])
lista
###Output
_____no_output_____
###Markdown
The ```clear()``` method.This method removes all the elements of the ```list``` object.```<list>.clear()``` **Example:**
###Code
lista = [93, True, 27, True, True, 16, 45, 14, False, True]
lista.clear()
lista
###Output
_____no_output_____
###Markdown
The ```count()``` method.This method counts the number of times the object given as an argument appears within the ```list``` object.```<list>.count(<object>)``` **Examples:**
###Code
lista = [1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
lista.count(2)
lista.count(5)
lista.count(11)
###Output
_____no_output_____
###Markdown
The ```index()``` method.It searches from left to right for an object within a ```list``` object over a specific range of indexes and, if it is found, returns the index where it is located.```<list>.index(<object>, m, n)```Where:* ```m``` is the lower index.* ```n``` is the upper index.* The search is performed from index ```m``` up to one before ```n```.* The value of ```n``` may be larger than the size of the ```list``` object.* If only ```m``` is given, the search is performed from index ```m``` to the end of the ```list``` object.* If no range is defined, the search is performed over the whole object.* If no match is found in the given range, a ```ValueError``` exception is raised. **Examples:**
###Code
lista = [93, True, 27, True, True, 16, 45, 14, False, True]
lista.index(True)
lista.index(True, 8)
lista.index(True, 6, 9)
lista.index(True, 6, 25)
###Output
_____no_output_____
###Markdown
Objects of type ```tuple```.Objects of type ```tuple``` are ordered collections of objects, regardless of the type of each one, which are numerically indexable and immutable.They are defined by enclosing in parentheses ```(``` ```)``` a sequence of objects separated by commas ```,```.The syntax is as follows:```(<object 1>, <object 2>, ..., <object n>)``` **Examples:**
###Code
(1, 2, 3, 4, 5)
('gato', 'perro', True)
(['automóvil', 50, 'gasolina'], ('autobús', 300, 'diesel'))
()
vehiculos = (['automóvil', 50, 'gasolina'], ('autobús', 300, 'diesel'))
vehiculos[1] = True
del vehiculos[1]
###Output
_____no_output_____
###Markdown
Methods of ```tuple``` objects.Objects of type _tuple_ only provide the methods: * ```count()```. * ```index()```. **Examples:**
###Code
tupla = (1, 5, 7, 8, 4, 7, 7, 9)
tupla.index(9)
tupla.count(7)
tupla.clear()
vehiculos
vehiculos[0][2] = 'carbón'
vehiculos
###Output
_____no_output_____
###Markdown
"Aliasing". When an object is "sliced", a new collection is created that references the elements existing in the original object.This means that even though the object resulting from the "slicing" is new, the elements shared by the original object and the "sliced" object are exactly the same. **Example:**
###Code
lista_1 = [12, True, "lapicero"]
lista_2 = lista_1[:]
lista_1 == lista_2
lista_1 is lista_2
for elemento in lista_1:
print(id(elemento))
for elemento in lista_2:
print(id(elemento))
###Output
_____no_output_____
###Markdown
Aliasing with mutable elements.Aliasing can have unwanted effects when a collection contains mutable objects. **Example:** * The following cell defines a ```list``` object named ```lista_1```, whose element at index ```2``` is also of type ```list```.
###Code
lista_1 = [1, 2, ['b', 'c']]
lista_1[2]
###Output
_____no_output_____
###Markdown
* The following cell assigns the name ```lista_2``` to the element ```lista_1[2]```.
###Code
lista_2 = lista_1[2]
lista_2
###Output
_____no_output_____
###Markdown
* The following cell creates a ```tuple``` object named ```tupla``` from a full slice of the object ```lista_1```.
###Code
tupla = tuple(lista_1[:])
###Output
_____no_output_____
###Markdown
* ```lista_1[2]```, ```lista_2``` and ```tupla[2]``` are all the same object.
###Code
lista_1[2] is lista_2
lista_1[2] is tupla[2]
tupla[2] is lista_2
tupla[2]
###Output
_____no_output_____
###Markdown
* When the content of a mutable element that is shared by all the objects is modified, the change is applied uniformly to every object that contains it.
###Code
del lista_2[:]
lista_2
lista_1
tupla
###Output
_____no_output_____ |
Explore/RandomForestSHAP.ipynb | ###Markdown
This notebook computes the SHAP values for each tree from a random forest separately. A comparison with the aggregated values shows perfect agreement.
###Code
from sklearn import datasets
#import pandas as pd
import numpy as np
np.random.seed(0)
#import matplotlib.pyplot as plt
import shap
from sklearn.ensemble import RandomForestRegressor
# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
rf = RandomForestRegressor(max_depth=50, random_state=0, n_estimators=100,max_features=2)
rf.fit(diabetes_X, diabetes_y)
###Output
_____no_output_____
###Markdown
Get the SHAP values for each individual tree:
###Code
n,p = diabetes_X.shape
k=0
shap_values_IndTrees = np.zeros((n, p, rf.n_estimators))
for tree in rf.estimators_:
tree_preds = tree.predict(diabetes_X)
explainer = shap.TreeExplainer(tree)
shap_values_IndTrees[:,:,k] = explainer.shap_values(diabetes_X)
k+=1
###Output
_____no_output_____
###Markdown
Get the SHAP values for the forest:
###Code
shap_values = shap.TreeExplainer(rf).shap_values(diabetes_X)
###Output
Setting feature_perturbation = "tree_path_dependent" because no background data was given.
###Markdown
Compare
###Code
shap_averages = np.mean(shap_values_IndTrees, axis=2)
shap_averages.shape
shap_averages[0:5,0:9]
np.mean(np.abs(shap_values-shap_averages))
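# The agreement can also be checked directly (a minimal sketch; the tolerance is an
# arbitrary but reasonable choice for float64 SHAP values):
print(np.allclose(shap_values, shap_averages, atol=1e-8))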
shap_values[0:5,0:9]
###Output
_____no_output_____
###Markdown
The following observation is puzzling: if I explicitly call `tree.predict()`, the `shap.TreeExplainer(tree)` prints its message "*Setting feature_perturbation ...*" for each iteration in the for loop, which "proves" to me that it is being executed each time. But not when the `tree.predict()` is commented out. In that case, the message is printed only once, and I am worried that `shap.TreeExplainer(tree)` is not really executed every time?
###Code
for tree in rf.estimators_:
#tree_preds = tree.predict(diabetes_X)
explainer = shap.TreeExplainer(tree)
###Output
Setting feature_perturbation = "tree_path_dependent" because no background data was given.
|
Udacity_DL_Nanodegree/010 College Admissions/College Admissions.ipynb | ###Markdown
$$ \huge{\underline{\textbf{ 1-Layer Neural Network }}} $$ Contents:* [Introduction](Introduction)* [Load and Explore Data](Load-and-Explore-Data)* [Preprocess](Preprocess)* [Neural Network](Neural-Network)* [Train Classifier](Train-Classifier) Introduction This notebook presents simplest possible 1-layer neural network trained with backpropagation.**Dataset**We will use graduate school admissions data ([https://stats.idre.ucla.edu/stat/data/binary.csv]()). Each row is one student. Columns are as follows:* admit - was student admitted or not? This is our target we will try to predict* gre - student GRE score* gpa - student GPA* rank - prestige of undergrad school, 1 is highest, 4 is lowest**Model*** one layer: fully connected with sigmoid activation* loss: MSE* optimizer: vanilla SGD**Dependencies*** numpy, matplotlib - neural net and backprop* pandas - load data Load and Explore Data Imports:
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Load data with pandas
###Code
df = pd.read_csv('college_admissions.csv')
###Output
_____no_output_____
###Markdown
Show the first couple of rows. The first column is the index, added automatically by pandas.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Show some more information about the dataset.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 400 entries, 0 to 399
Data columns (total 4 columns):
admit 400 non-null int64
gre 400 non-null int64
gpa 400 non-null float64
rank 400 non-null int64
dtypes: float64(1), int64(3)
memory usage: 12.6 KB
###Markdown
Plot data, each rank separately
###Code
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=[8,6])
axes = axes.flatten()
for i, rank in enumerate([1,2,3,4]):
# pick not-admitted students with given rank
tmp = df.loc[(df['rank']==rank) & (df['admit']==0)]
axes[i].scatter(tmp['gpa'], tmp['gre'], color='red', marker='.', label='rejected')
# pick admitted students with given rank
tmp = df.loc[(df['rank']==rank) & (df['admit']==1)]
axes[i].scatter(tmp['gpa'], tmp['gre'], color='green', marker='.', label='admitted')
axes[i].set_title('Rank '+str(rank))
axes[i].legend()
fig.tight_layout()
###Output
_____no_output_____
###Markdown
And plot a scatter matrix, just for fun
###Code
cmap = {1: 'red', 2:'green', 3:'blue', 4:'black'}
colors = df['rank'].apply(lambda cc:cmap[cc])
pd.plotting.scatter_matrix(df[['gre', 'gpa']], c=colors, figsize=[8,6]);
###Output
_____no_output_____
###Markdown
Preprocess The code below does the following things:* converts the _rank_ column into one-hot encoded features* normalizes the _gre_ and _gpa_ columns to zero mean and unit standard deviation* splits off a random 10% of the data as a test set (360 train / 40 test rows, checked by the asserts below)* splits into input features (gre, gpa, one-hot rank) and targets (admit)* converts into numpy* asserts shapes are ok
###Code
# Create dummies
temp = pd.get_dummies(df['rank'], prefix='rank')
data = pd.concat([df, temp], axis=1)
data.drop(columns='rank', inplace=True)
# Normalize
for col in ['gre', 'gpa']:
mean, std = data[col].mean(), data[col].std()
# data.loc[:, col] = (data[col]-mean) / std
data[col] = (data[col]-mean) / std
# Split off a random 10% of the data for testing (the other 90% is kept for training)
np.random.seed(0)
sample = np.random.choice(data.index, size=int(len(data)*0.9), replace=False)
data, test_data = data.iloc[sample], data.drop(sample)
# Split into features and targets
features_train = data.drop('admit', axis=1)
targets_train = data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
# Convert to numpy
x_train = features_train.values # features train set (numpy)
y_train = targets_train.values[:,None] # targets train set (numpy)
x_test = features_test.values # features validation set (numpy)
y_test = targets_test.values[:,None] # targets validation set (numpy)
# Assert shapes came right way around
assert x_train.shape == (360, 6)
assert y_train.shape == (360, 1)
assert x_test.shape == (40, 6)
assert y_test.shape == (40, 1)
###Output
_____no_output_____
###Markdown
Neural Network By convention, we will denote:* $x$ as the matrix of input features, where rows are separate training examples in the mini-batch and columns are features* $y$ as the column vector of targets (admitted or not)* $\hat{y}$ as the neural network estimates* $L$ as the scalar-output loss functionFirst we need the sigmoid transfer function and its derivative ([proof](https://en.wikipedia.org/wiki/Logistic_function#Derivative))$$ S(x) = \frac{1}{1+e^{-x}} \quad\quad S'(x) = S(x)(1-S(x)) $$
###Code
def sigmoid(x):
return 1/(1+np.exp(-x))
def sigmoid_deriv(x):
return sigmoid(x)*(1-sigmoid(x))
###Output
_____no_output_____
###Markdown
Forward pass is pretty simple$$ \hat{y} = S(xW) $$
###Code
def fwd(x, W):
assert x.ndim == 2; assert W.ndim == 2
z = x @ W # linear combination
y_hat = sigmoid(z) # transfer function
assert z.ndim == 2; assert y_hat.ndim == 2
return y_hat
###Output
_____no_output_____
###Markdown
Loss function$$ L(x,y) = \frac{1}{2n}(y-\hat{y})^2 \quad\quad\quad \text{where $n$ is length of mini-batch} $$
###Code
def loss(x, y, W):
assert x.ndim == 2
assert y.ndim == 2
assert W.ndim == 2
y_hat = sigmoid(x @ W) # forward pass
result = .5 * np.mean((y-y_hat)**2) # no inner sum because single output
assert y_hat.shape[1] == 1
return result
###Output
_____no_output_____
###Markdown
Backward pass$$ \frac{\partial{L}}{\partial{W}} = \frac{1}{n}x^T \big[ -(y-\hat{y}) \odot S'(xW) \big] \quad\quad\quad \text{ where $\odot$ is the element-wise product} $$If you are wondering how the above came about, good resources are [here](http://cs231n.stanford.edu/handouts/linear-backprop.pdf) and [here](http://cs231n.stanford.edu/handouts/derivatives.pdf), both taken from the famous cs231n course.
###Code
def backprop(x, y, W, lr):
assert x.ndim == 2; assert y.ndim == 2; assert W.ndim == 2
# Forward pass
z = x @ W
y_hat = sigmoid(z)
# Backward pass
ro = -(y-y_hat) * sigmoid_deriv(z)
del_W = (x.T @ ro) / len(x)
assert del_W.ndim == 2
return del_W
###Output
_____no_output_____
###Markdown
Numerical gradient check
###Code
def ngrad(x, y, W):
"""Check gradient numerically"""
eps = 1e-4
del_W = np.zeros_like(W)
for r in range(W.shape[0]):
for c in range(W.shape[1]):
W_min = W.copy()
W_pls = W.copy()
W_min[r, c] -= eps
W_pls[r, c] += eps
l_pls = loss(x, y, W_pls)
l_min = loss(x, y, W_min)
del_W[r, c] = (l_pls - l_min) / (eps * 2)
return del_W
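# Quick sanity check (a minimal sketch): compare the analytic gradient from backprop()
# with the numerical one on a small random weight matrix. W_check is a made-up name.
W_check = np.random.normal(scale=0.01, size=[x_train.shape[1], y_train.shape[1]])
grad_diff = np.max(np.abs(backprop(x_train, y_train, W_check, lr=0.1) - ngrad(x_train, y_train, W_check)))
print('max |analytic - numerical| gradient difference:', grad_diff)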
###Output
_____no_output_____
###Markdown
Train Classifier Initialize neural net
###Code
np.random.seed(0) # for reproducibility
n_inputs = x_train.shape[1]
n_outputs = y_train.shape[1]
W = np.random.normal(scale=n_inputs**-.5, size=[n_inputs, n_outputs]) # Xavier init
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
nb_epochs = 2000
lr = 0.1
###Output
_____no_output_____
###Markdown
Main train loop
###Code
# Accumulate statistics during training (for plotting)
trace_loss_train = []
trace_loss_test = []
trace_acc_test = []
for e in range(nb_epochs):
# Backprop
dW = backprop(x_train, y_train, W, lr)
W += -lr * dW
# Train loss
loss_train = loss(x_train, y_train, W)
trace_loss_train.append(loss_train)
# if e % (nb_epochs / 10) == 0:
loss_test = loss(x_test, y_test, W)
trace_loss_test.append(loss_test)
# Predictions and Accuracy
predictions = fwd(x_test, W)
predictions = predictions > 0.5
acc_test = np.mean(predictions == y_test)
trace_acc_test.append(acc_test)
if e % (nb_epochs / 10) == 0:
print('loss {0}, tacc {1:.3f}'.format(loss_train, acc_test))
###Output
loss 0.15224777536275844, tacc 0.475
loss 0.13015955315177377, tacc 0.475
loss 0.11435294270610373, tacc 0.500
loss 0.10585677810621827, tacc 0.600
loss 0.10191394554520483, tacc 0.675
loss 0.1000000143239566, tacc 0.700
loss 0.09898677097344712, tacc 0.725
loss 0.0984065319217976, tacc 0.750
loss 0.0980521765593448, tacc 0.750
loss 0.09782432184510809, tacc 0.750
###Markdown
Plot learning curve
###Code
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, figsize=[12,6])
ax1.plot(trace_loss_train, label='train loss')
ax1.plot(trace_loss_test, label='test loss')
ax1.legend(loc='right')
ax1.grid()
ax2.plot(trace_acc_test, color='darkred', label='test accuracy')
ax2.legend()
plt.show()
###Output
_____no_output_____
###Markdown
__Quick Regression Test__
###Code
correct_result = np.array([0.15224777536275844,
0.13015955315177377,
0.11435294270610373,
0.10585677810621827,
0.10191394554520483,
0.1000000143239566,
0.09898677097344712,
0.0984065319217976,
0.0980521765593448,
0.09782432184510809])
assert np.alltrue(trace_loss_train[::200] == correct_result)
###Output
_____no_output_____ |
doc/filters/smooth.ipynb | ###Markdown
Smoothing signals using a moving average========================================The moving average is an excellent filter to remove noise that is related to a specific time pattern. The classic example is the day-to-day evaluation of a process that is sensitive to weekends (for example, the number of workers who enter a building). A moving average with a window length of 7 days is ideal to evaluate the generic trend of this signal without considering intra-week fluctuations. Although its use in biomechanics is less obvious, this filter may be useful in some situations. This tutorial will show how to use the [ktk.filters.smooth()](../api/kineticstoolkit.filters.smooth.rst) function on TimeSeries data.
###Code
import kineticstoolkit.lab as ktk
import matplotlib.pyplot as plt
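# To make the idea concrete: a moving average of window length n replaces each sample
# by the mean of the n samples around it. A minimal NumPy sketch on a made-up 1-D
# array (not the TimeSeries used below):
import numpy as np
x = np.random.randn(100)
running_mean = np.convolve(x, np.ones(7) / 7, mode='same')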
###Output
_____no_output_____
###Markdown
We will first load some noisy data:
###Code
ts = ktk.load(
ktk.config.root_folder + '/data/filters/sample_noises.ktk.zip')
# Plot it
ts.plot(['clean', 'periodic_noise'], marker='.')
plt.grid(True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In this signal, we observe that apart from random noise, there seems to be a periodic signal with a period of five seconds, which we may consider as noise. Since we consider these variations as noise and their period is constant, the moving average is a nice candidate for filtering out this noise.
###Code
filtered = ktk.filters.smooth(ts, window_length=5)
ts.plot(['clean', 'periodic_noise'], marker='.')
filtered.plot('periodic_noise', marker='.', color='k')
plt.title('Removing the fast, constant rate variation (black curve)')
plt.grid(True)
plt.tight_layout()
###Output
_____no_output_____ |
doc/LectureNotes/_build/jupyter_execute/hw3.ipynb | ###Markdown
<!-- HTML file automatically generated from DocOnce source (https://github.com/doconce/doconce/)doconce format html hw3.do.txt --no_mako --> PHY321: Classical Mechanics 1**Homework 3, due February 4**Date: **Jan 31, 2022** Practicalities about homeworks and projects1. You can work in groups (optimal groups are often 2-3 people) or by yourself. If you work as a group you can hand in one answer only if you wish. **Remember to write your name(s)**!2. Homeworks are available ten days before the deadline. 3. How do I(we) hand in? You can hand in the paper and pencil exercises as a scanned document. For this homework this applies to exercises 1-5. Alternatively, you can hand in everything (if you are ok with typing mathematical formulae using say Latex) as a jupyter notebook at D2L. The numerical exercise(s) (exercise 6 here) should always be handed in as a jupyter notebook by the deadline at D2L. Introduction to homework 3This week's sets of classical pen and paper and computationalexercises deal with the motion of different objects under theinfluence of various forces. The relevant reading background is1. chapter 2 of Taylor (there are many good examples there) and2. chapters 5-7 of Malthe-Sørenssen.In both textbooks there are many nice worked outexamples. Malthe-Sørenssen's text contains also several codingexamples you may find useful.There are several pedagogical aims we have in mind with these exercises:1. Get practice in setting up and analyzing a physical problem, finding the forces and the relevant equations to solve;2. Analyze the results and ask yourself whether they make sense or not;3. Finding analytical solutions to problems if possible and compare these with numerical results. This teaches us also how to understand errors in numerical calculations;4. Being able to solve (in mechanics these are the most common types of equations) numerically ordinary differential equations and compare the solutions where possible with analytical solutions;5. Getting used to studying physical problems using all possible tools, from paper and pencil to numerical solutions;6. Then analyze the results and ask yourself whether they make sense or not.The above steps outline important elements of our understanding of thescientific method. Furthermore, there are also explicit coding skillswe aim at such as setting up arrays, solving differential equationsnumerically and plotting your results. Coding practice is also animportant aspect. The more we practice the better we get (hopefully).From a numerical mathematics point of view, we will solve the differentialequations using Euler's method (forward Euler).The code we will develop can be reused as a basis for coming homeworks. We canalso extend the numerical solver we write here to include other methods (later) likethe modified Euler method (Euler-Cromer, midpoint Euler) and moreadvanced methods like the family of Runge-Kutta methods and the Velocity-Verlet method.At the end of this course, we will thus have developed a larger code(or set of codes) which will allow us to study different numericalmethods (integration and differential equations) as well as being ableto study different physical systems. Combined with analytical skills,the hope is that this can allow us to explore interesting andrealistic physics problems. By doing so, the hope is that can lead todeeper insights about the laws of motion which govern a system.And hopefully you can reuse many of the above solvers in other courses (our ideal). 
Exercise 1 (20 pt), Electron moving into an electric fieldAn electron is sent through a varying electricalfield. Initially, the electron is moving in the $x$-direction with a velocity$v_x = 100$ m/s. The electron enters the field when it passes the origin. The fieldvaries with time, causing an acceleration of the electron that varies in time $$\boldsymbol{a}(t)=\left(−20 \mathrm{m/s}^2 −10\mathrm{m/s}^3t\right) \boldsymbol{e}_y$$ * 1a (4pt) Find the velocity as a function of time for the electron.* 1b (4pt) Find the position as a function of time for the electron.The field is only acting inside a box of length $L = 2m$.* 1c (4pt) How long time is the electron inside the field?* 1d (4pt) What is the displacement in the $y$-direction when the electron leaves the box. (We call this the deflection of the electron).* 1e (4pt) Find the angle the velocity vector forms with the horizontal axis as the electron leaves the box. Exercise 2 (10 pt), Drag forceTaylor exercise 2.3 Exercise 3 (10 pt), Falling objectTaylor exercise 2.6 Exercise 4 (10 pt), and then a cyclistTaylor exercise 2.26 Exercise 5 (10 pt), back to a falling ball and preparing for the numerical exercise**Useful material: Malthe-Sørenssen chapter 7.5 and Taylor chapter 2.4.**In this example we study the motion of an object subject to a constant force, a velocity dependentforce. We will reuse the code we develop here in homework 4 for a position-dependent force.Here we limit ourselves to a ball that is thrown from a height $h$above the ground with an initial velocity$\boldsymbol{v}_0$ at time $t=t_0$. We assume the air resistance is proportional to the square velocity, Together with the gravitational force these are the forces acting on our system.**Note that due to the specific velocity dependence, we cannot find an analytical solution for motion in the $x$ and $y$ directions, see the discussion in Taylor after eq. (2.61).**In order to find an analytical solution we need to assume that the object is falling in the $y$-direction (negative direction) only. The position of the ball as function of time is $\boldsymbol{r}(t)$ where $t$ is time. The position is measured with respect to a coordinate system with origin at the floor.We assume we have an initial position $\boldsymbol{r}(t_0)=h\boldsymbol{e}_y$ and an initial velocity $\boldsymbol{v}_0=v_{x,0}\boldsymbol{e}_x+v_{y,0}\boldsymbol{e}_y$.In this exercise we assume the system is influenced by the gravitational force $$\boldsymbol{G}=-mg\boldsymbol{e}_y$$ and an air resistance given by a square law $$-Dv\boldsymbol{v}.$$ The analytical expressions for velocity and position as functions oftime will be used to compare with the numerical results in exercise 6.* 5a (3pt) Identify the forces acting on the ball and set up a diagram with the forces acting on the ball. Find the acceleration of the falling ball. * 5b (4pt) Assume now that the object is falling only in the $y$-direction (negative direction). Integrate the acceleration from an initial time $t_0$ to a final time $t$ and find the velocity. In Taylor equations (2.52) to (2.58) you will find a very good discussion of this.* 5c (4pt) Find thereafter the position as function of time starting with an initial time $t_0$. Find the time it takes to hit the floor. Here you will find it convenient to set the initial velocity in the $y$-direction to zero. Taylor equations (2.52)-(2.58) should contain all relevant information for solving this part as well.We will use the above analytical results in our numerical calculations in exercise 6. 
The analytical solution in the $y$-direction only will serve as a test for our numerical solution. Exercise 6 (40pt), Numerical elements, solving exercise 5 numerically**This exercise should be handed in as a jupyter-notebook** at D2L. Remember to write your name(s). Last week we:1. Gained more practice with plotting in Python2. Became familiar with arrays and representing vectors with such objectsThis week we will:1. Learn and utilize Euler's Method to find the position and the velocity2. Compare analytical and computational solutions 3. Add additional forces to our model
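As a preview of the numerical work, here is a minimal forward-Euler sketch for the purely vertical fall with quadratic drag from exercise 5. The mass, drag coefficient and initial height below are illustrative assumptions (the exercise does not fix them), and the analytical curves are the standard free-fall-from-rest results discussed around Taylor eqs. (2.52)-(2.58); treat this only as a starting point for your own notebook.

```python
# Forward-Euler sketch: ball released from rest at height h, falling in the -y direction
# under gravity and quadratic air resistance -D*|v|*v. Parameter values are assumptions.
import numpy as np
import matplotlib.pyplot as plt

g = 9.80665      # gravitational acceleration [m/s^2]
m = 0.2          # mass [kg] (assumed)
D = 0.0245       # quadratic drag coefficient [kg/m] (assumed)
h = 2.0          # initial height [m] (assumed)
v_ter = np.sqrt(m*g/D)   # terminal speed sqrt(mg/D)

dt = 1.0e-3
t_final = 0.7    # roughly until the ball reaches the floor
n = int(t_final/dt)
t = np.zeros(n); y = np.zeros(n); v = np.zeros(n)
y[0] = h         # released from rest, so v[0] = 0

for i in range(n-1):
    a = -g - (D/m)*np.abs(v[i])*v[i]   # drag always opposes the motion
    v[i+1] = v[i] + dt*a               # Euler step for the velocity
    y[i+1] = y[i] + dt*v[i]            # Euler step for the position
    t[i+1] = t[i] + dt

# analytical solution for vertical fall from rest with quadratic drag
v_exact = -v_ter*np.tanh(g*t/v_ter)
y_exact = h - (v_ter**2/g)*np.log(np.cosh(g*t/v_ter))

plt.plot(t, v, label='Euler')
plt.plot(t, v_exact, '--', label='analytical')
plt.xlabel('t [s]'); plt.ylabel('$v_y$ [m/s]'); plt.legend(); plt.show()
```

Halving the step size `dt` and watching the numerical curve approach the analytical one is a quick way to check that the first-order Euler error behaves as expected.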
###Code
%matplotlib inline
# let's start by importing useful packages we are familiar with
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____ |
slides/2_12/dropout.ipynb | ###Markdown
Dropout
###Code
import d2l
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import loss as gloss, nn
###Output
_____no_output_____
###Markdown
Dropout from Scratch
###Code
def dropout(X, drop_prob):
assert 0 <= drop_prob <= 1
# In this case, all elements are dropped out.
if drop_prob == 1:
return X.zeros_like()
mask = nd.random.uniform(0, 1, X.shape) > drop_prob
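    # Scaling the kept activations by 1/(1 - drop_prob) keeps the expected value of each element unchanged (inverted dropout).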
return mask * X / (1.0-drop_prob)
###Output
_____no_output_____
###Markdown
Sanity Test
###Code
X = nd.arange(16).reshape((2, 8))
print(dropout(X, 0))
print(dropout(X, 0.5))
print(dropout(X, 1))
###Output
[[ 0. 1. 2. 3. 4. 5. 6. 7.]
[ 8. 9. 10. 11. 12. 13. 14. 15.]]
<NDArray 2x8 @cpu(0)>
[[ 0. 0. 0. 0. 8. 10. 12. 0.]
[16. 0. 20. 22. 0. 0. 0. 30.]]
<NDArray 2x8 @cpu(0)>
[[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]]
<NDArray 2x8 @cpu(0)>
###Markdown
Defining Model Parameters
###Code
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 1024, 2048
W1 = nd.random.normal(scale=0.01, shape=(num_inputs, num_hiddens1))
b1 = nd.zeros(num_hiddens1)
W2 = nd.random.normal(scale=0.01, shape=(num_hiddens1, num_hiddens2))
b2 = nd.zeros(num_hiddens2)
W3 = nd.random.normal(scale=0.01, shape=(num_hiddens2, num_outputs))
b3 = nd.zeros(num_outputs)
params = [W1, b1, W2, b2, W3, b3]
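# attach_grad() allocates gradient buffers so that autograd can store gradients for these parameters during training.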
for param in params:
param.attach_grad()
###Output
_____no_output_____
###Markdown
Define the Model
###Code
drop_prob1, drop_prob2 = 0.0, 0.0
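# Both probabilities are 0.0 here, so the dropout layers below are effectively disabled; try values such as 0.2 and 0.5 to see the regularizing effect.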
def net(X):
X = X.reshape((-1, num_inputs))
H1 = (nd.dot(X, W1) + b1).relu()
if autograd.is_training(): # Use dropout only when training the model.
H1 = dropout(H1, drop_prob1) # Add a dropout layer after the first fully connected layer.
H2 = (nd.dot(H1, W2) + b2).relu()
if autograd.is_training():
H2 = dropout(H2, drop_prob2) # Add a dropout layer after the second fully connected layer.
return nd.dot(H2, W3) + b3
###Output
_____no_output_____
###Markdown
Training and Testing
###Code
num_epochs, lr, batch_size = 10, 0.5, 256
loss = gloss.SoftmaxCrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.8518, train acc 0.680, test acc 0.779
epoch 2, loss 0.4920, train acc 0.817, test acc 0.837
epoch 3, loss 0.4325, train acc 0.839, test acc 0.860
epoch 4, loss 0.4055, train acc 0.850, test acc 0.865
epoch 5, loss 0.3596, train acc 0.867, test acc 0.875
epoch 6, loss 0.3392, train acc 0.873, test acc 0.878
epoch 7, loss 0.3209, train acc 0.881, test acc 0.876
epoch 8, loss 0.3081, train acc 0.885, test acc 0.883
epoch 9, loss 0.2944, train acc 0.890, test acc 0.882
epoch 10, loss 0.2819, train acc 0.894, test acc 0.887
###Markdown
Dropout in Gluon
###Code
net = nn.Sequential()
net.add(nn.Dense(num_hiddens1, activation="relu"),
nn.Dropout(drop_prob1), # Add a dropout layer after the first fully connected layer.
nn.Dense(num_hiddens2, activation="relu"),
nn.Dropout(drop_prob2), # Add a dropout layer after the second fully connected layer.
nn.Dense(num_outputs))
net.initialize(init.Normal(sigma=0.01))
###Output
_____no_output_____
###Markdown
Training
###Code
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
None, None, trainer)
###Output
epoch 1, loss 0.9580, train acc 0.644, test acc 0.808
epoch 2, loss 0.5070, train acc 0.810, test acc 0.852
epoch 3, loss 0.4382, train acc 0.837, test acc 0.854
epoch 4, loss 0.3925, train acc 0.855, test acc 0.861
epoch 5, loss 0.3682, train acc 0.864, test acc 0.875
epoch 6, loss 0.3460, train acc 0.871, test acc 0.866
epoch 7, loss 0.3437, train acc 0.873, test acc 0.880
epoch 8, loss 0.3158, train acc 0.883, test acc 0.882
epoch 9, loss 0.3032, train acc 0.888, test acc 0.878
epoch 10, loss 0.2927, train acc 0.890, test acc 0.888
|
labs/module3/English/Fortran/README.ipynb | ###Markdown
OpenACC Directives This version of the lab is intended for Fortran programmers. The C/C++ version of this lab is available [here](../C/README.ipynb). The following timer counts down to a five minute warning before the lab instance shuts down. You should get a pop up at the five minute warning reminding you to save your work! If you are about to run out of time, please see the [Post-Lab](Post-Lab-Summary) section for saving this lab to view offline later. This is the Fortran version of this lab, for the C version [click here](../C/README.ipynb).Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/communityslack) to share your experience and get more help from the community. ---Let's execute the cell below to display information about the GPUs running on the server. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
###Code
!pgaccelinfo
###Output
_____no_output_____
###Markdown
--- IntroductionOur goal for this lab is to learn what exactly code profiling is, and how we can use it to help us write powerful parallel programs. This is the OpenACC 3-Step development cycle.**Analyze** your code, and predict where potential parallelism can be uncovered. Use profiler to help understand what is happening in the code, and where parallelism may exist.**Parallelize** your code, starting with the most time consuming parts. Focus on maintaining correct results from your program.**Optimize** your code, focusing on maximizing performance. Performance may not increase all-at-once during early parallelization.We are currently tackling the **analyze** step. We will use PGI's code profiler to get an understanding of a relatively simple sample code before moving onto the next two steps. Run the CodeOur first step to analyzing this code is to run it. We need to record the results of our program before making any changes so that we can compare them to the results from the parallel code later on. It is also important to record the time that the program takes to run, as this will be our primary indicator to whether or not our parallelization is improving performance.
###Code
!pgfortran -fast -o laplace laplace2d.f90 jacobi.f90 && echo "Compilation Successful!" && ./laplace
###Output
_____no_output_____
###Markdown
Optional: Analyze the CodeIf you would like a refresher on the code files that we are working on, you may view both of them using the two links below.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) Optional: Profile the CodeIf you would like to profile the sequential code, you may select this link. When prompted for a password, type `openacc`. This will open a noVNC window, then you may use PGPROF to profile our sequential laplace code. The executable will be found in the `/home/openacc/labs/module3/English/Fortran` directory. --- OpenACC DirectivesUsing OpenACC directives will allow us to parallelize our code without needing to explicitly alter our code. What this means is that, by using OpenACC directives, we can have a single code that will function as both a sequential code and a parallel code. OpenACC Syntax`!$acc ``!$acc` in Fortran is what's known as a "compiler directive." These are very similar to programmer comments, since the line begins with a comment statement `!`. After the comment is `$acc`. OpenACC compliant compilers with appropriate command line options can interpret this as an OpenACC directive that "guides" the compiler, without running the chance of damaging the code. If the compiler does not understand `!$acc` it can ignore it, rather than throw a syntax error because it's just a comment.**directives** are instructions in OpenACC that will tell the compiler to do some action. For now, we will only use directives that allow the compiler to parallelize our code.**clauses** are additions/alterations to our directives. These include (but are not limited to) optimizations. The way that I prefer to think about it: directives describe a general action for our compiler to do (such as, parallelize our code), and clauses allow the programmer to be more specific (such as, how we specifically want the code to be parallelized). --- Parallel DirectiveThere are three directives we will cover in this lab: parallel, kernels, and loop. Once we understand all three of them, you will be tasked with parallelizing our laplace code with your preferred directive (or use all of them, if you'd like!)The parallel directive may be the most straight-forward of the directives. It will mark a region of the code for parallelization (this usually only includes parallelizing a single do loop.) Let's take a look:

```fortran
!$acc parallel loop
do i=1,N
enddo
```

We may also define a "parallel region". The parallel region may have multiple loops (though this is often not recommended!) The parallel region is everything contained within the outer-most loop.

```fortran
!$acc parallel
!$acc loop
do i=1,N
enddo
```

`!$acc parallel loop` will mark the next loop for parallelization. It is extremely important to include the **`loop`**, otherwise you will not be parallelizing the loop properly. The parallel directive tells the compiler to "redundantly parallelize" the code. The `loop` directive specifically tells the compiler that we want the loop parallelized. Let's look at an example of why the loop directive is so important.We are soon going to move on to the next directive (the kernels directive) which also allows us to parallelize our code. We will also mark the differences between these two directives. With that being said, the following information is completely unique to the parallel directive:The parallel directive leaves a lot of decisions up to the programmer. The programmer will decide what is, and isn't, parallelizable. 
The programmer will also have to provide all of the optimizations - the compiler assumes nothing. If any mistakes happen while parallelizing the code, it will be up to the programmer to identify them and correct them.We will soon see how the kernels directive is the exact opposite in all of these regards. Optional: Parallelize our Code with the Parallel DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. However, if you wish to try out the parallel directive *now*, then you may use the following links to edit the laplace code.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Kernels DirectiveThe kernels directive allows the programmer to step back, and rely solely on the compiler. Let's look at the syntax:

```fortran
!$acc kernels
do i=1,N
enddo
!$acc end kernels
```

Just like in the parallel directive example, we are parallelizing a single loop. Recall that when using the parallel directive, it must always be paired with the loop directive, otherwise the code will be improperly parallelized. The kernels directive does not follow the same rule, and in some compilers, adding the loop directive may limit the compiler's ability to optimize the code.In this case you also need to include the statement "!$acc end kernels" so the compiler knows the "scope" of the directive.As said previously, the kernels directive is the exact opposite of the parallel directive. This means that the compiler is making a lot of assumptions, and may even override the programmer's decision to parallelize code. Also, by default, the compiler will attempt to optimize the loop. The compiler is generally pretty good at optimizing loops, and sometimes may be able to optimize the loop in a way that the programmer cannot describe. However, usually, the programmer will be able to achieve better performance by optimizing the loop themselves.If you run into a situation where the compiler refuses to parallelize a loop, you may override the compiler's decision. (however, keep in mind that by overriding the compiler's decision, you are taking responsibility for any mistakes that occur from parallelizing the code!) In this code segment, we are using the independent clause to assure the compiler that we think the loop is parallelizable.

```fortran
!$acc kernels loop independent
do i=1,N
enddo
!$acc end kernels
```

One of the largest advantages of the kernels directive is its ability to parallelize many loops at once. For example, in the following code segment, we are able to effectively parallelize two loops at once by utilizing a kernels region (similar to the parallel region that we saw earlier.) This is done by putting the statement "!$acc end kernels" at the end of the directive region.

```fortran
!$acc kernels
do i=1,N
enddo
do j=1,M
enddo
!$acc end kernels
```

By using the kernels directive, we can parallelize more than one loop (as many loops as we want, actually.) We are also able to include sequential code between the loops, without needing to include multiple directives. Similar to before, let's look at a visual example of how the kernels directive works.Before moving on to our last directive (the loop directive), let's recap what makes the parallel and kernels directive so functionally different.**The parallel directive** gives a lot of control to the programmer. The programmer decides what to parallelize, and how it will be parallelized. Any mistakes made by the parallelization are the fault of the programmer. It is recommended to use a parallel directive for each loop you want to parallelize.**The kernels directive** leaves the majority of the control to the compiler. The compiler will analyze the loops, and decide which ones to parallelize. It may refuse to parallelize certain loops, but the programmer can override this decision. You may use the kernels directive to parallelize large portions of code, and these portions may include multiple loops. Optional: Parallelize our Code with the Kernels DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. 
However, if you wish to try out the kernels directive *now*, then you may use the following links to edit the laplace code. Pay close attention to the compiler feedback, and be prepared to add the *independent* clause to your loops.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Loop DirectiveWe've seen the `loop` directive used and mentioned a few times now; it's time to formally define it. The `loop` directive has two major uses:* Mark a single loop for parallelization * Allow us to explicitly define optimizations/alterations for the loopThe loop optimizations are a subject for another lab, so for now, we will focus on the parallelization aspect. For the `loop` directive to work properly, it must be contained within either the parallel or kernels directive.For example:

```fortran
!$acc parallel loop
do i=1,N
enddo
```

or

```fortran
!$acc kernels
do i=1,N
enddo
!$acc end kernels
```

When using the `parallel` directive, you must include the loop directive for the code to function properly. When using the `kernels` directive, the loop directive is implied, and does not need to be included.We may also use the loop directive to parallelize multi-dimensional loop nests. Depending on the parallel hardware you are using, you may not be able to achieve multi-loop parallelism. Some parallel hardware is simply limited in its parallel capability, and thus parallelizing inner loops does not offer any extra performance (though it does not hurt the program, either.) In this lab, we are using a multicore CPU as our parallel hardware, and thus, multi-loop parallelization isn't entirely possible. However, when using GPUs (which we will in the next lab!) we can utilize multi-loop parallelism.Either way, this is what multi-loop parallelism looks like:

```fortran
!$acc parallel loop
do i=1,N
   !$acc loop
   do j=1,M
   enddo
enddo
```

The `kernels` directive is also very good at parallelizing nested loops. We can recreate the same code above with the `kernels` directive:

```fortran
!$acc kernels
do i=1,N
   do j=1,M
   enddo
enddo
!$acc end kernels
```

Notice that just like before, we do not need to include the `loop` directive. Parallelizing Our Laplace CodeUsing your knowledge about the parallel, kernels, and loop directives, add OpenACC directives to our laplace code and parallelize it. You may edit the code by selecting the following links: [jacobi.f90](../../../../edit/module3/English/Fortran/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s) ---To compile and run your parallel code on a multicore CPU, run the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
---If at any point you feel that you have made a mistake, and would like to reset the code to how it was originally, you may run the following script:
###Code
!cp ./solutions/sequential/jacobi.f90 ./jacobi.f90 && cp ./solutions/sequential/laplace2d.f90 ./laplace2d.f90 && echo "Reset Complete"
###Output
_____no_output_____
###Markdown
---If at any point you would like to re-run the sequential code to check results/performance, you may run the following script:
###Code
!cd solutions/sequential && pgfortran -fast -o laplace_seq laplace2d.f90 jacobi.f90 && ./laplace_seq
###Output
_____no_output_____
###Markdown
---If you would like to view information about the CPU we are running on, you may run the following script:
###Code
!pgcpuid
###Output
_____no_output_____
###Markdown
Optional: Compiling Multicore CodeKnowing how to compile multicore code is not needed for the completion of this lab. However, it will be useful if you want to parallelize your own personal code later on.**-Minfo** : This flag will give us feedback from the compiler about code optimizations and restrictions. **-Minfo=accel** will only give us feedback regarding our OpenACC parallelizations/optimizations. **-Minfo=all** will give us all possible feedback, including our parallelization/optimizations, sequential code optimizations, and sequential code restrictions. **-ta** : This flag allows us to compile our code for a specific target parallel hardware. Without this flag, the code will be compiled for sequential execution. **-ta=multicore** will allow us to compile our code for a multicore CPU. Optional: Profiling Multicore CodeIf you would like to profile your multicore code with PGPROF, click this link. The **laplace_parallel** executable will be found in /notebooks/Fortran. --- ConclusionIf you would like to check your results, run the following script.
###Code
!cd solutions/multicore && pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
If you would like to view the solution codes, you may use the following links.**Using the Parallel Directive** [jacobi.f90](../../../../edit/module3/English/Fortran/solutions/multicore/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/solutions/multicore/laplace2d.f90) **Using the Kernels Directive** [jacobi.f90](../../../../edit/module3/English/Fortran/solutions/multicore/kernels/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/solutions/multicore/kernels/laplace2d.f90) We are able to parallelize our code for a handful of different hardware by using either the `parallel` or `kernels` directive. We are also able to define additional levels of parallelism by using the `loop` directive inside the parallel/kernels directive. You may also use these directives to parallelize nested loops. There are a few optimizations that we could make to our code at this point, but, for the most part, our multicore code will not get much faster. In the next lab, we will shift our attention to programming for a GPU accelerator, and while learning about GPUs, we will touch on how to handle memory management in OpenACC. --- Bonus Task1. If you chose to use only one of the directives (either parallel or kernels), then go back and use the other one. Compare the runtime of the two versions, and profile both.2. If you would like some additional lessons on using OpenACC to parallelize our code, there is an Introduction to OpenACC video series available from the OpenACC YouTube page. The first two videos in the series covers a lot of the content that was covered in this lab. [Introduction to Parallel Programming with OpenACC - Part 1](https://youtu.be/PxmvTsrCTZg) [Introduction to Parallel Programming with OpenACC - Part 2](https://youtu.be/xSCD4-GV41M)3. As discussed earlier, a multicore accelerator is only able to take advantage of one level of parallelism. However, a GPU can take advantage of more. Make sure to use the skills you learned in the **Loop Directive** section of the lab, and parallelize the multi-dimensional loops in our code. Then run the script below to run the code on a GPU. Compare the results (including compiler feedback) to our multicore implementation.
###Code
!pgfortran -fast -ta=tesla:managed -Minfo=accel -o laplace_gpu laplace2d.f90 jacobi.f90 && ./laplace_gpu
###Output
_____no_output_____
###Markdown
Post-Lab SummaryIf you would like to download this lab for later viewing, it is recommended you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well.You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
###Code
%%bash
rm -f openacc_files.zip
zip -r openacc_files.zip *
###Output
_____no_output_____
###Markdown
OpenACC Directives This version of the lab is intended for Fortran programmers. The C/C++ version of this lab is available [here](../C/README.ipynb). The following timer counts down to a five minute warning before the lab instance shuts down. You should get a pop up at the five minute warning reminding you to save your work! If you are about to run out of time, please see the [Post-Lab](Post-Lab-Summary) section for saving this lab to view offline later. This is the Fortran version of this lab, for the C version [click here](../C/README.ipynb).Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/communityslack) to share your experience and get more help from the community. ---Let's execute the cell below to display information about the GPUs running on the server. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
###Code
!pgaccelinfo
###Output
_____no_output_____
###Markdown
--- IntroductionOur goal for this lab is to learn what exactly code profiling is, and how we can use it to help us write powerful parallel programs. This is the OpenACC 3-Step development cycle.**Analyze** your code, and predict where potential parallelism can be uncovered. Use profiler to help understand what is happening in the code, and where parallelism may exist.**Parallelize** your code, starting with the most time consuming parts. Focus on maintaining correct results from your program.**Optimize** your code, focusing on maximizing performance. Performance may not increase all-at-once during early parallelization.We are currently tackling the **analyze** step. We will use PGI's code profiler to get an understanding of a relatively simple sample code before moving onto the next two steps. Run the CodeOur first step to analyzing this code is to run it. We need to record the results of our program before making any changes so that we can compare them to the results from the parallel code later on. It is also important to record the time that the program takes to run, as this will be our primary indicator to whether or not our parallelization is improving performance.
###Code
!pgfortran -fast -o laplace laplace2d.f90 jacobi.f90 && echo "Compilation Successful!" && ./laplace
###Output
_____no_output_____
###Markdown
Optional: Analyze the CodeIf you would like a refresher on the code files that we are working on, you may view both of them using the two links below.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) Optional: Profile the CodeIf you would like to profile your code with Nsight Systems, please follow the instructions in **[Lab2](../../../module2/English/Fortran/README.ipynbprofilecode)**, and add NVTX to your code to manually instrument the application. --- OpenACC DirectivesUsing OpenACC directives will allow us to parallelize our code without needing to explicitly alter our code. What this means is that, by using OpenACC directives, we can have a single code that will function as both a sequential code and a parallel code. OpenACC Syntax`!$acc ``!$acc` in Fortran is what's known as a "compiler directive." These are very similar to programmer comments, since the line begins with a comment statement `!`. After the comment is `$acc`. OpenACC compliant compilers with appropriate command line options can interpret this as an OpenACC directive that "guides" the compiler, without running the chance of damaging the code. If the compiler does not understand `!$acc` it can ignore it, rather than throw a syntax error because it's just a comment.**directives** are instructions in OpenACC that will tell the compiler to do some action. For now, we will only use directives that allow the compiler to parallelize our code.**clauses** are additions/alterations to our directives. These include (but are not limited to) optimizations. The way that I prefer to think about it: directives describe a general action for our compiler to do (such as, parallelize our code), and clauses allow the programmer to be more specific (such as, how we specifically want the code to be parallelized). --- Parallel DirectiveThere are three directives we will cover in this lab: parallel, kernels, and loop. Once we understand all three of them, you will be tasked with parallelizing our laplace code with your preferred directive (or use all of them, if you'd like!)The parallel directive may be the most straight-forward of the directives. It will mark a region of the code for parallelization (this usually only includes parallelizing a single do loop.) Let's take a look:

```fortran
!$acc parallel loop
do i=1,N
enddo
```

We may also define a "parallel region". The parallel region may have multiple loops (though this is often not recommended!) The parallel region is everything contained within the outer-most loop.

```fortran
!$acc parallel
!$acc loop
do i=1,N
enddo
```

`!$acc parallel loop` will mark the next loop for parallelization. It is extremely important to include the **`loop`**, otherwise you will not be parallelizing the loop properly. The parallel directive tells the compiler to "redundantly parallelize" the code. The `loop` directive specifically tells the compiler that we want the loop parallelized. Let's look at an example of why the loop directive is so important.We are soon going to move on to the next directive (the kernels directive) which also allows us to parallelize our code. We will also mark the differences between these two directives. With that being said, the following information is completely unique to the parallel directive:The parallel directive leaves a lot of decisions up to the programmer. The programmer will decide what is, and isn't, parallelizable. The programmer will also have to provide all of the optimizations - the compiler assumes nothing. 
If any mistakes happen while parallelizing the code, it will be up to the programmer to identify them and correct them.We will soon see how the kernels directive is the exact opposite in all of these regards. Optional: Parallelize our Code with the Parallel DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. However, if you wish to try out the parallel directive *now*, then you may use the following links to edit the laplace code.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Kernels DirectiveThe kernels directive allows the programmer to step back, and rely solely on the compiler. Let's look at the syntax:

```fortran
!$acc kernels
do i=1,N
enddo
!$acc end kernels
```

Just like in the parallel directive example, we are parallelizing a single loop. Recall that when using the parallel directive, it must always be paired with the loop directive, otherwise the code will be improperly parallelized. The kernels directive does not follow the same rule, and in some compilers, adding the loop directive may limit the compiler's ability to optimize the code.In this case you also need to include the statement "!$acc end kernels" so the compiler knows the "scope" of the directive.As said previously, the kernels directive is the exact opposite of the parallel directive. This means that the compiler is making a lot of assumptions, and may even override the programmer's decision to parallelize code. Also, by default, the compiler will attempt to optimize the loop. The compiler is generally pretty good at optimizing loops, and sometimes may be able to optimize the loop in a way that the programmer cannot describe. However, usually, the programmer will be able to achieve better performance by optimizing the loop themselves.If you run into a situation where the compiler refuses to parallelize a loop, you may override the compiler's decision. (however, keep in mind that by overriding the compiler's decision, you are taking responsibility for any mistakes that occur from parallelizing the code!) In this code segment, we are using the independent clause to assure the compiler that we think the loop is parallelizable.

```fortran
!$acc kernels loop independent
do i=1,N
enddo
!$acc end kernels
```

One of the largest advantages of the kernels directive is its ability to parallelize many loops at once. For example, in the following code segment, we are able to effectively parallelize two loops at once by utilizing a kernels region (similar to the parallel region that we saw earlier.) This is done by putting the statement "!$acc end kernels" at the end of the directive region.

```fortran
!$acc kernels
do i=1,N
enddo
do j=1,M
enddo
!$acc end kernels
```

By using the kernels directive, we can parallelize more than one loop (as many loops as we want, actually.) We are also able to include sequential code between the loops, without needing to include multiple directives. Similar to before, let's look at a visual example of how the kernels directive works.Before moving on to our last directive (the loop directive), let's recap what makes the parallel and kernels directive so functionally different.**The parallel directive** gives a lot of control to the programmer. The programmer decides what to parallelize, and how it will be parallelized. Any mistakes made by the parallelization are the fault of the programmer. It is recommended to use a parallel directive for each loop you want to parallelize.**The kernels directive** leaves the majority of the control to the compiler. The compiler will analyze the loops, and decide which ones to parallelize. It may refuse to parallelize certain loops, but the programmer can override this decision. You may use the kernels directive to parallelize large portions of code, and these portions may include multiple loops. Optional: Parallelize our Code with the Kernels DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. 
However, if you wish to try out the kernels directive *now*, then you may use the following links to edit the laplace code. Pay close attention to the compiler feedback, and be prepared to add the *independent* clause to your loops.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Loop DirectiveWe've seen the `loop` directive used and mentioned a few times now; it's time to formally define it. The `loop` directive has two major uses:* Mark a single loop for parallelization * Allow us to explicitly define optimizations/alterations for the loopThe loop optimizations are a subject for another lab, so for now, we will focus on the parallelization aspect. For the `loop` directive to work properly, it must be contained within either the parallel or kernels directive.For example:

```fortran
!$acc parallel loop
do i=1,N
enddo
```

or

```fortran
!$acc kernels
do i=1,N
enddo
!$acc end kernels
```

When using the `parallel` directive, you must include the loop directive for the code to function properly. When using the `kernels` directive, the loop directive is implied, and does not need to be included.We may also use the loop directive to parallelize multi-dimensional loop nests. Depending on the parallel hardware you are using, you may not be able to achieve multi-loop parallelism. Some parallel hardware is simply limited in its parallel capability, and thus parallelizing inner loops does not offer any extra performance (though it does not hurt the program, either.) In this lab, we are using a multicore CPU as our parallel hardware, and thus, multi-loop parallelization isn't entirely possible. However, when using GPUs (which we will in the next lab!) we can utilize multi-loop parallelism.Either way, this is what multi-loop parallelism looks like:

```fortran
!$acc parallel loop
do i=1,N
   !$acc loop
   do j=1,M
   enddo
enddo
```

The `kernels` directive is also very good at parallelizing nested loops. We can recreate the same code above with the `kernels` directive:

```fortran
!$acc kernels
do i=1,N
   do j=1,M
   enddo
enddo
!$acc end kernels
```

Notice that just like before, we do not need to include the `loop` directive. Parallelizing Our Laplace CodeUsing your knowledge about the parallel, kernels, and loop directives, add OpenACC directives to our laplace code and parallelize it. You may edit the code by selecting the following links: [jacobi.f90](../../../../edit/module3/English/Fortran/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s) ---To compile and run your parallel code on a multicore CPU, run the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
---If at any point you feel that you have made a mistake, and would like to reset the code to how it was originally, you may run the following script:
###Code
!cp ./solutions/sequential/jacobi.f90 ./jacobi.f90 && cp ./solutions/sequential/laplace2d.f90 ./laplace2d.f90 && echo "Reset Complete"
###Output
_____no_output_____
###Markdown
---If at any point you would like to re-run the sequential code to check results/performance, you may run the following script:
###Code
!cd solutions/sequential && pgfortran -fast -o laplace_seq laplace2d.f90 jacobi.f90 && ./laplace_seq
###Output
_____no_output_____
###Markdown
---If you would like to view information about the CPU we are running on, you may run the following script:
###Code
!pgcpuid
###Output
_____no_output_____
###Markdown
Optional: Compiling Multicore CodeKnowing how to compile multicore code is not needed for the completion of this lab. However, it will be useful if you want to parallelize your own personal code later on.**-Minfo** : This flag will give us feedback from the compiler about code optimizations and restrictions. **-Minfo=accel** will only give us feedback regarding our OpenACC parallelizations/optimizations. **-Minfo=all** will give us all possible feedback, including our parallelization/optimizations, sequential code optimizations, and sequential code restrictions. **-ta** : This flag allows us to compile our code for a specific target parallel hardware. Without this flag, the code will be compiled for sequential execution. **-ta=multicore** will allow us to compile our code for a multicore CPU. Optional: Profiling Multicore CodeIf you would like to profile your code with Nsight Systems, please follow the instructions in **[Lab2](../../../module2/English/Fortran/README.ipynbprofilecode)**, and add NVTX to your code to manually instrument the application. --- ConclusionIf you would like to check your results, run the following script.
###Code
!cd solutions/multicore && pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
If you would like to view the solution codes, you may use the following links.**Using the Parallel Directive** [jacobi.f90](../../../../edit/module3/English/Fortran/solutions/multicore/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/solutions/multicore/laplace2d.f90) **Using the Kernels Directive** [jacobi.f90](../../../../edit/module3/English/Fortran/solutions/multicore/kernels/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/solutions/multicore/kernels/laplace2d.f90) We are able to parallelize our code for a handful of different hardware by using either the `parallel` or `kernels` directive. We are also able to define additional levels of parallelism by using the `loop` directive inside the parallel/kernels directive. You may also use these directives to parallelize nested loops. There are a few optimizations that we could make to our code at this point, but, for the most part, our multicore code will not get much faster. In the next lab, we will shift our attention to programming for a GPU accelerator, and while learning about GPUs, we will touch on how to handle memory management in OpenACC. --- Bonus Task1. If you chose to use only one of the directives (either parallel or kernels), then go back and use the other one. Compare the runtime of the two versions, and profile both.2. If you would like some additional lessons on using OpenACC to parallelize our code, there is an Introduction to OpenACC video series available from the OpenACC YouTube page. The first two videos in the series covers a lot of the content that was covered in this lab. [Introduction to Parallel Programming with OpenACC - Part 1](https://youtu.be/PxmvTsrCTZg) [Introduction to Parallel Programming with OpenACC - Part 2](https://youtu.be/xSCD4-GV41M)3. As discussed earlier, a multicore accelerator is only able to take advantage of one level of parallelism. However, a GPU can take advantage of more. Make sure to use the skills you learned in the **Loop Directive** section of the lab, and parallelize the multi-dimensional loops in our code. Then run the script below to run the code on a GPU. Compare the results (including compiler feedback) to our multicore implementation.
###Code
!pgfortran -fast -ta=tesla:managed -Minfo=accel -o laplace_gpu laplace2d.f90 jacobi.f90 && ./laplace_gpu
###Output
_____no_output_____
###Markdown
Post-Lab SummaryIf you would like to download this lab for later viewing, it is recommended you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well.You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
###Code
%%bash
rm -f openacc_files.zip
zip -r openacc_files.zip *
###Output
_____no_output_____
###Markdown
OpenACC Directives This version of the lab is intended for Fortran programmers. The C/C++ version of this lab is available [here](../C/README.ipynb). The following timer counts down to a five minute warning before the lab instance shuts down. You should get a pop up at the five minute warning reminding you to save your work! If you are about to run out of time, please see the [Post-Lab](Post-Lab-Summary) section for saving this lab to view offline later. This is the Fortran version of this lab, for the C version [click here](../C/README.ipynb).Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/communityslack) to share your experience and get more help from the community. ---Let's execute the cell below to display information about the GPUs running on the server. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
###Code
!pgaccelinfo
###Output
_____no_output_____
###Markdown
--- IntroductionOur goal for this lab is to learn what exactly code profiling is, and how we can use it to help us write powerful parallel programs. This is the OpenACC 3-Step development cycle.**Analyze** your code, and predict where potential parallelism can be uncovered. Use profiler to help understand what is happening in the code, and where parallelism may exist.**Parallelize** your code, starting with the most time consuming parts. Focus on maintaining correct results from your program.**Optimize** your code, focusing on maximizing performance. Performance may not increase all-at-once during early parallelization.We are currently tackling the **analyze** step. We will use PGI's code profiler to get an understanding of a relatively simple sample code before moving onto the next two steps. Run the CodeOur first step to analyzing this code is to run it. We need to record the results of our program before making any changes so that we can compare them to the results from the parallel code later on. It is also important to record the time that the program takes to run, as this will be our primary indicator to whether or not our parallelization is improving performance.
###Code
!pgfortran -fast -o laplace laplace2d.f90 jacobi.f90 && echo "Compilation Successful!" && ./laplace
###Output
_____no_output_____
###Markdown
Optional: Analyze the CodeIf you would like a refresher on the code files that we are working on, you may view both of them using the two links below.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) Optional: Profile the CodeIf you would like to profile your code with Nsight Systems, please follow the instructions in **[Lab2](../../../module2/English/Fortran/README.ipynbprofilecode)**, and add NVTX to your code to manually instrument the application. --- OpenACC DirectivesUsing OpenACC directives will allow us to parallelize our code without needing to explicitly alter our code. What this means is that, by using OpenACC directives, we can have a single code that will function as both a sequential code and a parallel code. OpenACC Syntax`!$acc ``!$acc` in Fortran is what's known as a "compiler directive." These are very similar to programmer comments, since the line begins with a comment statement `!`. After the comment is `$acc`. OpenACC compliant compilers with appropriate command line options can interpret this as an OpenACC directive that "guides" the compiler, without running the chance of damaging the code. If the compiler does not understand `!$acc` it can ignore it, rather than throw a syntax error because it's just a comment.**directives** are instructions in OpenACC that will tell the compiler to do some action. For now, we will only use directives that allow the compiler to parallelize our code.**clauses** are additions/alterations to our directives. These include (but are not limited to) optimizations. The way that I prefer to think about it: directives describe a general action for our compiler to do (such as, parallelize our code), and clauses allow the programmer to be more specific (such as, how we specifically want the code to be parallelized). --- Parallel DirectiveThere are three directives we will cover in this lab: parallel, kernels, and loop. Once we understand all three of them, you will be tasked with parallelizing our laplace code with your preferred directive (or use all of them, if you'd like!)The parallel directive may be the most straight-forward of the directives. It will mark a region of the code for parallelization (this usually only includes parallelizing a single do loop.) Let's take a look:

```fortran
!$acc parallel loop
do i=1,N
enddo
```

We may also define a "parallel region". The parallel region may have multiple loops (though this is often not recommended!) The parallel region is everything contained within the outer-most loop.

```fortran
!$acc parallel
!$acc loop
do i=1,N
enddo
```

`!$acc parallel loop` will mark the next loop for parallelization. It is extremely important to include the **`loop`**, otherwise you will not be parallelizing the loop properly. The parallel directive tells the compiler to "redundantly parallelize" the code. The `loop` directive specifically tells the compiler that we want the loop parallelized. Let's look at an example of why the loop directive is so important.We are soon going to move on to the next directive (the kernels directive) which also allows us to parallelize our code. We will also mark the differences between these two directives. With that being said, the following information is completely unique to the parallel directive:The parallel directive leaves a lot of decisions up to the programmer. The programmer will decide what is, and isn't, parallelizable. The programmer will also have to provide all of the optimizations - the compiler assumes nothing. 
If any mistakes happen while parallelizing the code, it will be up to the programmer to identify them and correct them.We will soon see how the kernels directive is the exact opposite in all of these regards. Optional: Parallelize our Code with the Parallel DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. However, if you wish to try out the parallel directive *now*, then you may use the following links to edit the laplace code.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Kernels DirectiveThe kernels directive allows the programmer to step back, and rely solely on the compiler. Let's look at the syntax:

```fortran
!$acc kernels
do i=1,N
enddo
!$acc end kernels
```

Just like in the parallel directive example, we are parallelizing a single loop. Recall that when using the parallel directive, it must always be paired with the loop directive, otherwise the code will be improperly parallelized. The kernels directive does not follow the same rule, and in some compilers, adding the loop directive may limit the compiler's ability to optimize the code.In this case you also need to include the statement "!$acc end kernels" so the compiler knows the "scope" of the directive.As said previously, the kernels directive is the exact opposite of the parallel directive. This means that the compiler is making a lot of assumptions, and may even override the programmer's decision to parallelize code. Also, by default, the compiler will attempt to optimize the loop. The compiler is generally pretty good at optimizing loops, and sometimes may be able to optimize the loop in a way that the programmer cannot describe. However, usually, the programmer will be able to achieve better performance by optimizing the loop themselves.If you run into a situation where the compiler refuses to parallelize a loop, you may override the compiler's decision. (however, keep in mind that by overriding the compiler's decision, you are taking responsibility for any mistakes that occur from parallelizing the code!) In this code segment, we are using the independent clause to assure the compiler that we think the loop is parallelizable.

```fortran
!$acc kernels loop independent
do i=1,N
enddo
!$acc end kernels
```

One of the largest advantages of the kernels directive is its ability to parallelize many loops at once. For example, in the following code segment, we are able to effectively parallelize two loops at once by utilizing a kernels region (similar to the parallel region that we saw earlier.) This is done by putting the statement "!$acc end kernels" at the end of the directive region.

```fortran
!$acc kernels
do i=1,N
enddo
do j=1,M
enddo
!$acc end kernels
```

By using the kernels directive, we can parallelize more than one loop (as many loops as we want, actually.) We are also able to include sequential code between the loops, without needing to include multiple directives. Similar to before, let's look at a visual example of how the kernels directive works.Before moving on to our last directive (the loop directive), let's recap what makes the parallel and kernels directive so functionally different.**The parallel directive** gives a lot of control to the programmer. The programmer decides what to parallelize, and how it will be parallelized. Any mistakes made by the parallelization are the fault of the programmer. It is recommended to use a parallel directive for each loop you want to parallelize.**The kernels directive** leaves the majority of the control to the compiler. The compiler will analyze the loops, and decide which ones to parallelize. It may refuse to parallelize certain loops, but the programmer can override this decision. You may use the kernels directive to parallelize large portions of code, and these portions may include multiple loops. Optional: Parallelize our Code with the Kernels DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. 
However, if you wish to try out the kernels directive *now*, then you may use the following links to edit the laplace code. Pay close attention to the compiler feedback, and be prepared to add the *independent* clause to your loops.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Loop DirectiveWe've seen the `loop` directive used and mentioned a few times now; it's time to formally define it. The `loop` directive has two major uses:* Mark a single loop for parallelization * Allow us to explicitly define optimizations/alterations for the loopThe loop optimizations are a subject for another lab, so for now, we will focus on the parallelization aspect. For the `loop` directive to work properly, it must be contained within either the parallel or kernels directive.For example:

```fortran
!$acc parallel loop
do i=1,N
enddo
```

or

```fortran
!$acc kernels
do i=1,N
enddo
!$acc end kernels
```

When using the `parallel` directive, you must include the loop directive for the code to function properly. When using the `kernels` directive, the loop directive is implied, and does not need to be included.We may also use the loop directive to parallelize multi-dimensional loop nests. Depending on the parallel hardware you are using, you may not be able to achieve multi-loop parallelism. Some parallel hardware is simply limited in its parallel capability, and thus parallelizing inner loops does not offer any extra performance (though it does not hurt the program, either.) In this lab, we are using a multicore CPU as our parallel hardware, and thus, multi-loop parallelization isn't entirely possible. However, when using GPUs (which we will in the next lab!) we can utilize multi-loop parallelism.Either way, this is what multi-loop parallelism looks like:

```fortran
!$acc parallel loop
do i=1,N
   !$acc loop
   do j=1,M
   enddo
enddo
```

The `kernels` directive is also very good at parallelizing nested loops. We can recreate the same code above with the `kernels` directive:

```fortran
!$acc kernels
do i=1,N
   do j=1,M
   enddo
enddo
!$acc end kernels
```

Notice that just like before, we do not need to include the `loop` directive. Parallelizing Our Laplace CodeUsing your knowledge about the parallel, kernels, and loop directives, add OpenACC directives to our laplace code and parallelize it. You may edit the code by selecting the following links: [jacobi.f90](../../../../edit/module3/English/Fortran/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s) ---To compile and run your parallel code on a multicore CPU, run the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
---If at any point you feel that you have made a mistake, and would like to reset the code to how it was originally, you may run the following script:
###Code
!cp ./solutions/sequential/jacobi.f90 ./jacobi.f90 && cp ./solutions/sequential/laplace2d.f90 ./laplace2d.f90 && echo "Reset Complete"
###Output
_____no_output_____
###Markdown
---If at any point you would like to re-run the sequential code to check results/performance, you may run the following script:
###Code
!cd solutions/sequential && pgfortran -fast -o laplace_seq laplace2d.f90 jacobi.f90 && ./laplace_seq
###Output
_____no_output_____
###Markdown
---If you would like to view information about the CPU we are running on, you may run the following script:
###Code
!pgcpuid
###Output
_____no_output_____
###Markdown
Optional: Compiling Multicore CodeKnowing how to compile multicore code is not needed for the completion of this lab. However, it will be useful if you want to parallelize your own personal code later on.**-Minfo** : This flag will give us feedback from the compiler about code optimizations and restrictions. **-Minfo=accel** will only give us feedback regarding our OpenACC parallelizations/optimizations. **-Minfo=all** will give us all possible feedback, including our parallelization/optimizations, sequential code optimizations, and sequential code restrictions. **-ta** : This flag allows us to compile our code for a specific target parallel hardware. Without this flag, the code will be compiled for sequential execution. **-ta=multicore** will allow us to compile our code for a multicore CPU. Optional: Profiling Multicore CodeIf you would like to profile your code with Nsight Systems, please follow the instructions in **[Lab2](../../../module2/English/Fortran/README.ipynbprofilecode)**, and add NVTX to your code to manually instrument the application. --- ConclusionIf you would like to check your results, run the following script.
###Code
!cd solutions/multicore && pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
If you would like to view the solution codes, you may use the following links.**Using the Parallel Directive** [jacobi.f90](../Fortran/solutions/multicore/jacobi.f90) [laplace2d.f90](../Fortran/solutions/multicore/laplace2d.f90) **Using the Kernels Directive** [jacobi.f90](../Fortran/solutions/multicore/kernels/jacobi.f90) [laplace2d.f90](../Fortran/solutions/multicore/kernels/laplace2d.f90) We are able to parallelize our code for a handful of different hardware targets by using either the `parallel` or `kernels` directive. We are also able to define additional levels of parallelism by using the `loop` directive inside the parallel/kernels directive. You may also use these directives to parallelize nested loops. There are a few optimizations that we could make to our code at this point, but, for the most part, our multicore code will not get much faster. In the next lab, we will shift our attention to programming for a GPU accelerator, and while learning about GPUs, we will touch on how to handle memory management in OpenACC. --- Bonus Task1. If you chose to use only one of the directives (either parallel or kernels), then go back and use the other one. Compare the runtime of the two versions, and profile both.2. If you would like some additional lessons on using OpenACC to parallelize our code, there is an Introduction to OpenACC video series available from the OpenACC YouTube page. The first two videos in the series cover a lot of the content that was covered in this lab. [Introduction to Parallel Programming with OpenACC - Part 1](https://youtu.be/PxmvTsrCTZg) [Introduction to Parallel Programming with OpenACC - Part 2](https://youtu.be/xSCD4-GV41M)3. As discussed earlier, a multicore accelerator is only able to take advantage of one level of parallelism. However, a GPU can take advantage of more. Make sure to use the skills you learned in the **Loop Directive** section of the lab, and parallelize the multi-dimensional loops in our code. Then run the script below to run the code on a GPU. Compare the results (including compiler feedback) to our multicore implementation.
###Code
!pgfortran -fast -ta=tesla:managed -Minfo=accel -o laplace_gpu laplace2d.f90 jacobi.f90 && ./laplace_gpu
###Output
_____no_output_____
###Markdown
Post-Lab SummaryIf you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well.You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
###Code
%%bash
rm -f openacc_files.zip
zip -r openacc_files.zip *
###Output
_____no_output_____
###Markdown
OpenACC Directives This version of the lab is intended for Fortran programmers. The C/C++ version of this lab is available [here](../C/README.ipynb). The following timer counts down to a five-minute warning before the lab instance shuts down. You should get a pop-up at the five-minute warning reminding you to save your work! If you are about to run out of time, please see the [Post-Lab](#Post-Lab-Summary) section for saving this lab to view offline later.Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/communityslack) to share your experience and get more help from the community. ---Let's execute the cell below to display information about the GPUs running on the server. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
###Code
!pgaccelinfo
###Output
_____no_output_____
###Markdown
--- IntroductionOur goal for this lab is to learn what exactly code profiling is, and how we can use it to help us write powerful parallel programs. This is the OpenACC 3-Step development cycle.**Analyze** your code, and predict where potential parallelism can be uncovered. Use a profiler to help understand what is happening in the code, and where parallelism may exist.**Parallelize** your code, starting with the most time-consuming parts. Focus on maintaining correct results from your program.**Optimize** your code, focusing on maximizing performance. Performance may not increase all-at-once during early parallelization.We are currently tackling the **analyze** step. We will use PGI's code profiler to get an understanding of a relatively simple sample code before moving on to the next two steps. Run the CodeOur first step in analyzing this code is to run it. We need to record the results of our program before making any changes so that we can compare them to the results from the parallel code later on. It is also important to record the time that the program takes to run, as this will be our primary indicator of whether or not our parallelization is improving performance.
###Code
!pgfortran -fast -o laplace laplace2d.f90 jacobi.f90 && echo "Compilation Successful!" && ./laplace
###Output
_____no_output_____
###Markdown
Optional: Analyze the CodeIf you would like a refresher on the code files that we are working on, you may view both of them using the two links below.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) Optional: Profile the CodeIf you would like to profile the sequential code, you may select this link. When prompted for a password, type `openacc`. This will open a noVNC window; you may then use PGPROF to profile our sequential laplace code. The executable will be found in the `/home/openacc/labs/module3/English/Fortran` directory. --- OpenACC DirectivesUsing OpenACC directives will allow us to parallelize our code without needing to explicitly alter our code. What this means is that, by using OpenACC directives, we can have a single code that will function as both a sequential code and a parallel code. OpenACC Syntax`!$acc ``!$acc` in Fortran is what's known as a "compiler directive." These are very similar to programmer comments, since the line begins with a comment statement `!`. After the comment is `$acc`. OpenACC compliant compilers with appropriate command line options can interpret this as an OpenACC directive that "guides" the compiler, without running the chance of damaging the code. If the compiler does not understand `!$acc`, it can ignore it rather than throw a syntax error, because it's just a comment.**directives** are instructions in OpenACC that will tell the compiler to do some action. For now, we will only use directives that allow the compiler to parallelize our code.**clauses** are additions/alterations to our directives. These include (but are not limited to) optimizations. The way that I prefer to think about it: directives describe a general action for our compiler to do (such as parallelize our code), and clauses allow the programmer to be more specific (such as how we specifically want the code to be parallelized). --- Parallel DirectiveThere are three directives we will cover in this lab: parallel, kernels, and loop. Once we understand all three of them, you will be tasked with parallelizing our laplace code with your preferred directive (or use all of them, if you'd like!)The parallel directive may be the most straightforward of the directives. It will mark a region of the code for parallelization (this usually only includes parallelizing a single loop). Let's take a look:```fortran !$acc parallel loop do i=1,N enddo```We may also define a "parallel region". The parallel region may have multiple loops (though this is often not recommended!) The parallel region is everything contained within the outer-most loop.```fortran!$acc parallel !$acc loop do i=1,N enddo!$acc end parallel````!$acc parallel loop` will mark the next loop for parallelization. It is extremely important to include the **`loop`**, otherwise you will not be parallelizing the loop properly. The parallel directive tells the compiler to "redundantly parallelize" the code. The `loop` directive specifically tells the compiler that we want the loop parallelized. Let's look at an example of why the loop directive is so important.We are soon going to move on to the next directive (the kernels directive) which also allows us to parallelize our code. We will also mark the differences between these two directives. With that being said, the following information is completely unique to the parallel directive:The parallel directive leaves a lot of decisions up to the programmer. The programmer will decide what is, and isn't, parallelizable. 
The programmer will also have to provide all of the optimizations - the compiler assumes nothing. If any mistakes happen while parallelizing the code, it will be up to the programmer to identify them and correct them.We will soon see how the kernels directive is the exact opposite in all of these regards. Optional: Parallelize our Code with the Parallel DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. However, if you wish to try out the parallel directive *now*, then you may use the following links to edit the laplace code.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Kernels DirectiveThe kernels directive allows the programmer to step back, and rely solely on the compiler. Let's look at the syntax:```fortran!$acc kernelsdo i=1,N enddo!$acc end kernels```Just like in the parallel directive example, we are parallelizing a single loop. Recall that when using the parallel directive, it must always be paired with the loop directive, otherwise the code will be improperly parallelized. The kernels directive does not follow the same rule, and in some compilers, adding the loop directive may limit the compiler's ability to optimize the code.In this case you also need to include the statement "!$acc end kernels" so the compiler knows the "scope" of the directive.As said previously, the kernels directive is the exact opposite of the parallel directive. This means that the compiler is making a lot of assumptions, and may even override the programmer's decision to parallelize code. Also, by default, the compiler will attempt to optimize the loop. The compiler is generally pretty good at optimizing loops, and sometimes may be able to optimize the loop in a way that the programmer cannot describe. However, usually, the programmer will be able to achieve better performance by optimizing the loop themselves.If you run into a situation where the compiler refuses to parallelize a loop, you may override the compiler's decision. (However, keep in mind that by overriding the compiler's decision, you are taking responsibility for any mistakes that occur from parallelizing the code!) In this code segment, we are using the independent clause to assure the compiler that we think the loop is parallelizable.```fortran!$acc kernels loop independentdo i=1,N enddo!$acc end kernels```One of the largest advantages of the kernels directive is its ability to parallelize many loops at once. For example, in the following code segment, we are able to effectively parallelize two loops at once by utilizing a kernels region (similar to the parallel region that we saw earlier). This is done by putting the statement "!$acc end kernels" at the end of the directive region.```fortran!$acc kernelsdo i=1,N enddo do j=1,M enddo!$acc end kernels```By using the kernels directive, we can parallelize more than one loop (as many loops as we want, actually). We are also able to include sequential code between the loops, without needing to include multiple directives. Similar to before, let's look at a visual example of how the kernels directive works.Before moving on to our last directive (the loop directive), let's recap what makes the parallel and kernels directives so functionally different.**The parallel directive** gives a lot of control to the programmer. The programmer decides what to parallelize, and how it will be parallelized. Any mistakes made in the parallelization are the fault of the programmer. It is recommended to use a parallel directive for each loop you want to parallelize.**The kernels directive** leaves the majority of the control to the compiler. The compiler will analyze the loops, and decide which ones to parallelize. It may refuse to parallelize certain loops, but the programmer can override this decision. You may use the kernels directive to parallelize large portions of code, and these portions may include multiple loops. Optional: Parallelize our Code with the Kernels DirectiveIt is recommended that you learn all three of the directives prior to altering the laplace code. 
However, if you wish to try out the kernels directive *now*, then you may use the following links to edit the laplace code. Pay close attention to the compiler feedback, and be prepared to add the *independent* clause to your loops.[jacobi.f90](../Fortran/jacobi.f90) [laplace2d.f90](../Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s)You may run your code by running the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
--- Loop DirectiveWe've seen the `loop` directive used and mentioned a few times now; it's time to formally define it. The `loop` directive has two major uses:* Mark a single loop for parallelization * Allow us to explicitly define optimizations/alterations for the loopThe loop optimizations are a subject for another lab, so for now, we will focus on the parallelization aspect. For the `loop` directive to work properly, it must be contained within either the parallel or kernels directive.For example:```fortran!$acc parallel loopdo i=1,N enddo```or ```fortran!$acc kernelsdo i=1,N enddo!$acc end kernels```When using the `parallel` directive, you must include the loop directive for the code to function properly. When using the `kernels` directive, the loop directive is implied, and does not need to be included.We may also use the loop directive to parallelize multi-dimensional loop nests. Depending on the parallel hardware you are using, you may not be able to achieve multi-loop parallelism. Some parallel hardware is simply limited in its parallel capability, and thus parallelizing inner loops does not offer any extra performance (though it does not hurt the program, either). In this lab, we are using a multicore CPU as our parallel hardware, and thus multi-loop parallelization isn't entirely possible. However, when using GPUs (which we will in the next lab!) we can utilize multi-loop parallelism.Either way, this is what multi-loop parallelism looks like:```fortran!$acc parallel loopdo i=1,N !$acc loop do j=1,M enddoenddo```The `kernels` directive is also very good at parallelizing nested loops. We can recreate the same code above with the `kernels` directive:```fortran!$acc kernelsdo i=1,N do j=1,M enddoenddo!$acc end kernels```Notice that just like before, we do not need to include the `loop` directive. Parallelizing Our Laplace CodeUsing your knowledge about the parallel, kernels, and loop directives, add OpenACC directives to our laplace code and parallelize it. You may edit the code by selecting the following links: [jacobi.f90](../../../../edit/module3/English/Fortran/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/laplace2d.f90) (be sure to save the changes you make by pressing ctrl+s) ---To compile and run your parallel code on a multicore CPU, run the following script:
###Code
!pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
---If at any point you feel that you have made a mistake, and would like to reset the code to how it was originally, you may run the following script:
###Code
!cp ./solutions/sequential/jacobi.f90 ./jacobi.f90 && cp ./solutions/sequential/laplace2d.f90 ./laplace2d.f90 && echo "Reset Complete"
###Output
_____no_output_____
###Markdown
---If at any point you would like to re-run the sequential code to check results/performance, you may run the following script:
###Code
!cd solutions/sequential && pgfortran -fast -o laplace_seq laplace2d.f90 jacobi.f90 && ./laplace_seq
###Output
_____no_output_____
###Markdown
---If you would like to view information about the CPU we are running on, you may run the following script:
###Code
!pgcpuid
###Output
_____no_output_____
###Markdown
Optional: Compiling Multicore CodeKnowing how to compile multicore code is not needed for the completion of this lab. However, it will be useful if you want to parallelize your own personal code later on.**-Minfo** : This flag will give us feedback from the compiler about code optimizations and restrictions. **-Minfo=accel** will only give us feedback regarding our OpenACC parallelizations/optimizations. **-Minfo=all** will give us all possible feedback, including our parallelization/optimizations, sequential code optimizations, and sequential code restrictions. **-ta** : This flag allows us to compile our code for a specific target parallel hardware. Without this flag, the code will be compiled for sequential execution. **-ta=multicore** will allow us to compile our code for a multicore CPU. Optional: Profiling Multicore CodeIf you would like to profile your multicore code with PGPROF, click this link. The **laplace_parallel** executable will be found in /notebooks/Fortran. --- ConclusionIf you would like to check your results, run the following script.
###Code
!cd solutions/multicore && pgfortran -fast -ta=multicore -Minfo=accel -o laplace_parallel laplace2d.f90 jacobi.f90 && ./laplace_parallel
###Output
_____no_output_____
###Markdown
If you would like to view the solution codes, you may use the following links.**Using the Parallel Directive** [jacobi.f90](../../../../edit/module3/English/Fortran/solutions/multicore/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/solutions/multicore/laplace2d.f90) **Using the Kernels Directive** [jacobi.f90](../../../../edit/module3/English/Fortran/solutions/multicore/kernels/jacobi.f90) [laplace2d.f90](../../../../edit/module3/English/Fortran/solutions/multicore/kernels/laplace2d.f90) We are able to parallelize our code for a handful of different hardware targets by using either the `parallel` or `kernels` directive. We are also able to define additional levels of parallelism by using the `loop` directive inside the parallel/kernels directive. You may also use these directives to parallelize nested loops. There are a few optimizations that we could make to our code at this point, but, for the most part, our multicore code will not get much faster. In the next lab, we will shift our attention to programming for a GPU accelerator, and while learning about GPUs, we will touch on how to handle memory management in OpenACC. --- Bonus Task1. If you chose to use only one of the directives (either parallel or kernels), then go back and use the other one. Compare the runtime of the two versions, and profile both.2. If you would like some additional lessons on using OpenACC to parallelize our code, there is an Introduction to OpenACC video series available from the OpenACC YouTube page. The first two videos in the series cover a lot of the content that was covered in this lab. [Introduction to Parallel Programming with OpenACC - Part 1](https://youtu.be/PxmvTsrCTZg) [Introduction to Parallel Programming with OpenACC - Part 2](https://youtu.be/xSCD4-GV41M)3. As discussed earlier, a multicore accelerator is only able to take advantage of one level of parallelism. However, a GPU can take advantage of more. Make sure to use the skills you learned in the **Loop Directive** section of the lab, and parallelize the multi-dimensional loops in our code. Then run the script below to run the code on a GPU. Compare the results (including compiler feedback) to our multicore implementation.
###Code
!pgfortran -fast -ta=tesla:managed -Minfo=accel -o laplace_gpu laplace2d.f90 jacobi.f90 && ./laplace_gpu
###Output
_____no_output_____
###Markdown
Post-Lab SummaryIf you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well.You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
###Code
%%bash
rm -f openacc_files.zip
zip -r openacc_files.zip *
###Output
_____no_output_____ |
Machine_Learning/.ipynb_checkpoints/LDA-checkpoint.ipynb | ###Markdown
LDA TutorialTaken from https://www.machinelearningplus.com/nlp/topic-modeling-python-sklearn-examples/
###Code
import numpy as np
import pandas as pd
import re, nltk, spacy, gensim
from sklearn.decomposition import LatentDirichletAllocation,TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from pprint import pprint
import pyLDAvis
import pyLDAvis.sklearn
import matplotlib.pyplot as plt
%matplotlib inline
# Import Dataset 20-Newsgroups Dataset
df = pd.read_json('https://raw.githubusercontent.com/selva86/datasets/master/newsgroups.json')
print(df.target_names.unique())
df.head()
##remove emails and new line characters
data = df.content.values.tolist()
# Remove Emails
data = [re.sub(r'[\w.-]+@[\w.-]+\.\w+', '', sent) for sent in data]
# Remove new line characters
data = [re.sub(r'\n+', ' ', sent) for sent in data]
# Remove distracting single quotes
data = [re.sub("\'", "", sent) for sent in data]
pprint(data[:1])
## deacc remove punctuations
def sent_to_words(sentences):
    for sentence in sentences:
        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))
data_words = list(sent_to_words(data))
print(data_words[:1])
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""https://spacy.io/api/annotation"""
texts_out = []
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append(" ".join([token.lemma_ if token.lemma_ not in ['-PRON-'] else '' for token in doc if token.pos_ in allowed_postags]))
return texts_out
# Initialize spacy 'en' model, keeping only tagger component (for efficiency)
# Run in terminal: python3 -m spacy download en
nlp = spacy.load('en', disable=['parser', 'ner'])
# Do lemmatization keeping only Noun, Adj, Verb, Adverb
data_lemmatized = lemmatization(data_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
print(data_lemmatized[:2])
vectorizer = CountVectorizer(analyzer='word',
                             min_df=10,                        # minimum required occurrences of a word
stop_words='english', # remove stop words
lowercase=True, # convert all words to lowercase
token_pattern='[a-zA-Z0-9]{3,}', # num chars > 3
# max_features=50000, # max number of uniq words
)
data_vectorized = vectorizer.fit_transform(data_lemmatized)
# Materialize the sparse data
data_dense = data_vectorized.todense()
# Compute Sparsity = Percentage of Non-Zero cells
print("Sparsity: ", ((data_dense > 0).sum()/data_dense.size)*100, "%")
# Build LDA Model
lda_model = LatentDirichletAllocation(n_components=20, # Number of components
max_iter=10, # Max learning iterations
learning_method='batch',
random_state=100, # Random state
batch_size=128, # n docs in each learning iter
evaluate_every = -1, # compute perplexity every n iters, default: Don't
n_jobs = -1, # Use all available CPUs
)
lda_output = lda_model.fit_transform(data_vectorized)
print(lda_model) # Model attributes
# Log Likelihood: Higher the better
print("Log Likelihood: ", lda_model.score(data_vectorized))
# Perplexity: Lower the better. Perplexity = exp(-1. * log-likelihood per word)
print("Perplexity: ", lda_model.perplexity(data_vectorized))
# See model parameters
pprint(lda_model.get_params())
###Output
Log Likelihood: -9966052.223646708
Perplexity: 2040.6879775616724
{'batch_size': 128,
'doc_topic_prior': None,
'evaluate_every': -1,
'learning_decay': 0.7,
'learning_method': 'online',
'learning_offset': 10.0,
'max_doc_update_iter': 100,
'max_iter': 10,
'mean_change_tol': 0.001,
'n_components': 20,
'n_jobs': -1,
'n_topics': None,
'perp_tol': 0.1,
'random_state': 100,
'topic_word_prior': None,
'total_samples': 1000000.0,
'verbose': 0}
###Markdown
Grid Search to Optimize ParametersThe most important tuning parameter for LDA models is n_components (the number of topics). Besides this, other possible search parameters are learning_offset (which down-weights early iterations; it should be > 1) and max_iter.
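If you want the search to also cover those extra parameters, a broader grid might look like the sketch below; `learning_offset` and `max_iter` are real `LatentDirichletAllocation` arguments, but the candidate values here are only illustrative and will make the grid search considerably slower:
```python
# Hypothetical wider search space covering the parameters mentioned above.
search_params_extended = {
    'n_components': [10, 15, 20, 25, 30],
    'learning_decay': [.5, .7, .9],
    'learning_offset': [10., 50.],   # down-weights early iterations; must be > 1
    'max_iter': [5, 10, 15],
}
# model = GridSearchCV(LatentDirichletAllocation(), param_grid=search_params_extended)
# model.fit(data_vectorized)
```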
###Code
# Define Search Param
#search_params = {'n_components': [10, 15, 20, 25, 30], 'learning_decay': [.5, .7, .9], 'max_iter': [5, 10, 15]}
search_params = {'n_components': [10, 15, 20], 'learning_decay': [.5, .7, .9]}  # include .5 so the 0.5 curve plotted below has data
# Init the Model
lda = LatentDirichletAllocation()
# Init Grid Search Class
model = GridSearchCV(lda, param_grid=search_params)
# Do the Grid Search
model.fit(data_vectorized)
# Best Model
best_lda_model = model.best_estimator_
# Model Parameters
print("Best Model's Params: ", model.best_params_)
# Log Likelihood Score
print("Best Log Likelihood Score: ", model.best_score_)
# Perplexity
print("Model Perplexity: ", best_lda_model.perplexity(data_vectorized))
# Get Log Likelihoods from Grid Search Output
n_topics = [10, 15, 20]
log_likelyhoods_5 = [round(gscore.mean_validation_score) for gscore in model.grid_scores_ if gscore.parameters['learning_decay']==0.5]
log_likelyhoods_7 = [round(gscore.mean_validation_score) for gscore in model.grid_scores_ if gscore.parameters['learning_decay']==0.7]
log_likelyhoods_9 = [round(gscore.mean_validation_score) for gscore in model.grid_scores_ if gscore.parameters['learning_decay']==0.9]
# Show graph
plt.figure(figsize=(12, 8))
plt.plot(n_topics, log_likelyhoods_5, label='0.5')
plt.plot(n_topics, log_likelyhoods_7, label='0.7')
plt.plot(n_topics, log_likelyhoods_9, label='0.9')
plt.title("Choosing Optimal LDA Model")
plt.xlabel("Num Topics")
plt.ylabel("Log Likelyhood Scores")
plt.legend(title='Learning decay', loc='best')
plt.show()
# Create Document - Topic Matrix
lda_output = best_lda_model.transform(data_vectorized)
# column names
topicnames = ["Topic" + str(i) for i in range(best_lda_model.n_topics)]
# index names
docnames = ["Doc" + str(i) for i in range(len(data))]
# Make the pandas dataframe
df_document_topic = pd.DataFrame(np.round(lda_output, 2), columns=topicnames, index=docnames)
# Get dominant topic for each document
dominant_topic = np.argmax(df_document_topic.values, axis=1)
df_document_topic['dominant_topic'] = dominant_topic
# Styling
def color_green(val):
color = 'green' if val > .1 else 'black'
return 'color: {col}'.format(col=color)
def make_bold(val):
weight = 700 if val > .1 else 400
return 'font-weight: {weight}'.format(weight=weight)
# Apply Style
df_document_topics = df_document_topic.head(15).style.applymap(color_green).applymap(make_bold)
df_document_topics
df_topic_distribution = df_document_topic['dominant_topic'].value_counts().reset_index(name="Num Documents")
df_topic_distribution.columns = ['Topic Num', 'Num Documents']
df_topic_distribution
pyLDAvis.enable_notebook()
panel = pyLDAvis.sklearn.prepare(best_lda_model, data_vectorized, vectorizer, mds='tsne')
panel
# Topic-Keyword Matrix
df_topic_keywords = pd.DataFrame(best_lda_model.components_)
# Assign Column and Index
df_topic_keywords.columns = vectorizer.get_feature_names()
df_topic_keywords.index = topicnames
# View
df_topic_keywords.head()
# Show top n keywords for each topic
def show_topics(vectorizer=vectorizer, lda_model=lda_model, n_words=20):
keywords = np.array(vectorizer.get_feature_names())
topic_keywords = []
for topic_weights in lda_model.components_:
top_keyword_locs = (-topic_weights).argsort()[:n_words]
topic_keywords.append(keywords.take(top_keyword_locs))
return topic_keywords
topic_keywords = show_topics(vectorizer=vectorizer, lda_model=best_lda_model, n_words=15)
# Topic - Keywords Dataframe
df_topic_keywords = pd.DataFrame(topic_keywords)
df_topic_keywords.columns = ['Word '+str(i) for i in range(df_topic_keywords.shape[1])]
df_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])]
df_topic_keywords
# Define function to predict topic for a given text document.
nlp = spacy.load('en', disable=['parser', 'ner'])
def predict_topic(text, nlp=nlp):
global sent_to_words
global lemmatization
# Step 1: Clean with simple_preprocess
mytext_2 = list(sent_to_words(text))
# Step 2: Lemmatize
mytext_3 = lemmatization(mytext_2, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
# Step 3: Vectorize transform
mytext_4 = vectorizer.transform(mytext_3)
# Step 4: LDA Transform
topic_probability_scores = best_lda_model.transform(mytext_4)
topic = df_topic_keywords.iloc[np.argmax(topic_probability_scores), :].values.tolist()
return topic, topic_probability_scores
# Predict the topic
mytext = ["Some text about christianity and bible"]
topic, prob_scores = predict_topic(text = mytext)
print(topic)
# Construct the k-means clusters
from sklearn.cluster import KMeans
clusters = KMeans(n_clusters=15, random_state=100).fit_predict(lda_output)
# Build the Singular Value Decomposition(SVD) model
svd_model = TruncatedSVD(n_components=2) # 2 components
lda_output_svd = svd_model.fit_transform(lda_output)
# X and Y axes of the plot using SVD decomposition
x = lda_output_svd[:, 0]
y = lda_output_svd[:, 1]
# Weights for the 15 columns of lda_output, for each component
print("Component's weights: \n", np.round(svd_model.components_, 2))
# Percentage of total information in 'lda_output' explained by the two components
print("Perc of Variance Explained: \n", np.round(svd_model.explained_variance_ratio_, 2))
# Plot
plt.figure(figsize=(12, 12))
plt.scatter(x, y, c=clusters)
plt.xlabel('Component 1')
plt.ylabel('Component 2')
plt.title("Segregation of Topic Clusters", )
from sklearn.metrics.pairwise import euclidean_distances
nlp = spacy.load('en', disable=['parser', 'ner'])
def similar_documents(text, doc_topic_probs, documents = data, nlp=nlp, top_n=5, verbose=False):
topic, x = predict_topic(text)
dists = euclidean_distances(x.reshape(1, -1), doc_topic_probs)[0]
doc_ids = np.argsort(dists)[:top_n]
if verbose:
print("Topic KeyWords: ", topic)
print("Topic Prob Scores of text: ", np.round(x, 1))
print("Most Similar Doc's Probs: ", np.round(doc_topic_probs[doc_ids], 1))
return doc_ids, np.take(documents, doc_ids)
# Get similar documents
mytext = ["Some text about christianity and bible"]
doc_ids, docs = similar_documents(text=mytext, doc_topic_probs=lda_output, documents = data, top_n=1, verbose=True)
print('\n', docs[0][:500])
###Output
_____no_output_____ |
notebooks/rm_duplicates.ipynb | ###Markdown
rm duplicates
###Code
import pandas
dat_file = '../data/interim/EPIv6.tab'
df = pandas.read_csv(dat_file, sep='\t')
df.head()
new_cols = ['input_var', 'clin_class', 'pos_fam', 'neg_fam', 'hom_fam']
def process_multi(r):
rr = r[r.clin_class=='PATHOGENIC']
if len(rr) == 1:
return rr[new_cols]
rr = r[r.clin_class=='BENIGN']
if len(rr) == 1:
return rr[new_cols]
r.loc[:, 'pos_sum'] = r.apply(lambda row: row['hom_fam'] + row['pos_fam'], axis=1)
max_pos = max(r['pos_sum'].values)
rr = r[r.pos_sum==max_pos]
if len(rr)==1:
return rr[new_cols]
else:
        raise ValueError('could not choose a single representative row for this variant')
def choose_one(rows):
if len(rows) == 1:
return rows[new_cols]
else:
r = rows[rows.clin_class != 'VUS']
if not r.empty:
if len(r) == 1:
return r[new_cols]
else:
return process_multi(r)
else:
# only vus
            return process_multi(rows)
g_cols = ('chrom', 'pos', 'ref', 'alt')
d = df.groupby(g_cols).apply(choose_one).reset_index()
d[d.pos==99662302]
###Output
_____no_output_____ |
data-stories/COVID-19/Nate_and_Frank.ipynb | ###Markdown
SetupThis demonstration notebook provides a suggested set of libraries that you might find useful in crafting your data stories. You should comment out or delete libraries that you don't use in your analysis.
###Code
!pip install twitterscraper
!pip install yahoo-finance
!pip install hypertools
!pip install flair
#number crunching
import numpy as np
import pandas as pd
#data import
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import pandas_datareader as pdr
#web scraping
# from twitterscraper import query_tweets
# from twitterscraper.query import query_tweets_once as query_tweets_advanced
# from yahoo_finance import Share
#data visualization
import plotly
import plotly.express as px
import seaborn as sns
import bokeh as bk
from matplotlib import pyplot as plt
import plotnine as pn
import hypertools as hyp
import folium as fm
from mpl_toolkits.mplot3d import Axes3D
#machine learning and stats
import scipy as sp
import sklearn as sk
import tensorflow as tf
import statsmodels.api as sm
#text analysis
import nltk
import textblob as tb
from flair.embeddings import WordEmbeddings, CharacterEmbeddings, StackedEmbeddings, FlairEmbeddings, BertEmbeddings, ELMoEmbeddings, DocumentPoolEmbeddings
from flair.data import Sentence
nltk.download('brown')
nltk.download('punkt')
###Output
/usr/local/lib/python3.6/dist-packages/pandas_datareader/compat/__init__.py:7: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
from pandas.util.testing import assert_frame_equal
/usr/local/lib/python3.6/dist-packages/hypertools/plot/__init__.py:10: UserWarning:
Could not switch backend to TkAgg. This may impact performance of the plotting functions.
###Markdown
Google authenticationRun the next cell to enable use of your Google credentials in uploading and downloading data via Google Drive. See tutorial [here](https://colab.research.google.com/notebooks/io.ipynbscrollTo=P3KX0Sm0E2sF) for interacting with data via Google services.
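For example, once the `drive` client below has been created, files can be pushed to or pulled from your Drive along these lines (a minimal PyDrive sketch; the file names are hypothetical placeholders, not files this project actually uses):
```python
# Assumes the `drive` object from the authentication cell below; file names are hypothetical.
# Upload a local file to Google Drive:
up = drive.CreateFile({'title': 'covid_results.csv'})
up.SetContentFile('covid_results.csv')
up.Upload()

# Download a Drive file by title (takes the first match, if any):
matches = drive.ListFile({'q': "title = 'my_data.csv' and trashed = false"}).GetList()
if matches:
    matches[0].GetContentFile('my_data.csv')
```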
###Code
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
###Output
_____no_output_____
###Markdown
Project teamNate Stuart: Did most of codeFranklin Goldszer: Analyzed and formulated story Background and overviewIntroduce your question and motivation here. Link to other resources or related work as appropriate. ApproachBriefly describe (at a high level) the approach you'll be taking to answer or explore your question in this notebook. Quick summaryBriefly describe your key findings at a high level. DataBriefly describe your dataset(s), including links to original sources. Provide any relevant background information specific to your data sources.Link to New York Times COVID Data: https://github.com/nytimes/covid-19-data
###Code
# Provide code for downloading or importing your data here
covid_data = pd.read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv")
###Output
_____no_output_____
###Markdown
AnalysisBriefly describe each step of your analysis, followed by the code implementing that part of the analysis and/or producing the relevant figures. (Copy this text block and the following code block as many times as are needed.)
###Code
# Provide code for carrying out the part of your analysis described
# in the previous text block. Any statistics or figures should be displayed
# in the notebook.
def get_county(county_name, state_name):
data = covid_data[(covid_data.county == county_name) & (covid_data.state == state_name)]
print(county_name + ' county data points: ' + str(len(data)))
plt = px.scatter(data_frame = data, x = 'date', y = 'cases', log_x = False, log_y = False, title = county_name + ', ' + state_name)
return data, plt
%matplotlib inline
md_data, md_plt = get_county('Miami-Dade', 'Florida')
md_plt.show()
%matplotlib inline
b_data, b_plt = get_county('Broward', 'Florida')
b_plt.show()
%matplotlib inline
o_data, o_plt = get_county('Orleans', 'Louisiana')
o_plt.show()
%matplotlib inline
j_data, j_plt = get_county('Jefferson', 'Louisiana')
j_plt.show()
%matplotlib inline
s_data, s_plt = get_county('San Francisco', 'California')
s_plt.show()
###Output
San Francisco county data points: 93
###Markdown
Interpretations and conclusionsDescribe and discuss your findings and say how they answer your question (or how they failed to answer your question). Also describe the current state of your project-- e.g., is this a "complete" story, or is further exploration needed? Future directionsDescribe some open questions or tasks that another interested individual or group might be able to pick up from, using your work as a starting point.
###Code
###Output
_____no_output_____ |
notebook/turbulence/run/run_ns2dspectral.ipynb | ###Markdown
Kelvin-Helmholtz simulation
###Code
# prepare the data io object
dio = pp.DataIO(os.path.join(output_root_folder, 'KH2D'))
dio.clean_all()
pi2 = 2.0 * np.pi
Mx = 256
My = Mx
Mx_da = None
My_da = Mx_da
Re = 10000
ns2d = navier_stokes_2d.NS2dSpectral(pi2, pi2, Mx, My, 1/Re, 1, Mx_da=Mx_da, My_da=My_da, FFTW_threads=6)
# initialise
mtx_vor = navier_stokes_2d.init_kevin_helmoltz_vorticity_periodic(Mx=Mx, My=My)
mtx_vor_k = ns2d.fft2d(mtx_vor)
T = 1000
t = 0.0
for n in range(T):
if n % 20 == 0:
print('\r time step (n): ', str(n), end='')
ns2d.update_dt_step_k(mtx_vor_k)
mtx_vor = ns2d.ifft2d(mtx_vor_k)
dio.save_data(str(T+n), mtx_vor, {'t': t})
mtx_vor_k = ns2d.march_forward_k(mtx_vor_k)
t += ns2d.dt_step
pi2 = 2.0 * np.pi
Mx = 256
My = Mx
Mx_da = None
My_da = Mx_da
Re = 10000
ns2d = navier_stokes_2d.NS2dSpectral(pi2, pi2, Mx, My, 1/Re, 1, Mx_da=Mx_da, My_da=My_da, FFTW_threads=6)
dio = pp.DataIO(os.path.join(output_root_folder, 'KH2D'))
mtx_vor, kw_atts = dio.read_data(str(1200))
mtx_psi = ns2d.get_psi(mtx_vor)
mtx_u, mtx_v = ns2d.get_uv_from_psi(mtx_psi)
plt.figure(figsize=(10,10))
plt.streamplot(ns2d.mtx_x, ns2d.mtx_y, mtx_u, mtx_v, linewidth=0.5)
pp.make_image_2d(mtx_vor, ns2d.mtx_x, ns2d.mtx_y)
#plt.colorbar()
plt.savefig(os.path.join(dio.fig_folder, 'KH.png'))
plt.show()
###Output
_____no_output_____
###Markdown
Turbulence
###Code
# prepare the data io object
dio = pp.DataIO(os.path.join(output_root_folder, 'Turb2D'))
dio.clean_all()
pi2 = 2.0 * np.pi
Mx = 256
My = Mx
Mx_da = None
My_da = Mx_da
Re = 10000
ns2d = navier_stokes_2d.NS2dSpectral(pi2, pi2, Mx, My, 1/Re, 1, Mx_da=Mx_da, My_da=My_da, FFTW_threads=6)
# initialise
mtx_vor = navier_stokes_2d.init_random_periodic(Mx=Mx, My=My)
mtx_vor_k = ns2d.fft2d(mtx_vor)
mtx_vor_k /= ns2d.get_energy(mtx_vor_k)
512 * 2 / 3  # ~341: presumably the 2/3 de-aliasing rule for Mx_da = 512, which motivates Mx = 340 below
# prepare the data io object
dio = pp.DataIO(os.path.join(output_root_folder, 'Turb2D_E1'))
dio.clean_all()
pi2 = 2.0 * np.pi
Mx = 340
My = Mx
Mx_da = 512
My_da = Mx_da
Re = 10000
ns2d = navier_stokes_2d.NS2dSpectral(pi2, pi2, Mx, My, 1/Re, 1, Mx_da=Mx_da, My_da=My_da, FFTW_threads=6)
# initialise
mtx_vor = navier_stokes_2d.init_random_periodic(Mx=Mx, My=My, k0=50)
mtx_vor_k = ns2d.fft2d(mtx_vor)
mtx_vor_k /= np.sqrt(ns2d.get_energy(mtx_vor_k)) # make it a unit energy
mtx_vor_k_0 = mtx_vor_k.copy()
T = 10000
t = 0.0
for n in range(T):
if n % 200 == 0:
print('\r time step (n): ', str(n), end='')
ns2d.update_dt_step_k(mtx_vor_k)
mtx_vor = ns2d.ifft2d(mtx_vor_k)
dio.save_data(str(T+n), mtx_vor, {'t': t})
mtx_vor_k = ns2d.march_forward_k(mtx_vor_k)
t += ns2d.dt_step
dio = pp.DataIO(os.path.join(output_root_folder, 'Turb2D_E1'))
pi2 = 2.0 * np.pi
Mx = 340
My = Mx
Mx_da = 512
My_da = Mx_da
Re = 10000
ns2d = navier_stokes_2d.NS2dSpectral(pi2, pi2, Mx, My, 1/Re, 1, Mx_da=Mx_da, My_da=My_da, FFTW_threads=6)
mtx_vor, kw_atts = dio.read_data(str(10000 + 1800))
mtx_vor_k = ns2d.fft2d(mtx_vor)
plt.figure(figsize=(10,10))
pp.make_image_2d(mtx_vor, ns2d.mtx_x, ns2d.mtx_y, 'RdBu_r')
plt.savefig(os.path.join(dio.fig_folder, 'Turb2D.png'))
plt.show()
l_t = [0, 9800]
for t in l_t:
mtx_vor, kw_atts = dio.read_data(str(T + t))
mtx_vor_k = ns2d.fft2d(mtx_vor)
print('energy', ns2d.get_energy(mtx_vor_k))
ns2d.get_energy(mtx_vor_k_0)
mtx_vor = navier_stokes_2d.init_random_periodic(Mx=Mx, My=My, k0=50)
mtx_vor_k = ns2d.fft2d(mtx_vor)
E = ns2d.get_energy(mtx_vor_k)
mtx_vor_k /= np.sqrt(E)
mtx_vor_k_0 = mtx_vor_k.copy()
###Output
_____no_output_____ |